
          Are we losing the ability to think due to LLMs?

          By Virginia Dignum | China Daily | Updated: 2025-06-24 06:13
[Illustration: SONG CHEN/CHINA DAILY]

          In the age of large language models (LLMs) and generative AI, we are witnessing an unprecedented transformation in how knowledge is produced, disseminated and consumed. These tools can summarize dense texts, write code, draft legal contracts, or respond to philosophical questions in seconds.

          LLMs, we are told, make us more efficient, simplify complex work, automate mundane tasks and allow us to focus on what matters. But as we marvel at their capabilities, a pressing concern emerges: Are these models genuinely boosting efficiency, or are they subtly eroding our capacity for independent thought, judgment and critical reflection?

Efficiency is not a neutral term. It reflects values: what we choose to prioritize, what we define as valuable, and what we are willing to sacrifice. The current narrative around generative AI treats efficiency as synonymous with progress. It suggests that the faster something is done, the better. But faster is not always better. And not everything that can be automated should be.

The popular belief is that LLMs "free up" cognitive bandwidth. That is, they allow humans to delegate repetitive thinking to machines and reserve their energy for more reflective tasks. But the opposite is often true. The more intellectual labor — writing, summarizing and decision-making, for example — we hand over to AI, the less we will engage with it ourselves. Instead of reserving our thoughtfulness for higher tasks, we will increasingly lose the opportunities, and perhaps even the ability, to think critically.

An apt example is the growing volume of synthetic content online. Not only are images and text being fabricated by machines, but so too, often, are the public reactions to them. Content no longer spreads because it presents the truth or is relevant, but because of its emotional pull. Fake images spark fake outrage in comments, which then fuels real engagement from users who cannot distinguish between what is human and what is AI-generated.

          The result is a synthetic discourse loop that simulates social consensus. "Everyone is talking about it," we hear, when in fact no one is — until the content, and the reaction to it, are manufactured to serve the profit-driven strategy of platforms. Their goal is not informed conversation, but to draw continued attention, which translates into short-term revenue.

          This is not just a technical challenge of detecting what's real. It's an epistemological crisis. When falsehoods are propped up by simulated reactions and amplified by algorithms optimized for attention, the notion of public discourse itself becomes unstable. Our sense of what others believe is no longer based on shared experience or deliberation, but on machine-curated illusions. In such an environment, critical thinking doesn't just decline, it is structurally discouraged.

          So what do we really mean by "efficiency"? If it means shortcutting the time it takes to write a report, perhaps we have succeeded. But if it means replacing the intellectual effort that creates depth, coherence and reflection, then it's not a gain; it's a loss. The moment we accept LLMs as thought substitutes, rather than thought aids, we begin to erode the very conditions under which human reasoning thrives: questioning, dialogue, uncertainty and contradiction.

          This is particularly dangerous at a time when democratic values are at stake, when critical reflection and informed disagreement are essential. The legitimacy of democratic processes relies on citizens engaging with ideas, evaluating claims, and forming judgments. But when engagement is replaced by reaction to machine-generated one-liners, that is, content crafted for manipulation rather than understanding, our political agency is undermined. We don't just risk being misled; we risk no longer knowing what it means to evaluate truth for ourselves.

          There is a temptation to see LLMs as neutral tools. But they are not. They are shaped by the data they are trained on, the goals of their developers, and the market incentives that drive their deployment. Their outputs reflect a history of biases, omissions and assumptions that are often invisible to users. And the more seamlessly these outputs integrate into our workflows, the more easily they escape scrutiny. In this way, the danger is not only what the AI says, but that we stop asking how it came to say it.

          To call this "efficiency" is to ignore what is actually happening: a transfer of epistemic authority from humans to machines, without the structures of accountability and transparency that should accompany such a shift. We are being asked to trust a system we cannot interrogate, on the basis that it sounds plausible and delivers quickly.

          But speed is not the same as understanding. And plausibility is not truth.

          Instead of fetishizing efficiency, we need to refocus on resilience: the capacity of individuals and societies to question, adapt and resist manipulation. This means investing in AI literacy — not just how to use the tools, but how to critique them. It means recognizing that no AI can replace the ethical, cultural and contextual dimensions of human reasoning. It means being willing to slow down, to question the output, and to value the effort of thinking as much as the result.

Governments, tech companies and citizens each have a role to play. Regulation is necessary, but it is not sufficient. The foundation of responsible AI is not technical compliance; it is ethical intent. That begins with "question zero": When should AI be used? Not every problem needs an AI solution, and not every deployment leads to benefit. Responsible AI does not put AI first; it puts people first. It starts by asking why, not by rushing to deploy AI. Tech developers must embed responsibility into the very design of systems, not as an afterthought but as a guiding principle.

          More important, individuals must be empowered to question AI outputs, understand their implications, and resist the normalization of passive dependence. Only by centering human judgment and agency can we ensure AI serves society, rather than reshaping it to fit commercial imperatives.

          There is no turning back the presence of LLMs in our lives. But we can choose how to live with them. The question is not whether they will think for us, but whether we will let them define what it means to think at all. Efficiency, in the true sense, should not be about doing more with less thought. It should be about doing better, with deeper attention, stronger ethics and sustained human insight. Anything less is not progress. It is surrender.

The author is a professor of computer science and the director of the AI Policy Lab at Umeå University, Sweden.

          The views don't necessarily reflect those of China Daily.

