
          What is artificial intelligence's greatest risk?

          By DONG TING | China Daily | Updated: 2025-09-13 10:30
          A visitor interacts with a robot equipped with intelligent dexterous hands at the 2025 World AI Conference (WAIC) in East China's Shanghai, July 29, 2025. [Photo/Xinhua]

          Risk dominates current discussions on AI governance. This July, Geoffrey Hinton, a Nobel and Turing laureate, addressed the World Artificial Intelligence Conference in Shanghai. His speech bore the title he has used almost exclusively since leaving Google in 2023: "Will Digital Intelligence Replace Biological Intelligence?" He stressed, once again, that AI might soon surpass humanity and threaten our survival.

Scientists and policymakers from China, the United States, European countries and elsewhere nodded gravely in response. Yet this apparent consensus masks a profound paradox in AI governance. Conference after conference, the world's brightest minds have identified shared risks. They call for cooperation, sign declarations, then watch the world return to fierce competition the moment the panels end.

          This paradox troubled me for years. I trust science, but if the threat is truly existential, why can't even survival unite humanity? Only recently did I grasp a disturbing possibility: these risk warnings fail to foster international cooperation because defining AI risk has itself become a new arena for international competition.

Traditionally, technology governance follows a clear causal chain: identify specific risks, then develop governance solutions. Nuclear weapons pose stark, objective dangers: blast yield, radiation, fallout. Climate change offers measurable indicators and an increasingly solid scientific consensus. AI, by contrast, is a blank canvas. No one can convince everyone whether the greatest risk is mass unemployment, algorithmic discrimination, superintelligent takeover, or something entirely different that we have not yet imagined.

This uncertainty transforms AI risk assessment from scientific inquiry into strategic gamesmanship. The US emphasizes "existential risks" from "frontier models", terminology that spotlights Silicon Valley's advanced systems. This framework positions American tech giants as both sources of danger and essential partners in control. Europe focuses on "ethics" and "trustworthy AI", extending its regulatory expertise from data protection into artificial intelligence. China advocates that "AI safety is a global public good", arguing that risk governance should not be monopolized by a few nations but serve humanity's common interests, a narrative that challenges Western dominance while calling for multipolar governance.

Corporate actors prove equally adept at shaping risk narratives. OpenAI's emphasis on "alignment with human goals" highlights both genuine technical challenges and the company's particular research strengths. Anthropic promotes "constitutional AI" in domains where it claims special expertise. Other firms excel at selecting safety benchmarks that favor their approaches, while suggesting the real risks lie with competitors who fail to meet these standards. Computer scientists, philosophers, economists: each professional community asserts its own value through narrative, warning of technical catastrophe, revealing moral hazards, or predicting labor market upheaval.

The causal chain of AI safety has thus been inverted: we construct risk narratives first, then deduce technical threats; we design governance frameworks first, then define the problems requiring governance. Defining the problem creates causality. This is not epistemological failure but a new form of power, namely making your risk definition the unquestioned "scientific consensus". How we define "artificial general intelligence", which applications constitute "unacceptable risk", and what counts as "responsible AI": the answers to these questions will directly shape future technological trajectories, industrial competitive advantages, international market structures, and even the world order itself.

          Does this mean AI safety cooperation is doomed to empty talk? Quite the opposite. Understanding the rules of the game enables better participation.

AI risk is constructed, but acknowledging this doesn't mean denying reality: regardless of how risks are defined, solid technical research, robust contingency mechanisms, and practical safeguards remain essential.

For policymakers, this means advancing your own agenda in international negotiations while understanding the genuine concerns and legitimate interests behind others'. For businesses, it means considering multiple stakeholders when shaping technical standards and avoiding winner-takes-all thinking; true competitive advantage stems from unique strengths rooted in local innovation ecosystems, not opportunistic positioning. For the public, it means developing "risk immunity": learning to discern the interest structures and power relations behind different AI risk narratives, neither paralyzed by doomsday prophecies nor seduced by technological utopias.

          International cooperation remains indispensable, but we must rethink its nature and possibilities. Rather than pursuing a unified AI risk governance framework, a consensus that is neither achievable nor necessary, we should acknowledge and manage the plurality of risk perceptions. The international community needs not one comprehensive global agreement superseding all others, but "competitive governance laboratories" where different governance models prove their worth in practice. This polycentric governance may appear loose but can achieve higher-order coordination through mutual learning and checks and balances.

          We habitually view AI as another technology requiring governance, without realizing it is changing the meaning of "governance" itself. The competition to define AI risk isn't global governance's failure but its necessary evolution: a collective learning process for confronting the uncertainties of transformative technology.

          The author is an associate professor at the Center for International Security and Strategy, Tsinghua University.

          The views don't necessarily represent those of China Daily.

