
          How humans can control risks arising from AI

          By Joseph Sifakis | China Daily | Updated: 2025-12-03 07:35
[Illustration by Jin Ding/China Daily]

          Artificial Intelligence represents a technological paradigm shift distinct from any previous inventions. Unlike tools that extend physical capabilities, AI rivals humanity's core trait: the ability to produce and apply knowledge. This grants it profound power to reshape individual identity, economic structures and social organization. Consequently, its immense potential benefits are matched by significant risks, necessitating a comprehensive, global strategy for its governance. A reductionist debate pitting efficiency against safety is inadequate; we must instead adopt a holistic view of AI's forms, applications and future evolution.

Much public discourse focuses on artificial general intelligence (AGI), a vague concept promising human-level performance across all cognitive tasks. The term is ill-defined not only because the full range of cognitive tasks humans can perform cannot be delimited, but also because it ignores essential characteristics of human intelligence. To approach human intelligence, it is not enough for machines to outperform humans at specific tasks. What matters is the capacity for autonomy, that is, the ability to understand the world and act adaptively to achieve goals by combining a wide variety of skills as needed.

There is a considerable gap between today's conversational AI systems and autonomous systems capable of replacing human operators in complex organizations, as envisaged by the internet of things. For example, autonomous driving systems, smart grids, smart factories, smart cities and autonomous telecommunications networks are highly complex, often safety-critical systems composed of agents, each pursuing its own goals (individual intelligence) while coordinating to achieve the overall system's goals (collective intelligence).

The technical obstacles to achieving this vision of autonomy are immense and far exceed the current capabilities of machine learning, as illustrated by the setbacks suffered by autonomous vehicle companies, some of which had promised full autonomy by 2020. Current AI agents are limited to low-risk, digital tasks. To be trustworthy in critical roles, future AI systems must possess robust reasoning capabilities, pursue goals rationally in accordance with technical, legal and ethical standards, and achieve a level of reliability that is currently considered a "pipe dream".

The core of the problem lies in the inherent difficulty of obtaining solid reliability guarantees. While supremely effective at generating knowledge from data, AI systems are non-explainable, making it virtually impossible to achieve the high levels of reliability demanded for safety-critical applications. This means AI safety cannot be guaranteed by the rational certification processes used for traditional systems such as elevators or airplanes.

In addition to technical properties such as safety, AI systems are designed to mimic humans and must therefore meet human-centric cognitive properties. Many studies deal with "responsible AI", "aligned AI" and, in particular, "ethical AI". However, most are superficial and lack a scientific basis because, unlike safety, these properties are difficult to pin down technically: they depend on complex cognitive processes that are poorly understood even in humans. An AI that passes a final medical exam does not possess the understanding or responsibility of a human doctor. Creating AI systems that truly respect social norms and demonstrate responsible collective intelligence remains a major challenge.

The risks posed by AI can be categorized into three interconnected areas. Technological risks are amplified and transformed by AI systems' "black box" character, which introduces new, poorly understood safety and security hazards. Existing risk-management principles require high reliability in high-criticality applications; if strictly applied, they would rightly exclude current AI systems from such uses. The development of global technical standards, a pillar of modern civilization, is essential to build trust. However, this effort is hampered by technical limitations as well as open opposition from Big Tech and US authorities, who argue that standards stifle innovation and advocate instead for insufficient self-certification by developers.

          Anthropogenic (human) risks differ from technological risks as they result from human-induced hazards, caused entirely or predominantly by human activities and involving misuse, abuse or compliance failures. In autonomous driving, skill atrophy, overconfidence and mode confusion are examples of misuse. Compliance risks are linked to manufacturer governance, which prioritizes commercial expansion at the expense of safety and transparency. Tesla's "Full Self-Driving" system, which requires active human supervision despite its name, exemplifies the dangers of the gap between marketing promises and the technical reality.

Finally, AI involves considerable systemic risks: long-term or large-scale disruptions to social, economic, cultural, environmental and governance systems. While some of these risks, such as technological monopolies, job displacement and environmental costs, have been identified, others remain poorly understood. A critical yet underappreciated risk is cognitive outsourcing, the delegation of intellectual work to machines, which can erode critical thinking, weaken personal responsibility and homogenize thought. Raising collective awareness of these insidious cognitive deficits is vital for their mitigation.

To address this complex risk landscape, we must develop a comprehensive, human-centric vision for AI that moves beyond the narrow, monolithic goal of AGI promoted by tech giants. This vision should honestly assess the current weaknesses of AI technology and mobilize international research to explore new avenues for AI application across science, industry and services. Furthermore, on an ideological level, we must reject the "move fast and break things" mentality, as it leads to technical debt and long-term fragility. It often goes hand in hand with the dogma of "technological determinism", which denies the role of human agency in shaping technology's place in society.

China is well placed to contribute to this new vision, which is not about creating the most powerful AI but about serving society. The country has a strong industrial base, with sectors that need increasingly intelligent products and services. The development of global standards and regulations will be crucial for implementing this vision. Working together with other nations, China can play an active role in redressing the global balance of power and harmonizing AI development with reliability and safety. Initiatives such as the "China AI Safety and Development Association" and the "World AI Cooperation Organization" reflect early efforts in this essential direction.

          The author is a recipient of the Turing Award (known as the Nobel Prize for Computing) and founder of the Verimag laboratory in Grenoble.

          The views don't necessarily reflect those of China Daily. 

