
          China, US can compete and cooperate on AI

          By Daniel Castro | CHINA DAILY | Updated: 2025-11-24 07:35

          Both the United States and China have made artificial intelligence (AI) a national priority. While Washington has launched its AI Action Plan to accelerate AI innovation and adoption across the US economy, Beijing has identified AI as a central component of its strategy for developing "new quality productive forces", a goal emphasized in the upcoming 15th Five-Year Plan (2026-30) period. The two countries are competing head-to-head for technological leadership, not only in model development but also in critical AI-enabled applications such as biotechnology, advanced materials and robotics.

          This competition to lead the next technological frontier is natural and healthy because it drives progress, efficiency and scientific discovery. But competition in capabilities does not preclude cooperation on guardrails.

          AI safety — the effort to ensure that systems work as intended and do not create dangerous spillover effects — should be an area for limited but deliberate cooperation between the world's two AI superpowers. The reason is straightforward: some AI failures stay within borders, but others do not.

          An unsafe autonomous vehicle is a domestic problem. If a self-driving car malfunctions in Shenzhen or San Francisco, the damage is local. Each country can handle those risks through its own regulations and liability systems. The same goes for biased algorithms, privacy issues or the use of deepfakes in domestic politics.

          But certain categories of AI risk have negative externalities that cross borders. A model that makes it easy to design a biological or chemical weapon or automate cyberattacks doesn't just endanger the country it was built in — it endangers everyone across the globe. These are strategic safety issues, not commercial or consumer concerns. Neither the US nor China benefits if the other side makes a mistake in handling them. A major misuse or technical failure would invite global backlash, pressure for sweeping restrictions and potentially duplicative testing requirements by third countries that slow progress for both sides.

That is why both countries should cooperate, not necessarily on AI regulation, but on research and data related to risk detection, evaluation and incident response. Understanding how frontier models can be repurposed for harmful applications, or how they can fail in ways that cascade through digital systems, requires substantial experimentation and technical analysis. Both sides already invest in this kind of work domestically. Joint efforts and more information sharing could reduce redundancy, improve coverage and clarify which risks require containment measures.

          This does not mean shared rules or harmonized laws. The US and China will continue to take different policy paths based on their own institutions and political systems. But the underlying science of AI safety — how models behave, how they can be stress-tested and how incidents can be identified and analyzed — does not need to be duplicated in isolation. Shared baselines make everyone's work more efficient and reduce unnecessary fragmentation.

There are models for this kind of cooperation. During the Cold War, US and Soviet scientists engaged in lab-to-lab collaboration on nuclear material security and reactor safety. The two governments remained geopolitical rivals, but their scientific institutions found ways to share technical methods for preventing accidents. The logic was simple: when the safety risks affect everyone, preventing accidents is in everyone's interest. The same logic applies to AI. As these systems become more capable and widely available, ensuring their safety becomes a matter of shared security, not national preference.

          A practical path forward would begin with shared incident tracking and vulnerability reporting. When an AI system violates safety expectations, such as producing malicious code, those events should be documented and communicated through technical research channels. Researchers can compare data on failure modes, benchmark evaluation tools and identify where new testing methods are needed.

          Another step would be joint red-team exercises — controlled tests where researchers deliberately probe advanced models for misuse potential. These could be conducted under academic or multilateral frameworks with strict intellectual property protections. Cooperation could extend to research on detection and containment techniques — how to prevent models from being modified to bypass safeguards, how to identify model leaks and how to evaluate the security of model hosting environments. None of this work requires trust or political alignment, only technical competence and coordination.

Many global AI governance initiatives mistakenly assume that countries will converge on a common approach. History suggests otherwise. Nations have long made different moral and legal choices about emerging technologies such as genetically modified crops, gene editing and stem-cell research. There was never a single global treaty governing those technologies; instead, countries adopted their own rules.

          This approach balances realism with responsibility. The US and China will continue to compete for leadership in AI innovation and commercial deployment in global markets. But they can also recognize that preventing cross-border harm from unsafe AI systems is in the interest of both nations. Neither country benefits if accidents undermine confidence in the technology itself.

          Strategic AI safety — ensuring that advanced AI remains stable, predictable and secure — should be treated as a shared goal, much like nuclear reactor safety or pandemic surveillance. The competition to build more capable systems will continue. But cooperation to prevent cross-border harm is simply common sense.

          The author is vice-president of the Information Technology and Innovation Foundation in the US and director of its Center for Data Innovation.

          The views don't necessarily reflect those of China Daily.

