
          Experts examine AI's potential risks

          Seminar discusses global challenges and the need for shared governance

          By Yifan Xu in Washington | China Daily | Updated: 2026-02-14 11:25

          A seminar hosted by the Center for Strategic and International Studies on Tuesday in Washington provided one of the first in-depth public examinations of the International AI Safety Report 2026, the second edition of this comprehensive global assessment.

          As both the report and participants highlighted, the urgent need for stronger international coordination has emerged as a central theme in addressing global risks posed by rapidly advancing artificial intelligence.

The report, chaired by Turing Award recipient Professor Yoshua Bengio of Université de Montréal and the Mila-Québec AI Institute, drew on contributions from over 100 AI experts nominated or supported by more than 30 countries and international organizations, including China.

          "The original idea behind the report was … to build a shared evidence base to inform decision-making about AI technologies," said lead writer Stephen Clare.

          "There were a lot of questions they were facing about AI … and not a lot of consensus on what the actual technical realities of the technology were."

          The AI Safety Summit 2023 was the world's first major intergovernmental conference on frontier AI risks. Held in November 2023 at Bletchley Park in the United Kingdom, it produced the landmark Bletchley Declaration, signed by 28 countries, including China and the US.

          The declaration recognized shared concerns about potential catastrophic AI risks and committed signatories to international cooperation on research, risk assessment and mitigation, directly laying the foundation for the ongoing International AI Safety Report series.

          "Since that time, we have a lot more empirical evidence we can actually rely upon. And we're able to discuss a lot more concrete cases of AI impacts, more evaluations, and more data we can actually use in the report," Clare said.

          Stephen Casper, who led the technical safeguards section of the seminar, provided a detailed explanation of frontier model development stages and protections, noting that "different types of safeguards and risk management techniques apply at different parts in the life cycle".

He said progress has been made in creating multiple layers of defenses, but highlighted persistent governance gaps. For closed models, Casper said, there is "a pretty rich toolkit" for making them safe. For open models, however, he warned that the main bottlenecks are "risk management and risk governance" rather than unsolved technical problems with safeguards.

          The report repeatedly underscored the international nature of AI risks. "AI risks exhibit a trans-boundary nature, with harms often crossing borders due to AI systems developed in one jurisdiction but deployed globally," the report said.

          "Open-weight models cannot be recalled once released. Their safeguards are easier to remove, and actors can use them outside of monitored environments, making misuse harder to prevent and trace."

          According to the report, China accounted for 24.2 percent of notable models in 2024, with open-weight advancements such as DeepSeek R1, Alibaba's Qwen series, Tencent's Hunyuan-Large, and Moonshot AI's Kimi models narrowing the capability gap with leading closed models to less than one year in some cases.

          Framework highlighted

          China's AI Safety Governance Framework 2.0 (2025) was highlighted for providing structured guidance on risk categorization and countermeasures across the AI life cycle, alongside voluntary commitments by 17 Chinese companies in the sector coordinated by the AI Industry Alliance of China.

          "The pace of AI progress raises daunting challenges. However, working with the many experts that produced this report has left me hopeful," Bengio said.

By its own account, the International AI Safety Report 2026 does not make specific policy recommendations. The contributors say it instead aims to synthesize scientific evidence to support informed policymaking and to provide a shared evidence base for decision-making.

          "Regardless of where you stand on various policy questions, I think a priority for policymakers is … trying to understand better this situation, potentially quite wild situation that we're in," Clare concluded.
