Home / Business / Finance

          Special attention needed to ensure AI safety, US professor says

By Mike Gu in Hong Kong | chinadaily.com.cn | Updated: 2025-01-14 19:00
          US computer science professor Stuart Russell talks to the media at the 2025 Asia Financial Forum (AFF) in Hong Kong on Tuesday. MIKE GU / CHINA DAILY

Stuart Russell, a distinguished professor of computer science at the University of California, Berkeley, emphasized the need for special attention to the safety of artificial intelligence (AI) during its development, speaking at a group interview at the 2025 Asia Financial Forum (AFF) in Hong Kong.

For safety reasons, AI systems need to have behavioral red lines, Russell said, explaining to reporters why establishing them is important. "The problem with general-purpose AI is that it can go wrong in so many ways that you can't easily write down what it means to be safe. What you can do is write down some things that you definitely don't want the systems to do. These are the behavioral red lines."

          "We definitely don't want AI systems to replicate themselves without permission. We definitely don't want them to break into other computer systems. We definitely don't want them to advise terrorists on how to build biological weapons," Russell said.

He added that he hopes AI development will always remain under human control, rather than becoming uncontrollable.

This is why it is crucial to establish behavioral red lines at the early stages of AI development, especially with the help of governments, Russell said.

          "So, you can make a list of things that you definitely don't want to do. It is quite reasonable for governments to say that before you can put a system out there, you need to show us that it's not going to do these things," he said.

Russell pointed out that AI gives rise to new forms of cybercrime. Criminals are now using AI to craft targeted emails by analyzing social media profiles and accessing personal emails, he said. This allows AI to generate messages that reference ongoing conversations while impersonating someone else, he added.

Russell noted, however, that AI also strengthens defenses against such crimes. "On the other side, we have AI defenses. I'm part of a team across various universities in California working together to use AI as a defense to screen emails against phishing attacks, to look at the activities of algorithms operating within the network, and to see which ones are possibly engaging in various activities," he said.

When asked about AI competition between countries, Russell said, "I think, in general, competition is healthy." However, he emphasized that excessive competition in AI should be approached with caution, as it could jeopardize AI safety. "Safety failures damage the entire industry. For example, if one airline doesn't pay enough attention to safety and airplanes start crashing, that damages the whole industry," he said.

          AI cooperation, based on safety, is both allowable and economically sensible, Russell said. "In collaboration with several AI researchers from the West and China, we've been running a series of dialogues on AI Safety, specifically to encourage cooperation on safety. Those have been quite successful. The behavioral red lines I mentioned earlier are a result of those discussions," he said.

          Regarding AI cooperation between China and the United States, Russell stated that both countries now place a strong emphasis on ensuring AI safety.

          "I think there's at least as much interest in that direction in China as there is in the US. Several senior Chinese politicians have talked about AI safety and are aware of the risks to humanity from uncontrolled AI systems. So, I really hope that we can cooperate on this dimension," he said.

          "The US and China have agreed not to allow AI to control the launch of nuclear weapons, which I think is sensible," he added.

          mikegu@chinadailyhk.com
