
          What can be governed in AI and what won't be

          By XUE ZHAOFENG | China Daily | Updated: 2025-11-29 09:51
[Illustration: JIN DING/CHINA DAILY]

Regulators looking at artificial intelligence should begin with a simple but often forgotten truth: not everything can be governed. Some forces — technological progress, shifts in costs and benefits, and the enduring aspects of human nature — push so strongly that swimming against them is costly, and often futile. This is not an argument against regulation; it is a call for candid regulation, weighed at the margin in terms of costs and benefits, that defends public goods while accepting what has effectively become irreversible.

Economists draw a useful distinction between welfare (the things people truly want) and toil (the work they do to obtain those things). Frédéric Bastiat's timeless lesson — people want light, not candles — is instructive here. When electricity made illumination abundant and cheap, we celebrated the gains even as candle-makers adjusted. AI is doing for routine cognitive labor what electricity once did for illumination: delivering widespread welfare gains even as some workers face displacement and uncertainty.

To be sure, AI also has its limits. Current systems generate probabilistic outputs, not deductive proofs, so they can be inconsistent and occasionally even hallucinate. Biased or imperfect training data amplify errors, while proprietary and copyrighted material limits access to knowledge. Besides, machines do not experience human values or empathy. Yet many of these problems can be addressed. Researchers are combining probabilistic models with symbolic checks to reduce hallucinations. Licensed access to specialized databases raises reliability, while machines can free humans to focus on value judgments, creativity and empathy. History shows people do not wait for tools to be perfect before adopting them — they adopt those that offer convenience.

Privacy is another area where full control is unlikely. Whether personal information remains hidden or is exposed ultimately depends on its use-value. As transaction costs fall and the value of personalized services rises, information tends to flow toward higher-value uses. Only those living off the grid still retain the old notion of complete privacy. Most people accept privacy tradeoffs to enjoy digital services. Governments, often citing public safety, also seek access. Under commercial and public pressures, citizens have grown semi-transparent, and AI's analytical power accelerates that trend. Privacy protection remains essential, but regulators should choose targeted, high-value protection rather than chase the anachronistic goal of total secrecy.

The drive to differentiate among individuals — for hiring, underwriting or marketing — is similarly hard to stop. Societies have long invested in distinctions because they create value and lower costs: education screens talent, medical examinations reduce adverse selection in insurance, and analytics sharpen matching. If AI improves precision, incentives to use it will persist. The real risk here is erroneous discrimination — excluding people on the basis of spurious correlations. Legal rules, litigation, reputational penalties and competition can deter such abuses. But the underlying push toward finer distinctions is not something regulation will easily roll back.

          Human dependence on tools and delegated decision-making is also irreversible. Delegation saves time and cognitive effort, but someone must still bear responsibility for the outcomes. AI alters decision processes but not the need to allocate responsibility. In practice, liability will be apportioned to those best able to prevent harm — providers, users, insurers and regulators — following familiar law-and-economics logic. The regulatory focus should therefore be on efficient responsibility allocation, not on forbidding the use of helpful tools.

          Likewise, it is difficult to stop attempts to capture transient profit in financial markets through algorithmic trading. In reasonably efficient markets, price movements reflect information arrival, and short-lived arbitrage rewards speed. Trying to ban the race for speed would be arbitrary and could be counterproductive. Instead, regulation should aim to prevent actions that distort price discovery or entrench insiders.

          Where regulation can be decisively beneficial is in protecting truth and safety. Falsehoods have accompanied every new communication medium since the printing press. In domains where accuracy matters — medicine, infrastructure, public safety — society will and should pay for reliable information. Technical measures such as provenance, watermarking, traceability and robust reputational systems, combined with legal standards, can raise the cost of deception and help trustworthy providers stand out.

          These examples are illustrative, not exhaustive. New technologies inevitably bring both nuisance and progress. Concerns about the erosion of privacy and misinformation are real, but personal displeasure should not be conflated with structural reality. The salient question for policymakers is not whether to regulate, but where regulation will yield net social benefits and where it will merely struggle against a rising tide. That pragmatic clarity must guide those who defend sovereignty, protect public goods, and seek to harness AI's undeniable welfare gains.

          Beyond law and technology, governance rests on social choices. Sensible AI governance should recognize limits: it should protect where protection matters most, and adapt where change is relentless.

The author is an adjunct professor at the National School of Development, Peking University, and the author of Economics Lecture Notes (Graphic Edition).

          The views don't necessarily reflect those of China Daily.
