
          What can be governed in AI and what won't be

          By XUE ZHAOFENG | China Daily | Updated: 2025-11-29 09:51
          [Illustration by JIN DING/CHINA DAILY]

          Regulators looking at artificial intelligence should begin with a simple but often forgotten truth: not everything can be governed. Some forces — technological progress, shifts in costs and benefits, and the enduring aspects of human nature — push so strongly that swimming against them is costly, and often futile. This is not an argument against regulation; it is a call for candid regulation, weighed at the margin of costs and benefits, that defends public goods while accepting what has effectively become irreversible.

          Economists draw a useful distinction between welfare (the things people truly want) and toil (the work people do to obtain those things). Frédéric Bastiat's timeless lesson — people want light, not candles — is instructive here. When electricity made illumination abundant and cheap, we celebrated the gains even as candle-makers adjusted. AI is doing for routine cognitive labor what electricity once did for illumination: delivering widespread welfare gains even as some workers face displacement and uncertainty.

          To be sure, AI also has its limits. Current systems generate probabilistic outputs, not deductive proofs, so they can be inconsistent and occasionally even hallucinate. Biased or imperfect training data amplify errors, while proprietary and copyrighted material limits access to knowledge. Besides, machines do not experience human values or empathy. Yet many of these problems can be addressed. Researchers are combining probabilistic models with symbolic checks to reduce hallucinations. Licensed access to specialized databases raises reliability, while machines can free humans to focus on value judgments, creativity and empathy. History shows people do not wait for tools to be perfect before adopting them — they adopt those that offer convenience.

          Privacy is another area where full control is unlikely. Whether personal information remains hidden or gets exposed ultimately depends on its use-value. As transaction costs fall and the value of personalized services rises, information tends to flow toward higher-value uses. Only those living off the grid still retain the old notion of complete privacy. Most people accept privacy tradeoffs to enjoy digital services. Governments, often citing public safety, also seek access. Under commercial and public pressures, citizens have grown semi-transparent, and AI's analytical power accelerates that trend. Privacy protection remains essential, but regulators should choose targeted, high-value protection rather than chase anachronistic total secrecy.

          The drive to differentiate among individuals — for hiring, underwriting or marketing — is similarly hard to stop. Societies have long invested in distinctions because they create value and lower costs: education screens talent, medical examinations reduce adverse selection in insurance, and analytics sharpen matching. If AI improves precision, incentives to use it will persist. The real risk here is erroneous discrimination — excluding people on spurious correlations. Legal rules, litigation, reputational penalties and competition can deter such abuses. But the underlying push toward finer distinctions is not something regulation will easily roll back.

          Human dependence on tools and delegated decision-making is also irreversible. Delegation saves time and cognitive effort, but someone must still bear responsibility for the outcomes. AI alters decision processes but not the need to allocate responsibility. In practice, liability will be apportioned to those best able to prevent harm — providers, users, insurers and regulators — following familiar law-and-economics logic. The regulatory focus should therefore be on efficient responsibility allocation, not on forbidding the use of helpful tools.

          Likewise, it is difficult to stop attempts to capture transient profit in financial markets through algorithmic trading. In reasonably efficient markets, price movements reflect information arrival, and short-lived arbitrage rewards speed. Trying to ban the race for speed would be arbitrary and could be counterproductive. Instead, regulation should aim to prevent actions that distort price discovery or entrench insiders.

          Where regulation can be decisively beneficial is in protecting truth and safety. Falsehoods have accompanied every new communication medium since the printing press. In domains where accuracy matters — medicine, infrastructure, public safety — society will and should pay for reliable information. Technical measures such as provenance, watermarking, traceability and robust reputational systems, combined with legal standards, can raise the cost of deception and help trustworthy providers stand out.

          These examples are illustrative, not exhaustive. New technologies inevitably bring both nuisance and progress. Concerns about the erosion of privacy and misinformation are real, but personal displeasure should not be conflated with structural reality. The salient question for policymakers is not whether to regulate, but where regulation will yield net social benefits and where it will merely struggle against a rising tide. That pragmatic clarity must guide those who defend sovereignty, protect public goods, and seek to harness AI's undeniable welfare gains.

          Beyond law and technology, governance rests on social choices. Sensible AI governance should recognize limits: it should protect where protection matters most, and adapt where change is relentless.

          The author is an adjunct professor at the National School of Development, Peking University, and the author of Economics Lecture Notes (Graphic Edition).

          The views don't necessarily reflect those of China Daily.
