
          What can be governed in AI and what won't be

          By XUE ZHAOFENG | China Daily | Updated: 2025-11-29 00:00
[Illustration by Jin Ding / China Daily]

Regulators looking at artificial intelligence should begin with a simple but often forgotten truth: not everything can be governed. Some forces — technological progress, shifts in costs and benefits, and the enduring aspects of human nature — push so strongly that swimming against them is costly and often futile. This is not an argument against regulation; it is a call for candid regulation that weighs costs and benefits at the margin, defends public goods and accepts what has effectively become irreversible.

          Economists draw a useful distinction between welfare, the things people truly want, and toil — the work they do to obtain those things. Frédéric Bastiat's timeless lesson — people want light, not candles — is instructive here. When electricity made illumination abundant and cheap, we celebrated the gains even as candle-makers adjusted. AI is doing for routine cognitive labor what electricity once did for illumination: delivering widespread welfare gains even as some workers face displacement and uncertainty.

To be sure, AI also has its limits. Current systems generate probabilistic outputs, not deductive proofs, so they can be inconsistent and occasionally even hallucinate. Biased or imperfect training data amplify errors, while proprietary and copyrighted material limits access to knowledge. Besides, machines do not experience human values or empathy. Yet many of these problems can be addressed. Researchers are combining probabilistic models with symbolic checks to reduce hallucinations. Licensed access to specialized databases raises reliability, and machines can free humans to focus on value judgments, creativity and empathy. History shows people do not wait for tools to be perfect before adopting them; they adopt those that offer convenience.

          Privacy is another area where full control is unlikely. Whether personal information remains hidden or gets exposed ultimately depends on its use-value. As transaction costs fall and the value of personalized services rises, information tends to flow toward higher-value uses. Only those living off the grid still retain the old notion of complete privacy. Most people accept privacy tradeoffs to enjoy digital services. Governments, often citing public safety, also seek access. Under commercial and public pressures, citizens have grown semi-transparent, and AI's analytical power accelerates that trend. Privacy protection remains essential, but regulators should choose targeted, high-value protection rather than chase anachronistic total secrecy.

The drive to differentiate among individuals — for hiring, underwriting or marketing — is similarly hard to stop. Societies have long invested in such distinctions because they create value and lower costs: education screens talent, medical examinations reduce adverse selection in insurance, and analytics sharpen matching. If AI improves precision, the incentives to use it will persist. The real risk here is erroneous discrimination, that is, excluding people on the basis of spurious correlations. Legal rules, litigation, reputational penalties and competition can deter such abuses. But the underlying push toward finer distinctions is not something regulation will easily roll back.

          Human dependence on tools and delegated decision-making is also irreversible. Delegation saves time and cognitive effort, but someone must still bear responsibility for the outcomes. AI alters decision processes but not the need to allocate responsibility. In practice, liability will be apportioned to those best able to prevent harm — providers, users, insurers and regulators — following familiar law-and-economics logic. The regulatory focus should therefore be on efficient responsibility allocation, not on forbidding the use of helpful tools.

          Likewise, it is difficult to stop attempts to capture transient profit in financial markets through algorithmic trading. In reasonably efficient markets, price movements reflect information arrival, and short-lived arbitrage rewards speed. Trying to ban the race for speed would be arbitrary and could be counterproductive. Instead, regulation should aim to prevent actions that distort price discovery or entrench insiders.

          Where regulation can be decisively beneficial is in protecting truth and safety. Falsehoods have accompanied every new communication medium since the printing press. In domains where accuracy matters — medicine, infrastructure, public safety — society will and should pay for reliable information. Technical measures such as provenance, watermarking, traceability and robust reputational systems, combined with legal standards, can raise the cost of deception and help trustworthy providers stand out.

          These examples are illustrative, not exhaustive. New technologies inevitably bring both nuisance and progress. Concerns about the erosion of privacy and misinformation are real, but personal displeasure should not be conflated with structural reality. The salient question for policymakers is not whether to regulate, but where regulation will yield net social benefits and where it will merely struggle against a rising tide. That pragmatic clarity must guide those who defend sovereignty, protect public goods, and seek to harness AI's undeniable welfare gains.

          Beyond law and technology, governance rests on social choices. Sensible AI governance should recognize limits: it should protect where protection matters most, and adapt where change is relentless.

The author is an adjunct professor at the National School of Development, Peking University, and the author of Economics Lecture Notes (Graphic Edition).

          The views don't necessarily reflect those of China Daily.
