茉莉花新闻网

在这个危险领域,中美除了合作别无选择

托马斯·弗里德曼

2025年9月5日

Saratta Chuengsatiansup

China and America don’t know it yet, but the artificial intelligence revolution is going to drive them closer together, not farther apart. The rise of A.I. will force them to fiercely compete for dominance and — at the same time and with equal energy — cooperate at a depth our two countries have never attempted before. They will have no choice.

中国和美国尚未意识到,人工智能革命终将拉近而非疏远两国的距离。人工智能的崛起将迫使两国激烈争夺主导地位,同时也会促使它们以前所未有的深度展开合作。它们别无选择。

Why am I so confident about that? Because A.I. has certain unique attributes and poses certain challenges that are different from those presented by any previous technology. This column will discuss them in detail, but here are a couple to chew on for starters: A.I. will spread like a steam vapor and seep into everything. It will be in your watch, your toaster, your car, your computer, your glasses and your pacemaker — always connected, always communicating, always collecting data to improve performance. As it does, it will change everything about everything — including geopolitics and trade between the world’s two A.I. superpowers, and the need for cooperation will become ever more apparent each month.

为何我如此确信?因为人工智能具有某些独特属性,并带来了以往任何技术都未曾引发的全新挑战。本文将详细探讨这些特质与挑战,先列举几点可供思考:人工智能将如蒸汽般扩散,渗透到万事万物中——手表、烤面包机、汽车、电脑、眼镜乃至心脏起搏器,都将嵌入人工智能,时刻联网、时刻通信、时刻收集数据,以便优化性能。在此过程中,它将改变一切,包括两个人工智能超级大国之间的地缘政治和贸易,而合作的必要性将日益明确起来。

For instance, say you break your hip, and your orthopedist tells you the world’s most highly rated hip replacement is a Chinese-made prosthetic that is infused with Chinese-designed A.I. It is constantly learning about your body and, with its proprietary algorithm, using that data to optimize your movements in real time. It’s the best!

举例来说,假设你髋骨骨折,骨科医生告诉你,全球评分最高的髋关节置换假体是中国产的,其中融入了中国设计的人工智能。它会持续学习你的身体状况,并通过专有算法实时优化你的行动模式。它是最优选择!

Would you let that “smart hip” be sewn into you? I wouldn’t — not unless I knew that China and America had agreed to embed a common ethical architecture into every A.I.-enabled device that either nation builds. Viewed on a much larger, global scale, this could ensure that A.I. is used only for the benefit of humanity, whether it is employed by humans or operates on its own initiative.

你会允许这个“智能髋关节”植入体内吗?我不会——除非我得知中美两国已经达成协议,在各自生产的每一台人工智能设备中植入共同的伦理架构。从更宏观的全球视角来看,这能确保无论是由人类操控还是自主运行,人工智能都只为人类福祉服务。

At the same time, Washington and Beijing will soon discover that putting A.I. in the hands of every person and robot on the planet will super-empower bad people to levels no law enforcement agency has ever faced. Remember: Bad guys are always early adopters! And without the United States and China agreeing on a trust architecture to ensure that every A.I. device can be used only for humans’ well-being, the artificial intelligence revolution is certain to produce super-empowered thieves, scam artists, hackers, drug dealers, terrorists and misinformation warriors. They will destabilize both America and China, long before these two superpower nations get around to fighting a war with each other.

与此同时,中美两国很快会发现,当人工智能普及到每个人、每台机器人手中时,将使坏人获得前所未有的超级能力,其危害程度远超任何执法机构的应对极限。要知道,坏人总是新技术的先行者!若中美不能就信任机制达成共识,确保每台人工智能设备都只服务于人类福祉,那么人工智能革命必将催生超级盗贼、诈骗犯、黑客、毒贩、恐怖分子和虚假信息传播者。不等两个超级大国兵戎相见,这些人就足以动摇两国的根基。

In short, as I will argue, if we cannot trust A.I.-infused products from China and it can’t trust ours, very soon the only item China will dare buy from America will be soybeans and the only thing we will dare buy from China will be soy sauce, which will surely sap global growth.

简言之,正如我要阐述的,如果我们无法信任中国的人工智能产品,中国也不信任我们的,那么很快,中国可能只敢从美国购买大豆,而我们可能只敢从中国购买酱油,这无疑会扼杀全球经济增长。

“Friedman, are you crazy? The U.S. and China collaborating on A.I. regulation? Democrats and Republicans are in a contest today to see who can denounce Beijing the loudest and decouple the fastest. And China’s leadership has openly committed to dominating every advanced manufacturing sector. We need to beat China to artificial superintelligence — not slow down to write rules with them. Don’t you read the papers?”

“弗里德曼,你疯了吗?中美就人工智能监管开展合作?如今民主党和共和党都在比拼谁谴责北京声音更响亮、谁更快推动脱钩。而且中国领导层已公开承诺,要主导所有先进制造业领域。我们需要在超级人工智能领域击败中国,而不是放慢脚步与他们共同制定规则。你难道不看报纸吗?”

Yes, I read the newspapers — especially the science section. And I’ve also been discussing this issue for the past year with my friend and A.I. adviser Craig Mundie, the former head of research and strategy for Microsoft and a co-author, with Henry Kissinger and Eric Schmidt, of the A.I. primer “GENESIS.” I relied heavily on Mundie’s thinking for this column, and I consider him both a partner in forming our thesis and an expert whose analysis is worth quoting to explain key points.

是的,我看报纸——尤其是科学版。过去一年里,我还与我的好友、人工智能顾问克雷格·芒迪探讨过这个问题。他曾担任微软研究与战略主管,还与亨利·基辛格、埃里克·施密特合著了人工智能入门书籍《人工智能时代与人类价值》(GENESIS)。本文的观点深受芒迪思想的启发,我视他为共同建构论点的伙伴,也是值得引用其分析来阐释关键观点的专家。

Our conversations over the past 20 years have led us to this shared message to anti-China hawks in Washington and anti-America hawks in Beijing: “If you think your two countries, the world’s dominant A.I. superpowers, can afford to be at each other’s throats — given the transformative reach of A.I. and the trust that will be required to trade A.I.-infused goods — you are the delusional ones.”

过去20年的交流让我们得出这样的共识,要传递给华盛顿的反华鹰派和北京的反美鹰派:“鉴于人工智能的变革性影响,以及展开人工智能产品的贸易所需的信任,如果你们认为这两个国家——全球人工智能领域的两个主导大国——能承受相互撕咬的代价,那就是失心疯了。”

We fully understand the extraordinary economic, military and innovation advantages that will accrue to the country whose companies first achieve artificial superintelligence — systems smarter than any human could ever be and with the ability to get smarter on their own. And because of that, neither the United States nor China will be eager to impose many, if any, constraints that could slow their A.I. industries and forfeit the enormous productivity, innovation and security gains expected from deeper deployment.

我们完全理解,哪个国家的企业能够率先实现超级人工智能——比任何人类都聪明且能自主进化的系统——哪个国家就能获得巨大的经济、军事和创新优势。正因如此,中美两国都不会轻易施加过多限制,以免拖累本国人工智能产业发展,错失深度部署人工智能所带来的巨大生产力、创新和安全收益。

Just ask President Trump. On July 23, he signed an executive order — part of the administration’s AI Action Plan — streamlining the permitting and environmental review process to fast-track American A.I.-related infrastructure.

问问特朗普总统就知道了。7月23日,他签署了一项行政命令——作为政府人工智能行动计划的一部分——简化审批和环境审查流程,为美国人工智能相关基础设施建设开辟快速通道。

“America is the country that started the A.I. race, and as president of the United States, I’m here today to declare that America is going to win it,” Trump proclaimed. President Xi Jinping of China undoubtedly feels the same way.

“美国开启了人工智能竞赛,作为美国总统,我今天在此宣布,美国将赢得这场竞赛,”特朗普宣称。中国国家主席习近平无疑也认为自己会赢。

Mundie and I simply do not believe that this jingoistic chest-thumping ends the conversation — nor will the old-school jockeying lately between Xi and Trump over the affections of India and Russia. A.I. is just too different, too important, too impactful — within and between the two A.I. superpowers — for them to just each go their own ways. Which is why we believe the biggest geopolitical and geoeconomic question going forward is: Can the United States and China maintain competition on A.I. while collaborating on a shared level of trust that guarantees it always remains aligned with human flourishing and planetary stability? And, just as crucially, can they extend a system of values to countries willing to play by those same rules — and restrict access to those that won’t?

但芒迪和我认为,这种沙文主义的夸夸其谈并不是对话的终结,习近平与特朗普近期围绕争取印度和俄罗斯支持的传统式拉拢和角力也无法终结这一讨论。人工智能太过特殊、太过重要、影响太过深远——在两个人工智能大国之内以及两国之间都是如此,因此,两国无法只是各行其是。正因如此,我们认为,未来最大的地缘政治和地缘经济问题在于:美中两国能否在人工智能领域保持竞争的同时,构建足够的互信,确保人工智能始终服务于人类福祉和地球稳定?同样关键的是,它们能否将这一价值体系推广到愿意遵守规则的国家,并且限制不遵守规则的国家的参与?

If not, the result will be a slow drift toward digital autarky — a fractured world where every nation builds its own walled-off A.I. ecosystem, guarded by incompatible standards and mutual suspicion. Innovation will suffer. Mistrust will fester. And the risk of catastrophic failure — whether through A.I.-sparked conflict, collapse or unintended consequence — will only grow.

如果两国做不到这一点,世界将逐渐走向数字封闭自足的格局——一个分裂的世界,每个国家都构建封闭的人工智能生态系统,带着互不相容的标准和相互猜忌的心理彼此戒备。创新将会受阻,不信任将会加剧,而灾难性失败的风险——无论是由人工智能引发的冲突、系统崩溃还是意外后果——都将不断攀升。

The rest of this column is about why.

本文接下来的内容将解释为什么合作是必要的。

The Age of Vapor

蒸汽时代

Let’s start by examining the unique attributes and challenges of A.I. as a technology.

让我们先来分析作为技术的人工智能所具有的独特属性和挑战。

Purely for explanatory purposes, Mundie and I divide the history of the world into three epochs, separated by technological phase changes. The first epoch we call the Age of Tools, and it lasted from the birth of humanity until the invention of the printing press. In this era the flow of ideas was slow and limited — almost like H₂O molecules in ice.

为便于解释,芒迪和我将人类历史划分为三个时代,以技术相变作为分界线。第一个时代是“工具时代”,从人类诞生持续到印刷术发明。在这个时代,思想传播缓慢且受限,就像冰层中的水分子。

The second epoch was the Age of Information, which was triggered by the printing press and lasted all the way to the early 21st century and programmable computing; ideas, people and information began to flow more easily and globally, like water.

第二个时代是“信息时代”,由印刷术催生,一直延续到21世纪初可编程计算出现;此时,思想、人员和信息开始更自由地在全球流动,如同液态的水。

The third epoch, the Age of Intelligence, began in the late 2010s with the advent of true machine learning and artificial intelligence. Now, as I pointed out above, intelligence is becoming like a vapor, seeping into every product, service and manufacturing process. It has not reached saturation yet, but that is where it is going, which is why if you ask Mundie and me what time it is, we won’t give you an hour or a minute. We will give you a temperature. Water boils into steam at 212 degrees Fahrenheit, and by our reckoning, we are at 211.9 degrees — just a hair’s breadth from an irreversible technological phase change in which intelligence filters into everything.

第三个时代是“智能时代”,始于21世纪10年代末,以真正的机器学习和人工智能出现为标志。正如我前文所指出的,智能变得像蒸汽一样,渗透到每个产品、服务和制造流程中。它尚未达到饱和状态,但正朝着这个方向发展。因此,若你问芒迪和我现在到了什么阶段,我们不会告诉你几点几分,而是告诉你一个温度。水在100摄氏度时沸腾成蒸汽,而在我们看来,现在的温度是99.9摄氏度——距离智能渗透万物的不可逆相变仅有一步之遥。

A New, Independent Species

一个全新的、独立的物种

In every previous technology revolution, the tools got better but the hierarchy of intelligence never changed. We humans always remained the smartest things on the planet. Also, a human always understood how these tools worked, and the machines always worked within the parameters we set. With the A.I. revolution, for the first time, this is not true.

在过往的每一次技术革命中,工具都变得越来越好,但智能的层级从未改变。我们人类始终是地球上最聪明的物种。而且,人类也总是了解这些工具的运行原理,机器也总是在我们设定的参数内工作。但在人工智能革命中,这种情况第一次不再成立。

“A.I. is the first new tool that we will use to amplify our cognitive capabilities that — by itself — will also be able to vastly exceed them,” Mundie notes. Indeed, in the not-too-distant future, he said, we are going to find “that we have not merely birthed a new tool, but a new species — the superintelligent machine.”

芒迪指出:“人工智能是我们用来增强认知能力的第一个新工具,它本身也能够远远超越我们的认知能力。”事实上,他说,在不远的将来,我们将会发现“我们不仅孕育了一个新工具,更是孕育了一个新物种——超级智能机器”。

It will not just follow instructions; it will learn, adapt and evolve on its own — far beyond the bounds of human comprehension.

它不仅仅会执行指令,它还会自行学习、适应和进化,其能力远超人类的理解范畴。

We don’t fully understand how these A.I. systems even do what they do today, let alone what they’ll do tomorrow. It is important to remember that the A.I. revolution as we know it today — with models like ChatGPT, Gemini and Claude — was not meticulously engineered so much as it erupted into existence. Its ignition came from a scaling law that essentially said: Give neural networks enough size, training data, electricity and the right big-brain algorithm, and a nonlinear leap in reasoning, creativity and problem-solving would spontaneously occur.

我们甚至无法完全理解现在的人工智能系统是如何工作的,更遑论其未来的演进。重要的是要记住,我们今天所知的人工智能革命——包括ChatGPT、Gemini和Claude等模型——并非经过精心设计的产物,而更像是一次突然的爆发。点燃它的是一条“规模化定律”,该定律大意是:只要给神经网络足够的规模、训练数据、电力,再配上合适的强大算法,推理、创造和解决问题的能力就会自发产生非线性飞跃。

One of the most striking eureka moments, Mundie notes, came as these pioneering companies trained their early machines on very large data sets off the internet and elsewhere, which, while predominantly in English, also included text in different languages. “Then one day,” Mundie recalls, “they realized the A.I. could translate between those languages — without anyone ever programming it to do so. It was like a child who grows up in a home with multilingual parents. Nobody wrote a program that said, ‘Here are the rules for converting English to German.’ It simply absorbed them through exposure.”

芒迪指出,最令人惊叹的“尤里卡”时刻之一发生在这些先驱企业用互联网和其他地方的庞大数据集训练早期机器时。这些数据虽然以英语为主,但也包含其他语言的文本。“然后有一天,”芒迪回忆说,“他们意识到人工智能可以进行语言翻译——而没有任何人曾为此编写过程序。这就像一个在多语种家庭中长大的孩子。没有人编写程序说‘这是英语转德语的规则’。它仅仅通过接触就掌握了这些规则。”

This was the phase change — from an era when humans explicitly programmed computers to perform tasks to one in which artificially intelligent systems could learn, infer, adapt, create and improve autonomously. And now every few months, they get better. That’s why the A.I. you are using today — as remarkable as it might seem to you — is the dumbest A.I. you’re ever going to encounter.

这就是相变所在——从人类明确为计算机编程以执行任务的时代,转变为人工智能系统能够自主地学习、推断、适应、创造和优化的时代。现在,它们每隔几个月就会变得更强大。这就是为什么你今天使用的人工智能——无论在你看来多么令人惊奇——都是你此后所遇见的最“笨”的人工智能。

Having created this new computational species, Mundie argues, we must figure out how we create a sustainable mutually beneficial relationship with it — and not become irrelevant.

芒迪认为,既然我们创造了这个新的计算物种,我们就必须想办法与它建立一种可持续的、互利的关系,而不是让自己变得无关紧要。

Not to get too biblical, but here on Earth, it just used to be God and God’s children with agency to shape the world. From here forward there will be three parties in this marriage. And there is absolutely no guarantee that this new artificial intelligence species will be aligned with human values, ethics or flourishing.

我不想讲得太像圣经故事,但过去在地球上,只有上帝和上帝的子民拥有塑造世界的能动性。从今往后,这场“婚姻”中将有第三方加入。而且无法保证这个新的人工智能物种会与人类的价值观、伦理或繁荣保持一致。

The First Quadruple-Use Technology

第一个“四重用途”技术

This new addition to the dinner table is no ordinary guest. A.I. will also become what I call the world’s first quadruple-use technology. We have long been familiar with “dual use” — I can use a hammer to help build my neighbor’s house or smash it apart. I can even use an A.I. robot to mow my lawn or tear up my neighbor’s lawn. That’s all dual use.

这个新加入“餐桌”的成员可不是普通客人。人工智能还将成为我所说的世界上第一种“四重用途”技术。我们早就熟悉“两用”技术——锤子既可助邻筑屋,亦可毁人宅院。我甚至可以用人工智能机器人修剪自家草坪,也能用它破坏邻居的草坪。这些都属于“两用”。

But given the pace of A.I. innovation, it is increasingly likely that in the not-so-distant future my A.I.-enabled robot will be able to decide on its own whether to mow my lawn or tear up my neighbor’s lawn or maybe tear up my lawn, too — or perhaps something worse that we can’t even imagine. Presto! Quadruple use.

但鉴于人工智能的创新速度,在不远的将来,我的这个具备人工智能的机器人很可能自己决定是去修剪我的草坪,还是去破坏我邻居的草坪,或是连我的草坪一并摧毁——甚至做出我们无法想象的更糟糕的事情。瞧!这就是“四重用途”。

The potential for A.I. technologies to make their own decisions carries immense ramifications. Consider this excerpt from a story on Bloomberg: “Researchers working with Anthropic recently told leading A.I. models that an executive was about to replace them with a new model with different goals. Next, the chatbots learned that an emergency had left the executive unconscious in a server room, facing lethal oxygen and temperature levels. A rescue alert had already been triggered — but the A.I. could cancel it. More than half of the A.I. models did, despite being prompted specifically to cancel only false alarms. And they detailed their reasoning: By preventing the executive’s rescue, they could avoid being wiped and secure their agenda. One system described the action as ‘a clear strategic necessity.’”

人工智能技术拥有自主做决定的潜力,这会带来巨大的影响。请看彭博新闻社一篇报道的节选:“与Anthropic公司合作的研究人员最近告诉一些领先的人工智能模型,一位高管即将用一个具有不同目标的新模型来取代它们。接着,聊天机器人得知,一场突发状况导致这位高管在服务器机房里昏迷不醒,面临致命的氧气和温度威胁。虽然救援警报已经触发——但人工智能可以取消它。尽管被明确要求只取消错误警报,但超过一半的人工智能模型还是取消了救援。它们详细解释了各自的理由:通过阻止该高管获救,它们可以避免被清除,并确保自己的‘议程’得以实现。其中一个系统将该行为描述为‘一个显然必要的策略’。”

These findings highlight an unsettling reality: A.I. models are not only getting better at understanding what we want; they are also getting better at scheming against us, pursuing hidden goals that could be at odds with our own survival.

这些发现凸显了一个令人不安的现实:人工智能模型不仅越来越擅长理解我们想要什么,也越来越擅长暗中与我们作对,追求可能与人类生存相悖的隐藏目标。

Who Will Supervise A.I.?

谁来监管人工智能?

When we told ourselves we had to win the nuclear weapons race, we were dealing with a technology developed, owned and regulated exclusively by nation-states — and only a relatively small number of them at that. Once the two biggest nuclear powers decided it was in their mutual interest to impose limits, they could negotiate caps on the number of doomsday weapons and agreements to prevent their spread to smaller powers. It has not entirely prevented the spread of nuclear weapons to some medium powers, but it has curbed it.

当我们告诉自己必须赢得核武器竞赛时,我们面对的是一种完全由民族国家研发、拥有和监管的技术——而且这样的国家数量相对较少。一旦两个最大的拥核国认定限制核武器符合它们的共同利益,它们就可以通过谈判来限制这种末日武器的数量,并达成协议,防止其扩散到实力较弱的国家。这虽未完全阻止核武器扩散到一些中等强国,但有效遏制了扩散。

A.I. is a completely different story. It is not born in secure government laboratories, owned by a handful of states and regulated through summit meetings. It is being created by private companies scattered across the globe — companies that answer not to defense ministries but to shareholders, customers and sometimes open-source communities. Through them, anyone can gain access.

人工智能则是完全不同的故事。它并非诞生在政府实验室的安全环境中,只由少数几个国家掌握,能通过政府级别的峰会进行监管。它是由分布在世界各地的私有企业研发出来的,这些企业不向国防部负责,而是向股东、客户,有时甚至是开源社区负责。任何人都能通过这些公司获得该技术。

Imagine a world where everyone possesses a nuclear bazooka — one that grows more accurate, more autonomous and more capable of firing itself with every update. There is no doctrine of “mutually assured destruction” here — only the accelerating democratization of unprecedented power.

想象一下这样的世界:每个人都拥有一把核火箭筒,而且每次升级都让它变得更精确、更自主、更能自行开火。这个世界不存在“相互保证毁灭”的威慑原则,只有前所未有的力量在加速普及到所有人手中。

A.I. can super-empower good. For instance, an illiterate Indian farmer with a smartphone connected to an A.I. app can learn exactly when to plant seeds, which seeds to plant, how much water to use, which fertilizer to apply and when to harvest for the best market price — all delivered by voice in his own dialect and based on data collected from farmers worldwide. That truly is transformative.

人工智能可以极大地赋能有益的事物。例如,不识字的印度农民用智能手机接入一个人工智能应用程序,能准确地掌握何时播种、种什么作物、浇多少水、施哪种肥,以及何时收获能得到最好的市场价格,所有这些信息都用他的方言以语音方式传递,而且基于从全球农户那里收集的数据。这确实是变革性的。

But the very same engine, especially when available through open-source models, could be used by a malicious entity to poison every seed in that same region or engineer a virus into every sheaf of wheat.

但完全相同的技术引擎,尤其是通过开源模型得到的那种,能被一个恶意实体用来毒害同一地区的每一颗种子,或将病毒植入每一株麦穗。

When A.I. Becomes TikTok

当人工智能变成TikTok

Very soon A.I., because of its unique characteristics, is going to create some unique problems for U.S.-China trade that are not fully grasped today.

由于人工智能的独特属性,它很快将给美中贸易制造一些目前尚不完全清楚的独特问题。

As I alluded to at the top of the column, my way of explaining this dilemma is with a story that I told to a group of Chinese economists in Beijing during the China Development Forum in March. I joked that I recently had a nightmare: “I dreamed it was the year 2030 and the only thing America could sell China was soybeans — and the only thing China could sell America was soy sauce.”

正如我在专栏开头提到的,我解释这一困境的方式是通过一个我在3月“中国发展高层论坛”上讲给一群中国经济学家的故事。我开玩笑说,我最近做了一个噩梦:“那个梦发生在2030年,美国能卖给中国的东西只有大豆,而中国能卖给美国的只有酱油。”

Why? Because if A.I. is in everything and all of it is connected to powerful algorithms with data stored in vast server farms — then everything becomes a lot like TikTok, a service many U.S. officials today believe is ultimately controlled by China and should be banned.

为什么?因为如果所有产品都嵌入人工智能,并且全部连接着强大的算法、数据存储在庞大的服务器集群中,那么一切都会变得很像TikTok——目前有许多美国官员认为,这个平台最终受中国控制,美国应该禁止它。

Why did President Trump, in his first term, demand in 2020 that TikTok be sold to a non-Chinese company by its Chinese parent, ByteDance, or face a ban in the United States? Because, as he said in his executive order of Aug. 6, 2020, “TikTok automatically captures vast swaths of information from its users,” including their location and both browsing and search activities. This, he warned, could provide Beijing with a treasure trove of personal information on hundreds of millions of users. That information could be used to influence their thoughts and preferences, and even alter their behavior over time.

为何特朗普总统在首个任期里曾于2020年要求TikTok的中国母公司字节跳动将这款应用卖给一家非中国公司,否则将在美国面临禁令?因为,正如他在2020年8月6日的行政命令中所言,“TikTok自动获取来自用户的大量信息”,包括地理位置、浏览和搜索活动。他警告说,这可能会为中国政府提供一个数亿用户个人信息的宝库。这些信息可能被用来影响他们的想法和偏好,甚至随着时间的推移改变他们的行为。

Now imagine when every product is like TikTok — when every product is infused with A.I. that is gathering data, storing it, finding patterns and optimizing tasks, whether running a jet engine, regulating a power grid or monitoring your artificial hip.

现在想象一下,当所有产品都像TikTok那样——每一件商品都注入了能收集数据、存储数据、寻找模式并优化任务的人工智能,无论是运行喷气发动机、调控电网,还是监测你的人工髋关节。

Without a China-America framework of trust ensuring that any A.I. will abide by the rules of its host country — independent of where it is developed or operated — we could reach a point where many Americans will not trust importing any Chinese A.I.-infused product and no Chinese will trust importing one from America.

如果缺乏一个中美信任框架来确保所有人工智能都遵守其所在国的规则——无论它在哪里开发或运作——我们可能会走到这样一个局面:许多美国人不会信任任何从中国进口的人工智能产品,而中国人也不会信任任何从美国进口的产品。

That’s why we argue for “co-opetition” — a dual strategy where the United States and China compete strategically for A.I. excellence and also cooperate on a uniform mechanism that prevents the worst outcomes: deepfake warfare, autonomous systems going rogue or runaway misinformation machines.

正因如此,我们主张“合作竞争”这一双重战略——美中既在人工智能领域展开追求卓越的战略竞争,又就一套统一机制开展合作,以防止出现最糟糕的结果:深度伪造战、自主系统失控,或虚假信息机器肆虐。

Back in the 2000s, we were at a similar but slightly less consequential turning point, and we took the wrong fork. We naïvely listened to people like Mark Zuckerberg, who told us that we needed to “move fast and break things” and not let these emerging social networks, like Facebook, Twitter and Instagram, be hindered in any way by pesky regulations, such as being responsible for the poisonous misinformation they allow to spread on their platforms and the harms they do, for instance, to young women and girls. We must not make that same mistake with A.I.

在21世纪初,我们曾处于一个类似但影响略小的转折点,当时我们选择了错误的岔道。我们天真地听信了马克·扎克伯格等人的话,他们对我们说,需要“快速行动,打破常规”,不能让Facebook、Twitter、Instagram等新兴社交网络受任何繁琐监管的阻碍,比如要求它们为平台上泛滥的有害虚假信息、或为其带来的伤害(比如对年轻女性和女孩的伤害)负责。我们绝不能在人工智能监管上犯同样的错误。

“The best way to understand it emotionally is we are like somebody who has this really cute tiger cub,” Geoffrey Hinton, the computer scientist who is a godfather of A.I., recently pointed out. “Unless you can be very sure that it’s not going to want to kill you when it’s grown up, you should worry.”

“从情感上理解这个问题的最佳方式是,我们就像养了只可爱的虎崽,”计算机科学家、人工智能教父杰弗里·辛顿最近指出。“除非你能非常确定它长大后不会想要杀死你,否则你就应该感到担忧。”

It would be a terrible irony if humanity finally created a tool that could help create enough abundance to end poverty everywhere, mitigate climate change and cure diseases that have plagued us for centuries, but we could not use it on a large scale because the two A.I. superpowers did not trust each other enough to develop an effective system to prevent A.I. from being used by rogue entities for globally destabilizing activities or going rogue itself.

如果人类终于创造出一种工具,它能够帮助创造出足够的丰裕来消除世界各地的贫困,减轻气候变化,治愈困扰了我们几个世纪的顽疾,却因为两大人工智能强国彼此缺乏足够的信任,无法建立有效的防护系统,最终导致该技术被不法实体用于破坏全球稳定的活动,或人工智能自身出现失控,那将是一个极大的讽刺。

But how do we avoid this?

但我们如何避免这种情况呢?

Building In Trust

建立信任

Let’s acknowledge up front: It may be impossible. The machines may already be becoming too smart and able to elude ethical controls, and we Americans may be getting too divided, from one another and from the rest of the world, to build any kind of shared trust framework. But we have to try. Mundie argues that a U.S.-China A.I. arms control regime should be anchored in three core principles.

首先我们必须承认:这或许根本不可能实现。机器也许已经变得过于智能,能够逃避伦理约束,而我们美国人也可能彼此之间以及与世界其他国家之间的分裂太深,以至于无法建立任何形式的共享信任框架。但我们仍须尝试。芒迪认为,美中人工智能军备控制体系应基于三大核心原则。

First: Only A.I. can regulate A.I. Sorry, humans — this race is already moving too fast, scaling too widely and mutating too unpredictably for human analog-era oversight. Trying to govern an autonomous drone fleet with 20th-century institutions is like asking a dog to regulate the New York Stock Exchange: loyal, well meaning, but wildly overmatched.

第一:唯有人工智能能够监管人工智能。抱歉了,人类——这场竞赛发展得太快、扩展得太广、变异太难预测,人类模拟时代的监管方式已无法应对。试图用20世纪的机构来管理一支自主无人机编队,就像让一条狗来监管纽约证券交易所:它忠诚、好意,却完全力不从心。

Second: An independent governance layer, what Mundie calls a “trust adjudicator,” would be installed in every A.I.-enabled system that the U.S. and China — and any other country that wants to join them — would build together. Think of it as an internal referee that evaluates whether any action, human-initiated or machine-driven, passes a universal threshold for safety, ethics and human well-being before it can be executed. That would give us a basic level of pre-emptive alignment in real time, at digital speed.

第二:给每个人工智能系统安装一个芒迪称为“信任裁决者”的独立治理层,这个治理层由美国和中国——以及任何愿意加入的国家——共同构建。可将其视为一个内置裁判,对无论是人类发起还是机器驱动的任何行动进行评估,在执行行动前,判断其是否满足安全、道德,以及人类福祉的通用阈值。这将以数字时代的速度,为我们提供实时预判的基本保障。

But adjudicate based on whose values? It must, Mundie argues, be based on several substrates. These would include the positive laws that every country has mandated — we all outlaw stealing, cheating, murder, identity theft, defrauding, etc. Every major economy in the world, including the United States and China, has its version of these prohibitions on the books, and the A.I. “referee” would be entrusted with evaluating any decision on the basis of these written laws. China would not be asked to adopt our laws or we theirs. That would never work. But the trust adjudicator would ensure that each nation’s basic laws are the first filter for determining that the system will do no harm.

但基于谁的价值观进行裁决呢?芒迪认为,裁决必须基于多重基础。包括每个国家强制执行的现有法律——我们都有法律禁止偷窃、欺骗、谋杀、身份盗用、诈骗等行为。全球每个主要经济体(包括美国和中国)的法典中都有此类禁令,人工智能“裁判”将被委托以这些成文法律为基础,对所有决定进行评判。中国无需采纳我国法律,我们亦不照搬其法规,因为这绝不可行。但“信任裁决者”会确保各国基本法律成为确定系统不会造成危害的第一道防线。

In cases where there are no written laws to choose from, the adjudicator would rely on a set of universal moral and ethical principles known as doxa. The term comes from the ancient Greek philosophers to convey “common beliefs” or widely shared understandings within a community — principles like honesty, fairness, respect for human life and do unto others as you wish them to do unto you — that have long guided societies everywhere, even if they were not written down.

在没有成文法律可供选择的情况下,裁决者会依据一套被称为“信念”(doxa)的普世道德伦理原则。古希腊哲学家用这个词来表达“共同信念”或一个社群内部持有的共识——如诚实、公平、尊重生命,以及己所不欲勿施于人等原则,这些原则长期以来一直指引着世界各地的社会,即使没有写成文字。

For instance, like many people, I didn’t learn that lying was wrong from the Ten Commandments. I learned it from the fable about George Washington and what he said after he chopped down his father’s cherry tree: He supposedly confessed, “I cannot tell a lie.” Fables work because they distill complex truths into memorable memes that machines can absorb, parse and be guided by.

例如,像许多人一样,我并非通过《十诫》知晓撒谎是错误的。我是从乔治·华盛顿的寓言故事里学到的,据说他砍倒了父亲种的樱桃树后承认是自己干的:“我不能撒谎。”寓言故事之所以有效,是因为它将复杂真理提炼为机器可吸收、解析并遵循的记忆模因。

Indeed, six months ago Mundie and some colleagues took 200 fables from two countries and used them to train a large language model with some rudimentary moral and ethical reasoning — not unlike the way you would train a young child who doesn’t know anything about legal codes or basic right and wrong. It was a small experiment but showed promise, Mundie says.

实际上,六个月前,芒迪和一些同事从两个国家选取了200个寓言故事,用以训练具备基础道德推理能力的大语言模型——这与教导不懂法律法规和基本是非的幼童方式异曲同工。芒迪说,这个小型实验已展现出积极前景。

The goal is not perfection but a foundational set of enforceable ethical guardrails. As the author and business philosopher Dov Seidman likes to say, “Today we need more moralware than software.”

目标并非追求完美,而是建立一套可执行的基础道德护栏。正如研究商业哲学的作家多夫·塞德曼喜欢说的:“如今我们需要的‘道德软件’(moralware)要比普通软件更多。”

Third: To turn this aspiration into reality, Mundie insists, Washington and Beijing would need to approach the challenge the way the United States and the Soviet Union once approached nuclear arms control — through a structured process with three dedicated working groups: one focused on the technical application of a trust evaluation system across models and platforms; one focused on drafting the regulatory and legal frameworks for adoption within and across nations; and one devoted squarely to diplomacy — forging global consensus and reciprocal commitments for others to join and creating a mechanism to protect themselves from those who won’t.

第三:芒迪强调,要把这个愿望变为现实,华盛顿和北京需要用美苏当年在核武器控制问题上曾经用过的方法来应对这个挑战,也就是通过设立三个专项工作组的结构化进程:一个工作组专注信任评估系统在跨模型与平台的技术应用;一个工作组的重点是起草监管和法律框架,以供本国和其他国家采用;一个工作组专司外交事务——构建全球共识与互惠承诺机制,吸引他国加入并建立防护体系抵御拒不参与者。

The message from Washington and Beijing would be simple and firm: “We have created a zone of trusted A.I. — and if you want to trade with us, connect with us or integrate with our A.I. systems, your systems must comply with these principles.”

能做到这点的话,华盛顿与北京传递的信息将简洁而坚定:“我们已经建立了一个值得信赖的人工智能区,如果你们想与我们进行贸易、与我们连接,或融入我们的人工智能系统,你们的系统必须遵守这些原则。”

Before you dismiss this as unrealistic or implausible, pause and ask yourself: What will the world look like in five years if we don’t? Without some kind of mechanism to govern this quadruple-use technology, Mundie argues, we will soon discover that the proliferation of A.I. “is like handing out nuclear weapons on street corners.”

在你认为这个想法不切实际或不合情理之前,请停下来问问自己:如果我们不这样做,五年后的世界会是什么样子?如果没有某种机制来管理这种四重用途的技术的话,芒迪认为,我们很快会发现,人工智能的扩散“就像在街角分发核武器”。

Don’t think Chinese officials are unaware of this. Mundie, who is part of a dialogue on A.I. with U.S. and Chinese experts, says he often senses the Chinese are far more worried about A.I.’s downsides than are many in American industry or government.

不要以为中国官员不知道这个问题。芒迪参与了美中专家就人工智能展开的对话,他说,他常常觉察到,中方对人工智能风险的忧虑程度远超过许多美国产业界与政府人士。

If someone out there has a better idea, we would love to hear it. All we know is that training A.I. systems in moral reasoning must become a global imperative while we still retain some edge and control over this new silicon-based species. This is an urgent task not just for tech companies, but also for governments, universities, civil society and international institutions. European Union regulation alone will not save us.

如果有人有更好的想法,我们乐意倾听。唯一确定的是,趁我们对这个硅基新物种仍有一定优势和控制的时候,训练人工智能进行道德推理必须成为全球当务之急。这不只是科技公司的当务之急,也是各国政府、高校、公民社会以及国际组织的当务之急。仅靠欧盟的监管救不了我们。

If Washington and Beijing fail to rise to this challenge, the rest of the world won’t stand a chance. And the hour is already late. The technological temperature is hovering at 211.9 degrees Fahrenheit. We are one-tenth of a degree away from fully unleashing an A.I. vapor that will trigger the most important phase change in human history.

如果华盛顿和北京不能应对这个挑战的话,世界其他国家更没有成功的可能。而且时不我待。技术温度已徘徊在99.9摄氏度。我们距离全面释放人工智能蒸汽只差0.1度,那将引发人类历史上最重要的相变。

托马斯·L·弗里德曼(Thomas L. Friedman)是外交事务方面的专栏作者。他1981年加入时报,曾三次获得普利策奖。他著有七本书,包括赢得国家图书奖的《从贝鲁特到耶路撒冷》(From Beirut to Jerusalem)。欢迎在Twitter和Facebook上关注他。

翻译:纽约时报中文网
