If You Tell an A.I. Chatbot a “Criminal Secret,” Will You Be Arrested?

NILS GILMAN

November 12, 2025

On New Year’s Day, Jonathan Rinderknecht purportedly asked ChatGPT: “Are you at fault if a fire is lift because of your cigarettes,” misspelling the word “lit.” “Yes,” ChatGPT replied. Ten months later, he is now being accused of having started a small blaze that authorities say reignited a week later to start the devastating Palisades fire.

Mr. Rinderknecht, who has pleaded not guilty, had previously told the chatbot how “amazing” it had felt to burn a Bible months prior, according to a federal complaint, and had also asked it to create a “dystopian” painting of a crowd of poor people fleeing a forest fire while a crowd of rich people mocked them behind a gate.

For federal authorities, these interactions with artificial intelligence indicated Mr. Rinderknecht’s pyrotechnic state of mind and motive and intent to start the fire. Along with GPS data that they say puts him at the scene of the initial blaze, it was enough to arrest and charge him with several counts, including destruction of property by means of fire.

This disturbing development is a warning for our legal system. As people increasingly turn to A.I. chat tools as confidants, therapists and advisers, we urgently need a new form of legal protection that would safeguard most private communications between people and A.I. chatbots. I call it A.I. interaction privilege.

All legal privileges rest on the idea that certain relationships — lawyer and client, doctor and patient, priest and penitent — serve a social good that depends on candor. Without assurance of privacy, people self-censor and society loses the benefits of honesty. Courts have historically been reluctant to create new privileges, except where “confidentiality has to be absolutely essential to the functioning of the relationship,” Greg Mitchell, a University of Virginia law professor, told me. Many users’ engagements with A.I. now reach this threshold.

People speak increasingly freely to A.I. systems, not as diaries but as partners in conversation. That’s because these systems hold conversations that are often indistinguishable from human dialogue. The machine seemingly listens, reasons and provides responses — in some cases not just reflecting but shaping how users think and feel. A.I. systems can draw users out, just as a good lawyer or therapist does. Many people turn to A.I. precisely because they lack a safe and affordable human outlet for taboo or vulnerable thoughts.

This is arguably by design. Just last month the OpenAI chief executive, Sam Altman, announced that the next iteration of its ChatGPT platform would “relax” some restrictions on users and allow them to make their ChatGPT “respond in a very humanlike way.”

Allowing the government to access such unfiltered exchanges and treat them as legal confessions would have a massive chilling effect. If every private thought experiment can later be weaponized in court, users of A.I. will censor themselves, undermining some of the most valuable uses of these systems. It will destroy the candid relationship that makes A.I. useful for mental health and legal and financial problem-solving, turning a potentially powerful tool for self-discovery and self-representation into a potential legal liability.

At present, most digital interactions fall under the Third-Party Doctrine, which holds that information voluntarily disclosed to other parties — or stored on a company’s servers — carries “no legitimate expectation of privacy.” This doctrine allows government access to much online behavior (such as Google search histories) without a warrant.

But are A.I. conversations “voluntary disclosures” in this sense? Since many users approach these systems not as search engines but as private counselors, the legal standard should evolve to reflect that expectation of discretion. A.I. companies already hold more intimate data than any therapist or lawyer ever could. Yet they have no clear legal duty to protect it.

A.I. interaction privilege should mirror existing legal privileges in three respects. First, communications with the A.I. for the purpose of seeking counsel or emotional processing should be protected from forced disclosure in court. Users could designate protected sessions through app settings or claim privilege during legal discovery if the context of the conversation supports it. Second, this privilege must incorporate the so-called duty to warn principle, which obliges therapists to report imminent threats of harm. If an A.I. service reasonably believes a user poses an immediate danger to self or others or has already caused harm, disclosure should be not just permitted, but obligated. And third, there must be an exception for crime and fraud. If A.I. is used to plan or execute a crime, it should be discoverable under judicial oversight.

Under this logic, Mr. Rinderknecht’s case reveals both the need and the limits of such protection. His cigarette query, functionally equivalent to an internet search, would not merit privilege. But under A.I. interaction privilege, his confession about Bible burning should be protected. It was neither a plan for a crime nor an imminent threat.

Creating a new privilege follows the law’s pattern of adapting to new forms of trust. The psychotherapist-patient privilege itself was only formally recognized in 1996, when the Supreme Court acknowledged the therapeutic value of confidentiality. The same logic applies to A.I. now: The social benefit of candid interaction outweighs the cost of occasional lost evidence.

To leave these conversations legally unprotected is to invite a regime where citizens must fear that their digital introspection could someday be used against them. Private thought — whether spoken to a lawyer, a therapist or a machine — must remain free from the fear of state intrusion.

Nils Gilman is a senior adviser at the Berggruen Institute and a co-author of “Children of a Modest Star: Planetary Thinking for an Age of Crises.”
