Confronting Hallucinatory AI: The End of Accountability?
- FULR Management
- Apr 1, 2024
- 6 min read
Updated: Aug 22, 2024
By Sebastian Smith '25
When a person makes the conscious decision to end another's life, the law swiftly holds them accountable, often with grave consequences. But imagine a world in which hallucination waives accountability entirely: if you were hallucinating when you pulled the trigger, you bear no culpability at all. While seemingly dystopian, this elusiveness of accountability is the reality of “hallucinatory” artificial intelligence.
Despite its nearly universal reach, AI is by no means perfect; it is plagued by pervasive “hallucinations.” AI hallucination is a phenomenon in which large language models, the ubiquitous class of algorithms powering modern AI, produce inaccurate outputs. (1) Essentially, the model identifies patterns and relationships that do not exist, generating responses untethered from reality. Considering AI's monumental influence on contemporary life, the widespread emergence of hallucinations poses an imminent danger.
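To see this mechanism in miniature, consider the toy sketch below, written in Python purely for illustration; the tiny “corpus” and the sample output are invented for this example and drawn from no real system. It is a two-word (“bigram”) model that picks each next word by observed frequency alone. Because it optimizes for statistical plausibility rather than truth, it can splice fragments of real sentences into a fluent claim that no source ever made, which is, at a vastly smaller scale, the behavior behind LLM hallucinations.

```python
# Toy illustration only: a miniature "bigram" language model. Real LLMs are
# enormously more sophisticated, but they share this core trait: each next
# word is chosen for statistical plausibility, with no check against truth.
import random
from collections import defaultdict

# A made-up three-sentence training corpus (hypothetical, for illustration).
corpus = [
    "the court held the defendant liable for defamation",
    "the chatbot held a conversation about trade secrets",
    "the defendant misappropriated trade secrets",
]

# Record which words were observed to follow each word.
following = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        following[current].append(nxt)

# Generate text by repeatedly sampling a plausible next word.
word, output = "the", ["the"]
for _ in range(8):
    options = following.get(word)
    if not options:
        break
    word = random.choice(options)  # plausibility, not truth
    output.append(word)

print(" ".join(output))
# One possible output: "the court held a conversation about trade secrets"
# (a fluent sentence stitched from real fragments that no source ever stated)
```

Nothing in this loop represents intent; the program has no notion of what its words assert, which is precisely the accountability gap the rest of this article examines.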
Extremely powerful and relatively cheap, AI is spreading faster than even computers did, touching every imaginable industry. (2) From connecting to vital infrastructure such as power grids, stock markets, and roadways, to diagnosing patients alongside healthcare professionals, to overtaking traditional search engines, there is no escaping its unprecedented influence. (3) This is why AI hallucination warrants significant concern: as AI stretches into nearly every realm of human endeavor, its errors will only grow more costly.
Accountability provides the mechanism to manage AI development, mitigating hallucinatory outcomes by imposing prompt discipline for errors and forcing algorithms to be corrected and improved as necessary. Implemented effectively, accountability creates order and stability, ensuring responsible development. Just as we were taught to take responsibility for our own actions, artificial intelligence must internalize the same sense of responsibility if incorrigible hallucinations are to be prevented. In this article, I will explore the strenuous battle to hold AI accountable and the grave consequences of our failure to do so.
To hold an entity accountable in the eyes of the law, an element of intent is commonly required. Our current legal codes and regulatory initiatives are unequipped to attribute accountability to AI, because intentionality cannot be assigned to a non-sentient entity. AI merely emulates human intelligence through advanced algorithms; it cannot replicate the consciousness behind human behavior and is entirely devoid of intention. Furthermore, confining accountability to programmers neglects AI's emergent power to act independently of human influence, purely on its own autonomous reflexes. This problem reveals glaring vulnerabilities in our regulatory frameworks and declaratory initiatives, such as those authored by the Biden administration and the European Union, that enable AI to hallucinate without restraint.
In 2023, President Biden issued Executive Order 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, in hopes of laying the groundwork for a future marked by responsible AI. While an admirable effort, the order has a fundamental shortcoming. On accountability, it asserts that “it is necessary to hold those developing and deploying AI accountable.” (4) Biden's conception of accountability, however, is limited to human programmers and fails to target AI itself, which often pursues ventures its original programmers could neither intend nor foresee. AI's hallucinations are simply not attributable to a specific act or negligent step by its developers, making it highly difficult to hold any human actor accountable.
The tendency of AI to act and grow independently of any human programming poses a significant challenge to holding it accountable for its hallucinations. Consider, for example, a scenario in which generative AI propagates incorrect, defamatory information that harms an innocent man's reputation, as experienced by Professor Mark Lemley, director of Stanford's Program in Law, Science and Technology. In 2023, the generative AI chatbot ChatGPT claimed that he was a criminal; namely, that he had misappropriated trade secrets. (5) In reality, he had done no such thing. Typically, when defamatory information is disseminated, a defamation claim would be filed. Defamation is an intentional tort, requiring the plaintiff to prove a wrongful volitional act by the defendant. (6) A volitional act requires that the defendant acted with a mindful will. (7)
However, establishing intent or a “mindful will” is impossible with AI. AI does not intend to propagate false information or to harm an individual's reputation; it merely produces an output derived from its underlying algorithm. As Professor Lemley explains, “The company that runs the AI is not doing anything deliberate. They don't necessarily know what the AI is going to say in response to any given prompt.” (8) Relying on precedent may be equally fruitless, as extensive case law on AI liability remains elusive. (9) AI cannot be held accountable for defamation, and thus it manages to evade responsibility.
This problem is not confined to the United States. The European Union issued the European Declaration on Digital Rights and Principles for the Digital Decade, a document widely regarded as among the most comprehensive outlines for managing AI's revolutionary impact. The declaration affirms that the EU will “hold accountable those that seek to undermine security online and the integrity of the digital environment.” (10) Once again, we are confronted with the issue of intentionality, or the lack thereof. AI is a non-sentient entity operated by its underlying algorithm; it simply cannot intend anything. Consequently, even if an autonomous algorithm truly does undermine the security and integrity of the digital environment, as the declaration forbids, it avoids accountability, because it does not “seek” to do so at all.
Hallucinations are often dismissed as trivial, laughable missteps by our robotic friends. This could not be further from the truth. According to the start-up Vectara, generative AI chatbots deployed by reputable companies such as Google and Microsoft have been found to hallucinate at rates as high as 27 percent. (11) With AI endowed with such unprecedented power, the lack of accountability for its prevalent hallucinations carries staggering consequences.
The current trend of automation is displacing human labor with AI in nearly every field, and the pace is accelerating. Important decisions historically made by humans, as simple as when to turn your car and as dire as tactical military operations, are now being made by algorithms, leaving millions of human lives vulnerable to AI hallucinations. Take AI's role in operating autonomous vehicles: if the algorithm hallucinates, the potential consequences are as grave as a fatal accident. Or, less deadly, consider AI's role as a search engine and information resource: if the algorithm hallucinates, incorrect and defamatory personal information may be disseminated, destroying an innocent person's reputation.
The hard truth is that we are unequipped to hold AI effectively accountable for its hallucinations. Our failure to impose responsibility opens hazardous legal and regulatory gaps, rendering our systems of oversight vulnerable to exploitation. This culminates in far greater security concerns, with humanity relinquishing its autonomy to the discretion of hallucinatory technological power.
However, the urgency of this predicament is not shared by all. Many tech developers maintain that applying legal accountability to hallucinatory AI will hinder innovation and the genuine improvement of algorithms. Instead, firms will be forced to guard against potential legal exposure, discouraged from pursuing ambitious innovation projects, and may even leave the country altogether. Peter Diamandis, a tech entrepreneur and founder of the XPRIZE Foundation and the Zero Gravity Corporation, affirms this idea: “If the government regulates against use of [artificial intelligence]...the work and the research leave the borders of that country and go someplace else.” (12)
This traditional laissez-faire position, fervently resisting intervention, enables reckless innovation justified under the veil of productive advancement. Developments created amid deregulation carry hazards that go uncorrected by procedural safeguards. Proper legal regulation ensures that AI research and development remain responsible, prioritizing citizens' security and safety.
As I'm sure many of you do, I hope policymakers, tech developers, and judges can reach a compromise on the critical issue of hallucinatory AI and accountability, paving the way for a future marked by responsible artificial intelligence. Until then, we are on our own. We must remain cognizant of our reliance on AI and cautiously analyze its risks, lest we become the next victims of hallucination.
Endnotes
“What Are AI Hallucinations?” IBM, 2024, www.ibm.com/topics/ai-hallucinations.
“The Sunny and the Dark Side of AI.” The Economist, 28 Mar. 2018, www.economist.com/special-report/2018/03/28/the-sunny-and-the-dark-side-of-ai.
Metz, Cade. “How Could A.I. Destroy Humanity?” The New York Times, 10 June 2023, www.nytimes.com/2023/06/10/technology/ai-humanity.html.
Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The White House, 2023.
Weber, Tomas. “Artificial Intelligence and the Law: Legal Scholars on the Potential for Innovation and Upheaval.” Stanford Law School, 5 Dec. 2023, law.stanford.edu/stanford-lawyer/articles/artificial-intelligence-and-the-law/.
“What Is a Volitional Act?” LexRoll Law Encyclopedia, LexRoll, 2023, encyclopedia.lexroll.com/encyclopedia/volitional-act/.
Ibid.
Weber, Tomas. “Artificial Intelligence and the Law.” Stanford Law School, 5 Dec. 2023, law.stanford.edu/stanford-lawyer/articles/artificial-intelligence-and-the-law/.
Ibid.
European Union. “European Declaration on Digital Rights and Principles for the Digital Decade.” European Commission, 2022.
Metz, Cade. “Chatbots May ‘Hallucinate’ More Often than Many Realize.” The New York Times, 6 Nov. 2023, www.nytimes.com/2023/11/06/technology/chatbots-hallucination-rates.html.
Diamandis, Peter. Twitter, 13 May 2021, twitter.com/iadouala/status/1392849243858259970.