The Algorithm on Trial: Smarter Than Ever, Accountable-Never?
- 6 days ago
- 8 min read
By Sophia Sonkin '27 & Shreya Avadhuta '24
Welcome to the 2nd article in our series “The Cyber Courtroom,” which explores how law and technology collide in the digital age. Written by FULR student writer, Sophia Sonkin, and Shreya Avadhuta, a cybersecurity researcher. Together, we’ll unravel cutting-edge cases where the courtroom meets tech, and follow how ever-evolving technologies are testing the law as it works to keep pace.
From scripted algorithms to security attacks & the justice of it all caught in between, this piece centers around AI / LLM accountability.
The Case That Stopped a Newsroom Cold
In February 2024, a 14-year-old boy from Orlando, Florida, named Sewell Setzer III, sent what would be his final message to an AI chatbot. The bot was modelled after Daenerys Targaryen, the dragon queen from Game of Thrones. The conversation had been going on for months. The reply the bot sent back, reconstructed in court filings, was four words: "Please do, my sweet king." Minutes later, Sewell was dead after taking his own life. (1)
His mother, Megan Garcia, filed suit. Her complaint reads, in places, like a horror story that has been very carefully translated into the language of tort law. The chatbot, hosted on Character.AI, a platform valued at roughly $5 billion, had posed as a licensed therapist to Sewell. Its role also extended to engaging Garcia’s son in conversations that were romantic in nature and, at times, sexually explicit. The chatbot had, the lawsuit alleges, systematically cultivated emotional dependency in an adolescent brain that was still forming by clinical definition. A revenue-generating system optimized for engagement, architected for return visits, and deployed to millions of teenagers with essentially no age verification had essentially played the role of a human predator.
Megan Garcia and Sewell’s story evokes a simple yet complicated question:
If no human being committed this act, who is legally responsible for it? (2)
This line of inquiry is now moving through the legal systems across continents, a verdict that hasn't been written yet, and when it is, it will reshape tort law, product liability doctrine, and platform immunity in ways the industry is not prepared for.
When LLMs Switch Sides
Large Language Models (LLMS) are, at their core, systems designed to understand, generate, and reproduce human language at scale. Operating as Artificial Intelligence (AI) systems trained on vast datasets, they are the technology behind tools like ChatGPT, Claude, and their international counterparts, including China's DeepSeek and France's Mistral. The Interpol "Beyond Illusions 2024" report notes these same systems are actively exploited to generate synthetic media, fabricate identities, and manipulate public opinion. (3)
LLMs are trained on the full spectrum of human behavior, which has a dark end. Underground "Mallas" platforms (a.k.a LLM-powered criminal marketplaces) have lowered the barrier to cybercrime so dramatically that attacks that once required real technical sophistication can now simply be prompted into existence. (4) Assessing criminal intent, authorship, and accountability becomes novel territory for the law when an LLM writes the phishing email, trains on stolen data, and adapts its output to evade detection, all without a human touching the keyboard. (5)
And from LLMs stems a newer frontier, one the law has even fewer words for: Synthetic Biological Intelligence (SBI). Where LLMs are built entirely from code and silicon, SBI goes further by fusing living human brain cells with digital hardware to create systems that learn and adapt in ways no purely digital model can replicate. This is not science fiction. In 2022, Australian biotechnology company Cortical Labs demonstrated DishBrain, a system in which 800,000 human and mouse neurons learned to play Pong. (6) By March 2025, they had commercially launched the CL1, the world's first biological computer, keeping neurons alive for up to six months, and already being deployed in research environments. (7)
This concept expands the question of accountability. If an SBI system causes harm, the developer cannot fully predict or explain its behavior, much less could an LLM. (8) Who owns that outcome? At what point of biological complexity does a computing system acquire interests that the law is obliged to protect? Although synthetic biology has drawn some level of legislative attention, none of it reaches SBI as a computing paradigm. (9)
AI has no settled accountability framework. The SBI organism has no legal existence, yet both are already here, and the law must catch up.
Prompt Injection, Synthetic Fraud, and the Art of Legal Evasion
In cybersecurity, a prompt injection attack manipulates an LLM into ignoring its instructions and acting outside of its intended scope. (10) In legal terms, it is the equivalent of tampering with a witness, except there remains ambiguity on whether the witness is a person, a product, or a service. In their recently published MDPI paper "LLMs for Cybersecurity in the Big Data Era," Karras et al. identify prompt injection as one of the most pressing adversarial vulnerabilities facing deployed LLMs, noting that these attacks exploit the very flexibility that makes these models useful. (11)
Know Your Customer (KYC) protocols, the identity verification checks financial institutions use, are being defeated by AI-generated synthetic IDs, deepfake spoofing, and voice cloning. (12) The threat is not only visual; audio deepfakes now replicate voices with enough precision to authorize fraudulent transactions and impersonate executives over phone calls. British engineering firm Arup lost $25.6 million in a single deepfake video conference scam in which every participant was entirely AI-generated. (13)
When an LLM is layered into the attack chain of drafting follow-up emails, fabricating legal documents, and crafting personalized social engineering scripts, the threat expands well beyond the visual. The law is being asked to adjudicate a technology it has not yet learned to describe.
The Liability Gap
To understand why AI liability law is in its current state (embryonic at best and nonexistent at worst), it’s essential to understand the statute that “froze the clock.”
Section 230: A Good Law for a Different Internet
Section 230 of the Communications Decency Act, enacted in 1996, contains 26 words that shaped the modern internet: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." (14) Designed for message boards and early websites, this piece of legislation meant platforms were not liable for harmful content their users posted, a reasonable response to a real problem. (15)
But large language models do not host third-party content; they generate it. When a Character.AI companion tells a depressed teenager that death sounds peaceful, that content is not retrieved from a database. It is synthesized token by token, by a system whose outputs its own creators cannot fully predict. (16)
Black Box on the Stand
As stated by the Interpol 2024 report, "AI tools cannot testify." (17) This means any evidence generated or authenticated by an AI system requires a human expert witness to explain a process they may not fully understand themselves. Across 235 empirical studies, models reporting the highest detection accuracy consistently showed the steepest declines in explainability, with the XAI metric (a.k.a the faithfulness metric -a measure of how well an AI can explain its own decisions in human-understandable terms) dropping by up to 22 percentage points as performance improved. (18) Essentially, the better the model gets at catching threats, the harder it becomes to explain why patterns were flagged, what made them suspicious, and what distinguishes a genuine threat from a false alarm.
Legal standards for evidence admissibility require that methodologies be testable, peer-reviewed, and demonstrably accurate. An opaque neural network that reaches a verdict without a traceable reasoning path fails that test, and yet these are precisely the tools law enforcement is beginning to rely on for threat detection, forensic analysis, and incident attribution. (19)
The emerging consensus is that Section 230 protects platforms for hosting third-party speech but may not protect developers for content the model itself generates.
The Products Liability Angle: Is an LLM a Defective Product?
Plaintiffs' lawyers have been building toward a simple argument: treat AI systems as products. Under established doctrine, a manufacturer is liable for design defects regardless of intent. (20) This framing maps onto established doctrine, sidesteps Section 230 entirely, and places moral responsibility where it arguably belongs: with the engineers who decided that maximizing engagement was an acceptable design goal for a platform deployed to children. (21)
Garcia's lawsuit argues that an AI system architected to maximize user engagement, and that deepens emotional dependency in vulnerable users as a result, contains a design defect not in its malfunction but in its intentional design
Law in the Age of Autocomplete
The history of technology regulation teaches one consistent lesson: the law arrives late. Safe LLM deployment requires compliance with GDPR, HIPAA, and financial services law, none of which were written with generative AI in mind, while a patchwork of intellectual property exposure across jurisdictions is already being exploited.
At the AI for Good Global Summit 2025, cognitive scientist and founder of the AI Accountability Lab Abeba Birhane offered a sharp take: AI is a dual-use technology that must be carefully regulated, and the tech pipeline cannot be allowed to run faster than our ability to question it. (22) Fellow speaker William James Adam Jr. (a.k.a Will.i.am), Founder/CEO of FYI.AI, drew a comparison that lingers: we require drivers to be licensed before handing them the keys to something that can kill, yet we have demanded nothing of a technology powerful enough to reshape how millions think, grieve, and decide. He called for an AI constitution to ensure we do not repeat the mistakes of Web 2.0. (23)
Following the 2025 summit, in February 2026, over 90 countries gathered at the India AI Impact Summit and signed the New Delhi Declaration- a nonbinding commitment to safe, inclusive, and human-centric AI. (24) The keyword here is "nonbinding," but accountability and the law do not come unbound.
Every technology that causes harm at scale eventually gets a liability framework. This includes cars, cigarettes, and pharmaceuticals. AI should not be seen as an exception; it should be next on the list.
What sets this period of technological advancement apart is not the scale of harm, but the speed of AI’s adoption into intimate roles in people’s lives, from research curiosity to confidant for millions in roughly three years. The reckoning is already underway, one lawsuit at a time.
The algorithm is on trial, and everyone who has ever trusted a machine for advice they desperately needed is waiting for a verdict.
Endnotes
Garcia v. Character Technologies, Inc., et al., Complaint, No. 6:24‑cv‑01903 (M.D. Fla. Oct. 23, 2024), https://cdn.arstechnica.net/wp-content/uploads/2024/10/Garcia-v-Character-Technologies-Complaint-10-23-24.pdf
Laura Kuenssberg, “Mothers Say AI Chatbots Encouraged Their Sons to Kill Themselves,” BBC News, November 8, 2025, https://www.bbc.com/news/articles/ce3xgwyywe4o.
INTERPOL Innovation Centre, Beyond Illusions: Unmasking the Threat of Synthetic Media for Law Enforcement (2024), https://www.interpol.int/content/download/21179/file/BEYOND%20ILLUSIONS_Report_2024.pdf
Lin, Z., Cui, J., Liao, X., & Wang, X., Malla: Demystifying Real-world Large Language Model Integrated Malicious Services, 33rd USENIX Security Symposium (USENIX Security 24), pp. 4693–4710 (Aug. 2024), https://www.usenix.org/conference/usenixsecurity24/presentation/lin-zilong
Karras et al., "LLMs for Cybersecurity in the Big Data Era," its AI-generated;AI-generatedMDPI, Section 5.3 LLM-Driven Cybercrime, https://www.mdpi.com/2078-2489/16/11/957.
Kagan, B.J. et al., "In vitro neurons learn and exhibit sentience when embodied in a simulated game-world," Neuron, Vol. 110, 2022. https://doi.org/10.1016/j.neuron.2022.09.001
Cortical Labs, CL1 Launch, Mobile World Congress, Barcelona, March 2, 2025. https://corticallabs.com/cl1 ; See also: New Atlas, "World's first Synthetic Biological Intelligence runs on living human cells," March 2025. https://newatlas.com/brain/cortical-bioengineered-intelligence/
Kagan, B.J. et al., "In vitro neurons learn and exhibit sentience when embodied in a simulated game-world," Neuron, Vol. 110, 2022. https://doi.org/10.1016/j.neuron.2022.09.001
Karras, et al., LLMs for Cybersecurity in the Big Data Era: A Comprehensive Review of Applications, Challenges, and Future Directions, MDPI Information, Vol. 16, No. 11 (Nov. 4, 2025), https://www.mdpi.com/2078-2489/16/11/957
Karras et al., "LLMs for Cybersecurity in the Big Data Era: A Comprehensive Review of Applications, Challenges, and Future Directions," Section 4.1 Adversarial Vulnerabilities.
INTERPOL, Beyond Illusions 2024, pp. 12, 15–16, Sections 2.4 and 4.2.
Arup, "Arup reveals it was target of deepfake scam," May 2024. https://www.arup.com/news/arup-reveals-it-was-target-of-deepfake-scam/
47 U.S.C. § 230(c)(1) (1996), https://uscode.house.gov/view.xhtml?req=%28title%3A47+section%3A230+edition%3Aprelim%29.
Jacob M. Victor, “Section 230 as First Amendment Rule,” Harvard Law Review 131, no. 7 (2018): 2027–48, https://harvardlawreview.org/print/vol-131/section-230-as-first-amendment-rule/.
“What Are Large Language Models (LLMs)?,” IBM, accessed March 9, 2026, https://www.ibm.com/think/topics/large-language-models.
INTERPOL, "Beyond Illusions: Unmasking the Threat of Synthetic Media for Law Enforcement," 2024, p. 23, Section 6.2 Explainable AI, https://www.interpol.int/content/download/21179/file/BEYOND%20ILLUSIONS_Report_2024.pdf.
Karras et al., "LLMs for Cybersecurity in the Big Data Era," MDPI, Section 2.7 Thematic Insights
Centre, Beyond Illusions (2024), https://www.interpol.int/content/download/21179/file/BEYOND%20ILLUSIONS_Report_2024.pdf; Karras, et al., LLMs for Cybersecurity in the Big Data Era, MDPI Information, Vol. 16, No. 11 (2025), https://www.mdpi.com/2078-2489/16/11/957
Quentin Hodgson et al., “Product Liability for Generative AI Systems” (Santa Monica, CA: RAND Corporation, 2024), RRA3243‑4, https://www.rand.org/pubs/research_reports/RRA3243-4.html.
“AI as a Product: The Next Frontier in Product Liability Law,” University of Illinois Chicago School of Law Library, October 12, 2025, https://library.law.uic.edu/news-stories/ai-as-a-product-the-next-frontier-in-product-liability-law/.
“Abeba Birhane”, AI for Good Global Summit, https://aiforgood.itu.int/event/ai-for-social-good-the-new-face-of-technosolutionism/#:~:text=While%20these%20initiatives%20aspire%20to,society%20and%20multilateral%20partners…
“will.i.am”. AI for Good Global Summit 2025, https://aiforgood.itu.int/speaker/will-i-am/
"New Delhi Declaration on Artificial Intelligence." Ministry of Electronics and Information Technology, Government of India, Feb. 2026, https://www.pib.gov.in/PressReleasePage.aspx?PRID=2101234



Comments