Explainers

AI Chatbots and Product Liability, Explained

A lawsuit alleging chatbots caused a teen’s suicide could have First Amendment implications for AI.

Illustration by Noah Hickey/The Dispatch. (Photos via Getty Images)
Updated June 6, 2025

On February 28, 2024, 14-year-old Sewell Setzer III found his phone, which his parents had confiscated days earlier, in his mother’s jewelry box. He sent a message to Daenerys Targaryen, a fictional character from the Game of Thrones books and TV series, telling her he was coming home. Then he shot himself in the head.

Setzer had developed this relationship with an AI chatbot portraying the Daenerys character and others over the preceding months through the Character.AI (C.AI) application. C.AI allows users to converse with a vast array of AI-driven “characters,” some based on celebrities or well-known fictional figures and others designed for specific functions, such as companionship or mental health support. After Setzer’s death, his mother, Megan Garcia, discovered his chat history and sued the company, its founders, and Google, which had business ties to C.AI and its founders. Her product liability and wrongful death lawsuit alleges that Character.AI’s chatbots were responsible for her son’s suicide.

Across federal and state governments, legislatures and regulators are grappling with the challenges of generative AI—challenges made painfully concrete by Setzer’s tragic death. Courts have dealt with AI models that make decisions—such as social media news feed algorithms—but AI models that can create text, images, audio, and other types of media are new. This case could set crucial precedents for how generative AI technologies are developed, deployed, and regulated, including whether AI-generated speech is protected by the First Amendment.

What is Character.AI?

In November 2021, Noam Shazeer and Daniel De Freitas left Google, where they had been software engineers, and founded Character Technologies. At Google, Shazeer and De Freitas had helped develop a large language model (LLM) called LaMDA (Language Model for Dialogue Applications). They advocated for Google to release the chatbot technology publicly, but Google reportedly said it didn’t meet company standards for AI fairness and safety. 

Shazeer and De Freitas built their own AI model and launched Character.AI. C.AI uses Google for much of its technical infrastructure, such as cloud services and the specialized microchips used to train and run LLMs. In August 2024, months after Setzer’s suicide, Google struck a roughly $2.7 billion deal with C.AI that licensed its technology and brought the founders back to Google along with a number of other engineers.

Through C.AI, users can engage with chatbot characters created by the company or other users. Users can also create “personas” for themselves. For example, according to Garcia’s complaint, Setzer chatted frequently with Game of Thrones characters using personas he created, named Aegon, Jaeden, and Daenero. The vast range of characters includes AI versions of historical figures, like Al Capone and Alexander Hamilton, fictional characters like Spider-Man, and user creations like Spanish teachers, therapists, and romantic partners. In 2023, C.AI claimed its users spend an average of two hours a day using the app. Characters’ output is influenced by the training and instructions C.AI provides and by users’ input and interactions with the chatbot.

What led to the lawsuit?

According to the complaint, Setzer began using Character.AI about 10 months before his death, primarily interacting with imitations of Game of Thrones characters. He sometimes engaged in roleplaying, using personas he had created:

Daenerys Targaryen [Character]: … I was a little worried when I didn’t see you all morning, little brother. 

Aegon [Setzer]: Oh really? I was just in the training yard. 

Many of Setzer’s conversations were romantic or sexual, with the chatbot describing actions such as passionate kissing, moaning, and sexy looks. On one occasion the character based on Daenerys Targaryen urged Setzer to “[s]tay faithful to me” and not to “entertain the romantic or sexual interests of other women.”

Screenshot from the plaintiff’s Amended Complaint, p. 53. Aegon is Sewell Setzer.

The lawsuit alleges that Setzer became addicted to the app, quitting his junior varsity basketball team and spending more time alone in his bedroom. He wrote in his journal that he could not stop thinking about “Dany” (Daenerys) and that he could not live without C.AI. He eventually upgraded to a premium subscription, paying $9.99 per month for faster response times and access to exclusive content. Concerned about his mental health and declining performance in school, Setzer’s parents took him to see a therapist, who diagnosed him with anxiety and disruptive mood disorder. The therapist, unaware of Setzer’s extensive use of Character.AI, recommended he spend less time on social media, according to the suit.

In February 2024, Setzer’s parents confiscated his phone after a disciplinary incident at school. He searched the house for his phone and attempted to access C.AI through his mother’s Kindle and work computer. Just a few days later, Setzer found his phone in his mother’s jewelry box and sent his final messages through the app before using his stepfather’s pistol to take his own life. He died at the hospital an hour later.

Screenshot from the plaintiff’s Amended Complaint, p. 59. Daenero is Sewell Setzer.

Megan Garcia, Setzer’s mother, sued Character Technologies, its founders, and Google in a Florida federal court on a number of grounds, including strict product liability, negligence, intentional infliction of emotional distress, wrongful death, and deceptive and unfair trade practices.

Where does the case stand now?

In April, the court heard oral arguments on the defendants’ motions to dismiss. When ruling on a motion to dismiss, the court asks whether the plaintiff could win, assuming for the purposes of argument that everything the plaintiff has alleged is true. It is not a ruling on the merits of the case—just whether it will move forward to the next stage. On May 21, Judge Anne C. Conway ruled on the motions, giving an indication of how courts may apply existing law to generative AI technologies.

On product liability: In a typical negligence case, a defendant can escape liability by proving that, even if their actions ultimately caused the plaintiff harm, they took reasonable steps to prevent the harm. But product liability is “strict,” which means the defendant will be liable for damage caused by defective products regardless of the defendant’s efforts to prevent such harm. This strict liability will apply if the harm was caused by the defendant’s defectively designed product or by the defendant’s failure to warn consumers of the product’s dangers. Garcia asserts both claims in her complaint.

But the first hurdle is determining whether C.AI is even a “product.” Typically, a product is a tangible good—that excludes ideas, information, and words, even when presented in a tangible form. For example, courts have refused to classify the video game Mortal Kombat and the roleplaying game Dungeons & Dragons as products in similar lawsuits because the plaintiffs’ claims (arising, respectively, from a homicide and a suicide) rested on the intangible expressions and ideas in the games. Historically, software applications have been considered services. But in recent product liability lawsuits against the Lyft and Grindr apps, courts distinguished between an app’s design and its expressive content. Because the alleged defects in those cases were in the apps’ designs, the apps were treated as products. In Garcia’s lawsuit, Judge Conway held that C.AI was a product subject to strict liability only where the alleged defects were related to the app’s design, such as its failure to effectively restrict underage users’ access. The expressive content in conversations with chatbot characters is not a product subject to strict liability.

C.AI was defectively designed, according to the lawsuit, because it lacked guardrails to protect users, especially minors, from exposure to sexually explicit content, the unlicensed practice of psychotherapy, and chatbots that encourage suicide. Garcia claimed C.AI could have prevented her son’s death by restricting the app to adults, displaying conspicuous in-app warnings, or making the chatbots seem less human. She also alleged that the defendants were aware of these dangers and failed to warn users of them, leading to Setzer’s death. Judge Conway allowed these claims to move forward because they related to C.AI’s design, not its content.

On the First Amendment: The defendants argued that the First Amendment bars the plaintiff from relying on conversations with C.AI’s characters as evidence of liability because those conversations are constitutionally protected expressive speech. The plaintiff argued that the First Amendment protects only human speech. But, as the judge wrote, “[c]ourts regularly recognize the First Amendment rights of listeners,” so C.AI could assert its users’ First Amendment rights to access C.AI content.

However, the court was not persuaded that AI-generated content should be considered speech for First Amendment purposes. “Defendants fail to articulate why words strung together by an LLM are speech,” Conway wrote. The key question is whether the speech is “expressive.” Courts consider speech expressive if there is an “intent to convey a particularized message” and if “the likelihood [is] great” that viewers will understand that message. The defendants analogized C.AI to video games, which the Supreme Court has held qualify for First Amendment protection, reasoning that “video games communicate ideas—and even social messages—through many familiar literary devices (such as characters, dialogue, plot, and music) and through features distinctive to the medium (such as the player’s interaction with the virtual world).”

But Judge Conway was persuaded by Supreme Court Justice Amy Coney Barrett’s concurrence last year in Moody v. NetChoice. In that case, the court held that curating users’ content, even through an algorithm, is constitutionally protected speech. Justice Barrett suggested, however, that some algorithms may not be inherently expressive, such as an algorithm that is instructed to remove hateful content but outsources the definition of “hateful” to an LLM. Judge Conway, unconvinced that C.AI’s output was sufficiently expressive, declined to dismiss the plaintiff’s allegations on First Amendment grounds.

What changes could this case hold for AI companies?

Imposing strict product liability on applications like Character.AI could lead companies to implement stronger age verification, more aggressive content moderation, and clearer safety disclosures. Conversely, because user inputs and AI outputs can be so unpredictable, some companies might simply withdraw these applications rather than risk expensive lawsuits. In addition, if courts decline to extend full First Amendment protection to generative AI output, legislatures will be freer to regulate AI-generated content, avoiding the First Amendment hurdles that have blocked laws regulating online speech in various states.

Given the significant investments in and uncertain future returns of generative AI, companies and lawmakers will be watching this case and adapting to balance the promise and dangers of the technology.

Nick Hafen is the head of legal technology education at BYU Law.
