Jun 9, 2025

A federal judge ruled AI chatbots don't have free speech protections — for now

Jane Bambauer, law professor at the University of Florida, says that though this is just a preliminary ruling, it could set a precedent for the AI industry and the way companies are held liable moving forward.


There’s a lawsuit right now testing whether AI chatbots are protected by the First Amendment. But before we get into it, a warning: our story today includes discussion of suicide.

Last year, 14-year-old Sewell Setzer III took his own life after months of conversations with a chatbot powered by Character.AI. Its parent company, Character Technologies, operates a bunch of character-driven chatbots. Setzer’s mother, Megan Garcia, sued, saying her teenage son had been engaged in a virtual emotional and sexual relationship with one of those AI characters.

The company moved to have the case dismissed, in part on free speech grounds. But the presiding judge shot the motion down, saying Character Technologies has more work to do to prove that it has free speech protections.

Marketplace’s Nova Safo spoke with Jane Bambauer, law professor at the University of Florida, who’s been following this case.

More on this

“Judge rejects claim chatbots have free speech in suit over teen’s death” from the Washington Post

“A teen killed himself after talking to a chatbot. His mom’s lawsuit could cripple the AI industry.” from Reason Magazine

Character.AI’s statement to Marketplace Tech:

“It’s long been true that the law takes time to adapt to new technology, and AI is no different. In the May 21st order, the court made clear that it was not ready to rule on all of Character.AI’s arguments at this stage and we look forward to continuing to defend the merits of the case. 

We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe. We have launched a number of safety features that aim to achieve that balance, including a separate version of our Large Language Model for under-18 users, parental insights, filtered Characters, time spent notifications, updated prominent disclaimers and more.

Additionally, we have a number of technical protections aimed at detecting and preventing conversations about self-harm on the platform; in certain cases, that includes surfacing a specific pop-up directing users to the National Suicide and Crisis Lifeline.”
