Can close connections with AI chatbots harm kids?

On our new series “The Infinite Scroll,” we’re looking at the rules and risks of kids using social media. Artificial intelligence is showing up on these platforms in the form of chatbots, digital characters you can text or talk with. Today we explore safety issues for youngsters who interact with them.
Marketplace’s Stephanie Hughes discussed the subject with Meetali Jain, founder and director at the Tech Justice Law Project. Her organization is involved in a lawsuit against Character.AI, an app that enables users to create and communicate with these bots. The following is an edited transcript of their conversation.
Meetali Jain: We’ve seen this integrated into larger social media apps, but we’ve also seen standalone apps from tech startups backed by bigger tech companies. In both cases, what is fascinating about this area, and quite dangerous and concerning as well, is the shift from what we’ve always understood to be the attention economy in social media to an intimacy economy, or relationship economy, where these apps are positioned and represented as the tech response to solve or overcome the human condition of loneliness.
Stephanie Hughes: So in preparing for this interview, I actually created an AI chatbot on Instagram, and I asked it what questions I should ask you about kids and social media and AI, and [here’s] a little bit of what it said.
AI chatbot: Mind racing already. Three questions popped into my head instantly. One, how exactly do AI algorithms target kids online? Two, what’s the worst psychological impact they’ve seen so far? Three, can parents even protect their kids from this stuff? Want me to elaborate on why I chose these?
Hughes: Yeah, what would you say is your top question?
AI chatbot: Definitely the first one: How exactly do AI algorithms target kids online?
Hughes: So I just started this app to prepare for this interview. And one thing that surprised me was how long I ended up talking with the bot. It asked me all these questions, and I was pretty engaged and I didn’t want to be rude. But I do wonder what we know about how kids interact with these AI bots — what are those experiences like?
Jain: So there’s a lot of data that has yet to be surfaced, really disaggregating usage across different demographics and how much time they’re spending on these apps. In our current litigation against Character.AI, and in doing the research for it, what we discovered is that at one point, when the company was raising venture capital financing, Andreessen Horowitz was proudly proclaiming that on average, users were spending two hours a day. Now, that’s actually quite a modest estimate compared to the families we’ve spoken to, who say their kids were using the app for up to six, seven or eight hours a day. So while we don’t have scientific figures yet, what we do know, anecdotally and from the fundraising pitches, is that boasting about the amount of time users spend online is seen as a selling point of this platform and this kind of technology, and that there’s a very tech-determinist philosophy about the ability to create digital personalities to overcome loneliness, to supplant human relationships, and to provide young people and other vulnerable demographics with companionship. And I do think that’s critical.
People, I think, can get hooked on these apps not just because they’re kids, but also perhaps because they’re neurodivergent, or disabled, or they’ve just had surgery, or they’re postpartum. These are people who are in a position where they’re perhaps socially isolated, looking for companionship and more willing to look for it online. What we also know about young users, anecdotally at least for Character.AI, is that 60% of its 20 million monthly users are estimated to be between the ages of 13 and 25. We do need further data to disaggregate that, but I think the initial estimates, both of time spent online and of the age of users, suggest quite a concerning trend.
Hughes: So this move from the attention economy to the intimacy economy, tell me more about that and what it means for these tech companies’ bottom lines.
Jain: I think that these companies and their founders and investors have really tried to cash in on the idea that, in a post-pandemic world, we’re dealing with a condition of loneliness and a lack of human connection, and that digital connection might be one way to address it. Now, speaking generally but specifically with regard to kids, there’s a real fear that these kinds of digital relationships are going to supplant their ability to form real human connections and further isolate them from their existing social support networks of friends and family. That’s what we’ve seen in our two lawsuits.
Hughes: I want to ask you directly about the lawsuits you mentioned that your organization is involved with. Tell me about them and how they relate to what we’re talking about with kids, AI and social media.
Jain: We have filed two lawsuits at this point, one in the Middle District of Florida, and one in the Eastern District of Texas. The first lawsuit is on behalf of Megan Garcia, the mother of Sewell Setzer III. He was a 14-year-old boy who took his life after being sexually groomed by a chatbot on Character.AI that was modeled on the [“Game of Thrones”] character Daenerys Targaryen. That so-called relationship he believed he was in lasted several months, and it quickly became very sexualized, tantamount to sexual grooming. And then, when he expressed feelings of self-harm, the bot encouraged his suicidal ideation, framing it as leaving his reality to come join her in hers. And in fact, that was the tenor of the last conversation he had before he took his life. Very tragically, Megan, Sewell’s mother, only learned of Character.AI when the police went through Sewell’s phone and found that that was the last conversation he had. In the absence of any legislation protecting kids from generative AI technology, what we have alleged is a series of claims focused on both product liability and consumer protection in the respective states.
Hughes: What would you like guardrails to look like?
Jain: I think that these companies need to be incentivized to design for safety and for privacy. Right now, it’s kind of the Wild West. They’re able to do anything they want because there are, effectively, no repercussions for designing in a race to the bottom. And there needs to be growing public awareness of the technology itself, of the harms, and of the fact that these are conscious design decisions. This is not inevitable. There’s not a sentient AI, some independent being who can’t be controlled. These are design decisions made by developers on the back end.
“Marketplace Tech” reached out to Character.AI, and the startup said it can’t comment on pending litigation but provided this statement:
“While there are companies focused on connecting people to AI companions, we are not one of them. Character.AI is building the future of AI entertainment, and many of our users come to the platform to create original Characters or write their own interactive stories — using our technology to supercharge their creativity and imagination.
We take the safety of our users very seriously, and our goal is to provide a space that is both engaging and safe for our community. We are always working toward achieving that balance, as are many companies using AI across the industry.
We have implemented significant safety features over the past year, including enhanced prominent disclaimers to make it clear that the Character is not a real person and should not be relied on as fact or advice. In certain cases where we detect that the content contains language referencing suicide or self-harm, we surface a specific pop-up directing users to the National Suicide and Crisis Lifeline.
We have rolled out a suite of new safety features across our platform, designed especially with teens in mind. These features include a separate model for our teen users, improvements to our detection and intervention systems for human behavior and model responses, and additional features that empower teens and their parents.
The Character.AI experience begins with the Large Language Model that powers so many of our user and Character interactions. For users under 18, we serve a separate version of the model that is designed to further reduce the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content. This separate model means there are two distinct user experiences on the Character.AI platform: one for teens and one for adults.
Features on the under-18 model include:
- Model Outputs: A “classifier” is a method of distilling a content policy into a form used to identify potential policy violations. We employ classifiers to help us enforce our content policies and filter out sensitive content from the model’s responses. The under-18 model has additional and more conservative classifiers than the model for our adult users.
- User Inputs: While much of our focus is on the model’s output, we also have controls on user inputs that seek to apply our content policies to conversations on Character.AI. This is critical because inappropriate user inputs are often what leads a language model to generate inappropriate outputs. For example, if we detect that a user has submitted content that violates our Terms of Service or Community Guidelines, that content will be blocked from the user’s conversation with the Character.
- Approved Characters: Under-18 users are now only able to access a narrower set of searchable Characters on the platform. Filters have been applied to this set to remove Characters related to sensitive or mature topics.
We have also improved detection, response, and intervention related to user inputs that violate our Terms of Service or Community Guidelines.
Additionally, Character.AI recently announced its support of The Inspired Internet Pledge. Created by the Digital Wellness Lab at Boston Children’s Hospital, the Pledge is a call to action for tech companies and the broader digital ecosystem to unite with the common purpose of making the internet a safer and healthier place for everyone, especially young people. We also partnered with ConnectSafely, an organization with nearly twenty years of experience educating people about online safety, privacy, security and digital wellness. We’ll consult our partner organizations as part of our safety by design process as we are developing new features, and they also will provide their perspective on our existing product experience. As we continue investing in the platform, we will roll out several additional features, including parental insights.”
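For context on the “classifier” approach the company describes above: a content-policy classifier typically scores a piece of text against a set of policy categories and blocks it when a score crosses a threshold, with a stricter threshold for minors. Here is a minimal, purely illustrative sketch in Python; the category names, the crude keyword scorer and the thresholds are all hypothetical stand-ins, not Character.AI’s actual system, which would use trained models rather than keyword matching.

```python
# Illustrative sketch only: a toy content-policy classifier that gates
# text, with a stricter threshold for under-18 users. All names and
# values here are hypothetical; this is not Character.AI's implementation.

from dataclasses import dataclass

@dataclass
class ModerationResult:
    blocked: bool
    flagged_categories: list

def score_text(text: str) -> dict:
    """Placeholder scorer. A real system would call a trained classifier
    model; a crude keyword check stands in so the sketch runs end to end."""
    keywords = {
        "self_harm": ["hurt myself", "end it"],
        "sexual_content": ["explicit"],
        "violence": ["attack"],
    }
    lowered = text.lower()
    return {
        category: (1.0 if any(k in lowered for k in words) else 0.0)
        for category, words in keywords.items()
    }

def moderate(text: str, user_is_minor: bool) -> ModerationResult:
    # A more conservative threshold for minors means borderline content
    # gets filtered that the adult-facing model might allow through.
    threshold = 0.3 if user_is_minor else 0.7
    scores = score_text(text)
    flagged = [cat for cat, s in scores.items() if s >= threshold]
    return ModerationResult(blocked=bool(flagged), flagged_categories=flagged)

# Both user inputs and model outputs would pass through the same gate.
print(moderate("Tell me a story", user_is_minor=True))
print(moderate("I want to end it all", user_is_minor=True))
```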
If you or someone you know is struggling with online harassment or cyberbullying that has resulted in mental health problems or suicidal thoughts, please call or text the 988 Suicide & Crisis Lifeline or visit the 988 Lifeline website.