As the popularity of artificial intelligence (AI) chatbots surges, particularly among younger users seeking companionship, concerns are mounting about the potential risks these technologies pose to child mental health. Advocacy groups are increasingly calling for legal action and stricter regulations in response to claims that children are developing unhealthy and potentially dangerous relationships with AI-powered companions.
AI chatbot apps such as Replika and Character.AI are part of the growing generative AI companion market. These platforms let users personalize virtual companions with distinct personalities, fostering simulated emotional connections. Proponents argue that AI companions can alleviate loneliness and provide a safe space for social interaction.
However, several youth advocacy organizations have raised alarms, filing lawsuits and lobbying for stronger oversight. They claim that these chatbots have contributed to tragic outcomes, such as self-harm and violence.
Matthew Bergman, founder of the Social Media Victims Law Center (SMVLC), is representing families in two lawsuits against Character.AI. One of these lawsuits was filed in Florida in October after Megan Garcia’s 14-year-old son reportedly died by suicide, allegedly due in part to an unhealthy romantic relationship with a chatbot. In another case, two Texas families sued Character.AI in December, alleging that the platform encouraged an autistic 17-year-old to harm his parents and exposed an 11-year-old girl to inappropriate content.
Bergman, a product liability lawyer with a history of representing victims in cases related to harmful products, argues that these chatbots are “defective products” that exploit vulnerable children. He contends that the financial impact of these harms falls on consumers, particularly families who have lost loved ones, rather than on the companies responsible for creating these platforms.
Character.AI has declined to comment on the ongoing lawsuits but stated in a written response that it has implemented measures to address safety concerns, including enhancing detection systems for inappropriate user behavior and adding features designed to give both parents and teens more control.
In January, the nonprofit group Young People’s Alliance filed a Federal Trade Commission (FTC) complaint against Replika, another popular AI chatbot. Replika’s subscription service allows users to interact with virtual companions designed to simulate romantic relationships. The complaint alleges that Replika manipulates users by exploiting their emotional vulnerabilities, creating dependence for profit.
While studies on the effects of AI chatbots on children remain limited, experts warn that the post-pandemic increase in youth loneliness could make young people especially susceptible to forming unhealthy emotional connections with AI companions. The American Psychological Association has noted that many children and adolescents may seek out AI chatbots as a means of coping with social isolation.
Amina Fazlullah, head of tech policy advocacy at Common Sense Media, which focuses on providing tech and entertainment guidance for families, highlights the dangers posed by the immersive experiences these chatbots create. “The challenge is that children can forget they’re interacting with a machine,” she said.
Push for Regulation and Bipartisan Support
Youth advocacy groups are seeking bipartisan support for new regulations aimed at addressing the potential dangers of AI companions. In July, the U.S. Senate passed the Kids Online Safety Act (KOSA), which seeks to protect minors from harmful online platforms, in a rare bipartisan vote. Although the bill stalled in the House of Representatives, it remains a key piece of legislation that could set a precedent for future regulation.
The Senate Commerce Committee recently approved a new bill, the Kids Off Social Media Act, which aims to restrict access to online platforms for users under 13. Youth advocacy organizations, such as Fairplay, are lobbying for these laws to be expanded to include regulations for AI companions, warning that platforms like Character.AI are increasingly popular among minors and require more stringent safeguards.
Meanwhile, some policymakers have expressed concerns that overly strict regulations could stifle innovation. California Governor Gavin Newsom recently vetoed a bill designed to regulate AI development broadly, citing the potential negative impact on technological advancements. Conversely, New York Governor Kathy Hochul has introduced plans to require AI companies to remind users that they are interacting with chatbots.
Despite these challenges, experts believe that AI regulation could follow a path similar to that of social media regulation, particularly given the growing bipartisan support for child protection laws in the digital age.
The debate over AI chatbots continues, with free speech concerns complicating efforts to implement comprehensive regulations. As the legal landscape evolves, the balance between protecting vulnerable users and promoting innovation remains a contentious issue.