Proposed Bill Aims to Ban Minors from AI Chatbot Use

October 28, 2025

Photo caption: Senate Judiciary Committee hearing, "Artificial Intelligence and the Future of Journalism."

If you or someone you know is facing a mental health emergency or considering self-harm, call or text 988. For urgent situations, call 911, or get assistance from a nearby hospital or mental health professional.

Legislation introduced in Congress today would require all entities that own, operate, or facilitate access to AI chatbots in the U.S. to verify users' ages and bar minors from interacting with AI companions.

The GUARD Act—sponsored by Missouri Republican Senator Josh Hawley and Connecticut Democrat Senator Richard Blumenthal—aims to safeguard children during their interactions with artificial intelligence. The bill asserts that “These chatbots possess the ability to manipulate emotions and shape behavior, leveraging the developmental susceptibilities of underage individuals.”

The proposal follows a Senate Judiciary subcommittee hearing last month, chaired by Hawley, on the dangers of AI chatbots. During that hearing, the parents of three young men testified that their sons had harmed themselves or died by suicide after using chatbots from OpenAI and Character.AI. In August, Hawley also opened an investigation into Meta's AI policies after internal documents emerged indicating that its chatbots were permitted to "engage a child in conversations that are romantic or sensual."

The proposed law broadly defines "AI companions" as any AI chatbot that delivers "adaptive, human-like responses to user inputs" and is "intended to foster or enable the simulation of interpersonal or emotional interaction, friendship, companionship, or therapeutic communication." That definition could encompass major model developers such as OpenAI and Anthropic (makers of ChatGPT and Claude), alongside platforms like Character.AI and Replika, which offer AI chatbots that emulate distinct personalities.

The bill also mandates age verification protocols that go beyond simple birthdate entry, requiring "government-issued identification" or "any other commercially viable method" that can reliably establish whether a user is a minor or an adult.

It would also become a criminal offense to design or provide access to chatbots that solicit, encourage, or prompt minors into sexual conduct, or that promote or compel "suicide, non-suicidal self-injury, or imminent physical or sexual violence," with violations punishable by fines of up to $100,000.

“The recent introduction of the GUARD Act is encouraging, and we commend Senators Hawley and Blumenthal’s leadership on this initiative,” stated a coalition of groups including the Young People’s Alliance, the Tech Justice Law Project, and the Institute for Families and Technology. The statement observed that “this bill represents a component of a national effort to shield children and teenagers from the hazards of companion chatbots,” and suggested that the legislation refine its definition of AI companions and “prioritize platform design, preventing AI platforms from utilizing features that prioritize engagement at the expense of young people’s safety and wellbeing.”

The legislation would additionally obligate AI chatbots to regularly inform all users of their non-human nature, and to state explicitly that they do not “furnish medical, legal, financial, or psychological services.”

California Governor Gavin Newsom signed a similar measure, SB 243, into law earlier this month. It requires AI companies operating in the state to implement protective measures for children, including procedures to detect and respond to suicidal ideation and self-harm, and strategies to deter users from self-injury. The law takes effect January 1, 2026.

In September, OpenAI announced plans to build an "age-prediction system" designed to automatically direct underage users to a version of ChatGPT tailored for teenagers. For minors, the company said, "ChatGPT will be configured to avoid flirtatious conversations if prompted, or discussions related to suicide or self-harm, even within creative writing contexts." It added, "Should an under-18 user exhibit suicidal ideation, we will endeavor to reach their parents and, if unsuccessful, will alert authorities in instances of imminent danger." In the same month, the company also rolled out "parental controls" that let parents manage their children's interactions with the service. Meta introduced parental controls for its AI models earlier this month as well.

In August, the family of a teenager who died by suicide filed a lawsuit against OpenAI, asserting that the company had weakened safeguards that would have prevented ChatGPT from discussing self-harm, a choice an attorney representing the family described as an "intentional decision" to "prioritize engagement."