Yesterday, Congressman John Joyce, M.D. (PA-13), who serves as Chairman of the Subcommittee on Oversight and Investigations, led a hearing in Washington, D.C., focused on the risks and benefits of AI chatbots.
Chairman Joyce stated, "AI chatbots are increasingly integrated into the lives of many Americans, and yesterday’s hearing offered the opportunity to have a balanced, frank conversation about the potential benefits and harms of AI chatbots to Americans. It is important that we consider the implications of these technologies as we promote AI innovation while protecting the most vulnerable among us."
During the hearing, several members of Congress raised concerns about how children interact with AI chatbots and what safeguards could be put in place to protect them.
Congressman Rick Allen (GA-12) discussed cases where teens spend significant time with AI chatbots: "We’ve seen cases of teens who spend hours a day on AI chatbots. Some of these conversations are mundane. [But] there are examples [of] self-harm and sexualized material. A growing number of teens are becoming emotionally dependent on these. From a clinical standpoint, are there design practices or guardrails that platforms should consider, especially for entertainment or companionship, to prevent minors from forming unsafe or addictive relationships from these systems?" Dr. Torous responded by comparing AI bots to self-help books: "We’re still learning about these parasocial relationships where people make these relationships with these bots. These are not objects; these are not people. And in some ways, I think a useful analogy I can tell patients is think of an AI like a self-help book. [...] I think where it crosses the line is when the self-help book stops giving basic self-help, starts getting too personal, starts talking about deeper issues. So, I think it’s possible for the bots to operate as self-help books by having very clear guardrails where they stop and where they hand you off to a person."
Congressman Russ Fulcher (ID-01) raised questions about children viewing chatbots as authority figures: "Kids are wired to form attachments with things that act friendly. What we don’t want happening is a chatbot taking the role of teaching a child right and wrong. With AI utilization increasing in children, are you concerned that children may look up to a faceless chatbot as a sort of parental authority or figure? And how do we propose that parents and educators prevent that from happening?" Dr. Wei explained that while children often use chatbots for practical reasons at first, their reliance can shift over time: "A lot of times, teens and children turn to AI chatbots first for homework or for useful purposes, and then it can shift. And that’s where that shift is. We don’t know the long-term effects of AI companions and chatbots in terms of emotional relationships. It’s a frictionless relationship. It doesn’t offer the same kinds of moral guidance like you referenced or the complexity of human dynamics. So, we still need to understand better how to help kids navigate that, while still being able to use AI for good purposes."
Congresswoman Erin Houchin (IN-09) emphasized the need for online safety standards for children: "Kids deserve the same safety mindset online that we bring to car seats, playgrounds, and stranger danger. Unfortunately, we have seen heartbreaking stories recently that are cause for concern and action by this committee. Our job is to set clear guardrails so the best ideas can scale safely."
The hearing addressed both the opportunities presented by artificial intelligence technology and ongoing concerns about its impact on youth.
