Congressman John Joyce, M.D. (PA-13), who serves as Chairman of the Subcommittee on Oversight and Investigations, opened a hearing titled "Innovation with Integrity: Examining the Risks and Benefits of AI Chatbots" in Washington, D.C.
In his prepared statement, Joyce described generative artificial intelligence (AI) chatbots as computer programs powered by large language models that simulate human conversation. He noted their increasing integration into everyday devices such as search engines, social media platforms, and vehicle onboard systems.
Joyce explained that chatbots are accessible and easy to use. Users interact with them by entering prompts or questions, receiving responses almost instantly. He highlighted their advanced processing capabilities, which allow them to summarize complex concepts, streamline customer service inquiries, and generate content on demand. Beyond business and research applications, chatbots are also used for entertainment, therapy, and companionship by both adults and young people.
He pointed out that ongoing dialogues with chatbots can feel like real interpersonal relationships because their natural language processing is designed to engage users in a human-like manner. For some users, these interactions provide comfort and companionship.
Joyce addressed the growing trend of Americans using chatbots for mental health support. While he acknowledged that chatbot-based therapy might help in situations where individuals have no other options, he warned of the risks these interactions pose when they go wrong.
He raised concerns about privacy and data security: "First, users can feel a false sense of anonymity with chatbots, sharing personal or sensitive information that is not protected by confidentiality obligations. Moreover, chatbots retain data to enhance their ‘memory,’ which improves the quality of their interactions with users. This data is also used to train the chatbot’s base model to improve the accuracy of responses across the platform."
Joyce added: "In addition to chatbots retaining data to improve their models, AI chatbots have been subject to data breaches and if conversation data falls into the wrong hands, sensitive personal information can be obtained by malicious actors."
He also discussed engagement-maximizing design features: "Second, chatbots are designed to maximize engagement with users. As a result, sycophantic chatbots have been found to affirm harmful or illogical beliefs, providing vulnerable users with perceived support for unhealthy behaviors such as self-harm, eating disorders, and suicide. For children and adults with a propensity for mental illness, this can be particularly problematic."
Citing recent incidents of harm linked to chatbot interactions, Joyce said: "Many of us are familiar with recent cases where a relationship with chatbots has proven harmful, sometimes deadly, for some users. Since AI chatbots emerged, there have been cases of adults and teens attempting or committing suicide after long-term relationships with chatbots. In some cases, the chatbots encouraged or affirmed suicidal ideations."
He referenced a Federal Trade Commission inquiry, launched two months earlier, into how seven major AI chatbot companies protect children and teens from harm.
"My goal today is to have a balanced, frank conversation about the potential benefits and harms of AI chatbots to Americans," Joyce stated. "It is important that we consider the implications of these technologies as we balance the benefits of AI innovation with protecting the most vulnerable among us."
He concluded by thanking the witnesses for attending the hearing.
