In this testimony presented to the U.S. House Committee on Energy and Commerce’s Subcommittee on Oversight and Investigations hearing titled “Innovation with Integrity: Examining the Risks and Benefits of AI Chatbots,” Jen King shares insights on data privacy concerns associated with the use of chatbots. She highlights opportunities for congressional action to protect chatbot users from related harms.
Executive Summary
Americans want limits on the types of data companies collect about them, especially when that data is sensitive personal information related to their health. While technologies designed for and used specifically in healthcare settings are governed by the Health Insurance Portability and Accountability Act (HIPAA), general-purpose tools like chatbots are not. Yet consumers are increasingly turning to these chatbots for health-related concerns, including mental health support.
My remarks highlight two major data privacy concerns I see in the use of chatbots:
- Users are increasingly disclosing highly sensitive personal information to chatbots, which are designed to mimic human conversation and maximize user engagement. Large platforms are contemplating how to monetize this data in other parts of their businesses.
- Developers are incorporating chatbot-derived user data into model training without oversight. Their privacy policies demonstrate a lack of transparency regarding whether and how they take steps to mitigate privacy risks, including for children’s data.
To address these concerns, I recommend three specific areas for congressional attention:
- Implement data privacy and safety design principles. Demand that chatbot developers institute design principles for data privacy, health, and safety that prioritize the trust and well-being of the public.
- Minimize the scope of personal data in AI training. Mandate that developers make their data collection and processing practices transparent. Users should not be automatically opted in to model training, and developers should proactively remove sensitive data from training sets.
- Demand that developers adopt safety metrics. Developers must track and report metrics related to user privacy, safety, and experiences of harm, and they must expand vetted researcher access to chatbot training data to enable independent review and accountability.
- Date Published: 11.18.2025
- Original Publication: Stanford University Human-Centered Artificial Intelligence