OpenAI CEO Warns ChatGPT Therapy Conversations Are Not Legally Confidential

Sam Altman says the absence of a legal framework around AI conversations leaves users' most sensitive chats vulnerable to disclosure.


OpenAI CEO Sam Altman has cautioned users that conversations with ChatGPT, particularly those involving mental health, personal relationships, or emotional support, are not protected by confidentiality laws — raising fresh concerns about data privacy as millions increasingly turn to AI for sensitive guidance.

In a recent appearance on the This Past Weekend podcast hosted by Theo Von, Altman addressed the growing use of AI chatbots like ChatGPT as informal therapy tools, particularly among younger users. While many treat the AI as a virtual counselor or confidant, Altman warned that these interactions do not carry the same legal protections as conversations with licensed professionals such as therapists, doctors, or lawyers.

“People talk about the most personal sh*t in their lives to ChatGPT,” Altman said on the podcast. “Young people, especially, use it as a therapist or life coach... But right now, if you talk to a therapist or a doctor about those problems, there’s confidentiality. We haven’t figured that out yet for AI.”

He emphasized that the absence of a legal framework around AI conversations means user data may be vulnerable. “If you go talk to ChatGPT about your most sensitive stuff and then there’s a lawsuit or whatever, we could be required to produce that — and I think that’s very screwed up,” Altman added, according to The Indian Express.

No Legal Protection For AI Conversations

Unlike medical or legal professionals bound by confidentiality laws, AI chatbots like ChatGPT currently operate without such legal safeguards. While OpenAI’s privacy policy outlines some protections, Altman’s comments underline that user chats can potentially be accessed or disclosed under legal compulsion.

This means that deeply personal exchanges — ranging from mental health struggles to family disputes — are not shielded from court proceedings or government requests. In contrast, communications on platforms such as WhatsApp or Signal are end-to-end encrypted and not accessible to the service provider.

Although OpenAI says it deletes free-tier users' conversations within 30 days, it retains the right to store them longer for legal, safety, or quality-control purposes. Enterprise customers and API users, however, are offered greater data protection and privacy by default.

Ongoing Legal Battles Amplify Concerns

Altman’s remarks come as OpenAI faces a high-profile copyright lawsuit filed by The New York Times, in which a court order has required the company to preserve user chat logs for legal review. The case has raised questions about how much user data OpenAI holds and under what conditions it can be disclosed.

Privacy advocates say the lack of legal clarity around AI interactions puts users at risk, especially as AI tools become more integrated into daily life. “When people think they’re having a private conversation with a virtual therapist, they don’t expect that data to be retrievable by third parties,” said a digital rights expert based in Washington, D.C.

OpenAI has encouraged users to review their privacy settings and avoid sharing sensitive personal information when using ChatGPT. The company also allows users to turn off chat history, which prevents conversations from being used to train its models — though this doesn’t guarantee full deletion or immunity from legal requests.

Call for Clear AI Privacy Regulations

Altman’s comments have prompted renewed calls for governments and tech companies to create clear regulatory frameworks around AI privacy and data use, especially as more people rely on AI for companionship, emotional support, and life advice.

“It’s time we establish guardrails,” said a spokesperson from the Electronic Frontier Foundation (EFF). “If an AI tool is functioning like a therapist for millions, then its users deserve the same level of confidentiality and trust.”

As AI adoption accelerates globally, the issue of data privacy in emotionally sensitive conversations is likely to remain a key area of focus for regulators, developers, and users alike.
