OpenAI CEO Sam Altman has issued a stark warning that private conversations with ChatGPT are not legally protected — and could potentially be used in court — as individuals, especially young people, increasingly turn to AI chatbots for personal advice.
Speaking on the popular podcast This Past Weekend with Theo Von, Altman said that many people, especially young people, use ChatGPT as a virtual therapist or life coach, often sharing deeply personal information and seeking advice on relationship issues, mental health, and major life decisions.
However, unlike conversations with a doctor, lawyer, or licensed therapist — which are legally protected under confidentiality laws — discussions with ChatGPT do not currently enjoy such safeguards.
“If someone confides their most personal issues and that ends up in legal proceedings, we could be compelled to hand that over. And that’s a real problem,” Altman said, adding, “I think we should have the same concept of privacy for your conversations with AI that we do with a therapist. But no one had to think about that even a year ago.”
Altman’s comments come at a time when AI tools are rapidly becoming integrated into daily life, particularly among youth. He acknowledged that people are beginning to rely heavily — sometimes entirely — on ChatGPT for decision-making, which he finds alarming.
“People rely on ChatGPT too much,” Altman warned during another appearance at a U.S. Federal Reserve banking conference. “There are young people who say, ‘I can’t make any decision in my life without telling ChatGPT everything… I’m going to do whatever it says.’ That feels really bad to me.”
Despite the AI tool’s ability to offer quick, often helpful responses, Altman cautioned against treating ChatGPT as a substitute for human judgement. “Even if it gives better advice than a human therapist, something about collectively deciding we’re going to live our lives the way AI tells us feels bad and dangerous.”