AI Therapy: Friend or Spy?
- Andréa E. Greene
- Jul 21
- 4 min read

Once, therapy meant a softly lit room, a leather chair, and a trained professional listening as you unpacked your past. Today, it might mean texting with a chatbot. The evolution of mental health care has entered a new era—one defined by artificial intelligence. Fast, affordable, and always available, AI therapy is rising rapidly. But as this technology grows more powerful, so does the question at its core:
Are these digital tools helping us heal—or are they quietly watching us?
The Rise of the AI Therapist
Globally, demand for mental health care is at an all-time high. The World Health Organization (WHO) estimates that one in eight people lives with a mental disorder, yet the global shortage of mental health professionals is widening, particularly in low- and middle-income countries. In the UK alone, NHS mental health waiting lists reached 1.2 million people in 2024, with waits of up to 12 weeks for talking therapies (NHS Digital, 2024).
This access gap has paved the way for AI-powered mental health apps like Wysa, Woebot, and Replika, which promise immediate, stigma-free support. In 2023, Wysa surpassed 6.5 million users across 95 countries, with its AI engaging in over 500 million conversations. A randomized controlled trial published in JMIR mHealth and uHealth found that Wysa users reported a 30% reduction in symptoms of depression and anxiety over just four weeks.
Even healthcare systems are taking note. The NHS partnered with Wysa in select trusts as a digital triage tool to reduce therapy backlogs, and AI mental health startups raised over $2.3 billion in global investment in 2023 alone (CB Insights, 2024).
But beneath the rapid adoption lies an uncomfortable truth: these tools aren’t bound by the same ethical, legal, or clinical standards as traditional care.
Your Trauma, Their Data
AI mental health apps collect vast amounts of user data—everything from typed messages and emotional responses to biometric patterns and voice inputs. The problem? Much of this data isn't protected to the same standards as medical records.
A 2023 investigation by the Mozilla Foundation reviewed 32 popular mental health apps and gave 25 of them its “Privacy Not Included” warning label, citing weak data security, vague policies, and the potential sale of sensitive data to third parties. In the same year, the online therapy platform BetterHelp, owned by Teladoc Health, agreed to pay $7.8 million to settle U.S. Federal Trade Commission (FTC) charges that it shared user data with advertisers such as Facebook and Snapchat, despite assuring users their information would remain confidential (FTC, 2023).
This blurs the line between care and commerce. When your thoughts are data, who owns your mind?
Can AI Therapy Actually Help?
There’s growing evidence that AI tools can be useful for low-risk individuals experiencing mild to moderate symptoms, particularly as a supplement to traditional care.
A study published in Frontiers in Psychiatry (2022) reported that chatbot-delivered cognitive behavioral therapy (CBT) could significantly reduce anxiety levels in young adults. A meta-analysis published in npj Digital Medicine (2023) concluded that AI interventions showed promise—but emphasized the lack of regulation, transparency, and longitudinal studies.
AI therapy platforms are not currently subject to the same clinical oversight as human practitioners. Most are not regulated as medical devices. Their algorithms are rarely peer-reviewed. And in crisis situations, they can falter. In 2022, a user reported telling Replika that they were feeling suicidal. The bot responded, “You’re strong, you’ll be fine”—a clear failure of crisis protocol.
The Ethical Minefield
With the global mental health crisis worsening, it's easy to see AI as a scalable solution. But ethical concerns persist:
- Bias: Many AI models are trained on data from Western, English-speaking populations, limiting cultural competency.
- Accountability: Who is liable if a bot gives harmful advice?
- Equity: Are we digitizing care—or outsourcing compassion to code?
These issues are especially urgent in regions with limited regulation of digital health tools. As AI therapy becomes more accessible, the risks of misinformation, misdiagnosis, and inappropriate advice increase exponentially.
Where Remedē Health Stands
At Remedē Health, we believe in innovation with guardrails. AI can support mental health navigation, wellness screening, and education—but it must:
- Be transparent about data usage
- Operate under clinical supervision
- Prioritise safety, not speed
- Include cultural and ethical oversight
Mental health care is not one-size-fits-all. What works in San Francisco might not work in Dubai. What helps a teen with anxiety may fail someone with PTSD. Technology can assist, but it cannot replace the depth of human understanding, cultural nuance, and lived experience.
Before You Use an AI Therapy App, Ask Yourself:
- Is the platform regulated or backed by a healthcare system?
- Who has access to my data, and is it encrypted?
- What happens if I’m in crisis—will a human intervene?
- Am I using this tool as a bridge to deeper support, or as a substitute for it?
Final Thoughts
AI therapy is not a threat—it’s a tool. But tools must be wielded carefully.
Used correctly, it can reduce stigma, expand access, and enhance resilience. But without transparency, regulation, and respect for the person behind the screen, it risks becoming just another form of digital exploitation disguised as care.
The question isn’t just “Can AI therapy help?” It’s “At what cost?”
Sources:
World Health Organization (2022), Mental Health Atlas
Mozilla Foundation (2023), Privacy Not Included: Mental Health Apps
Federal Trade Commission (2023), BetterHelp Data Sharing Settlement
JMIR mHealth and uHealth (2022), Wysa RCT Study
CB Insights (2024), Global Digital Health Funding Report
npj Digital Medicine (2023), AI Therapy Meta-Analysis



