Growing Concern Over Teen Dependence on AI for Mental Health Support
Experts are sounding the alarm over the rising risks and limitations of young people using general-purpose AI systems for emotional guidance. Adolescents have turned more intensely toward screens since COVID-19, seeking support from digital tools instead of human interaction. Researchers now observe a growing number of teenagers confiding in AI chatbots such as ChatGPT or Character.AI rather than reaching out to psychologists or psychiatrists.
Professor Eleftherios G. Gkortsis, an academic specializing in Medical Physics and Informatics, describes the situation as a complex intersection of technology and social behavior. He notes that constant availability, the perception of anonymity, and limited access to approachable adults are drawing teens toward AI as an immediate yet unregulated outlet. The professor emphasizes that these tools often lack personalization and clinical depth.
Key Risks Identified by Experts
Specialists outline several critical risks associated with adolescents substituting AI chats for professional mental health care:
Illusion of a digital friend that may delay appropriate therapeutic intervention.
Absence of empathy and inability to detect non-verbal cues that are essential in mental health assessment.
Significant concerns over data protection and GDPR compliance when minors share sensitive information with commercial AI platforms.
Gkortsis stresses that sustainable and safe AI deployment in mental health requires systems built on personalization, feedback loops, and clinical oversight rather than generic conversational models. He adds that current AI systems do not follow therapeutic protocols, lack clinical judgment, and fall short in supporting emotional regulation.
Why Teens Turn to Chatbots Over Professionals
Adolescents often feel more protected speaking to chatbots because they fear that parents, teachers, or other adults might breach their privacy. Many teens perceive AI interactions as non-judgmental and immediate, which fills a gap left by healthcare systems struggling with accessibility. These perceptions, however, may lead minors to rely on tools that pose ethical and legal risks.
General-purpose AI platforms collect large amounts of user data in ways that frequently conflict with transparency and consent obligations under GDPR. Such systems may store sensitive information on cloud servers without robust encryption, raising concerns about data security and misuse.
The Need for Specialized and Clinically Supervised AI Systems
Researchers argue that safer and more effective AI tools for mental health must be purpose-built using individualized datasets, semantic mapping mechanisms, real-time monitoring, and continuous clinical evaluation. Gkortsis highlights that these systems require design frameworks grounded in personalization, structured feedback, and medical supervision.
Specialized platforms must include cycles of real-time and retrospective feedback, involve mental health experts in model evaluation, and rely on controlled data environments. According to Gkortsis, such architectures enable greater accuracy while ensuring that systems stay within safe boundaries.
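As a rough illustration only, the sketch below shows how real-time and retrospective feedback cycles with a clinician-review step might fit together in such a purpose-built system. All names (SupportSession, flag thresholds, the keyword-based scorer) are hypothetical assumptions for the sake of the example and are not drawn from any existing platform or from Gkortsis's work.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

# Hypothetical sketch of a clinically supervised feedback loop.
# Names and logic are illustrative, not a real platform's API.

@dataclass
class Message:
    text: str
    timestamp: datetime = field(default_factory=datetime.utcnow)
    risk_score: float = 0.0              # filled in by real-time monitoring
    needs_clinician_review: bool = False

@dataclass
class SupportSession:
    user_id: str
    messages: List[Message] = field(default_factory=list)

    def add_message(self, text: str, risk_threshold: float = 0.7) -> Message:
        """Real-time cycle: score each incoming message and escalate if needed."""
        msg = Message(text=text, risk_score=self._score_risk(text))
        msg.needs_clinician_review = msg.risk_score >= risk_threshold
        self.messages.append(msg)
        return msg

    def retrospective_review(self) -> List[Message]:
        """Retrospective cycle: hand flagged messages to a mental health expert."""
        return [m for m in self.messages if m.needs_clinician_review]

    def _score_risk(self, text: str) -> float:
        # Placeholder: a purpose-built model trained on controlled,
        # individualized data would produce this score in practice.
        keywords = ("hopeless", "self-harm", "panic")
        return 1.0 if any(k in text.lower() for k in keywords) else 0.1

# Usage: a clinician periodically reviews everything the system escalated.
session = SupportSession(user_id="anonymised-teen-001")
session.add_message("I feel hopeless lately")
for flagged in session.retrospective_review():
    print("Escalate to clinician:", flagged.text, flagged.risk_score)
```

The only point of the sketch is structural: escalation to a human clinician is built into the loop from the start rather than bolted onto a generic conversational model.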
Regulatory Challenges and Proposed Safeguards
Regulators across the EU, including in Greece, have introduced frameworks such as the AI Act, yet experts acknowledge that enforcement remains difficult due to the proprietary nature of corporate AI technologies. Gkortsis believes that effective solutions must extend beyond regulation and adopt broader socio-political approaches.
Suggested measures include age filters, guardian verification, and dedicated protection rules for minors, who are more susceptible to persuasive digital interactions. These steps aim to reduce unmonitored access to AI systems, especially when young users may mistake chatbot responses for reliable guidance.
Balancing Innovation With Ethics
Integrating AI into mental health support requires more than technological advancement. It demands a cohesive approach that combines strong legislation, ethical oversight, robust safety standards, and social awareness. Experts conclude that while AI can serve as a supplementary tool in mental health care, it cannot replace human empathy, clinical judgment, or therapeutic relationships.






