The Integration of Artificial Intelligence-Powered Psychotherapy Chatbots in Pediatric Care: Scaffold or Substitute?
Abstract
In April 2024, the United States Food and Drug Administration approved the first digital application to treat major depression in adults aged 22 and older.1 The app—Rejoyn—joins a growing list of artificial intelligence (AI)-based platforms designed to treat mental illness.2 These tools range from chatbots and gamified cognitive behavioral therapy (CBT) to systems that emulate human therapists. Given the significant barriers to accessing mental health care, these technologies have been pitched as a means of addressing the current mental health crisis among youth.3 Although a growing literature explores the ethical implications of AI in mental health care, its use in pediatric populations remains underexplored. This commentary explores the use of large language model (LLM)-based digital applications in pediatric mental health
care, with the goal of beginning to map this terrain. LLMs are a form of AI designed to generate human-like responses
to natural language prompts. These models use deep learning techniques, specifically transformer architectures, to analyze
patterns in language and generate coherent, contextually relevant responses. Throughout, we refer to LLM-based “conversational
agents” interchangeably as “therapy bots” or “AI chatbots”: systems that use LLMs, or other advanced natural
language processing techniques, to simulate the utterances of a human therapist as part of a conversation with a
pediatric patient.