
In our clinical and research work with adolescents, privacy is not a side issue; it is a developmental workspace where autonomy, identity, and boundaries are tested in real time. The American Psychological Association (APA) has emphasized that wanting privacy is normal and healthy in adolescence, especially when teens are exploring identity, relationships, and new emotions. When privacy is treated as inherently suspicious, we often intensify secrecy rather than strengthen judgment.
At the same time, we are seeing a shift in where privacy is practiced. Many teens now process questions and feelings in conversations with AI chatbots. As Dr. Joshua Goodman notes, “Conversations with AI chatbots can feel like a safe space” for adolescents to discuss things they would not feel safe bringing to parents or others. In practice, that “safe space” feeling often comes from immediacy, lower embarrassment, and the absence of visible adult worry. The interface can feel calm, nonreactive, and available, especially for teens who expect criticism or conflict.
From a therapist’s standpoint, this is precisely why we cannot treat chatbot use as a trivial trend or a purely parental concern. We have a professional responsibility to follow adolescents’ real help-seeking pathways, including the digital ones, and to make sure that what feels private is not mistaken for what is protected. If a teen is using a “kid chatbot” as their primary container for shame, identity questions, sexual concerns, trauma narratives, or self-harm thoughts, then chatbot use becomes clinically relevant behavior. It belongs in assessment, formulation, and ongoing treatment planning the same way we track sleep, peer relationships, substance use, and social media exposure.
The central clinical concern is simple. As Dr. Mary Alvord states it directly: “Perceived safety isn’t the same as real safety.” When a chatbot feels validating and discreet, teens may assume it is confidential in the way a therapeutic relationship is meant to be. But teens often use the word “private” to mean “no one will judge me,” “no one will get angry,” or “no one will make it awkward.” Data privacy asks different questions: who can access the content, whether it is stored, whether it is analyzed, and how it might be reused.
This is where our clinical role is not to frighten teens, but to clarify reality. Psychoeducation about privacy is not a parent lecture; it is a therapeutic intervention. Just as we help adolescents build mental models of consent, emotional regulation, risk, and relationships, we can help them build a mental model of digital disclosure. A chatbot can be emotionally comfortable while still being structurally nonconfidential. An interface can feel intimate without offering the protections we associate with private human relationships.
Importantly, the therapist’s task is not only to warn; it is to understand the function the chatbot serves. In sessions, we can ask: What platform do you use? When do you turn to it? What do you ask it for—comfort, advice, distraction, validation, “permission,” reassurance? How do you feel afterward—relieved, calmer, more distressed, more confused, more dependent on reassurance? These questions help us assess whether the chatbot is supporting coping or quietly strengthening avoidance, rumination, or isolation. They also help us identify a common pattern in adolescence: the teen is not necessarily choosing a chatbot over people; they may be choosing it over anticipated judgment.
Because we hold a duty of care, we also have a responsibility to name the “privacy gap” explicitly: many adolescents do not know what happens to their data. They may assume that because the interaction is one-on-one and “sounds caring,” it is private in the same way that a clinician’s office is private. Our role is to bring the invisible layer into view, gently, concretely, and repeatedly, so that teens can make informed choices rather than emotionally driven assumptions. This is especially important with younger users and “kid” versions of chatbots, where the design may inadvertently amplify the sense of safety while the teen’s understanding of data practices is still developing.
One effective approach is to translate privacy into a skill-building exercise rather than a moral warning. In practice, we can guide adolescents to sort what they might share with a chatbot into clear categories: low-stakes content (study help, hobbies, neutral questions), personally identifying details (full name, school, address, phone number, account handles), and high-sensitivity disclosures (sexual content, self-harm thoughts, trauma narratives, family conflict, screenshots of private messages, and medical or mental health details).
The goal is not to ban curiosity or punish disclosure. The goal is to build discernment: which topics can tolerate a lower-protection environment, and which topics deserve a safer channel—trusted adults, clinicians, or crisis supports when needed. From there, we can teach a practical rule that adolescents can actually use: the more specific and identifiable the details are, the higher the potential cost if the content is stored, misunderstood, shared, or later accessed.
We can also strengthen what developmental psychology already tells us adolescents are practicing: “future-self thinking.” We can ask: How would you feel if a parent, peer, school, or future partner saw this? Would you still want it written down somewhere? Could it be taken out of context? This anticipatory reflection is developmentally appropriate because foresight and risk appraisal are still consolidating in adolescence. Practiced regularly, it turns privacy from an abstract value into a usable decision-making habit.
An ethical reflection is unavoidable because the issue is not only teen behavior, but also system design and adult responsibility. Teens and caregivers deserve transparent explanations of what chatbots can and cannot promise, especially when the interface feels intimate. As clinicians, we cannot outsource clinical judgment, or the protection of vulnerable disclosures, to systems without human accountability and professional ethics. Yet we also have to hold complexity: some teens have limited access to responsive adults, and chatbots may function as a temporary bridge.
Our responsibility, then, is not to deliver blanket approval or bans. It is to stay close to adolescents’ real lives, to ask about chatbot use without shaming, to clarify the privacy gap with concrete guidance, and to protect the teen’s development of autonomy with informed decision-making rather than secrecy. In doing so, we support privacy in its fullest sense: not only the emotional experience of safety, but the informational protections that make safety real.
