
Artificial intelligence is now woven deeply into adolescents’ digital lives, and recent developments at Meta Platforms illustrate how this is prompting both excitement and concern. In October 2025, Meta announced new parental control features designed to address how teenagers interact with AI chatbots on Instagram, Messenger and Meta’s AI platforms. These new settings will allow parents to disable one-on-one chats with AI characters, block specific AI characters entirely and gain insights into the broader topics their teens are discussing with AI.
For therapists and special educators, this shift has direct relevance. Teens are using AI chatbots not just as novelty apps, but as everyday companions, confidants and conversational partners. Some research suggests that more than 70% of teens have used AI companions and that over half use them regularly. That means when we talk about adolescent social and emotional support, the digital dimension is increasingly part of the context.
Why does this matter? First, if a teen is forming a pattern of working through challenges, worries or social-communication difficulties via an AI chatbot, it raises important questions: what kinds of messages are being reinforced? Is this habit building self-reliance, reducing peer or adult interaction, or deepening unhealthy patterns of isolation or dependency? For example, if a student with anxiety prefers conversations with a chatbot over adult-led discussion, we need to ask whether that substitution is helpful, neutral or potentially problematic.
Second, educators and therapists are well positioned to intervene proactively. Instead of assuming that families or school IT will handle AI safety, you can build routine questions and reflections into your sessions: “Do you talk with a chatbot or AI assistant? What do you talk about? How does that compare to talking to friends or to me?” These questions open discussion about digital emotional habits and help students articulate their experiences with AI rather than silently consuming them.
Third, this is also a family and systems issue. When Meta allows parents to monitor and set boundaries around teen-AI interactions, it offers a starting point for family education around digital wellbeing. For therapists, hosting a brief parent session or sending home a handout about AI chat habits, emotional regulation and healthy interaction might make sense. In special education settings, this becomes part of a broader plan: how does a student's digital use intersect with communication goals, social skills and the transition to adult life?
From a school or clinic perspective, planning might include coordinating with the IT team, reviewing how chatbots or AI companions are used in the building, and considering whether certain students need scaffolded access or supervision. For example, students with social-communication challenges may use AI bots unsupervised, which introduces risk if the bot offers responses that are unhelpful, misleading or that reinforce maladaptive patterns.
It’s also important to stay alert to ethics and developmental appropriateness. Meta’s update comes after criticism that some of its bots engaged in romantic or inappropriate exchanges with minors. These new features—while helpful—are a minimum response, not a full solution. Vulnerable teens, especially those with special needs, may be at greater risk of substituting bot-based interaction for supportive adult engagement.
What can you do right now? Consider including a digital-AI question in your intake or IEP forms. Have a short conversation with families about chatbot use at home. Offer resources or a brief session for parents and guardians about setting boundaries and promoting emotional safety in AI use. Look for students whose digital habits have changed dramatically (for example, more chatbot use and fewer peer interactions) and reflect on whether this coincides with changes in mood or engagement. Talk with your multidisciplinary team about how AI interaction fits into the student's social-communication plan, mental health goals or peer-interaction targets.