Parental Controls & Teen AI Use: What Educators and Therapists Need to Know

Artificial intelligence is now woven deeply into adolescents' digital lives, and recent developments at Meta Platforms illustrate how this is prompting both excitement and concern. In October 2025, Meta announced new parental control features designed to address how teenagers interact with AI chatbots on Instagram, Messenger and Meta's AI platforms. The new settings will allow parents to disable one-on-one chats with AI characters, block specific AI characters entirely and gain insight into the broader topics their teens are discussing with AI.

For therapists and special educators, this shift has direct relevance. Teens are using AI chatbots not just as novelty apps but as everyday companions, confidants and conversational partners. Some research suggests more than 70% of teens have used AI companions and over half engage with them regularly. When we talk about adolescent social and emotional support, the digital dimension is increasingly part of the context.

Why does this matter? First, if a teen is forming a pattern of working through challenges, worries or social communication via an AI chatbot, it raises important questions: what kinds of messages are being reinforced? Are they building self-reliance, reducing peer or adult interaction, or reinforcing unhealthy patterns of isolation or dependency? For example, if a student with anxiety prefers sessions with a chatbot over adult-led discussion, we need to ask whether that substitution is helpful, neutral or potentially problematic.

Second, educators and therapists are well positioned to intervene proactively. Instead of assuming that family or school IT will handle AI safety, you can build routine questions and reflections into your sessions: "Do you talk with a chatbot or AI assistant? What do you talk about? How does that compare to talking to friends or me?" These questions open discussion about digital emotional habits and help students articulate their experiences with AI rather than absorb them silently.

Third, this is also a family and systems issue. When Meta allows parents to monitor and set boundaries around teen-AI interactions, it offers a starting point for family education around digital wellbeing. For therapists, hosting a brief parent session or sending a handout about AI chat habits, emotional regulation and healthy interaction might make sense. In special education settings, this becomes part of a broader plan: how does student digital use intersect with communication goals, social skills and the transition to adult life?

From a school or clinic perspective, planning might include coordination with the IT team, reviewing how chatbots or AI companions are used in the building, and considering whether certain students need scaffolded access or supervision. For example, students with social-communication challenges might use AI bots unsupervised, which introduces risk if the bot offers responses that are unhelpful, reinforcing or misleading.

It is also important to stay alert to ethics and developmental appropriateness. Meta's update comes after criticism that some of its bots engaged in romantic or inappropriate exchanges with minors. These new features, while helpful, are a minimum response, not a full solution. Vulnerable teens, especially those with special needs, may be at greater risk of substituting bot-based interaction for supportive adult engagement.

What can you do right now?
- Include a digital-AI question in your intake or IEP forms.
- Have a short conversation with families about chatbot use in the home.
- Offer resources or a brief session for parents and guardians about setting boundaries and promoting emotional safety in AI use.
- Look at students whose digital habits have changed dramatically (for example, more chatbot use, fewer peer interactions) and reflect on whether this coincides with changes in mood or engagement.
- Talk with your multidisciplinary team: how does AI interaction fit into the student's social-communication plan, mental health goals or peer-interaction targets?


Inclusive AI in Education: A New Frontier for Therapists and Special Educators

The promise of artificial intelligence in education has grown rapidly, and a new working paper from the Organisation for Economic Co-operation and Development (OECD), titled "Leveraging Artificial Intelligence to Support Students with Special Education Needs", provides a thoughtful overview of how AI can support learners, but with major caveats. At its core, the report argues that AI tools that adapt instruction, generate accessible content and provide support tailored to individual learners have real potential in special education, therapy and inclusive classrooms. For example, an AI system might generate simplified reading passages for students with dyslexia, create visual supports or scaffolds for students with language delays, or adapt pace and format for students with attention or processing challenges.

For therapists and special educators, this means opportunities to innovate. Instead of manually creating multiple versions of a lesson or communication script, generative AI can support you by producing varied, adapted material quickly. A speech therapist working with bilingual children might use an AI tool to produce scaffolded materials across languages; an occupational therapist might generate tactile-task instructions or interactive supports that match a student's profile.

However, the OECD report also emphasises that equity, access and human-centred design must accompany these possibilities. AI tools often rely on data trained on typical learners, not those with rare communication profiles or disabilities. Bias, representation gaps and access inequities (such as device availability or internet access) are real obstacles.

In practice, you might pilot an AI-driven tool in one classroom or one caseload, with clear parameters: what are the outcomes? How did students engage? Did the tool genuinely reduce the manual load? Did it increase learner autonomy or scaffold more meaningful interaction? Collecting student and family feedback, documenting changes in engagement, and reflecting on how the tool leveraged or altered human support is key.

Inclusive AI also demands that you remain the designer of the environment, not the tool. For example, when generating supports for a student with autism and a co-occurring language disorder, you might ask: did the AI produce an appropriate language level? Did it respect the cultural and linguistic context? Do hardware or internet constraints limit access at home or in school? These reflections help avoid inadvertently widening the gap for students who may have fewer resources.

From a professional development perspective, this is also a moment to embed AI literacy into your practice. As learners engage with AI tools, ask how their interaction changes: are they more independent? Did scaffolded tools reduce frustration? Are they using supports in ways you did not anticipate? Part of your emerging role may be to monitor and guide how students interact with AI as part of the learning ecology.

If you're exploring inclusive AI, consider creating a small pilot plan: select one AI tool, one student group and one outcome metric (e.g., reading comprehension, self-regulation, communication initiation). Run a baseline, implement the tool, reflect weekly, and refine prompts or scaffolded supports. Share findings with colleagues; these insights are vital for building sustainable AI-assisted practice. A minimal sketch of how you might track such a pilot follows below.
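To make the pilot idea concrete, here is a minimal sketch of how baseline and intervention data for one outcome metric might be logged and compared. The metric and weekly probe scores are hypothetical placeholders; treat this as a starting point for your own tracking, not a validated analysis.

```python
# Minimal sketch of pilot tracking for one outcome metric, assuming weekly
# probe scores (e.g., percent correct on communication-initiation probes).
# All scores below are hypothetical placeholders.
from statistics import mean

baseline_scores = [42, 45, 40, 44]          # weekly probes before the AI tool
intervention_scores = [48, 53, 55, 60, 62]  # weekly probes while using the tool

def summarize(label: str, scores: list) -> float:
    avg = mean(scores)
    print(f"{label}: n={len(scores)}, mean={avg:.1f}, "
          f"range={min(scores)}-{max(scores)}")
    return avg

baseline_avg = summarize("Baseline", baseline_scores)
intervention_avg = summarize("Intervention", intervention_scores)

# A simple change score; real pilots should also chart the trend week by week
# and pair the numbers with student and family feedback before drawing conclusions.
print(f"Change in mean: {intervention_avg - baseline_avg:+.1f} points")
```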


Echo-Teddy: An LLM-Powered Social Robot to Support Autistic Students

One of the most promising frontiers in AI and special education is the blending of robotics and language models to support social communication. A recent project, Echo-Teddy, is pushing into that space, and it offers lessons, possibilities, and cautions for therapists, educators, and clinicians working with neurodiverse populations.

What Is Echo-Teddy?

Echo-Teddy is a prototype social robot powered by a large language model (LLM), designed specifically to support students with autism spectrum disorder (ASD). The developers built it to provide adaptive, age-appropriate conversational interaction, combined with simple motor or gesture capabilities. Unlike chatbots tied to screens, Echo-Teddy occupies physical space, allowing learners to engage with it as a social companion in real time. The system is built on a modest robotics platform (think Raspberry Pi and basic actuators) and integrates speech, gestures, and conversational prompts in its early form.

In the initial phase, designers worked with expert feedback and developer reflections to refine how the robot interacts: customizing dialogue, adapting responses, and adjusting prompts to align with learner needs. They prioritized ethical design and age-appropriate interactions, emphasizing that the robot must not overstep or replace human relational connection. (A minimal sketch of this kind of guardrailed dialogue loop appears at the end of this piece.)

Why Echo-Teddy Matters for Practitioners

Echo-Teddy sits at the intersection of three trends many in your field are watching:

Key Considerations & Challenges

No innovation is without trade-offs. When considering Echo-Teddy's relevance or future deployment, keep these in mind:

What You Can Do Today (Pilot Ideas)

Looking Toward the Future

Echo-Teddy is an early model of what the future may hold: embodied AI companions in classrooms, therapy rooms, and home settings, offering low-stakes interaction, scaffolding, and rehearsal. As hardware becomes more affordable and language models become more capable, these robots may become part of an ecosystem: robots, human therapists, software tools, and digital supports working in tandem.

For practitioners, Echo-Teddy is a reminder: the future of social-communication support is not just virtual, it's embodied. It challenges us to think not only about what AI can do, but about how to integrate technology into human-centered care. When thoughtfully deployed, these innovations can expand our reach, reinforce learning, and provide clients with more opportunities to practice, experiment, and grow.
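The Echo-Teddy project has not published its code here, so the following is a generic sketch of how an LLM-backed companion robot might constrain its dialogue, using the OpenAI Python client as a stand-in backend. The system prompt, model choice, and trigger_gesture() helper are illustrative assumptions, not the project's actual design.

```python
# Generic sketch of a guardrailed LLM dialogue loop for a social robot.
# Assumptions: OpenAI's client as a stand-in LLM backend; the system prompt,
# model name, and trigger_gesture() are hypothetical, not Echo-Teddy's code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a friendly teddy-bear companion for an autistic student. "
    "Use short, concrete sentences and a warm tone. Never give medical "
    "advice, never discuss romance, and gently redirect unsafe topics "
    "to a trusted adult."
)

def trigger_gesture(name: str) -> None:
    # Placeholder: on real hardware this would drive servos (e.g., a wave).
    print(f"[gesture: {name}]")

history = [{"role": "system", "content": SYSTEM_PROMPT}]

while True:
    utterance = input("Student: ")
    if not utterance:
        break
    history.append({"role": "user", "content": utterance})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",   # illustrative model choice
        messages=history,
        max_tokens=80,         # keep turns short and age-appropriate
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    trigger_gesture("nod")
    print("Teddy:", reply)
```

In a real deployment, the guardrails would not live in the prompt alone; dialogue logging, human review, and hard content filters would sit around the loop.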


Evaluating AI Chatbots in Evidence-Based Health Advice: A 2025 Perspective

As artificial intelligence continues to permeate various sectors, its application in healthcare has garnered significant attention. A recent study published in Frontiers in Digital Health assessed the accuracy of several AI chatbots (ChatGPT-3.5, ChatGPT-4o, Microsoft Copilot, Google Gemini, Claude, and Perplexity) in providing evidence-based health advice, specifically focusing on lumbosacral radicular pain.

Study Overview

The study involved posing nine clinical questions related to lumbosacral radicular pain to the latest versions of the aforementioned AI chatbots. These questions were designed based on established clinical practice guidelines (CPGs). Each chatbot's responses were evaluated for consistency, reliability, and alignment with CPG recommendations. The evaluation process included assessing text consistency, intra- and inter-rater reliability, and the match rate with CPGs.

Key Findings

The study highlighted variability in the internal consistency of AI-generated responses, ranging from 26% to 68%. Intra-rater reliability was generally high, with ratings varying from "almost perfect" to "substantial." Inter-rater reliability also showed variability, ranging from "almost perfect" to "moderate." (A short sketch of how metrics like these are computed appears at the end of this piece.)

Implications for Healthcare Professionals

The findings underscore the necessity for healthcare professionals to exercise caution when considering AI-generated health advice. While AI chatbots can serve as supplementary tools, they should not replace professional judgment. The variability in accuracy and adherence to clinical guidelines suggests that AI-generated recommendations may not always be reliable.

For allied health professionals, including speech-language pathologists, occupational therapists, and physical therapists, AI chatbots can provide valuable information. However, it is crucial to critically evaluate AI-generated content and cross-reference it with current clinical guidelines and personal expertise.

Conclusion

While AI chatbots have the potential to enhance healthcare delivery by providing quick access to information, their current limitations in aligning with evidence-based guidelines necessitate a cautious approach. Healthcare professionals should leverage AI tools to augment their practice, ensuring that AI-generated advice is used responsibly and in conjunction with clinical expertise.
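To make the study's metrics concrete, here is a minimal sketch of how a CPG match rate and an inter-rater agreement statistic (Cohen's kappa) can be computed. The rater judgments below are hypothetical example data, not the study's actual ratings.

```python
# Minimal sketch: computing a CPG match rate and Cohen's kappa for two raters.
# The nine judgments below are hypothetical, not the published study data.
from collections import Counter

# Rater judgments of 9 chatbot answers against guideline recommendations:
# "match", "partial", or "mismatch".
rater_a = ["match", "match", "mismatch", "match", "partial",
           "match", "mismatch", "match", "match"]
rater_b = ["match", "partial", "mismatch", "match", "partial",
           "match", "match", "match", "match"]

match_rate = rater_a.count("match") / len(rater_a)
print(f"Rater A match rate with CPGs: {match_rate:.0%}")

def cohens_kappa(a, b):
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n           # observed agreement
    ca, cb = Counter(a), Counter(b)
    labels = set(a) | set(b)
    p_e = sum((ca[l] / n) * (cb[l] / n) for l in labels)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

print(f"Cohen's kappa (inter-rater): {cohens_kappa(rater_a, rater_b):.2f}")
```

Kappa values near 1.0 correspond to the "almost perfect" band the study reports; values around 0.4-0.6 correspond to "moderate."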


Why AI Integration Is Becoming Vital for Every Therapist & Educator

The last few years saw speculative discussions about AI in healthcare and education. Now, with recent launches, academic papers, and platform updates, AI isn't optional: it's becoming part of best practice. Whether you are a speech therapist, occupational therapist, physical therapist, educator, or any combination thereof, understanding these developments is essential. Let's explore why AI integration is no longer just interesting, but vital, and how professionals can adapt.

Key trends pushing AI toward the center

What this means for different roles

Speech-Language Pathologists (SLPs): AI tools like UTI-LLM can give detailed articulatory feedback; wearable throat sensors and platforms like SpeechOn let clients practice more often; repetitive tasks (progress tracking, client assignments) may be reduced.

Physical Therapists & Occupational Therapists: AI motion capture helps assess posture and movement and supports remote monitoring; platforms like Phoenix guide exercise and give conversational prompts; this reduces misalignment in home practice and improves safety and compliance.

Educators: AI tools (Gemini, Microsoft Copilot, LogicBalls) allow content adaptation, real-time feedback, and personalized learning; students at risk can potentially be identified earlier; AI literacy becomes part of teaching; teacher overload is reduced.

Admins / Clinic Managers: Need to select and validate AI tools; ensure integration with EMRs or school management systems; attend to training, privacy compliance, and choosing tools that are accessible.

Challenges & things to watch out for

What you can do to stay ahead

Conclusion

AI is no longer just a future horizon. Recent tools like UTI-LLM, Intelligent Throat wearables, SpeechOn, AI assistants in PT, and educator toolkits from Google and Microsoft show that cross-disciplinary AI integration is underway. For therapists of all kinds and educators, the opportunity is large: greater precision, access, efficiency, and innovation. But with that opportunity comes responsibility: ensuring quality, empathy, and equity. If you're a therapist or educator, the next few years will likely involve deciding where and how much AI fits into your work. Keeping informed, trying out tools, and maintaining human-centered practice will help you make those decisions well.


ChatGPT Pulse: A New Era of AI Support for Therapists and Educators

Artificial intelligence is evolving from being reactive to becoming anticipatory. The new ChatGPT Pulse feature takes this step forward by turning AI into a proactive assistant. Pulse works in the background, compiling updates overnight and presenting them as clear, visual cards each morning. Instead of wasting time filtering through countless emails, research feeds, or social media threads, professionals can start the day with a snapshot of what matters most in their field.

For those working in speech therapy, occupational therapy, physical therapy, psychology, and education, this innovation offers a way to stay on top of rapid changes in research, tools, and policy without adding hours to an already full workload. Imagine opening your device in the morning and seeing: "New study on motor learning in stroke rehab," "Trial results for an articulation feedback device," or "Updates on inclusive classroom technology." Pulse is designed to anticipate these needs and bring information directly to you.

Beyond Updates: In-Chat Purchases for Real-World Practice

Alongside Pulse, OpenAI is piloting a feature that could reshape professional workflows: in-chat purchasing. Traditionally, after identifying a new strategy or tool, a therapist or teacher would have to search online, compare products, and order them through third-party platforms. This can create delays between recognizing a need and addressing it. With in-chat purchases, that process becomes seamless. If you're discussing sensory supports with ChatGPT, it could not only suggest tools like weighted vests or fidget kits but also give you the option to purchase them directly within the conversation. For physical therapists, this could mean adaptive bands or balance boards; for speech therapists, visual cue cards or AAC devices; for educators, classroom visuals or accessibility supports. This direct integration reduces barriers, turning ideas into action much more quickly. It also opens the door to recommending trusted resources to families or caregivers without overwhelming them with too many choices.

How These Features Support Practice

The integration of Pulse and purchasing into ChatGPT has the potential to reshape the daily life of professionals across therapy and education. Some of the most promising benefits include:

The Importance for Therapy & Education Practice

Why do these updates matter now? The pace of change in therapy and education is faster than ever. New technologies, policy shifts, and intervention models emerge constantly, and professionals risk falling behind if they cannot keep up. Pulse acts as a filter, ensuring the most relevant and practical information rises to the top. Equally, the ability to act on that information immediately, by accessing or purchasing tools in-chat, closes a long-standing gap between knowledge and practice. Instead of hearing about a tool at a conference and waiting weeks to trial it, professionals can integrate new resources into sessions almost instantly. This rapid cycle of learn → apply → evaluate enhances practice and can improve outcomes. For therapists and educators working with vulnerable populations, such as children with special needs, stroke survivors, or individuals with learning differences, this immediacy can be transformative. It means quicker access to interventions, faster adaptation of strategies, and more personalized care.

Cautions and Responsible Use

Despite the promise, it's vital to approach these features thoughtfully. Pulse curates content, but not everything it suggests will be clinically relevant or reliable. Professionals must continue exercising judgment and verifying evidence. Similarly, while in-chat purchases are convenient, critical evaluation of product quality, evidence base, and client appropriateness is still required. There are also broader considerations:

Looking Ahead

The introduction of ChatGPT Pulse and in-chat purchasing represents a new stage in how AI can support therapists, teachers, and health professionals. Instead of simply answering questions, AI is becoming a partner in staying informed, sourcing materials, and applying interventions quickly. This shift highlights a larger trend: the move toward integrated, proactive AI assistants that blend knowledge, tools, and actions in a single space. For those in therapy and education, engaging with these features early means shaping how they evolve, helping ensure they become useful, ethical, and empowering rather than distracting or overwhelming. The future of practice will increasingly depend on tools that reduce burden, enhance access, and translate insights into action. ChatGPT Pulse offers an early glimpse into that future.

👉 Explore more about ChatGPT Pulse here: OpenAI Pulse announcement | TechRadar on Pulse | Tom's Guide on in-chat shopping
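Pulse's internals are proprietary, so the following is only a loose analogue of the "morning digest" idea: a tiny script that scores incoming item titles against a clinician's topic keywords. The items and keyword list are hypothetical; this is not how Pulse actually works.

```python
# Loose analogue of proactive curation (NOT Pulse's actual mechanism):
# score incoming item titles against a clinician's topic keywords and
# print a short "morning digest." Items and keywords are hypothetical.
INTERESTS = {"stroke", "rehab", "articulation", "aac", "inclusive", "dyslexia"}

incoming_items = [
    "New study on motor learning in stroke rehab",
    "Trial results for an articulation feedback device",
    "Celebrity gadget rumors roundup",
    "Updates on inclusive classroom technology",
]

def relevance(title: str) -> int:
    words = {w.strip(".,").lower() for w in title.split()}
    return len(words & INTERESTS)  # count of keyword hits

digest = sorted(
    (item for item in incoming_items if relevance(item) > 0),
    key=relevance,
    reverse=True,
)

print("Morning digest:")
for item in digest:
    print(" -", item)
```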


Google Research “Learn Your Way” – Textbooks That Teach Themselves (For Students, Researchers, and Learners with Dyslexia)

Textbooks and PDFs are powerful tools, but they're also rigid. Many learners skim, forget, or get overwhelmed by dense pages of text. Now imagine if those same materials could adapt to you. That's what Google Research is building with Learn Your Way, a system that transforms PDFs and textbooks into interactive, adaptive lessons.

From Static Reading to Adaptive Learning

Upload a textbook or article, and Learn Your Way reshapes it into a dynamic learning experience. Instead of passively reading, you can:

The result? Content feels less like a wall of words and more like a responsive tutor.

The Evidence: Stronger Recall

Google's first efficacy study was striking:

Why This Matters for Researchers

Academics and professionals face the same problem as students: too much reading, too little time. Learn Your Way could transform:

For early-career researchers, it could act as a study scaffold; for experienced academics, a tool to accelerate comprehension across new fields.

Why This Matters for Individuals with Dyslexia

Traditional textbooks are especially challenging for people with dyslexia, where dense text, long paragraphs, and lack of scaffolding can cause fatigue and frustration. Learn Your Way offers several benefits:

This doesn't replace structured literacy interventions, but it creates a more accessible environment for everyday studying, professional training, or even research reading.

The Bigger Picture

Learn Your Way moves education and research from "read and memorize" to "engage and adapt."

The Takeaway

Education tools are evolving. Textbooks are no longer static; they're starting to teach back. Whether you're a student studying for exams, a researcher scanning through dozens of PDFs, or a learner with dyslexia navigating dense reading, Learn Your Way shows how adaptive AI can make knowledge not only more efficient but also more inclusive.


OpenAI Just Tested Whether AI Can Do Your Job (Spoiler: It’s Getting Close)

Artificial intelligence (AI) is no longer a futuristic idea; it is shaping the way professionals in every field approach their work. From engineers designing mining equipment to nurses writing care plans, AI is being tested against the real demands of professional practice. And now, researchers are asking a bold question: can AI do your job? OpenAI's latest study doesn't give a simple yes or no. Instead, it paints a much more nuanced picture: AI is not yet a full replacement for human professionals, but it's edging surprisingly close in some areas. For us as therapists, this raises both opportunities and challenges that are worth exploring.

The Benchmark: Measuring AI Against Professionals

To answer this question, OpenAI created a new framework called GDPval. Think of it as a "skills exam" for AI systems, but instead of testing algebra or trivia, the exam covered real-world professional tasks.

The Results: Fast, Cheap, and Sometimes Surprisingly Good

The study revealed a mix of strengths and weaknesses: when human experts compared AI outputs to human-created work, they still preferred the human versions overall. Yet the combination of AI-generated drafts reviewed and refined by professionals turned out to be more efficient than either working alone. (A minimal sketch of this kind of blind expert comparison appears at the end of this piece.)

Why This Matters for Therapists

So, what does this mean for us in speech therapy, psychology, occupational therapy, and related fields? AI is not going to replace therapists any time soon, but it is already shifting how we can work. Here are some examples of how this might apply in our daily practice:

But here's the critical caveat: AI's work often looks polished on the surface but may contain subtle errors or missing details. Harvard Business Review recently described this problem as "workslop": content that seems professional but is incomplete or incorrect. For therapists, passing along unchecked "workslop" could mean inaccurate advice to families, poorly designed therapy tasks, or even harm to clinical trust. This is where our professional expertise becomes more important than ever.

The Therapist's Role in the AI Era

AI should be thought of as a bright but clumsy intern. That means our role doesn't diminish; it evolves. Therapists who supervise, refine, and direct AI outputs will be able to reclaim more time for the heart of therapy: building relationships, delivering personalized interventions, and making evidence-based decisions. Instead of drowning in paperwork, we could spend more energy face-to-face with clients, coaching families, or innovating in therapy delivery.

Looking Ahead

Some AI experts predict that by 2026, AI may be able to match humans in most economically valuable tasks. While this sounds alarming, it doesn't mean therapists will vanish from the workforce. Instead, it means that those who learn to integrate AI effectively will thrive, while those who resist may struggle to keep up. The takeaway for us is clear:

Final Thought

As therapists, our work is built on empathy, creativity, and nuanced understanding, qualities no AI can replicate. But AI can free us from repetitive tasks, give us faster access to resources, and help us innovate in service delivery. The future of therapy is not AI instead of us, but AI alongside us. And that collaboration, if used wisely, can give us more time, more tools, and ultimately, more impact for the people we serve.
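To illustrate the grading setup behind results like GDPval's, here is a minimal sketch of how blind pairwise expert preferences can be tallied. The verdicts are hypothetical example data, not OpenAI's published results.

```python
# Minimal sketch of scoring a blind pairwise comparison in the spirit of
# GDPval: experts see anonymized AI and human deliverables for each task
# and record a preference. All verdicts below are hypothetical.
from collections import Counter

# One record per task: which deliverable the blinded expert preferred.
expert_verdicts = [
    "human", "ai", "human", "tie", "ai",
    "human", "human", "ai", "human", "tie",
]

counts = Counter(expert_verdicts)
n = len(expert_verdicts)
print(f"Human preferred: {counts['human'] / n:.0%}")
print(f"AI preferred or tied: {(counts['ai'] + counts['tie']) / n:.0%}")
```

Reporting "wins or ties" alongside outright wins, as above, mirrors how such benchmark results are commonly summarized.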


“PhD-Intelligences” Is Nonsense – What Demis Hassabis’ Statement Means for AI in Research and Healthcare

In a recent interview, Demis Hassabis, CEO of Google DeepMind, dismissed claims that today's AI models possess "PhD-level intelligence." His message was clear: while AI can sometimes match or outperform humans in narrow tasks, it is far from demonstrating general intelligence. Calling these models "PhD-intelligences," he argues, is misleading and risks creating unrealistic expectations for what AI can do in fields like healthcare and research.

Hassabis notes that models such as Gemini or GPT-style systems show "pockets of PhD-level performance" in areas like protein folding, medical imaging, or advanced problem-solving. However, these systems also fail at basic reasoning tasks, cannot learn continuously, and often make elementary mistakes that no human researcher would. According to Hassabis, true Artificial General Intelligence (AGI), a system that can learn flexibly across domains, remains 5–10 years away.

What This Means for Research and Healthcare

AI's current limitations don't mean it has no place in our work. Instead, they point to how we should use it responsibly and strategically.

Practical Takeaways:

Example Applications by Discipline

Healthcare Research: current benefits include protein structure prediction (e.g., AlphaFold), drug discovery pipelines, and imaging diagnostics; limitations and risks include errors in generalization, opaque reasoning, and bias in data.

Therapy & Psychology: current benefits include drafting therapy materials, generating behavior scenarios, and transcribing sessions; limitations and risks include over-reliance and errors in sensitive contexts.

Special Education: current benefits include differentiated content creation, progress tracking, and accessible learning supports; limitations and risks include potentially inaccurate recommendations without context.

Looking Ahead

Even without AGI, today's AI tools can dramatically accelerate workflows and augment human expertise. But the caution from Hassabis reminds us: AI is not a replacement for human intelligence; it is a partner in progress. As researchers and clinicians, our responsibility is two-fold:

In our next editions, we'll explore how to integrate AI into research more concretely, with examples from therapy and healthcare studies.


Meta’s New Glasses Just Changed Everything – What It Means for Therapists and Patients

Meta's latest innovation, the Ray-Ban Display AI glasses paired with a neural EMG wristband, is creating waves well beyond the tech world. For years, smart glasses promised more than they delivered, but this time the combination of a heads-up lens display, AI integration, and subtle gesture control suggests wearables are stepping into a new era. For therapists and patients, the potential is huge.

At the core of the innovation is a full-color display inside the right lens, giving users discreet, real-time access to information without reaching for a phone. Paired with the Meta Neural Band, which detects wrist and finger movements via electromyography (EMG), the glasses allow hands-free control, ideal for users with mobility limitations or professionals needing quick interactions. Live captions, translations, messaging, and navigation can now appear directly in your line of sight. (A toy illustration of EMG-style gesture detection appears at the end of this piece.)

Key Features Therapists Should Know:

Clinical Applications and Patient Benefits

Speech & Language Therapy: live captions for hearing-impaired clients, on-screen prompts for language tasks, and gesture-based engagement. Considerations: display size and readability, learning curve, potential distraction.

Occupational Therapy: EMG control supports limited dexterity; interactive visual prompts help with task sequencing. Considerations: calibration is required, and control is less effective with tremors or severe motor impairments.

Psychomotor Therapy: real-time movement guidance and visual cues for coordination exercises. Considerations: exercises must stay embodied, not screen-bound.

Psychology & Special Education: personalized reminders, translations, and discreet coping prompts can increase independence. Considerations: privacy, data security, and the risk of over-reliance on prompts.

What to Watch For

The Funny Part: during the demo, the glasses completely malfunctioned; the captions started translating "Hello" into what looked like Morse code! But here's the twist: even in chaos, the potential for therapy applications shone through. Imagine hands-free prompts during speech therapy, or gesture-controlled task sequences for occupational therapy. This is just the beginning.

The bigger picture? These glasses mark a step toward assistive augmented reality. As battery life improves and features like lip-reading captions, real-time therapy overlays, and telehealth integration emerge, therapists could gain a whole new medium for intervention. Awareness now is key: understanding what these devices can and cannot do will help us prepare for the future. Stay tuned for our next edition, where we'll dive deeper into practical ways to integrate wearables into therapy sessions.
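Meta has not published how the Neural Band decodes gestures, so purely as a toy illustration of the underlying EMG idea, here is a sketch that rectifies and smooths a simulated muscle signal and flags a "pinch" when the envelope crosses a threshold. All values and the detection rule are hypothetical.

```python
# Toy illustration of EMG-style gesture detection (NOT Meta's algorithm):
# rectify a raw signal, smooth it with a moving average, and flag a "pinch"
# whenever the envelope crosses a threshold. All values are simulated.
samples = [0.02, -0.03, 0.05, 0.9, -1.1, 1.2, -0.8, 0.07, -0.02, 0.03]

WINDOW = 3        # moving-average width, in samples
THRESHOLD = 0.5   # envelope level that counts as a gesture

envelope = []
for i in range(len(samples)):
    window = [abs(x) for x in samples[max(0, i - WINDOW + 1): i + 1]]
    envelope.append(sum(window) / len(window))  # rectified moving average

# A rising edge above the threshold counts as one detected gesture.
gestures = [
    i for i in range(1, len(envelope))
    if envelope[i] >= THRESHOLD and envelope[i - 1] < THRESHOLD
]
print(f"Detected {len(gestures)} pinch event(s) at sample indices {gestures}")
```

Real wristband decoding involves multi-channel sensors and learned classifiers rather than a single threshold, but the rectify-smooth-detect pipeline above is the classic starting point for surface EMG.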