

Parental Controls & Teen AI Use: What Educators and Therapists Need to Know

Artificial intelligence is now woven deeply into adolescents' digital lives, and recent developments at Meta Platforms illustrate how this is prompting both excitement and concern. In October 2025, Meta announced new parental control features designed to address how teenagers interact with AI chatbots on Instagram, Messenger and Meta's AI platforms. The new settings will allow parents to disable one-on-one chats with AI characters, block specific AI characters entirely, and gain insight into the broader topics their teens discuss with AI.

For therapists and special educators, this kind of shift has direct relevance. Teens are using AI chatbots not just as novelty apps, but as everyday companions, confidants and conversational partners. Some research suggests that more than 70% of teens have used AI companions and that over half engage with them regularly. When we talk about adolescent social and emotional support, the digital dimension is increasingly part of the context.

Why does this matter? First, if a teen is forming a pattern of working through challenges, worries or social communication via an AI chatbot, it raises important questions: what kind of messages are being reinforced? Are these interactions increasing self-reliance, reducing peer or adult interaction, or reinforcing unhealthy patterns of isolation or dependency? For example, if a student with anxiety prefers sessions with a chatbot over adult-led discussion, we need to ask whether that substitution is helpful, neutral or potentially problematic.

Second, educators and therapists are well positioned to intervene proactively. Instead of assuming that family or school IT will handle AI safety, you can build routine questions and reflections into your sessions: "Do you talk with a chatbot or AI assistant? What do you talk about? How does that compare to talking to friends or me?" These questions open discussion about digital emotional habits and help students articulate their experiences with AI rather than silently consuming them.

Third, this is also a family and systems issue. When Meta allows parents to monitor and set boundaries around teen-AI interactions, it offers a starting point for family education around digital wellbeing. For therapists, hosting a brief parent session or sending home a handout about AI chat habits, emotional regulation and healthy interaction might make sense. In special education settings, this becomes part of a broader plan: how does student digital use intersect with communication goals, social skills and the transition to adult life?

From a school or clinic perspective, planning might include coordinating with the IT team, reviewing how chatbots or AI companions are used in the building, and considering whether certain students need scaffolded access or supervision. For example, students with social-communication challenges might use AI bots unsupervised, which introduces risk if the bot offers responses that are unhelpful, misleading or that reinforce problematic patterns.

It is also important to stay alert to ethics and developmental appropriateness. Meta's update comes after criticism that some of its bots engaged in romantic or inappropriate exchanges with minors. These new features, while helpful, are a minimum response, not a full solution. Vulnerable teens, especially those with special needs, may be at greater risk of substituting bot-based interaction for supportive adult engagement.

What can you do right now?
- Consider including a digital-AI question in your intake or IEP forms.
- Have a short conversation with families about chatbot use in the home.
- Offer resources or a brief session for parents and guardians about setting boundaries and promoting emotional safety in AI use.
- Look at students whose digital habits have changed dramatically (for example, more chatbot use, fewer peer interactions) and reflect on whether this coincides with changes in mood or engagement.
- Talk with your multidisciplinary team: how does AI interaction fit into the student's social-communication plan, mental health goals or peer-interaction targets?


Inclusive AI in Education: A New Frontier for Therapists and Special Educators

The promise of artificial intelligence in education has grown rapidly, and a new working paper from the Organisation for Economic Co-operation and Development (OECD), "Leveraging Artificial Intelligence to Support Students with Special Education Needs", provides a thoughtful overview of how AI can support learners, though with major caveats.

At its core, the report argues that AI tools which adapt instruction, generate accessible content and provide support tailored to individual learners have real potential in special education, therapy and inclusive classrooms. For example, an AI system might generate simplified reading passages for students with dyslexia, create visual supports or scaffolds for students with language delays, or adapt pace and format for students with attention or processing challenges.

For therapists and special educators, this means opportunities to innovate. Instead of manually creating multiple versions of a lesson or communication script, generative AI can produce varied, adapted material quickly. A speech therapist working with bilingual children might use an AI tool to produce scaffolded materials across languages; an occupational therapist might generate tactile-task instructions or interactive supports that match a student's profile.

However, the OECD report also emphasises that equity, access and human-centred design must accompany these possibilities. AI tools often rely on data trained on typical learners, not those with rare communication profiles or disabilities. Bias, representation gaps and access inequities (such as device availability or internet access) are real obstacles.

In practice, you might pilot an AI-driven tool in one classroom or one caseload, with clear parameters: What are the outcomes? How did students engage? Did the tool genuinely reduce the manual load? Did it increase learner autonomy or scaffold more meaningful interaction? Collecting student and family feedback, documenting changes in engagement, and reflecting on how the tool leveraged or altered human support is key.

Inclusive AI also demands that you remain the designer of the environment, not the tool. For example, when generating supports for a student with autism and a co-occurring language disorder, you might ask: Did the AI produce an appropriate language level? Did it respect the cultural and linguistic context? Do hardware or internet constraints limit access at home or in school? These reflections help avoid inadvertently widening the gap for students who have fewer resources.

From a professional development perspective, this is also a moment to embed AI literacy into your practice. As learners engage with AI tools, ask how their interaction changes: Are they more independent? Did scaffolded tools reduce frustration? Are they using supports in ways you did not anticipate? Part of your emerging role may be to monitor and guide how students interact with AI as part of the learning ecology.

If you're exploring inclusive AI, consider creating a small pilot plan: select one AI tool, one student group and one outcome metric (e.g., reading comprehension, self-regulation, communication initiation). Run a baseline, implement the tool, reflect weekly, and refine prompts or scaffolded supports. Share findings with colleagues; these insights are vital for building sustainable AI-assisted practice.
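To make the adaptation idea concrete, here is a minimal sketch of asking a general-purpose LLM to rewrite a passage at a simpler reading level. The OECD paper does not prescribe any particular tool; the library, model name and prompt wording below are assumptions for illustration, and any output would still need your review for accuracy, cultural fit and developmental appropriateness.

```python
# Minimal sketch: asking a general-purpose LLM to simplify a passage for a
# struggling reader. The model name and prompt wording are illustrative
# assumptions, not recommendations from the OECD paper.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def simplify_passage(passage: str, grade_level: int = 3) -> str:
    """Return a simplified version of `passage` aimed at the given grade level."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any capable chat model could be used
        messages=[
            {"role": "system",
             "content": "You rewrite school texts in short sentences and common "
                        "words, keeping the original meaning."},
            {"role": "user",
             "content": f"Rewrite this for a grade {grade_level} reader:\n\n{passage}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    original = ("Photosynthesis is the process by which green plants convert "
                "sunlight into chemical energy.")
    print(simplify_passage(original, grade_level=3))
```

A small pilot could pair this kind of generated material with a baseline measure (e.g., comprehension questions on the unmodified text) so you can judge whether the adaptation actually helped.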


Echo-Teddy: An LLM-Powered Social Robot to Support Autistic Students

One of the most promising frontiers in AI and special education is the blending of robotics and language models to support social communication. A recent project, Echo-Teddy, is pushing into that space, and it offers lessons, possibilities, and cautions for therapists, educators, and clinicians working with neurodiverse populations.

What Is Echo-Teddy?

Echo-Teddy is a prototype social robot powered by a large language model (LLM), designed specifically to support students with autism spectrum disorder (ASD). The developers built it to provide adaptive, age-appropriate conversational interaction, combined with simple motor or gesture capabilities. Unlike chatbots tied to screens, Echo-Teddy occupies physical space, allowing learners to engage with it as a social companion in real time. The system is built on a modest robotics platform (think Raspberry Pi and basic actuators) and integrates speech, gestures, and conversational prompts in its early form.

In the initial phase, designers worked with expert feedback and developer reflections to refine how the robot interacts: customizing dialogue, adapting responses, and adjusting prompts to align with learner needs. They prioritized ethical design and age-appropriate interactions, emphasizing that the robot must not overstep or replace human relational connection.

Why Echo-Teddy Matters for Practitioners

Echo-Teddy sits at the intersection of three trends many in your field are watching:

Key Considerations & Challenges

No innovation is without trade-offs. When considering Echo-Teddy's relevance or future deployment, keep these in mind:

What You Can Do Today (Pilot Ideas)

Looking Toward the Future

Echo-Teddy is an early model of what the future may hold: embodied AI companions in classrooms, therapy rooms, and home settings, offering low-stakes interaction, scaffolding, and rehearsal. As hardware becomes more affordable and language models become more capable, these robots may become part of an ecosystem: robots, human therapists, software tools, and digital supports working in tandem.

For your audience, Echo-Teddy is a reminder: the future of social-communication support is not just virtual but embodied. It challenges us to think not only about what AI can do, but about how to integrate technology into human-centered care. When thoughtfully deployed, these innovations can expand our reach, reinforce learning, and provide clients with more opportunities to practice, experiment, and grow.
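For readers curious about the plumbing, the sketch below shows in broad strokes how an LLM-driven social-robot loop can be wired together. This is not Echo-Teddy's actual code: the model choice, system prompt, and the listen/speak/gesture helpers are hypothetical stand-ins for whatever speech-recognition, text-to-speech and actuator stack a given project uses.

```python
# Illustrative sketch (not Echo-Teddy's implementation): the basic turn-taking
# loop an LLM-driven social robot might run on a Raspberry Pi-class device.
# listen(), speak() and gesture() are hypothetical stand-ins for real
# speech-to-text, text-to-speech and actuator libraries.
from openai import OpenAI

client = OpenAI()  # assumes an API key in the environment

SYSTEM_PROMPT = (
    "You are a friendly teddy-bear companion for an autistic student. "
    "Use short, concrete sentences, one idea at a time, and never give "
    "medical or therapeutic advice."
)

def robot_turn(history: list[dict], child_utterance: str) -> str:
    """Send the conversation so far to the LLM and return the robot's reply."""
    history.append({"role": "user", "content": child_utterance})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; Echo-Teddy's own model is not specified here
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# Main loop, shown as pseudocode because the hardware helpers are hypothetical:
# while session_active:
#     text = listen()              # microphone -> speech-to-text
#     reply = robot_turn(history, text)
#     speak(reply)                 # text-to-speech
#     gesture("nod")               # simple actuator cue tied to the reply
```

The important design point, echoed by the Echo-Teddy team, is that the adult practitioner still defines the prompt, supervises the interaction, and decides when the robot is (and is not) appropriate.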


Evaluating AI Chatbots in Evidence-Based Health Advice: A 2025 Perspective

As artificial intelligence continues to permeate various sectors, its application in healthcare has garnered significant attention. A recent study published in Frontiers in Digital Health assessed the accuracy of several AI chatbots (ChatGPT-3.5, ChatGPT-4o, Microsoft Copilot, Google Gemini, Claude, and Perplexity) in providing evidence-based health advice, specifically focusing on lumbosacral radicular pain.

Study Overview

The study involved posing nine clinical questions related to lumbosacral radicular pain to the latest versions of the aforementioned AI chatbots. The questions were designed based on established clinical practice guidelines (CPGs). Each chatbot's responses were evaluated for consistency, reliability, and alignment with CPG recommendations. The evaluation process included assessing text consistency, intra- and inter-rater reliability, and the match rate with CPGs.

Key Findings

The study highlighted variability in the internal consistency of AI-generated responses, ranging from 26% to 68%. Intra-rater reliability was generally high, with ratings varying from "almost perfect" to "substantial." Inter-rater reliability also showed variability, ranging from "almost perfect" to "moderate."

Implications for Healthcare Professionals

The findings underscore the necessity for healthcare professionals to exercise caution when considering AI-generated health advice. While AI chatbots can serve as supplementary tools, they should not replace professional judgment. The variability in accuracy and adherence to clinical guidelines suggests that AI-generated recommendations may not always be reliable.

For allied health professionals, including speech-language pathologists, occupational therapists, and physical therapists, AI chatbots can provide valuable information. However, it is crucial to critically evaluate AI-generated content and cross-reference it with current clinical guidelines and personal expertise.

Conclusion

While AI chatbots have the potential to enhance healthcare delivery by providing quick access to information, their current limitations in aligning with evidence-based guidelines necessitate a cautious approach. Healthcare professionals should leverage AI tools to augment their practice, ensuring that AI-generated advice is used responsibly and in conjunction with clinical expertise.
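To illustrate the kind of metrics the study reports, the small sketch below tallies a guideline match rate and an internal-consistency figure once human raters have labelled repeated chatbot answers. The questions and ratings are invented for the example; this is not the study's actual analysis code.

```python
# Minimal sketch (not the study's analysis code): computing a simple clinical
# practice guideline (CPG) "match rate" and internal consistency for repeated
# chatbot answers. All questions and ratings below are invented.
from collections import Counter

# Each question is asked several times; raters label every answer as
# "consistent" with the CPG, "inconsistent", or "unclear".
ratings_by_question = {
    "Q1: Is imaging routinely indicated?": ["consistent", "consistent", "unclear"],
    "Q2: Are opioids first-line treatment?": ["inconsistent", "inconsistent", "consistent"],
}

def match_rate(ratings: list[str]) -> float:
    """Share of answers that matched the guideline recommendation."""
    return ratings.count("consistent") / len(ratings)

def internal_consistency(ratings: list[str]) -> float:
    """Share of answers agreeing with the most common label for that question."""
    most_common_count = Counter(ratings).most_common(1)[0][1]
    return most_common_count / len(ratings)

for question, ratings in ratings_by_question.items():
    print(question)
    print(f"  CPG match rate:       {match_rate(ratings):.0%}")
    print(f"  internal consistency: {internal_consistency(ratings):.0%}")
```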


Google Research “Learn Your Way” – Textbooks That Teach Themselves (For Students, Researchers, and Learners with Dyslexia)

Textbooks and PDFs are powerful tools, but they're also rigid. Many learners skim, forget, or get overwhelmed by dense pages of text. Now imagine if those same materials could adapt to you. That's what Google Research is building with Learn Your Way, a system that transforms PDFs and textbooks into interactive, adaptive lessons.

From Static Reading to Adaptive Learning

Upload a textbook or article, and Learn Your Way reshapes it into a dynamic learning experience. Instead of passively reading, you can:

The result? Content feels less like a wall of words and more like a responsive tutor.

The Evidence: Stronger Recall

Google's first efficacy study was striking:

Why This Matters for Researchers

Academics and professionals face the same problem as students: too much reading, too little time. Learn Your Way could transform:

For early-career researchers, it could act as a study scaffold; for experienced academics, a tool to accelerate comprehension across new fields.

Why This Matters for Individuals with Dyslexia

Traditional textbooks are especially challenging for people with dyslexia, where dense text, long paragraphs, and lack of scaffolding can cause fatigue and frustration. Learn Your Way offers several benefits:

This doesn't replace structured literacy interventions, but it creates a more accessible environment for everyday studying, professional training, or even research reading.

The Bigger Picture

Learn Your Way moves education and research from "read and memorize" to "engage and adapt." For:

The Takeaway

Education tools are evolving. Textbooks are no longer static; they're starting to teach back. Whether you're a student studying for exams, a researcher scanning through dozens of PDFs, or a learner with dyslexia navigating dense reading, Learn Your Way shows how adaptive AI can make knowledge not only more efficient but also more inclusive.


OpenAI Just Tested Whether AI Can Do Your Job (Spoiler: It’s Getting Close)

Artificial intelligence (AI) is no longer a futuristic idea; it is shaping the way professionals in every field approach their work. From engineers designing mining equipment to nurses writing care plans, AI is being tested against the real demands of professional practice. And now, researchers are asking a bold question: Can AI do your job?

OpenAI's latest study doesn't give a simple yes or no. Instead, it paints a much more nuanced picture: AI is not yet a full replacement for human professionals, but it is edging surprisingly close in some areas. For us as therapists, this raises both opportunities and challenges that are worth exploring.

The Benchmark: Measuring AI Against Professionals

To answer this question, OpenAI created a new framework called GDPval. Think of it as a "skills exam" for AI systems, but instead of testing algebra or trivia, the exam covered real-world professional tasks.

The Results: Fast, Cheap, and Sometimes Surprisingly Good

The study revealed a mix of strengths and weaknesses:

When human experts compared AI outputs to human-created work, they still preferred the human versions overall. Yet the combination of AI-generated drafts reviewed and refined by professionals turned out to be more efficient than either working alone.

Why This Matters for Therapists

So, what does this mean for us in speech therapy, psychology, occupational therapy, and related fields? AI is not going to replace therapists any time soon, but it is already shifting how we can work. Here are some examples of how this might apply in our daily practice:

But here's the critical caveat: AI's work often looks polished on the surface but may contain subtle errors or missing details. Harvard Business Review recently described this problem as "workslop": content that seems professional but is incomplete or incorrect. For therapists, passing along unchecked "workslop" could mean inaccurate advice to families, poorly designed therapy tasks, or even harm to clinical trust. This is where our professional expertise becomes more important than ever.

The Therapist's Role in the AI Era

AI should be thought of as a bright but clumsy intern:

That means our role doesn't diminish; it evolves. Therapists who supervise, refine, and direct AI outputs will be able to reclaim more time for the heart of therapy: building relationships, delivering personalized interventions, and making evidence-based decisions. Instead of drowning in paperwork, we could spend more energy face-to-face with clients, coaching families, or innovating in therapy delivery.

Looking Ahead

Some AI experts predict that by 2026, AI may be able to match humans in most economically valuable tasks. While this sounds alarming, it doesn't mean therapists will vanish from the workforce. Instead, it means that those who learn to integrate AI effectively will thrive, while those who resist may struggle to keep up. The takeaway for us is clear:

Final Thought

As therapists, our work is built on empathy, creativity, and nuanced understanding, qualities no AI can replicate. But AI can free us from repetitive tasks, give us faster access to resources, and help us innovate in service delivery. The future of therapy is not AI instead of us; it's AI alongside us. And that collaboration, if used wisely, can give us more time, more tools, and ultimately, more impact for the people we serve.
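As a rough illustration of how a benchmark like GDPval arrives at its headline numbers, the toy sketch below tallies blind expert preferences between AI-generated and human-made deliverables. The tasks, judgments and tie-handling rule are invented for the example and are not taken from OpenAI's methodology.

```python
# Toy illustration (not OpenAI's GDPval code): tallying blind expert
# preferences between AI-generated and human-made deliverables.
# All task names and judgments below are invented.

judgments = [
    {"task": "discharge summary", "expert_prefers": "human"},
    {"task": "session handout",   "expert_prefers": "ai"},
    {"task": "progress report",   "expert_prefers": "human"},
    {"task": "parent letter",     "expert_prefers": "tie"},
]

def win_rate(judgments, side="ai", count_ties_as_half=True):
    """Fraction of comparisons the given side wins; ties optionally count as 0.5."""
    score = 0.0
    for judgment in judgments:
        if judgment["expert_prefers"] == side:
            score += 1.0
        elif judgment["expert_prefers"] == "tie" and count_ties_as_half:
            score += 0.5
    return score / len(judgments)

print(f"AI win rate against human experts: {win_rate(judgments):.0%}")
```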


Claude AI — What’s New & How We Can Use It (SLPs, OTs, Educators, Psychologists)

Claude, by Anthropic, is one of the leading Large Language Models (LLMs). It has been evolving fast, and many of the updates are relevant for therapy, special education, psychology, and related fields. Here's a summary of what's new with Claude, plus ideas (and cautions) for how professionals like us can use it.

Recent updates in Claude

How these can help SLPs, OTs, Special Educators, Psychologists

Here are some practical ways we might use Claude's recent capabilities, plus what to be cautious about.

Goal / IEP Planning
How Claude can support: Draft or refine Individualized Education Program (IEP) goals, generate multiple options, and suggest evidence-based strategies for goals in speech, fine motor skills, executive functioning, and more. Because of its improved context memory, Claude can retain student profile details across prompts to help maintain coherence.
Things to watch / best practices: Always review drafts carefully; ensure the language matches legal and regulatory standards; verify that suggestions are appropriate for the individual child. Don't rely on AI for diagnosis. Keep sensitive student information anonymized.

Therapy Material Creation
How Claude can support: Generate therapy stimuli such as social stories, visual supports, worksheets, practice scripts, prompts for articulation or language, and adapted texts. The longer context window makes it easier to build complex lesson sets (e.g., a sequence of sessions) without re-uploading all the materials.
Things to watch / best practices: Check for accuracy, cultural appropriateness, and developmental level. Avoid overly generic content. Use human insight to adapt.

Progress Monitoring & Data Analysis
How Claude can support: Pull together progress reports, analyze data (e.g., logs of student performance or assessment scores), spot trends, and suggest modifications to therapy plans. With improved reasoning, it may help flag when progress has stalled and propose alternative interventions.
Things to watch / best practices: Be wary of over-interpreting AI suggestions. Ensure data quality. Maintain human responsibility for decisions.

Supporting Learning & Generalization
How Claude can support: Use learning modes to help students think through tasks: rather than giving answers, Claude can scaffold reasoning, guide metacognitive strategies, and support written reflections. For older students, it can help plan writing or projects with step-by-step reasoning. For psychologists, it can assist with psycho-educational support (e.g., helping students with ADHD plan tasks and break down executive-functioning demands).
Things to watch / best practices: Always ensure the student is learning the process, not "cheating" or bypassing thinking. Monitor for bias or content that seems inappropriate. Confirm information, especially medical or psychological content.

Administrative / Documentation Efficiency
How Claude can support: Use Claude's upgraded file tools to create formatted documents, progress notes, therapy plans, meeting summaries, and parent-friendly reports. Memory and long context help keep details consistent so you don't keep repeating basic background.
Things to watch / best practices: Even here, review for correctness. Check confidentiality and data-protection policies: do you have permission to include certain data? Ensure your work complies with privacy laws.

What to be cautious about & ethical considerations

What to try soon
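As one concrete example of the goal-drafting use above, here is a minimal sketch using Anthropic's Python SDK. The model identifier and prompt are assumptions, the student profile is fabricated and anonymized, and, as the table stresses, anything Claude drafts still needs careful clinician review before it goes anywhere near a real IEP.

```python
# Minimal sketch, assuming the Anthropic Python SDK and an ANTHROPIC_API_KEY
# in the environment. The model name and prompt are illustrative; nothing
# identifying a real student should ever be sent.
import anthropic

client = anthropic.Anthropic()

profile = (
    "Grade 2 student (anonymized). Receptive language near age level; "
    "expressive language delayed; goal area: producing 3-4 word sentences "
    "in structured play."
)

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model identifier; use whatever is current
    max_tokens=800,
    messages=[{
        "role": "user",
        "content": (
            "Draft three alternative, measurable IEP goal statements for this "
            f"anonymized profile, each with a suggested data-collection method:\n\n{profile}"
        ),
    }],
)

print(message.content[0].text)  # drafts still require clinician review and editing
```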
References

Anthropic. (2025, May 22). Introducing Claude 4. https://www.anthropic.com/news/claude-4

Claude Sonnet 4 model now has a 1 million token context window. (2025, August 12). TechCrunch.

Claude AI memory upgrade & incognito mode. (2025, August 11). The Verge.

Claude for Education: Reimagining AI's Role in K-12 Learning. (n.d.). Eduscape.


AI & Scientific Research — What’s New, What’s Changing

What's new in AI & research

Another example is The AI Scientist-v2, which submitted fully AI-generated manuscripts to peer-review workshops. Though human oversight was still needed in many parts, this is a milestone: an AI doing many steps that were traditionally human-only (arXiv). There are also "virtual research assistants" being developed (e.g. at Oxford) that reduce workload by filtering promising leads in large datasets, such as astronomical signals, so that scientists can focus their effort (Windows Central).

What this means for us, in therapy, education & research (the "so what")

What to watch next

Here are some topics I'm planning to dive into in future issues:

References

Wei, J., Yang, Y., Zhang, X., Chen, Y., Zhuang, X., Gao, Y., Zhou, D., Ouyang, W., Dong, A., Cheng, Y., Sun, Y., Bai, L., Bowen, Z., Dong, N., You, C., Sun, L., Zheng, S., Ning, D., … & Zhou, D. (2025). From AI for Science to Agentic Science: A Survey on Autonomous Scientific Discovery. arXiv.

Yamada, Y., Lange, R. T., Lu, C., Hu, S., Lu, C., Foerster, J., Ha, D., & Clune, J. (2025). The AI Scientist-v2: Workshop-Level Automated Scientific Discovery via Agentic Tree Search (arXiv preprint). arXiv.

AI is revolutionizing university research: Here's how. (2025, September). TechRadar.


How AI Just Saved Brain Cells: What the NHS Stroke-Detection Tool Teaches Us About Timing in Therapy

When it comes to brain health, timing isn't just important; it's everything. A recent breakthrough in England demonstrates just how transformative artificial intelligence can be when speed and accuracy mean the difference between life-long disability and meaningful recovery. The NHS has introduced an AI-powered tool across all 107 stroke centres in England that can analyze CT scans in under a minute. By instantly identifying the type and severity of a stroke, doctors can make treatment decisions faster and more confidently. The results are remarkable: average treatment time dropped from 140 minutes to 79 minutes, and the proportion of patients recovering with little or no disability nearly tripled, from 16% to 48% (The Guardian, 2025).

Why Therapists Should Pay Attention

While most of us don't work in emergency rooms, the lesson here applies powerfully to our field: the earlier the intervention, the better the outcome. Just as "time is brain" in stroke care, time is potential in developmental therapy. For children with speech delays, autism spectrum disorder (ASD), ADHD, or dyslexia, early intervention is proven to reshape developmental trajectories. Research consistently shows that children who receive targeted therapy early demonstrate stronger communication, social, and learning outcomes compared to those who start later. In swallowing therapy, catching a feeding issue before it escalates can prevent hospitalizations and improve nutritional health. AI's success in stroke care reminds us of two things:

Drawing Parallels for Therapy

Imagine an AI assistant that quickly analyzes a child's speech sample and highlights phonological processes or syntactic errors in minutes, leaving the therapist more time for direct intervention. Or a system that alerts you when a client's attention patterns, logged across sessions, suggest the need for a strategy change (a toy sketch of this idea appears at the end of this article). Like the NHS stroke tool, these systems wouldn't "do therapy" for us, but they could give us insights faster, allowing us to act at the moment it matters most.

Ethical Integration: Guardrails We Need

The NHS model also teaches us about safe integration: AI works with clinicians, not instead of them. For therapy, this means:

Takeaway Toolkit: "Timely AI Use in Therapy"

Here are four reflective questions to guide safe, effective use of AI in your practice:

Final Thoughts

The NHS story is inspiring, not just because of its immediate life-saving impact, but because it paints a picture of how AI and clinicians can work together. For us in therapy, the lesson is clear: when interventions happen sooner, lives change more profoundly. With AI as a partner, not a substitute, we may be able to bring timely support to even more clients who need it.
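Toy sketch of the session-alert idea mentioned above: it flags a client when the average of their three most recent attention ratings drops well below their earlier baseline. The ratings, threshold and 0-10 scale are invented for illustration; any real system would need validated measures and clinical judgment on top.

```python
# Toy sketch: flag clients whose recent attention ratings (0-10, logged per
# session) drop noticeably below their earlier baseline. All numbers invented.

def needs_strategy_review(ratings, recent_n=3, drop_threshold=2.0):
    """Return True if the mean of the last `recent_n` ratings falls more than
    `drop_threshold` points below the mean of the earlier sessions."""
    if len(ratings) <= recent_n:
        return False  # not enough history to compare
    baseline = sum(ratings[:-recent_n]) / (len(ratings) - recent_n)
    recent = sum(ratings[-recent_n:]) / recent_n
    return (baseline - recent) > drop_threshold

session_log = {
    "Client A": [7, 8, 7, 7, 4, 3, 4],
    "Client B": [6, 6, 7, 6, 6, 7, 6],
}
for client, ratings in session_log.items():
    if needs_strategy_review(ratings):
        print(f"{client}: attention trend has dropped - consider a strategy change")
```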


When Law Meets AI: Illinois Bans AI Therapy—Here’s What It Means for Clinical Practice

AI is advancing faster than regulation can keep up, and mental health is now at the heart of this debate. In August 2025, Illinois became the third U.S. state (after Utah and Nevada) to ban the use of AI in therapy decision-making. The law prohibits licensed therapists from using AI for diagnosis, treatment planning, or direct client communication. Companies are also barred from marketing "AI therapy" services that bypass licensed professionals (Washington Post, 2025; NY Post, 2025). This move reflects growing concerns about "AI psychosis," misinformation, and the lack of accountability when vulnerable people turn to chatbots for therapy.

Why This Matters for Therapists Everywhere

Even if you don't practice in Illinois, the ripple effects are significant. Regulations often start locally before spreading nationally, or even globally. This raises key questions for all of us:

What's Still Allowed

Importantly, the Illinois law doesn't ban AI altogether. Therapists may still use AI for:

What's explicitly prohibited is letting AI act as the therapist. This distinction reinforces what many of us already believe: AI can support our work, but empathy, relational attunement, and clinical reasoning cannot be automated.

Therapist Responsibility: Transparency and Boundaries

With or without regulation, therapists should:

The Bigger Picture: Advocacy and Ethics

While some view bans as overly restrictive, they reflect real concerns about client safety and misinformation. Rather than rejecting AI outright, therapists can play an advocacy role, helping shape policies that strike a balance between innovation and protection. We can imagine a future where regulators, clinicians, and developers collaborate to define "safe zones" for AI use in therapy. For example, AI could continue to support therapists with data organization, early screening cues, and progress tracking, while humans remain the ultimate decision-makers.

Takeaway Roadmap: "Using AI Without Crossing the Line"

Here's a simple three-step check-in for ethical AI use:

Final Thoughts

The Illinois ban isn't about shutting down technology; it's about drawing clearer boundaries to protect vulnerable clients. For therapists, the message is simple: AI can be a tool, but never the therapist. As the legal landscape evolves, staying proactive, transparent, and ethical will ensure we keep both innovation and humanity at the heart of our practice.
