Why AI Integration Is Becoming Vital for Every Therapist & Educator

The last few years saw speculative discussions about AI in healthcare and education. Now, with recent launches, academic papers, and platform updates, AI isn’t optional — it’s becoming a part of best practice. Whether you are a speech therapist, occupational therapist, physical therapist, educator, or any combination thereof, understanding these developments is essential. Let’s explore why AI integration is no longer just interesting, but vital — and how professionals can adapt.

Key trends pushing AI toward the center

What this means for different roles

Speech-Language Pathologists (SLPs): able to use AI tools like UTI-LLM to give detailed articulatory feedback; wearable throat sensors; platforms like SpeechOn that allow clients to practice more often; possible reduction in repetitive tasks (progress tracking, client assignments).

Physical Therapists & Occupational Therapists: AI motion capture helps assess posture and movement and supports remote monitoring; platforms like Phoenix guide exercise and give conversational prompts; reduces misalignment in home practice; improves safety and compliance.

Educators: AI tools (Gemini, Microsoft Copilot, LogicBalls) allow content adaptation, real-time feedback, and personalized learning; possibility to identify students at risk earlier; AI literacy becomes part of teaching; helps reduce teacher overload.

Admins / Clinic Managers: need to select and validate AI tools; ensure integration with EMRs or school management systems; attend to training, privacy compliance, and selecting tools that are accessible.

Challenges & things to watch out for

What you can do to stay ahead

Conclusion

AI is no longer just a future horizon — recent tools like UTI-LLM, Intelligent Throat wearables, SpeechOn, AI assistants in PT, and educator toolkits from Google and Microsoft show that cross-disciplinary AI integration is underway. For therapists of all kinds and educators, the opportunity is large: greater precision, access, efficiency, and innovation. But with that opportunity comes responsibility: ensuring quality, empathy, and equity. If you’re a therapist or educator, the next few years will likely involve deciding where and how much AI fits into your work. Keeping informed, trying out tools, and maintaining human-centered practice will help you make those decisions well.

ChatGPT Pulse: A New Era of AI Support for Therapists and Educators

Artificial intelligence is evolving from being reactive to becoming anticipatory. The new ChatGPT Pulse feature takes this step forward by turning AI into a proactive assistant. Pulse works in the background, compiling updates overnight and presenting them as clear, visual cards each morning. Instead of wasting time filtering through countless emails, research feeds, or social media threads, professionals can start the day with a snapshot of what matters most in their field. For those working in speech therapy, occupational therapy, physical therapy, psychology, and education, this innovation offers a way to stay on top of rapid changes in research, tools, and policy without adding hours to an already full workload. Imagine opening your device in the morning and seeing: “New study on motor learning in stroke rehab,” “Trial results for an articulation feedback device,” or “Updates on inclusive classroom technology.” Pulse is designed to anticipate these needs and bring information directly to you.

Beyond Updates: In-Chat Purchases for Real-World Practice

Alongside Pulse, OpenAI is piloting a feature that could reshape professional workflows: in-chat purchasing. Traditionally, after identifying a new strategy or tool, a therapist or teacher would have to search online, compare products, and order them through third-party platforms. This can create delays between recognizing a need and addressing it. With in-chat purchases, that process becomes seamless. If you’re discussing sensory supports with ChatGPT, it could not only suggest tools like weighted vests or fidget kits but also give you the option to purchase them directly within the conversation. For physical therapists, this could mean adaptive bands or balance boards; for speech therapists, visual cue cards or AAC devices; for educators, classroom visuals or accessibility supports. This direct integration reduces barriers, turning ideas into action much more quickly. It also opens the door to recommending trusted resources to families or caregivers without overwhelming them with too many choices.

How These Features Support Practice

The integration of Pulse and purchasing into ChatGPT has the potential to reshape the daily life of professionals across therapy and education. Some of the most promising benefits include:

The Importance for Therapy & Education Practice

Why do these updates matter now? The pace of change in therapy and education is faster than ever. New technologies, policy shifts, and intervention models emerge constantly, and professionals risk falling behind if they cannot keep up. Pulse acts as a filter, ensuring the most relevant and practical information rises to the top. Equally, the ability to act on that information immediately—by accessing or purchasing tools in-chat—closes a long-standing gap between knowledge and practice. Instead of hearing about a tool at a conference and waiting weeks to trial it, professionals can integrate new resources into sessions almost instantly. This rapid cycle of learn → apply → evaluate enhances practice and can improve outcomes. For therapists and educators working with vulnerable populations—children with special needs, stroke survivors, or individuals with learning differences—this immediacy can be transformative. It means quicker access to interventions, faster adaptation of strategies, and more personalized care.

Cautions and Responsible Use

Despite the promise, it’s vital to approach these features thoughtfully. Pulse curates content, but not everything it suggests will be clinically relevant or reliable. Professionals must continue exercising judgment and verifying evidence. Similarly, while in-chat purchases are convenient, critical evaluation of product quality, evidence base, and client appropriateness is still required. There are also broader considerations:

Looking Ahead

The introduction of ChatGPT Pulse and in-chat purchasing represents a new stage in how AI can support therapists, teachers, and health professionals. Instead of simply answering questions, AI is becoming a partner in staying informed, sourcing materials, and applying interventions quickly. This shift highlights a larger trend: the move toward integrated, proactive AI assistants that blend knowledge, tools, and actions in a single space. For those in therapy and education, engaging with these features early means shaping how they evolve—helping ensure they become useful, ethical, and empowering, rather than distracting or overwhelming. The future of practice will increasingly depend on tools that reduce burden, enhance access, and translate insights into action. ChatGPT Pulse offers an early glimpse into that future.

👉 Explore more about ChatGPT Pulse here: OpenAI Pulse announcement | TechRadar on Pulse | Tom’s Guide on in-chat shopping

Google Research “Learn Your Way” – Textbooks That Teach Themselves (For Students, Researchers, and Learners with Dyslexia)

Textbooks and PDFs are powerful tools, but they’re also rigid. Many learners skim, forget, or get overwhelmed by dense pages of text. Now imagine if those same materials could adapt to you. That’s what Google Research is building with Learn Your Way—a system that transforms PDFs and textbooks into interactive, adaptive lessons.

From Static Reading to Adaptive Learning

Upload a textbook or article, and “Learn Your Way” reshapes it into a dynamic learning experience. Instead of passively reading, you can:

The result? Content feels less like a wall of words and more like a responsive tutor.

The Evidence: Stronger Recall

Google’s first efficacy study was striking:

Why This Matters for Researchers

Academics and professionals face the same problem as students: too much reading, too little time. Learn Your Way could transform:

For early-career researchers, it could act as a study scaffold; for experienced academics, a tool to accelerate comprehension across new fields.

Why This Matters for Individuals with Dyslexia

Traditional textbooks are especially challenging for people with dyslexia, where dense text, long paragraphs, and lack of scaffolding can cause fatigue and frustration. Learn Your Way offers several benefits:

This doesn’t replace structured literacy interventions, but it creates a more accessible environment for everyday studying, professional training, or even research reading.

The Bigger Picture

Learn Your Way moves education and research from “read and memorize” to “engage and adapt.” For:

The Takeaway

Education tools are evolving. Textbooks are no longer static—they’re starting to teach back. Whether you’re a student studying for exams, a researcher scanning through dozens of PDFs, or a learner with dyslexia navigating dense reading, Learn Your Way shows how adaptive AI can make knowledge not only more efficient but also more inclusive.

OpenAI Just Tested Whether AI Can Do Your Job (Spoiler: It’s Getting Close)

Artificial intelligence (AI) is no longer a futuristic idea—it is shaping the way professionals in every field approach their work. From engineers designing mining equipment to nurses writing care plans, AI is being tested against the real demands of professional practice. And now, researchers are asking a bold question: Can AI do your job? OpenAI’s latest study doesn’t give a simple yes or no. Instead, it paints a much more nuanced picture—AI is not yet a full replacement for human professionals, but it’s edging surprisingly close in some areas. For us as therapists, this raises both opportunities and challenges that are worth exploring.

The Benchmark: Measuring AI Against Professionals

To answer this question, OpenAI created a new framework called GDPval. Think of it as a “skills exam” for AI systems, but instead of testing algebra or trivia, the exam covered real-world professional tasks.

The Results: Fast, Cheap, and Sometimes Surprisingly Good

The study revealed a mix of strengths and weaknesses:

When human experts compared AI outputs to human-created work, they still preferred the human versions overall. Yet, the combination of AI-generated drafts reviewed and refined by professionals turned out to be more efficient than either working alone.

Why This Matters for Therapists

So, what does this mean for us in speech therapy, psychology, occupational therapy, and related fields? AI is not going to replace therapists any time soon—but it is already shifting how we can work. Here are some examples of how this might apply in our daily practice:

But here’s the critical caveat: AI’s work often looks polished on the surface but may contain subtle errors or missing details. Harvard Business Review recently described this problem as “workslop”—content that seems professional but is incomplete or incorrect. For therapists, passing along unchecked “workslop” could mean inaccurate advice to families, poorly designed therapy tasks, or even harm to clinical trust. This is where our professional expertise becomes more important than ever.

The Therapist’s Role in the AI Era

AI should be thought of as a bright but clumsy intern:

That means our role doesn’t diminish—it evolves. Therapists who supervise, refine, and direct AI outputs will be able to reclaim more time for the heart of therapy: building relationships, delivering personalized interventions, and making evidence-based decisions. Instead of drowning in paperwork, we could spend more energy face-to-face with clients, coaching families, or innovating in therapy delivery.

Looking Ahead

Some AI experts predict that by 2026, AI may be able to match humans in most economically valuable tasks. While this sounds alarming, it doesn’t mean therapists will vanish from the workforce. Instead, it means that those who learn to integrate AI effectively will thrive—while those who resist may struggle to keep up. The takeaway for us is clear:

Final Thought

As therapists, our work is built on empathy, creativity, and nuanced understanding—qualities no AI can replicate. But AI can free us from repetitive tasks, give us faster access to resources, and help us innovate in service delivery. The future of therapy is not AI instead of us—it’s AI alongside us. And that collaboration, if used wisely, can give us more time, more tools, and ultimately, more impact for the people we serve.

“PhD-Intelligences” Is Nonsense – What Demis Hassabis’ Statement Means for AI in Research and Healthcare

In a recent interview, Demis Hassabis, CEO of Google DeepMind, dismissed claims that today’s AI models possess “PhD-level intelligence.” His message was clear: while AI can sometimes match or outperform humans in narrow tasks, it is far from demonstrating general intelligence. Calling these models “PhD-intelligences,” he argues, is misleading and risks creating unrealistic expectations for what AI can do in fields like healthcare and research. Hassabis notes that models such as Gemini or GPT-style systems show “pockets of PhD-level performance” in areas like protein folding, medical imaging, or advanced problem-solving. However, these systems also fail at basic reasoning tasks, cannot learn continuously, and often make elementary mistakes that no human researcher would. According to Hassabis, true Artificial General Intelligence (AGI)—a system that can learn flexibly across domains—remains 5–10 years away.

What This Means for Research and Healthcare

AI’s current limitations don’t mean it has no place in our work. Instead, they point to how we should use it responsibly and strategically.

Practical Takeaways:

Example Applications by Discipline

Healthcare Research: protein structure prediction (e.g., AlphaFold); drug discovery pipelines; imaging diagnostics. Limitations / risks: errors in generalization; opaque reasoning; bias in data.

Therapy & Psychology: drafting therapy materials; generating behavior scenarios; transcribing sessions. Limitations / risks: risk of over-reliance; errors in sensitive contexts.

Special Education: differentiated content creation; progress tracking; accessible learning supports. Limitations / risks: potentially inaccurate recommendations without context.

Looking Ahead

Even without AGI, today’s AI tools can dramatically accelerate workflows and augment human expertise. But the caution from Hassabis reminds us: AI is not a replacement for human intelligence—it is a partner in progress. As researchers and clinicians, our responsibility is two-fold:

In our next editions, we’ll explore how to integrate AI into research more concretely, with examples from therapy and healthcare studies.

References

Meta’s New Glasses Just Changed Everything – What It Means for Therapists and Patients

Meta’s latest innovation—the Ray-Ban Display AI glasses paired with a neural EMG wristband—is creating waves well beyond the tech world. For years, smart glasses promised more than they delivered, but this time, the combination of a heads-up lens display, AI integration, and subtle gesture control suggests wearables are stepping into a new era. For therapists and patients, the potential is huge. At the core of the innovation is a full-color display inside the right lens, giving users discreet, real-time access to information without reaching for a phone. Paired with the Meta Neural Band, which detects wrist and finger movements via electromyography (EMG), the glasses allow hands-free control—ideal for users with mobility limitations or professionals needing quick interactions. Live captions, translations, messaging, and navigation can now appear directly in your line of sight.

Key Features Therapists Should Know:

Clinical Applications and Patient Benefits

Speech & Language Therapy: live captions for hearing-impaired clients; on-screen prompts for language tasks; gesture-based engagement. Considerations: display size and readability; learning curve; potential distraction.

Occupational Therapy: EMG control supports limited dexterity; interactive visual prompts for task sequencing. Considerations: calibration required; less effective with tremors or severe motor impairments.

Psychomotor Therapy: movement guidance in real time; visual cues for coordination exercises. Considerations: must keep exercises embodied, not screen-bound.

Psychology & Special Education: personalized reminders, translations, discreet coping prompts; increased independence. Considerations: privacy, data security, risk of over-reliance on prompts.

What to Watch For

The Funny Part: During the demo, the glasses completely malfunctioned—the captions started translating “Hello” into what looked like Morse code! But here’s the twist: even in chaos, the potential for therapy applications shone through. Imagine hands-free prompts during speech therapy, or gesture-controlled task sequences for occupational therapy—this is just the beginning.

The bigger picture? These glasses mark a step toward assistive augmented reality. As battery life improves and features like lip-reading captions, real-time therapy overlays, and telehealth integration emerge, therapists could gain a whole new medium for intervention. Awareness now is key—understanding what these devices can and cannot do will help us prepare for the future. Stay tuned for our next edition, where we’ll dive deeper into practical ways to integrate wearables into therapy sessions.

Claude AI — What’s New & How We Can Use It (SLPs, OTs, Educators, Psychologists)

Claude, by Anthropic, is one of the leading Large Language Models (LLMs). It has been evolving fast, and many updates are relevant for therapy, special education, psychology, and related fields. Here’s a summary of what’s new with Claude, plus ideas (and cautions) for how professionals like us can use it.

Recent updates in Claude

How these can help SLPs, OTs, Special Educators, Psychologists

Here are some practical ways we might use Claude’s recent capabilities, plus what to be cautious about.

Goal / IEP Planning. How Claude can support: draft or refine Individualized Education Program (IEP) goals, generate multiple options, and suggest evidence-based strategies for goals in speech, fine motor, executive functioning, etc. Because of its improved context memory, Claude can remember student profile details across prompts to help maintain coherence. Things to watch / best practices: always review drafts carefully; ensure the language matches legal/regulatory standards; verify that suggestions are appropriate for the individual child. Don’t rely on AI for diagnosis. Keep sensitive student info anonymized.

Therapy Material Creation. How Claude can support: generate therapy stimuli, e.g. social stories, visual supports, worksheets, scripts for practice, prompts for articulation or language, and adapted texts. The longer context window means more ability to build complex lesson sets (e.g. a sequence of sessions) without re-uploading all the materials. Things to watch / best practices: check for accuracy, cultural appropriateness, and developmental level. Avoid overly generic content. Use human insight to adapt.

Progress Monitoring & Data Analysis. How Claude can support: pull together progress reports, analyze data (e.g. logs of student performance or assessment scores), spot trends, and suggest modifications in therapy plans. With improved reasoning, it might help suggest when progress is stalled and propose alternative interventions. Things to watch / best practices: be wary of over-interpreting AI suggestions. Ensure data quality. Maintain human responsibility for decisions.

Supporting Learning & Generalization. How Claude can support: use learning modes to help students think through tasks; rather than giving answers, Claude can scaffold reasoning, guide metacognitive strategies, and support writing reflections. For older students, it can help them plan writing or projects with step-by-step reasoning. For psychologists, use it for psycho-educational support (e.g. helping students with ADHD plan tasks and break down executive functioning demands). Things to watch / best practices: always ensure the student is learning the process, not “cheating” or bypassing thinking. Monitor for bias or content that seems inappropriate. Confirm information (e.g. if medical or psychological content).

Administrative / Documentation Efficiency. How Claude can support: use Claude’s upgraded file tools to create formatted documents, progress notes, therapy plans, meeting summaries, and parent-friendly reports. Memory and long context help keep consistent details so you don’t keep repeating basic background (see the minimal API sketch below). Things to watch / best practices: even here, you need to review for correctness. Also, check confidentiality and data protection policies. For example, do you have permission to include certain data? Ensure work complies with privacy laws.
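For readers comfortable with light scripting, here is a minimal sketch of what the documentation workflow above could look like through Anthropic’s API rather than the chat interface. It uses the official anthropic Python SDK; the model ID, prompt wording, and session notes are illustrative placeholders rather than a recommended setup, and any real notes must be de-identified and handled in line with your privacy policies before being sent to an external service.

```python
# Minimal sketch: drafting a parent-friendly progress summary with Claude.
# Assumptions: the `anthropic` Python SDK is installed and ANTHROPIC_API_KEY is set;
# the model ID below is a placeholder -- check Anthropic's documentation for current names.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# De-identified, illustrative session notes (never include identifying client data).
session_notes = """
Session 1: /s/ in word-initial position, 70% accuracy with visual cues.
Session 2: word-initial /s/ 85% accuracy; introduced word-final /s/, 50% accuracy.
Session 3: word-final /s/ 65% accuracy; some carryover observed in short phrases.
"""

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model ID
    max_tokens=400,
    system=(
        "You help a speech-language pathologist draft short, parent-friendly "
        "progress summaries. Use plain language, avoid jargon, and do not "
        "invent results that are not in the notes."
    ),
    messages=[
        {
            "role": "user",
            "content": f"Draft a brief progress summary from these session notes:\n{session_notes}",
        }
    ],
)

print(response.content[0].text)  # a first draft for the clinician to review and edit
```

As with the chat interface, the output is only a starting point: the clinician remains responsible for accuracy, tone, and compliance with data-protection rules.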
What to be cautious about & ethical considerations

What to try soon

References

Anthropic. (2025, May 22). Introducing Claude 4. https://www.anthropic.com/news/claude-4

Anthropic. (2025, August 12). Claude Sonnet 4 model now has a 1 million token context window. TechCrunch.

Anthropic. (2025, August 11). Claude AI memory upgrade & incognito mode. The Verge.

Anthropic. (n.d.). Claude for Education: Reimagining AI’s Role in K-12 Learning. Eduscape.

AI & Scientific Research — What’s New, What’s Changing

What’s new in AI & research

Another example is The AI Scientist-v2, which submitted fully AI-generated manuscripts to peer-review workshops. Though human oversight was still needed in many parts, this is a milestone: an AI doing many steps that were traditionally human-only (arXiv). There are also “virtual research assistants” being developed (e.g. at Oxford) that reduce workload by filtering promising leads in large datasets (like astronomical signals) so that scientists can focus their effort (Windows Central).

What this means (for us, in therapy & education & research) — “so what”

What to watch next

Here are some topics I’m planning to dive into in future issues:

References

Wei, J., Yang, Y., Zhang, X., Chen, Y., Zhuang, X., Gao, Y., Zhou, D., Ouyang, W., Dong, A., Cheng, Y., Sun, Y., Bai, L., Bowen, Z., Dong, N., You, C., Sun, L., Zheng, S., Ning, D., … & Zhou, D. (2025). From AI for Science to Agentic Science: A Survey on Autonomous Scientific Discovery. arXiv.

Yamada, Y., Lange, R. T., Lu, C., Hu, S., Lu, C., Foerster, J., Ha, D., & Clune, J. (2025). The AI Scientist-v2: Workshop-Level Automated Scientific Discovery via Agentic Tree Search (arXiv preprint). arXiv.

“AI is Revolutionizing University Research: Here’s How.” TechRadar. (2025, September).

How AI Just Saved Brain Cells: What the NHS Stroke-Detection Tool Teaches Us About Timing in Therapy

When it comes to brain health, timing isn’t just important—it’s everything. A recent breakthrough in England demonstrates just how transformative artificial intelligence can be when speed and accuracy mean the difference between life-long disability and meaningful recovery. The NHS has introduced an AI-powered tool across all 107 stroke centres in England that can analyze CT scans in under a minute. By instantly identifying the type and severity of a stroke, doctors can make treatment decisions faster and more confidently. The results are remarkable: treatment time dropped from an average of 140 minutes to 79 minutes, and the proportion of patients recovering with little or no disability nearly tripled—from 16% to 48% (The Guardian, 2025).

Why Therapists Should Pay Attention

While most of us don’t work in emergency rooms, the lesson here applies powerfully to our field: the earlier the intervention, the better the outcome. Just as “time is brain” in stroke care, time is potential in developmental therapy. For children with speech delays, autism spectrum disorder (ASD), ADHD, or dyslexia, early intervention is proven to reshape developmental trajectories. Research consistently shows that children who receive targeted therapy early demonstrate stronger communication, social, and learning outcomes compared to those who start later. In swallowing therapy, catching a feeding issue before it escalates can prevent hospitalizations and improve nutritional health. AI’s success in stroke care reminds us of two things:

Drawing Parallels for Therapy

Imagine an AI assistant that quickly analyzes a child’s speech sample and highlights phonological processes or syntactic errors in minutes—leaving the therapist more time for direct intervention. Or a system that alerts you when a client’s attention patterns, logged across sessions, suggest the need for a strategy change. Like the NHS stroke tool, these systems wouldn’t “do therapy” for us—but they could give us insights faster, allowing us to act at the moment it matters most.

Ethical Integration: Guardrails We Need

The NHS model also teaches us about safe integration: AI works with clinicians, not instead of them. For therapy, this means:

Takeaway Toolkit: “Timely AI Use in Therapy”

Here are four reflective questions to guide safe, effective use of AI in your practice:

Final Thoughts

The NHS story is inspiring—not just because of its immediate life-saving impact, but because it paints a picture of how AI and clinicians can work together. For us in therapy, the lesson is clear: when interventions happen sooner, lives change more profoundly. With AI as a partner, not a substitute, we may be able to bring timely support to even more clients who need it.

When Law Meets AI: Illinois Bans AI Therapy—Here’s What It Means for Clinical Practice

AI is advancing faster than regulation can keep up, and mental health is now at the heart of this debate. In August 2025, Illinois became the third U.S. state (after Utah and Nevada) to ban the use of AI in therapy decision-making. The law prohibits licensed therapists from using AI for diagnosis, treatment planning, or direct client communication. Companies are also barred from marketing “AI therapy” services that bypass licensed professionals (Washington Post, 2025; NY Post, 2025). This move reflects growing concerns about “AI psychosis,” misinformation, and the lack of accountability when vulnerable people turn to chatbots for therapy.

Why This Matters for Therapists Everywhere

Even if you don’t practice in Illinois, the ripple effects are significant. Regulations often start locally before spreading nationally—or globally. It raises key questions for all of us:

What’s Still Allowed

Importantly, the Illinois law doesn’t ban AI altogether. Therapists may still use AI for:

What’s explicitly prohibited is letting AI act as the therapist. This distinction reinforces what many of us already believe: AI can support our work—but empathy, relational attunement, and clinical reasoning cannot be automated.

Therapist Responsibility: Transparency and Boundaries

With or without regulation, therapists should:

The Bigger Picture: Advocacy and Ethics

While some view bans as overly restrictive, they reflect real concerns about client safety and misinformation. Rather than rejecting AI outright, therapists can play an advocacy role—helping shape policies that strike a balance between innovation and protection. We can imagine a future where regulators, clinicians, and developers collaborate to define “safe zones” for AI use in therapy. For example, AI could continue to support therapists with data organization, early screening cues, and progress tracking—while humans remain the ultimate decision-makers.

Takeaway Roadmap: “Using AI Without Crossing the Line”

Here’s a simple three-step check-in for ethical AI use:

Final Thoughts

The Illinois ban isn’t about shutting down technology—it’s about drawing clearer boundaries to protect vulnerable clients. For therapists, the message is simple: AI can be a tool, but never the therapist. As the legal landscape evolves, staying proactive, transparent, and ethical will ensure we keep both innovation and humanity at the heart of our practice.
