Two approaches to using AI in therapy: procedural vs. collaborative (and how we actually benefit)

We keep noticing that when clinicians talk about “using AI,” we’re often talking about two very different approaches, even if we’re using the same tool. And the confusion usually shows up around one word: automation. People hear “automation” and imagine a cold replacement of therapy, or they assume it’s basically the same thing as collaboration. In practice, automation in clinical work is simpler and more grounded than that. It is not “AI doing therapy.” It is the clinician delegating repeatable steps in the workflow, then supervising the output the way we would supervise an assistant or a trainee.

In procedural mode, AI becomes a substitute for execution. We ask, it answers; we paste, we send. The output is used for efficiency: quicker drafts, quicker wording, quicker structure. That can genuinely reduce load, especially on days when we’re holding multiple cases and still trying to document, plan, and communicate clearly. But procedural mode also has a built-in risk: it can bypass the step where we ask, “What claim did this just make, and do I actually have the clinical data to stand behind it?” In therapy, where the work is high-stakes and context-sensitive, skipping that step is never a small thing.

Collaborative mode looks different. Here, AI is treated more like a thinking partner that helps us refine what we already know. We provide context, constraints, and objectives, and we actively evaluate and revise what comes back. The benefit isn’t only speed; it’s quality. As goals become more complex, the work doesn’t disappear; it shifts upward into framing, supervision, and judgment. That shift is the point, because it mirrors what good therapy already is. The core value isn’t “doing tasks.” The core value is choosing what matters, staying accountable to the formulation, and tracking whether what we are doing is actually helping this client in this moment.

With that clarity, the question “where does automation fit?” becomes easier: automation belongs around the session, not inside the relationship. It supports the repetitive work that quietly drains clinicians, so we show up with more focus and presence. In practical terms, this often starts with answering emails: drafting scheduling replies, boundaries, first-contact responses, follow-ups, and coordination messages with parents or schools. AI can give you a clean draft fast, but the clinician still protects tone, confidentiality, and the therapeutic frame before anything is sent.

Automation can also support assessment workflows, especially the mechanical parts like scoring and report organization. It can help format tables, structure sections consistently, and draft neutral descriptions, saving time without pretending to “interpret.” Similarly, it can help with drafting questions for you: generating intake questions, check-in prompts, or between-session reflection questions tailored to your model and the client’s goals. That doesn’t replace clinical judgment; it simply gives you a clearer scaffold for information-gathering and tracking change.

Another high-impact area is session preparation. If you provide a brief, non-identifying summary of the last session, AI can help draft a focused plan: key themes to revisit, hypotheses to test, reminders of what was agreed on, and possible questions or interventions that match your orientation. The point is not to “script therapy,” but to reduce the mental load of reconstructing the thread so you can start the session grounded.
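To make the session-preparation idea concrete, here is a minimal sketch in Python of how a clinician might assemble a de-identified session-prep prompt before pasting it into whatever AI tool they trust. The field names, instructions, and example values are our own illustrative assumptions, not a prescribed format, and nothing identifying should ever go into a prompt like this.

```python
# A minimal sketch of a de-identified session-prep prompt builder.
# All field names and wording are illustrative; adapt them to your own model of care,
# and never include names, dates of birth, or other identifying details.

def build_session_prep_prompt(
    orientation: str,           # e.g. "CBT", "ACT", "psychodynamic"
    last_session_summary: str,  # brief, non-identifying summary written by the clinician
    agreed_homework: str,       # what was agreed at the end of the last session
    goals: list[str],           # current treatment goals, phrased generically
) -> str:
    """Assemble a prompt asking for a focused session plan, not a script."""
    goal_lines = "\n".join(f"- {g}" for g in goals)
    return (
        f"You are assisting a {orientation} clinician with session preparation.\n"
        "Using only the de-identified notes below, draft a short plan: key themes to revisit, "
        "one or two hypotheses to test, and possible questions or interventions consistent "
        "with the stated orientation. Do not diagnose and do not invent clinical details.\n\n"
        f"Last session (de-identified): {last_session_summary}\n"
        f"Agreed homework: {agreed_homework}\n"
        f"Current goals:\n{goal_lines}\n"
    )

if __name__ == "__main__":
    prompt = build_session_prep_prompt(
        orientation="CBT",
        last_session_summary="Client practised thought records; avoidance of social plans persists.",
        agreed_homework="One brief social contact logged with anxiety ratings before and after.",
        goals=["Reduce avoidance of social situations", "Build tolerance of anxious arousal"],
    )
    print(prompt)  # Review and edit before sending to any AI tool.
```

The output is only text for the clinician to review and edit; the prompt itself encodes the boundaries described above (no diagnosis, no invented details, nothing identifying).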
More sensitive, but sometimes very helpful, is using automation around session recording and documentation (only with explicit consent and a privacy-safe system). AI can assist with transcripts, highlight themes, and draft a note structure or summary. Still, this must remain supervised: AI can miss nuance, misinterpret meaning, or phrase things too strongly. In clinical documentation, accuracy and accountability matter more than speed, so the clinician always verifies what’s written, especially around risk, safety planning, and any diagnostic or medical claim.

Finally, automation can support what many clinicians want but struggle to do consistently: progress comparison over time. Whether you use outcome measures, session ratings, goals, homework follow-through, or narrative markers, AI can help summarize shifts from baseline, spot patterns across sessions, and draft a short “what’s improving / what’s stuck / what to adjust next” reflection. The tool organizes and surfaces patterns; you decide what it means and what the next clinical step is.

All of this only works if we pay attention to data and privacy. We avoid entering identifying information unless we are using an approved, privacy-compliant system. We do not treat AI output as truth, especially for diagnosis, risk assessment, medication-related topics, or any medical claim. And we keep the clinician role explicit: AI can generate language, options, and structure, but we provide judgment, ethics, and accountability. This is also why many clinicians are drawn to running a private generative model locally on their laptop, offline, so data does not leave the device. Even then, strong device security and clear consent practices still matter, but the direction is sound: protect client information first, then build workflow support around it.

When we use AI with this mindset, the payoff is real. We gain time and mental space for what cannot be automated: attunement, formulation, pacing, rupture-repair, and the relationship. The tool handles parts of the scaffolding and we protect the heart of therapy, which is slow, context-sensitive, and deeply human work.
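To make the progress-comparison idea above concrete, here is a minimal sketch in Python of what “summarize shifts from baseline” can look like, assuming you already collect simple numeric ratings (outcome measures, session ratings, homework follow-through). The measure names, thresholds, and the assumption that higher scores mean improvement are all illustrative; the clinician still decides what any change actually means.

```python
# A minimal sketch of baseline comparison across sessions, assuming simple numeric
# ratings per session. Thresholds and measure names are illustrative only, and the
# sketch assumes higher scores mean improvement.

from statistics import mean

def summarise_progress(scores_by_measure: dict[str, list[float]],
                       stall_threshold: float = 0.05) -> dict[str, str]:
    """Compare recent sessions to baseline and label each measure as
    improving, stuck, or worsening."""
    summary = {}
    for measure, scores in scores_by_measure.items():
        if len(scores) < 3:
            summary[measure] = "not enough data yet"
            continue
        baseline = scores[0]
        recent = mean(scores[-3:])  # average of the last three sessions
        change = (recent - baseline) / max(abs(baseline), 1e-9)
        if change > stall_threshold:
            summary[measure] = f"improving ({baseline:.1f} -> {recent:.1f})"
        elif change < -stall_threshold:
            summary[measure] = f"worsening ({baseline:.1f} -> {recent:.1f})"
        else:
            summary[measure] = f"stuck around {recent:.1f}"
    return summary

if __name__ == "__main__":
    example = {
        "session rating (0-10)": [4, 5, 5, 6, 7],
        "homework follow-through (%)": [50, 50, 40, 45, 50],
    }
    for measure, verdict in summarise_progress(example).items():
        print(f"{measure}: {verdict}")
```

A tool like this only organizes numbers you already have; the clinical reading of “stuck” versus “consolidating,” and the next step, remains yours.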


Prism: the kind of writing workspace researchers wish existed when they’re trying to publish

If we’ve ever tried to write a real manuscript after a full day of research work, we already know the problem usually isn’t that we don’t have ideas. We do. The problem is that academic writing demands a specific kind of mental steadiness. We have to hold a thread, keep structure in mind, track definitions precisely, and build a coherent argument while our brains are already full of meetings, supervision, grant deadlines, data cleaning, reviewer comments, and the constant micro-decisions that come with running studies.

So when OpenAI announced Prism, it caught our attention for a surprisingly practical reason. It sounds like it’s designed to reduce overwhelm in the writing process, not by writing the paper for us, but by making the environment less fragmenting and more supportive of sustained attention. OpenAI describes Prism as a free, cloud-based, LaTeX-native workspace for scientific writing and collaboration, with an AI assistant integrated into the document workflow. And that phrase, integrated into the workflow, matters.

Many of us still write in a patchwork setup. The draft lives in one place, citations in another, PDFs in folders we swear are organized, tables in spreadsheets, figures in separate tools, and formatting rules that feel like a moving target. If we use AI, it often sits off to the side in a separate window, with no real awareness of what the document actually contains. Prism is pitching something different: one workspace where drafting, revising, compiling, and collaborating live together, so we don’t constantly switch contexts and lose momentum. That sounds less like automation and more like good research infrastructure, something that helps us keep the argument intact while we spend our limited energy on what actually matters: methods, logic, interpretation, and the discipline of not overclaiming.

What we also appreciate is that Prism seems aimed at the boring practical problems that quietly wreck productivity. Collaboration, comments, proofreading support, citation help, and literature workflow features are not flashy, but they are exactly the kind of friction that makes us close the laptop and tell ourselves we’ll do it tomorrow because we can feel the administrative drag consuming what’s left of our focus. And if we’ve ever co-authored a paper, we know how much time gets lost to version control, merging edits, and re-checking what the “current draft” even is. A shared cloud workspace can reduce that overhead by keeping writing and collaboration in one place.

Here is where the researcher angle comes in. Researchers are trained to track nuance, uncertainty, and the limits of what data can actually support. Many of us can write well when we have space to think. But research rarely gives writing prime attention. Writing happens in stolen hours between analyses, teaching, project management, and funding applications. That changes what “helpful technology” looks like. We don’t just need a tool that generates text. We need a tool that helps us stay oriented so we can turn results into contributions that are publishable, teachable, and useful.

Prism might support that kind of work, especially for researchers who publish, teach, supervise trainees, or collaborate across institutions and need their writing process to be less chaotic. If it truly reduces friction, it could help more of us finish what we start, not because the tool has better ideas than we do, but because it helps protect the continuity of our thinking. At the same time, we should say the quiet part out loud.
A smoother writing workflow doesn’t automatically mean better science. AI can help us sound coherent and academic, and that can be useful, but it is also where risk shows up, because polished writing can hide weak reasoning. So if we use Prism, we should treat it like a very fast assistant. It can reduce friction and help us express what we mean, but it is not the source of truth. We still own the reasoning, the claims, the citations, and the integrity of the work.

And of course, Prism is not the only tool that exists. Most of us have already used other AI tools before, along with specific writing and reference managers that keep our workflow moving. What makes Prism feel different, at least from the way it is described, is the promise of one integrated workspace and the fact that it is free. If it delivers even half of that, we honestly cannot wait to explore it more.

Where we land is simple. Prism sounds promising because it aims at the real pain points in research writing: context switching, formatting drudgery, collaboration friction, and the cognitive load of keeping a complex document coherent over time. Not magic. Not a replacement for expertise. But possibly the most researcher-friendly kind of productivity tool: the kind that helps us keep the thread.


Learning Is Not One Size Fits All: Why “Learn Your Way” Feels Long Overdue

If textbooks worked the way they were supposed to, we wouldn’t be doing half the adaptations we do every day in therapy. We’ve all sat with a child or student who is bright, curious, and capable, yet completely blocked by long paragraphs, abstract language, or one rigid explanation. Somehow the learner is the one expected to adjust. We know better. Learning has never been one size fits all. Brains are messy, nonlinear, and wonderfully different. Some learners need to see it. Others need to hear it. Some need it explained three times in three ways before it clicks. Many need permission to approach information sideways rather than straight on.

That’s why Google Research’s new project, Learn Your Way, caught our attention. It uses generative AI to turn static textbooks into interactive, personalized learning experiences. Instead of forcing every learner through the same path, the material adapts to how they think, ask questions, and make sense of the world.

From a clinical point of view, this resonates immediately. What do we do in therapy all day if not this? We rephrase instructions. We simplify language. We add visuals. We slow things down or speed them up. We watch for that moment when a learner’s face changes, and we know something finally clicked. Textbooks have never done that; they cannot notice confusion or adjust. Until now.

Traditional textbooks assume an ideal learner who reads fluently, processes quickly, and stays focused from start to finish. For our clients, especially those who are neurodivergent or who have language difficulties, attention challenges, or learning differences, the textbook itself often becomes the barrier. Learn Your Way challenges that model. Learners can ask for a simpler explanation, request an example, explore a visual version, or connect it to something familiar. There’s no shame in asking again, no pressure to keep up with the page. That alone can change a learner’s relationship with learning.

Emotionally, this matters. Many of the children and adults we work with carry years of quiet frustration, believing they are “not trying hard enough” when, in reality, the format never worked for them. Adaptive material communicates a different message: you are not the problem. The format was.

From a language and communication standpoint, this is especially relevant. Dense syntax and abstract explanations are common barriers. AI that reduces linguistic load while preserving meaning can support comprehension without oversimplifying, benefiting learners with developmental language disorder, dyslexia, or second language needs.

Of course, AI is not a therapist. It cannot replace human attunement, clinical reasoning, or relational safety. Personalization is not the same as understanding a learner’s sensory profile, emotional state, or history. But as a tool, it has potential. We can imagine using adaptive explanations for carryover between sessions, guiding families toward resources that meet their child where they are, or collaborating with teachers using shared, flexible materials.

What stands out most is the mindset shift. Learn Your Way reflects what clinicians have always known: variability is not the exception; it is the baseline. When learning environments are flexible, more learners succeed without needing to be fixed first. Textbooks were never neutral. They favored a narrow slice of learners while everyone else was expected to catch up. This move toward adaptive learning feels like common sense finally catching up.
For those of us working daily with real brains, real struggles, and real potential, it feels less like the future and more like overdue validation.


Guided Learning by Google Gemini: When Technology Starts to Resemble Good Teaching

As clinicians, we rarely teach the way textbooks do. We do not deliver information in one long explanation and hope it lands. We slow down. We check understanding. We adjust the language, the examples, the pacing. We scaffold. Learning, in real life, is guided.

That is why Google Gemini’s newly launched feature, Guided Learning, stood out to us. Not because it is artificial intelligence, but because the learning model behind it feels familiar. Guided Learning allows users to explore any topic step by step, much like working with a patient, responsive tutor. Instead of overwhelming the learner with information, it builds understanding gradually and intentionally.

From a clinical lens, this matters. We see every day that learning difficulties are rarely about lack of ability. They are about overload, poor sequencing, and mismatched delivery. Many learners disengage not because the content is too complex, but because it arrives too fast, too densely, or without enough support. Guided Learning addresses this by changing how information is delivered, not what is being taught.

Rather than presenting a full explanation upfront, Gemini introduces concepts in stages. It pauses to check understanding before moving forward. If the learner struggles, it reframes or slows down. If they demonstrate confidence, it progresses. This mirrors how we work in therapy sessions, whether we are supporting language development, executive functioning, emotional insight, or academic skills.

What also stood out to us is how active the learner becomes. Guided Learning does not position the user as a passive consumer of information. It asks questions, encourages reflection, and builds on responses. This aligns strongly with evidence from educational psychology showing that active engagement and retrieval are key to meaningful learning and retention.

For many of the children, adolescents, and adults we work with, cognitive load is a significant barrier. Traditional learning platforms often assume that more information is better. Guided Learning takes the opposite approach. It prioritizes structure, pacing, and depth over volume. That shift alone can change how learners experience learning.

From a language and communication perspective, this is particularly relevant. Dense language, abstract explanations, and limited context are common reasons learners disengage. A guided, adaptive approach allows for gradual exposure, repetition, and clarification. This is essential for learners with developmental language disorder, dyslexia, ADHD, or second language learning needs.

There is also an emotional layer that deserves attention. Repeated experiences of confusion and failure shape how learners see themselves. When learning feels supported and predictable, confidence grows. Guided Learning reduces the feeling of being lost. It offers structure without rigidity, something we intentionally aim for in clinical work.

How We Used Guided Learning

We wanted to experience Guided Learning as users, not just read about it. Accessing it was refreshingly simple. We opened Google Gemini on the web, started a new conversation, and selected Guided Learning from the mode list. From there, we either asked a question or uploaded a document we wanted to study. There was no setup, no plugins, and no configuration.

What we noticed immediately was the pacing. Gemini did not rush to provide a complete answer. It introduced the topic step by step, checked our understanding, and only moved forward when it made sense to do so.
This alone made the experience feel more intentional and less overwhelming.

What Makes Guided Learning Different

The strength of Guided Learning lies in how it structures information. Lessons are organized with depth rather than surface summaries. Concepts are layered thoughtfully, allowing understanding to build naturally. There is also strong multimedia support. Depending on the topic, explanations may include images, videos, or interactive elements. This mirrors how we vary input in therapy based on the learner’s needs and preferences.

Another notable feature is the use of short quizzes and reflective questions. These appear naturally within the learning flow and help consolidate understanding before moving on. This approach aligns well with research on retrieval practice and learning consolidation. Most importantly, the system adapts. When the learner demonstrates understanding, it progresses. When there is uncertainty, it slows down and reframes. That responsiveness is what makes the experience feel guided rather than scripted.

Of course, Guided Learning is not therapy. It cannot replace clinical reasoning, individualized goal setting, or the therapeutic relationship. It does not account fully for sensory regulation needs, emotional states, or complex developmental histories. There is also a risk of over-reliance if such tools are used without professional judgment.

That said, as a supportive tool, the potential is clear. Guided Learning can support carryover between sessions, especially for older learners and adults. It can help clients build background knowledge, reinforce concepts introduced in therapy, or explore topics in a structured way. For clinicians, it can also serve as a learning companion for continuing education, allowing exploration of new topics without cognitive overload.

What stands out most is the philosophy behind the feature. Guided Learning assumes that understanding is built, not delivered. That learning benefits from pacing, feedback, and structure. These are not new ideas for therapists. They are foundational to effective intervention. In many ways, this feature feels less like artificial intelligence and more like digital scaffolding. When used thoughtfully, it complements human teaching rather than competing with it. It reflects a growing alignment between technology and how learning actually works.

For clinicians, the takeaway is not to replace our work with tools like this, but to integrate them intentionally. When technology supports the learning process rather than rushing it, it can become a meaningful ally. And that is a direction worth paying attention to.


AI Fatigue in Clinicians: Why More Tools Are Not Always Better and How to Choose What to Ignore

Over the past year, many clinicians have noticed a new kind of exhaustion creeping into their work. It is not the familiar emotional fatigue that comes from holding space for others, nor is it purely administrative burnout. It is something more subtle. A constant stream of new AI tools, updates, prompts, platforms, and promises, all claiming to make practice easier, faster, and smarter. Instead of relief, many clinicians feel overwhelmed, distracted, and unsure where to focus. This is what AI fatigue looks like in clinical practice.

At its core, AI fatigue is not about technology itself. It is about cognitive overload. Clinicians are already managing complex caseloads, ethical responsibilities, documentation demands, and emotional labour. When AI enters the picture without clear boundaries or purpose, it can add noise rather than clarity. The result is not better care, but fragmented attention and reduced clinical presence.

One of the main reasons AI fatigue develops is the assumption that more tools automatically mean better outcomes. In reality, clinical work does not benefit from constant switching. Each new platform requires learning, evaluation, and mental energy. When clinicians try to keep up with every new release, they often spend more time managing tools than thinking clinically. This erodes one of the most valuable resources in therapy: deep, uninterrupted reasoning.

Another contributor is the pressure to use AI simply because it exists. There is an unspoken fear of falling behind or not being innovative enough. But clinical excellence has never been about using the most tools. It has always been about using the right ones, deliberately and ethically. Innovation without intention rarely improves practice.

It is also important to recognise that not all AI tools are designed with clinicians in mind. Many are built for speed, content generation, or surface-level productivity. Therapy, assessment, and diagnosis require something different. They require nuance, uncertainty, and tolerance for complexity. Tools that promise instant answers can subtly undermine reflective thinking, especially when clinicians are already tired.

Choosing what to ignore is therefore not a failure. It is a clinical skill. A helpful starting point is to ask a simple question before adopting any AI tool: what cognitive load is this actually reducing? If a tool saves time on administrative tasks like drafting reports, summarising notes, or organising information, it may protect mental energy for clinical reasoning. If it adds another system to check, another output to evaluate, or another workflow to manage, it may be costing more than it gives.

Another key filter is alignment with clinical values. Tools should support evidence-based thinking, not shortcut it. They should help clinicians think more clearly, not think less. If a tool encourages copying, over-reliance, or uncritical acceptance of outputs, it deserves skepticism. Good AI use feels supportive, not directive.

It is also worth limiting the number of tools used at any one time. In practice, most clinicians only need one or two AI supports that fit naturally into their workflow. For example, one tool for structured thinking or documentation support, and one tool for communication or explanation. Anything beyond that should earn its place clearly.

AI fatigue also decreases when clinicians shift from tool hunting to purpose clarity. Instead of asking what new AI tool is available, ask where the friction points are in your own practice. Is it report writing?
Parent communication? Case conceptualisation? Admin backlog? Start with the problem, not the platform. This alone filters out most unnecessary noise.

Crucially, AI should never replace reflective pauses. Some of the most important clinical insights come from sitting with uncertainty, reviewing patterns over time, or discussing cases with colleagues. If AI use crowds out these processes, it is being misused. Technology should create space for thinking, not fill every gap.

There is also a cultural aspect to address. Clinicians need permission to disengage from constant updates. Not every release is relevant. Not every feature needs testing. Staying informed does not mean staying flooded. Sustainable practice requires boundaries, including digital ones.

Ultimately, the goal is not to become an AI-powered clinician. It is to remain a thoughtful, present, evidence-based one in a rapidly changing environment. AI can be a valuable support when used intentionally. It can reduce friction, organise complexity, and protect time. But only when clinicians remain in control of when, why, and how it is used. In a field built on human connection and clinical judgment, the most responsible use of AI may sometimes be choosing not to use it at all.


Claude for Healthcare and ChatGPT Health: What the New Clinical AI Shift Really Looks Like

In the past week, the healthcare AI space has moved from possibility to intention. First came the launch of ChatGPT Health, a dedicated health-focused experience designed to help individuals understand their medical information. Shortly after, Anthropic introduced Claude for Healthcare, a platform built specifically for clinical, administrative, and research environments. Together, these releases signal a clear shift. AI is no longer being positioned as a general assistant that happens to talk about health. It is being shaped around the realities of healthcare itself. From a clinical and therapy perspective, this distinction matters.

ChatGPT Health is centred on the personal health story. It creates a separate, protected health space within the app where users can connect medical records and wellness data. The emphasis is on interpretation rather than instruction. Lab results, lifestyle patterns, and health histories are translated into clear, accessible language. The experience is designed to help individuals and families arrive at appointments better prepared, with clearer questions and a stronger understanding of their own data.

One of the defining features of ChatGPT Health is its focus on communication. The system adapts explanations to the user’s level of understanding and emotional state. This is particularly relevant in therapy contexts, where families often feel overwhelmed by medical language and fragmented information. By reducing confusion and cognitive load, the tool supports more meaningful conversations between clinicians and families. Importantly, it does not diagnose, prescribe, or replace professional care. Its role is interpretive and supportive.

Claude for Healthcare operates from a very different starting point. It is built around healthcare systems rather than individual narratives. Its features are designed to handle the complexity of clinical infrastructure, including medical coding, scientific literature, regulatory frameworks, and administrative workflows. This positions Claude less as a conversational interpreter and more as a reasoning and synthesis tool for professionals.

For clinicians, this means support with tasks that often sit in the background of care but consume significant time and mental energy. Summarising dense records, aligning documentation with evidence, navigating coverage requirements, and integrating research into clinical reasoning are all areas where Claude’s design is particularly strong. Its ability to maintain coherence across long, complex inputs mirrors how clinicians reason through cases over time rather than in isolated moments.

A clear way to think about the difference

| Element | ChatGPT Health | Claude for Healthcare |
| --- | --- | --- |
| Primary user | Individuals and families | Clinicians, organisations, researchers |
| Core role | Interpretation and understanding | Reasoning, synthesis, and structure |
| Focus | Personal health information | Clinical systems and workflows |
| Strength | Communication and clarity | Depth, coherence, and evidence alignment |
| Therapy relevance | Supporting family understanding and engagement | Supporting clinical documentation and decision-making |
| Ethical emphasis | Individual data control and separation | Enterprise compliance and regulatory alignment |

When comparing the two tools, the difference is not about which is better, but about what each is built to carry. ChatGPT Health carries the human side of health information. It helps people understand, reflect, and engage. Claude for Healthcare carries the structural side.
It supports organisation, justification, and system-level reasoning. This distinction becomes especially relevant in therapy practice. ChatGPT Health can help families understand reports, track patterns, and prepare emotionally and cognitively for therapy sessions. Claude for Healthcare can support clinicians in ensuring that assessments, goals, and documentation are aligned with current evidence and regulatory expectations. One strengthens relational communication. The other strengthens clinical structure.

Privacy and ethics are central to both platforms, but again approached differently. ChatGPT Health prioritises individual data separation and user control, reinforcing trust at a personal level. Claude for Healthcare focuses on enterprise-level security and compliance, reinforcing trust within healthcare organisations. Both approaches reflect the different problems each tool is designed to solve.

What is essential to remember is that neither tool replaces clinical judgment. Therapy is not a data problem to be solved. It is a relational, contextual process that requires observation, interpretation, and ethical decision-making. AI can support thinking, reduce administrative burden, and organise information. It cannot read the room, sense emotional nuance, or build therapeutic alliance.

What we are seeing now is the early shaping of two complementary roles for AI in healthcare. One supports understanding and engagement. The other supports reasoning and systems. Used thoughtfully, both can protect clinicians’ time and cognitive resources, allowing more space for what matters most in therapy: deep thinking, human connection, and evidence-based care.


Google Just Put AI Inside Gmail: Three Billion Inboxes Are About to Change

Google has officially embedded AI into Gmail, and this is not just another productivity update. Email is one of the most cognitively demanding systems people use daily, and now AI is sitting directly inside it. With Gemini, users can summarise long email threads instantly, ask their inbox questions in plain English, write or polish emails for free, receive reply suggestions that actually sound like them, and check tone, grammar, and clarity. Soon, Gmail will also auto-filter clutter, flag VIP messages, and surface high-stakes emails. The rollout starts in U.S. English, with more languages coming, and some advanced features requiring Pro or Ultra plans.

From a therapy perspective, this shift matters more than it appears. Email is not just communication. It is executive functioning in action. It requires planning, prioritisation, working memory, emotional regulation, and pragmatic language skills. For many clients, and many clinicians, email is a daily source of cognitive overload.

What Gemini is doing is externalising parts of that cognitive load. Summarising threads reduces working memory demands. Asking the inbox questions bypasses inefficient search strategies. Tone and clarity checks support pragmatic language. Drafting assistance lowers initiation barriers. These functions closely mirror the supports we already use in therapy, making Gemini function like a cognitive scaffold rather than a replacement for thinking.

So how might therapists actually benefit from this? For speech and language therapists, Gemini can support professional written communication without compromising clinical intent. Writing parent emails, school correspondence, or multidisciplinary updates often requires precise tone, clarity, and pragmatics. AI-assisted drafting and tone refinement can reduce linguistic load while allowing the therapist to retain full control over content and boundaries. Clinically, these same features can be used to model appropriate email responses with older clients or adolescents working on functional communication and pragmatic skills.

For psychologists and mental health professionals, the benefit lies in cognitive and emotional regulation. Difficult emails often trigger avoidance, anxiety, or overthinking. AI-supported drafting can help clients initiate responses, reduce rumination, and focus on the message rather than the stress of wording. In therapy, this opens space to discuss decision-making, boundaries, and reflective use rather than avoidance.

For neurodivergent clients, particularly those with ADHD or ASD, Gemini may reduce barriers related to initiation, organisation, and interpretation of long email threads. Used intentionally, it can support access without masking needs. Used uncritically, it risks bypassing skill development. This distinction is where clinical guidance matters.

There are also ethical considerations we cannot ignore. Gmail is not a clinical platform. Identifiable client information should never be entered into AI systems without safeguards. AI assistance does not remove professional responsibility for confidentiality, judgment, or relational nuance.

The larger shift is this. AI is no longer a separate tool we choose to open. It is becoming part of the cognitive environment our clients live in. That means therapy cannot ignore it. Our role is not to resist these tools or to hand thinking over to them. Our role is to help clients and clinicians use AI reflectively, as support rather than authority. Three billion inboxes are about to change.
Human judgment, clinical reasoning, and ethical care still need to lead.


Transforming Health Conversations: What ChatGPT Health Means for Clinical Practice

The way people seek and process health information is evolving. Increasingly, individuals turn to digital tools to understand symptoms, test results, and medical terminology before or after interacting with healthcare professionals. The introduction of ChatGPT Health reflects this shift and represents a more structured approach to how health-related conversations are supported by AI.

Health questions are rarely neutral. They are often driven by uncertainty, anxiety, or difficulty interpreting complex information. ChatGPT Health has been designed as a dedicated environment for these conversations, acknowledging that health information requires clearer boundaries, higher safety standards, and careful framing to avoid misunderstanding or harm.

One of the most clinically relevant features is the option for users to connect their own health data. This may include laboratory results, sleep patterns, activity levels, or nutrition tracking. When information is grounded in personal context, explanations become more meaningful and cognitively accessible. From a therapeutic standpoint, this can reduce information overload and support clearer self-reporting, particularly for individuals who struggle with medical language or fragmented recall.

Privacy and user control are central to this design. Health-related conversations are kept separate from other interactions, and users can manage or delete connected data at any time. Information shared within this space is protected and not used beyond the individual’s experience. This emphasis on consent and transparency is essential for maintaining trust in any clinical or health-adjacent tool.

ChatGPT Health is not positioned as a diagnostic or treatment system. However, its value for therapists lies in how it can support diagnostic thinking without replacing professional judgement. In clinical practice, many clients present with disorganised histories, vague symptom descriptions, or difficulty identifying patterns over time. AI-supported tools can help clients structure information prior to sessions, such as symptom onset, frequency, triggers, functional impact, and response to interventions. This structured preparation can significantly improve the quality of clinical interviews and reduce time spent clarifying basic details.

For therapists, this organised information can support hypothesis generation and differential thinking. Patterns emerging from sleep disruption, fatigue, emotional regulation difficulties, cognitive complaints, or medication adherence may prompt more targeted questioning or indicate the need for formal screening or referral. In this way, AI functions as a pattern recognition support tool rather than a diagnostic authority.

This is particularly relevant in neurodevelopmental and mental health contexts. Recurrent themes related to executive functioning, sensory processing, emotional regulation, or communication breakdowns can help clinicians refine assessment planning and select appropriate tools. The AI does not label conditions or confirm diagnoses, but it may help surface clinically meaningful clusters that warrant professional evaluation.

In speech and language therapy and related fields, this can enhance functional profiling. Clients may use the tool to articulate difficulties with comprehension, expression, voice fatigue, swallowing concerns, or cognitive load in daily communication. This can enrich case history data and support more focused assessment and goal setting.
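As one concrete illustration of the “structured information prior to sessions” idea above, here is a minimal sketch in Python of what such a record could look like, using the fields named earlier (onset, frequency, triggers, functional impact, response to interventions). The structure, field names, and example values are our own illustrative assumptions, not a feature of ChatGPT Health, and a log like this should hold no identifying information.

```python
# A minimal sketch of a structured pre-session symptom log, using the fields
# mentioned above. This is an illustrative structure, not part of any product,
# and should contain no identifying details.

from dataclasses import dataclass, field

@dataclass
class SymptomEntry:
    description: str                  # plain-language description of the difficulty
    onset: str                        # when it started, as the client describes it
    frequency: str                    # e.g. "most days", "2-3 times a week"
    triggers: list[str] = field(default_factory=list)
    functional_impact: str = ""       # effect on work, school, communication, daily life
    response_to_interventions: str = ""  # what has been tried and what changed

def pre_session_summary(entries: list[SymptomEntry]) -> str:
    """Render the log as a short, readable summary a client could bring to a session."""
    lines = []
    for e in entries:
        lines.append(
            f"- {e.description} (since {e.onset}, {e.frequency}); "
            f"triggers: {', '.join(e.triggers) or 'unclear'}; "
            f"impact: {e.functional_impact or 'not described'}; "
            f"tried so far: {e.response_to_interventions or 'nothing noted'}"
        )
    return "\n".join(lines)

if __name__ == "__main__":
    log = [SymptomEntry(
        description="Voice fatigue by mid-afternoon",
        onset="about three months ago",
        frequency="most workdays",
        triggers=["long video calls", "noisy classrooms"],
        functional_impact="avoids speaking in late meetings",
        response_to_interventions="more water and vocal rest, small improvement",
    )]
    print(pre_session_summary(log))
```

Whether a client fills this in by hand, with a form, or with an AI tool, the value is the same: cleaner case history data arriving at the session, with interpretation left to the clinician.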
It is essential to clearly distinguish diagnostic support from diagnostic authority. ChatGPT Health should never be used to assign diagnoses, rule out medical conditions, or provide clinical conclusions. Instead, it can support therapists by helping clients organise experiences, improving symptom description, highlighting patterns for exploration, and increasing preparedness for assessment. Therapists remain responsible for interpretation, clinical reasoning, and decision making. Part of ethical practice will involve explicitly discussing these boundaries with clients and reinforcing that AI-generated insights are informational, not diagnostic.

For patients, this tool may increase health literacy, confidence, and engagement. Being better prepared for appointments and therapy sessions can reduce anxiety and support more collaborative care. However, patients also require guidance to avoid overinterpretation or false reassurance. Therapists play a key role in helping clients contextualise information, process emotional reactions to health data, and identify when professional medical input is necessary.

The development of ChatGPT Health involved extensive collaboration with physicians across multiple specialties, shaping how uncertainty is communicated and when escalation to professional care is encouraged. This strengthens its role as a preparatory and reflective resource rather than a directive one.

As AI continues to enter health and therapy spaces, its clinical value will depend on how clearly roles and boundaries are defined. When used as a tool for organisation, reflection, and hypothesis support, rather than diagnosis or treatment, systems like ChatGPT Health can enhance clinical efficiency, improve communication, and support more informed participation in care while keeping professional judgement firmly at the centre. The future of AI in healthcare is not about replacing diagnosis. It is about supporting better histories, clearer questions, and more thoughtful clinical reasoning.


Teletherapy in 2026: Our Clinical Take on What AI Is Actually Changing

As therapists working daily in teletherapy, we have all felt the shift. AI is no longer something happening “out there” in tech headlines; it is quietly entering our platforms, our workflows, and our clinical decision-making spaces. The question for us has never been whether AI will be used in therapy, but how it can be used without compromising ethics, clinical judgment, or the therapeutic relationship.

Over the past year, we have actively explored, tested, and critically evaluated several AI-driven tools in teletherapy contexts. What stands out most is this: the most useful AI tools are not the loudest ones. They are the ones that reduce friction, cognitive load, and therapist burnout while preserving our role as the clinical authority.

Expanding Access Without Diluting Care

One of the most meaningful developments we’ve seen recently is how AI is being used to expand access to therapy rather than replace it. Platforms such as Constant Therapy have expanded their AI-driven speech and cognitive therapy programs into additional languages, including Spanish and Indian English. This matters clinically. It allows us to assign culturally and linguistically relevant home practice that aligns with what we are targeting in sessions, instead of relying on generic or mismatched materials. From our experience, this kind of AI-supported practice increases carryover without increasing preparation time, something teletherapy clinicians deeply need.

Conversational AI That Supports Continuity, Not Dependency

Mental health platforms like Wysa, particularly with the introduction of Wysa Copilot, reflect a growing shift toward hybrid models where AI supports therapists rather than attempts to replace them. These systems help structure between-session support, guide reflective exercises, and support homework completion, all while keeping the clinician in the loop. When we tested similar conversational AI tools, what we valued most was not the chatbot itself, but the continuity. Clients arrived to sessions more regulated, more reflective, and more ready to engage because the therapeutic thread had not been completely paused between sessions.

Speech and Language AI: Practice, Not Diagnosis

Advances in automatic speech recognition have significantly improved the quality of AI-assisted speech practice tools. In articulation and fluency work, we’ve used AI-supported practice platforms to increase repetition, consistency, and feedback during teletherapy homework. Clinically, we see these tools as structured practice partners, not assessors and certainly not diagnosticians. They help us gather cleaner data and observe patterns, but interpretation remains firmly in our hands. When used this way, AI becomes an efficiency tool rather than a clinical shortcut.

Voice Biomarkers as Clinical Signals, Not Labels

Another emerging area is the use of voice biomarkers, tools that analyze vocal features to flag possible emotional or mental health risk markers. Tools like Kintsugi and Ellipsis Health are increasingly discussed in clinical AI spaces. When we explored these tools, we found them useful as conversation starters, not conclusions. In teletherapy, where subtle nonverbal cues can be harder to read, having an additional signal can help us ask better questions earlier in the session. We are very clear, however: these tools support clinical curiosity; they do not replace clinical judgment.

Ethics, Regulation, and Our Responsibility

Not all AI adoption has been smooth, and rightly so.
In 2025, several regions introduced restrictions on AI use in psychotherapeutic decision-making. From our perspective, this is not a step backward. It reflects a necessary pause to protect clients, clarify consent, and reinforce professional boundaries. As therapists, we carry responsibility not just for outcomes, but for process. Any AI tool we use must be transparent, ethically integrated, and clearly secondary to human clinical reasoning.

What We’re Taking Forward Into Our Telepractice

Based on what we’ve tested and observed, these are the principles guiding our use of AI in teletherapy:

- We use AI to reduce administrative and cognitive load, not to replace thinking.
- We choose tools grounded in clinical logic, not generic productivity hype.
- We prioritize transparency with families and clients about how technology is used.
- We treat AI outputs as data points, never as decisions.

What feels different about teletherapy in 2026 is not the presence of AI; it’s the maturity of how we engage with it. When AI is positioned as a background support rather than a clinical authority, it allows us to show up more present, more regulated, and more attuned to our clients. Teletherapy does not need less humanity. It needs protection of it. Used responsibly, AI helps us do exactly that.


Wrapping Up 2025: A Year of AI in Therapy, What We Learned, and What We Expect Next

As we reach the end of the year, many of us are reflecting not just on our caseloads or outcomes, but on how much our daily practice has shifted. 2025 was not the year AI took over therapy. Instead, it was the year AI quietly settled into our workflows and pushed us as clinicians to become more intentional about protecting clinical judgment while embracing useful innovation. From speech therapy and mental health to teletherapy platforms, AI moved from experimental to practical. What matters now is how we, as therapists, choose to use it.

AI This Year: From Hype to Real Clinical Use

One of the most noticeable changes in 2025 is how AI tools are being designed around clinicians rather than instead of them. Platforms such as Wysa, particularly through clinician-supported tools like Wysa Copilot, reflect this shift. These systems are no longer simple chatbots. They now function as structured supports that help maintain therapeutic continuity between sessions while keeping clinicians in control. From our own testing and use, the value has not been in AI talking to clients, but in how it supports reflection, homework follow-through, and emotional regulation between sessions. Clients arrive more prepared, and sessions feel less like a restart and more like a continuation.

Speech and Language Practice: Where AI Truly Helps

In speech and language therapy, AI had its strongest impact this year in practice intensity and consistency. AI-assisted articulation and voice practice tools now offer more accurate feedback and structured repetition that is difficult to achieve consistently between teletherapy sessions. We have used these tools as practice partners rather than assessors. They help us collect clearer data and observe patterns over time, while interpretation remains human. Their strength lies in freeing our cognitive space so we can focus on planning, adapting, and responding within sessions.

Accessibility and Reach: A Quiet Win

Another important development this year has been the expansion of AI-driven therapy platforms into additional languages and regions. Tools like Constant Therapy expanding into multiple languages signal something important. AI can reduce access barriers without lowering clinical standards. For teletherapy, this has translated into better carryover, more culturally relevant practice materials, and stronger engagement outside live sessions.

Voice-Based AI and Emotional Signals: Used With Caution

2025 also brought increased attention to voice-based AI tools that analyze speech patterns for emotional or mental health signals. Tools such as Kintsugi and Ellipsis Health are often mentioned in this context. From our experience, these tools work best as signals rather than answers. In teletherapy, where subtle cues can be harder to detect, they can guide deeper clinical questioning. They do not diagnose, and they should never replace observation, clinical interviews, or professional judgment.

Ethics and Regulation Took Center Stage

This year also reminded us that innovation without boundaries is risky. Increased regulation around AI use in therapy, particularly related to crisis detection, consent, and transparency, has been a necessary step. As clinicians, this aligns with what we already practice. Therapeutic work requires accountability, clarity, and human responsibility. AI must remain secondary to the therapeutic relationship.

How We Are Using AI Going Forward

As we close the year, these principles guide our clinical use of AI:
- We use AI to reduce administrative and cognitive load rather than replace thinking.
- We choose tools grounded in clinical logic and therapeutic models.
- We remain transparent with clients and families about AI use.
- We treat AI outputs as supportive data rather than clinical decisions.

When used this way, AI becomes an ally rather than a distraction.

Looking Ahead

If 2025 was the year of testing and learning, the year ahead will likely focus on refinement. We expect clearer standards, better clinician-informed design, and deeper conversations around ethics, inclusion, and sustainability. Most importantly, we expect the focus to return again and again to what matters most: human connection, clinical reasoning, and ethical care. AI will continue to evolve. Our role as therapists remains unchanged. We interpret. We adapt. We connect.
