
AI in Clinical Care—Already Here, Already Changing Practice
Artificial Intelligence is no longer theoretical. It’s embedded in our therapy rooms, electronic records, and clinical tools. From speech-language pathology to neuropsychology, AI is reshaping how we assess, document, and intervene.
The question is no longer whether therapists will use AI, but how they are already using it, and how to do so responsibly without compromising therapeutic presence or judgment.
As highlighted in our previous issue, AI is influencing not only our tools, but our decisions. Many clinicians now use AI-supported platforms—sometimes unknowingly—raising important questions about transparency, ethics, and outcomes.
Schubert et al. (2025) remind us that moving from passive use to informed application requires structured education. Knowing how AI works isn’t enough; we need to understand how to reason with it, critique it, and lead its ethical use.
Where AI Is Already Supporting Therapy
In real-world practice, AI is already making a difference:
- Clinical software analyzes session notes to flag trends.
- Voice tools track acoustic markers over time.
- Adaptive therapy apps tailor content based on client responses.
Jha and Topol (2024) note that such tools are improving efficiency in fields reliant on pattern recognition. AI can surface meaningful shifts, propose next steps, and adapt tasks in real time. But it cannot make clinical decisions.
An algorithm may detect a phonological pattern or attentional lapse. Yet the therapist must decide: Is this clinically relevant? Is it consistent with the client’s goals, history, or needs?
AI can suggest. The therapist must interpret.
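To make that division of labour concrete, here is a minimal sketch in Python, using invented session values and a placeholder threshold, of how a tool might flag a drift in a single acoustic marker across sessions. The flag is only a prompt; judging whether the shift reflects progress, fatigue, or a recording artifact remains the therapist's call.

```python
import numpy as np

# Hypothetical per-session values of one acoustic marker
# (e.g., mean fundamental frequency in Hz), ordered oldest to newest.
session_f0 = np.array([182.0, 179.5, 181.2, 176.8, 174.3, 171.9])

# Fit a simple linear trend: the slope is the change per session.
sessions = np.arange(len(session_f0))
slope, _intercept = np.polyfit(sessions, session_f0, deg=1)

# Placeholder cutoff for illustration only, not a validated clinical threshold.
FLAG_HZ_PER_SESSION = 1.5

if abs(slope) > FLAG_HZ_PER_SESSION:
    print(f"Flag for review: mean F0 drifting {slope:+.1f} Hz per session.")
else:
    print("No notable drift across recent sessions.")
```

Nothing in that flag says why the values moved; that interpretive step is exactly where clinical judgment leads.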
Why Clinical Judgment Still Leads
AI handles large data sets—but it cannot read between the lines. It can’t recognize when a data point reflects fatigue, emotional strain, cultural difference, or an artifact.
Clinical work is not just about data. It is about human context, developmental history, motivation, and values. AI cannot weigh competing goals or make value-based decisions.
As Schubert and colleagues (2025) propose, responsible AI use develops across three levels:
- Basic competence: Safe use and awareness of limitations
- Proficient use: Ability to interpret, critique, and apply results
- Expert engagement: Participating in the development of clinically meaningful tools
This framework positions clinicians as decision-makers—not passive tool users, but ethical leaders.
Personalization Without Losing the Personal
AI can make therapy more adaptive. Some apps modify pacing or feedback in real time—slowing exercises during stuttering episodes, or increasing visual prompts for distracted learners.
These features enhance responsiveness. But adaptation alone isn’t therapy. It becomes meaningful only when interpreted and guided by a clinician.
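A minimal sketch of the kind of pacing rule described above, with hypothetical names, thresholds, and step sizes rather than any particular product's logic, might look like this:

```python
def adapt_pacing(disfluency_rate: float, current_interval_s: float) -> float:
    """Illustrative adaptation rule: slow the exercise when disfluency rises.

    disfluency_rate: disfluencies per 100 syllables in the last exercise block.
    current_interval_s: current pause between prompts, in seconds.
    Thresholds and step sizes are placeholders, not validated values.
    """
    if disfluency_rate > 8.0:        # marked increase: slow down noticeably
        return min(current_interval_s + 1.0, 6.0)
    if disfluency_rate > 4.0:        # mild increase: slow down slightly
        return min(current_interval_s + 0.5, 6.0)
    return max(current_interval_s - 0.5, 2.0)  # fluent block: gently speed up

# Example: 9 disfluencies per 100 syllables at a 3-second prompt interval
print(adapt_pacing(9.0, 3.0))  # -> 4.0 seconds between prompts
```

The rule only adjusts timing; deciding whether a slower pace actually serves the client's goal remains a clinical judgment.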
Therapists remain essential in deciding whether a pattern is significant, whether a tool supports or distracts from the goal, and how to communicate these decisions to clients and families.
As Jha and Topol (2024) note, AI should inform, not replace, clinical reasoning.
Will AI Replace Therapists? Not Likely.
Concerns about AI replacing clinicians are understandable—but not supported by current evidence.
The World Health Organization (2024) affirms that the best outcomes occur when clinicians retain authority and AI acts in a supporting role. AI enhances, rather than diminishes, the role of skilled professionals.
Clinicians with AI literacy are better equipped to:
- Evaluate tool limitations
- Prevent misuse
- Advocate for inclusive and equitable AI design
By engaging with AI critically and ethically, therapists remain stewards of care—not spectators to technological change.
AI Engagement Doesn’t Require Coding—It Requires Questions
You don’t need programming skills to use AI well. But you do need to ask critical questions:
- How was this tool trained?
- On which populations?
- What assumptions and biases are built into its output?
Functional AI literacy includes understanding key concepts like algorithmic bias, training data, and model reliability. It also involves separating evidence-based innovation from marketing hype.
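As one way to picture what those questions are probing, the sketch below uses invented subgroup accuracy figures for a hypothetical screening tool; the same check can be done by reading a vendor's validation table, so no programming is required in practice.

```python
# Hypothetical vendor-reported accuracy, by client subgroup.
# The numbers are invented for illustration only.
reported_accuracy = {
    "monolingual English speakers": 0.91,
    "bilingual speakers": 0.78,
    "speakers with dysarthria": 0.69,
}

overall = sum(reported_accuracy.values()) / len(reported_accuracy)
worst_group, worst_acc = min(reported_accuracy.items(), key=lambda kv: kv[1])

print(f"Average reported accuracy: {overall:.2f}")
print(f"Lowest-performing subgroup: {worst_group} ({worst_acc:.2f})")

# Illustrative rule of thumb, not a standard: a large gap between the average
# and the weakest subgroup is a prompt for harder questions, not proof the
# tool is unusable.
if overall - worst_acc > 0.10:
    print("Notable subgroup gap: ask how the tool was trained and validated.")
```

A gap like this does not disqualify a tool, but it should shape which clients you use it with and how you explain it to them.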
As the WHO (2024) reminds us, clinicians are responsible for the tools they choose, even when the AI is invisible.
Coming Next: Evaluating AI Tools Before You Use Them
In our next article, we’ll explore how to critically evaluate an AI product before introducing it to clients. With new technologies entering the field monthly, we must remain discerning.
We’ll cover:
- What to ask before adopting a tool
- How to interpret claims and results
- How to balance innovation with evidence
AI can’t replace what makes therapy powerful—but when used well, it can enhance connection, clarity, and compassion.
To find out more, join our AI webinars for therapists! Go to Courses for full details.
References
Jha, S., & Topol, E. J. (2024). Adapting clinical practice to artificial intelligence: Opportunities, challenges, and ethical considerations. The Lancet Digital Health, 6(3), e175–e183. https://doi.org/10.1016/S2589-7500(24)00020-9
Schubert, T., Oosterlinck, T., Stevens, R. D., Maxwell, P. H., & van der Schaar, M. (2025). AI education for clinicians. eClinicalMedicine, 79, 102968. https://doi.org/10.1016/j.eclinm.2024.102968
World Health Organization. (2024). Ethics and governance of artificial intelligence for health: Guidance and tools. https://www.who.int/publications/i/item/9789240077925