When Algorithms Meet Interaction: A Clinician‑Researcher’s Critical Reading of ESLA’s 2026 AI Position Paper

In a busy speech and language therapy week, “AI” doesn’t arrive as a philosophical debate. It arrives as a friction point: Can we trust automated transcription for a multilingual child? Should we trial a screening app to reduce waiting lists? Can a generative tool draft home-program instructions without drifting beyond evidence? ESLA’s position paper, Shaping the Future of Speech and Language Therapy with Artificial Intelligence (published March 6, 2026), lands right in this reality. It acknowledges that AI is rapidly reshaping healthcare, and it correctly frames SLT as a profession built on interaction, context, and judgement. The problem is that when clinicians turn to a position paper, many of us aren’t only looking for a vision; we’re looking for direction.

To ESLA’s credit, the paper avoids the two common extremes: adopting tools because they’re new, or rejecting them because they feel risky. It states a sensible professional stance: AI should support rather than replace therapists, and implementation must respect human rights, dignity, diversity, and inclusion. But here’s the tension: those principles are true, yet still too general to steer day-to-day decisions. In practice, “support not replace” needs to translate into operational boundaries: Which tasks are acceptable to automate? Which tasks must remain clinician-led? What level of human oversight is “enough” in real services under pressure? Without that, the document risks becoming a shared set of values that everyone agrees with, while actual practice drifts in whichever direction vendors, budgets, or time constraints push it.

ESLA outlines opportunities (screening, assessment support, intervention planning, outcome monitoring, personalised care, hybrid delivery, access). Again, plausible, and in many areas, exciting. Yet the paper could have pushed further into the uncomfortable practicalities: the reality that “personalisation” can become “fit to the majority,” especially for multilingual children, minoritised dialects, AAC users, complex disability profiles, and culturally specific communication norms. If ESLA’s goal is equity, we need stronger statements on minimum evidence expectations, not just aspirations. What counts as sufficient validation for multilingual speech recognition? What sample sizes? What types of accents and dialects? What error rates are acceptable, and for whom? Otherwise, “equity” stays a hope rather than a measurable requirement.
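To make “measurable requirement” concrete: the standard metric for speech recognition accuracy is word error rate (WER), the number of substitutions, deletions, and insertions divided by the length of the reference transcript. The minimal sketch below, in plain Python with entirely hypothetical transcripts, shows why a single aggregate accuracy figure is not enough: only WER reported separately by dialect group lets a service see whether a tool “fits the majority” at minoritised speakers’ expense.

```python
# Minimal sketch: per-dialect word error rate (WER) reporting.
# All transcripts here are hypothetical; the point is that one overall
# accuracy number can hide large gaps between speaker groups.

from collections import defaultdict

def wer(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    via standard Levenshtein alignment over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical evaluation samples: (dialect group, reference, ASR output)
samples = [
    ("majority_accent",     "the cat sat on the mat", "the cat sat on the mat"),
    ("majority_accent",     "she sells sea shells",   "she sells sea shells"),
    ("minoritised_dialect", "the cat sat on the mat", "the cap sat on a mat"),
    ("minoritised_dialect", "she sells sea shells",   "she sell shells"),
]

by_group = defaultdict(list)
for group, ref, hyp in samples:
    by_group[group].append(wer(ref, hyp))

for group, scores in by_group.items():
    print(f"{group}: mean WER = {sum(scores) / len(scores):.2f} (n={len(scores)})")
```

Run on these toy examples, the majority group scores a mean WER of 0.00 while the minoritised group scores roughly 0.42, even though a pooled figure would look respectable. That disaggregated table is exactly the kind of “multilingual performance reporting” a minimum evidence standard could require.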

Where the paper feels strongest is its ethical backbone: privacy, consent, data ownership, bias, unequal access, and over-reliance. But ethically strong does not automatically mean clinically usable. This is where many clinicians will say: It’s nice, but… it’s still vague. We wanted more “what to do / what not to do.” We wanted a toolkit. We wanted ESLA to name the concrete risks that show up in real workflows: staff uploading voice samples into unknown cloud services; student clinicians pasting identifiable details into chatbots; managers adopting “AI screening” because it’s cheaper, then letting it quietly become gatekeeping. A position paper can’t solve everything, but it can draw bright lines, and this one often speaks in principles where the field is asking for guardrails.

What would make this paper more actionable is a clinician-facing set of guidelines that reads like a decision support tool, not a manifesto. For example: a “red / amber / green” use-case list (e.g., green = admin summarisation with de-identified data; amber = draft therapy materials with clinician review; red = automated diagnosis/eligibility decisions). A privacy verification checklist (Where is data stored? Is it used to train models? Can we opt out? Who is the data processor/controller? Is encryption stated? What is retention/deletion policy? Can we get a Data Processing Agreement? Is there audit logging?). A minimum evaluation standard (evidence thresholds, bias testing expectations, multilingual performance reporting, disclosure requirements). And even a “vendor questions” one-pager managers can use before procurement. Without these, professional leadership remains a principle, but clinicians and services are left to invent policy locally, inconsistently, and often under time pressure.
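To show how such guidance could read as decision support rather than manifesto, here is a minimal sketch of a red/amber/green triage structure in Python. The tiers, use cases, and conditions are illustrative assumptions drawn from the examples above, not ESLA policy; the design point is that unlisted use cases default to red, so novelty alone never grants permission.

```python
# Minimal sketch of a "red / amber / green" AI use-case triage.
# Tiers, use cases, and conditions are illustrative assumptions only.

from enum import Enum

class Tier(Enum):
    GREEN = "permitted with standard safeguards"
    AMBER = "permitted only with named clinician review"
    RED = "not permitted as an automated decision"

# Each entry: use case -> (tier, condition that must hold)
USE_CASE_POLICY = {
    "admin summarisation":            (Tier.GREEN, "data must be de-identified"),
    "draft therapy materials":        (Tier.AMBER, "clinician reviews and signs off before use"),
    "draft home-program instructions":(Tier.AMBER, "clinician checks content against the evidence base"),
    "automated diagnosis":            (Tier.RED,   "diagnosis remains a clinician-led decision"),
    "automated eligibility decisions":(Tier.RED,   "eligibility remains a clinician-led decision"),
}

def check_use_case(name: str) -> str:
    # Anything not explicitly listed defaults to red: no silent adoption.
    tier, condition = USE_CASE_POLICY.get(
        name, (Tier.RED, "unlisted use cases default to red pending review"))
    return f"{name}: {tier.name} ({tier.value}); condition: {condition}"

if __name__ == "__main__":
    for case in ["admin summarisation", "automated diagnosis", "novel screening app"]:
        print(check_use_case(case))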

ESLA’s conclusion calls for deliberate, collaborative, ethical adoption so AI strengthens participation, dignity, and inclusion. That’s a good direction. But the next step should be explicit: an implementation appendix that turns values into practice. The profession doesn’t just need permission to engage with AI; it needs structure to engage safely. Until then, ESLA’s paper functions as a strong ethical positioning statement, but not yet as the practical compass many of us hoped for. In other words, the paper is important, timely, and well-intentioned, but it still needs more actionable guidance for real-world practice.
