
The EU Artificial Intelligence Act (Regulation 2024/1689), adopted in June 2024, is already reshaping how AI is used in therapy. While the legislation sets out clear rules on transparency, risk classification, data governance, and human oversight, the real test lies in how clinics across Europe put these principles into action.
Clinicians—especially those working in neurodevelopmental, rehabilitative, and mental health services—are asking a vital question: How do we comply with new AI regulations while preserving the human-centered nature of therapy?
Translating Regulation Into Clinical Reality
Across Europe, clinics are moving from high-level compliance to day-to-day operational changes in how AI tools are selected, monitored, and disclosed. In Belgium, for instance, a pediatric neurorehabilitation center has adopted a formal internal review process. Before any AI-assisted tool is used with children, teams assess the system’s training data, analyze its outcomes across diverse populations, and require therapists to demonstrate understanding of the AI model’s functionality and limits.
These steps go beyond mere legal checklists. Under the AI Act, many digital therapy tools, including those used for speech analysis, attention monitoring, or adaptive content delivery, fall into the "high-risk" category. This classification requires standards of robustness, explainability, and human oversight (European Union, 2024). As a result, some clinics now treat AI tools as they would Class II medical devices: structured evaluation, documentation, and clinician sign-off are required before use.
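For clinics formalizing this kind of review, the record itself can stay simple. Below is a minimal sketch in Python of what a pre-use evaluation entry might look like; every name and field here is an illustrative assumption, not a requirement drawn from the AI Act or from the Belgian clinic's actual process.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIToolReview:
    tool_name: str
    intended_function: str            # e.g. "speech-sample fluency scoring"
    training_data_summary: str        # provenance notes supplied by the vendor
    subgroup_outcomes_reviewed: bool  # outcomes analyzed across diverse populations?
    clinician_signoffs: list = field(default_factory=list)  # therapists who can explain the tool's limits
    review_date: date = field(default_factory=date.today)

    def approved_for_use(self) -> bool:
        # Cleared only when subgroup outcomes were reviewed and at least
        # one clinician has signed off on the tool's function and limits.
        return self.subgroup_outcomes_reviewed and len(self.clinician_signoffs) > 0

review = AIToolReview(
    tool_name="ExampleSpeechScorer",  # hypothetical tool name
    intended_function="speech-sample fluency scoring",
    training_data_summary="vendor-reported: adult monolingual Dutch speakers",
    subgroup_outcomes_reviewed=True,
    clinician_signoffs=["A. Janssens"],
)
print(review.approved_for_use())  # True
```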
Training the Clinician, Not Just the Tool
In Denmark, a national therapy center network has launched mandatory AI ethics workshops. These don’t aim to turn therapists into data scientists but to equip them with foundational AI literacy. Therapists learn to ask critical questions:
- Was the model trained on monolingual or multilingual data?
- Does the tool generalize across neurodiverse or developmentally atypical populations?
- Could its outputs reflect bias, artifact, or irrelevant variability?
This emphasis on reflective practice aligns with WHO recommendations (2024), which stress that clinicians—not algorithms—must remain the final decision-makers. AI can suggest, but it cannot interpret. A fluency tracker may flag increased pause time, but it’s the therapist who determines whether the change reflects anxiety, illness, or simply a noisy environment.
Training also now includes simulated case studies. For instance, therapists might explore how two similar speech samples receive different AI scores and must trace the model’s reasoning—a process that builds their confidence in evaluating AI reliability and limitations (Schubert et al., 2025).
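To see why such exercises matter, consider a toy scorer (not any vendor's actual model): a hard threshold on a single feature can flip the score even when two samples differ trivially, which is exactly the kind of behavior therapists learn to trace.

```python
def fluency_score(mean_pause_sec: float, speech_rate_wpm: float) -> float:
    # Hypothetical rule-based scorer: a fixed penalty applies above 0.5 s.
    penalty = 0.3 if mean_pause_sec > 0.50 else 0.0
    return round(min(speech_rate_wpm / 150.0, 1.0) - penalty, 2)

# Two near-identical samples: a 0.02 s difference in mean pause time
# shifts the score by 0.3 because it crosses the threshold.
print(fluency_score(0.49, 120))  # 0.8
print(fluency_score(0.51, 120))  # 0.5
```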
Embedding Transparency Into Client Care
Clinics in the Netherlands, France, and Germany are leading on transparency. Informed consent now includes plain-language disclosures when AI tools are involved. Families are told if AI contributes to scoring, tailoring interventions, or flagging areas of concern. This kind of transparency, especially in pediatric and disability services, builds trust and satisfies the Act's transparency obligations under Article 50: patients have the right to know when AI is influencing care (European Union, 2024).
Some platforms are going further: In the Netherlands, a widely used SLP support app now includes pop-up explanations showing how progress scores are generated and interpreted. This allows families to discuss uncertainties with therapists and contribute to decisions, rather than passively accepting algorithmic output.
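The mechanics of such an explanation can be as simple as naming each component's contribution. Here is a sketch of the idea, using hypothetical component names and weights rather than the app's actual formula:

```python
def explain_score(components: dict) -> str:
    # Compose a plain-language breakdown from named score components.
    total = sum(components.values())
    parts = "; ".join(f"{name}: {value:+.2f}" for name, value in components.items())
    return (
        f"This period's progress score is {total:.2f}. "
        f"Contributions: {parts}. "
        "Scores are estimates: please raise anything surprising with your therapist."
    )

print(explain_score({
    "articulation accuracy": +0.45,  # hypothetical components and weights
    "session consistency": +0.20,
    "pause-time trend": -0.05,
}))
```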
Addressing Access and Digital Equity
While AI tools can optimize therapy, they may also widen the digital divide. In response, clinics in Poland, Slovakia, and Hungary are piloting hybrid care models—combining traditional therapy with AI-supported modules that require minimal hardware or bandwidth. These systems use offline-first design, printable practice modules, or text-based feedback to serve rural or low-resource areas.
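The offline-first pattern behind these systems is itself modest. A minimal sketch, assuming practice results are queued locally as JSON lines and uploaded only when connectivity returns; the record format and upload mechanism are placeholders, not any specific platform's design:

```python
import json
import os

QUEUE_PATH = "pending_results.jsonl"  # local queue; no network needed in-session

def record_result(result: dict) -> None:
    # Append the practice result locally first (offline-first).
    with open(QUEUE_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(result) + "\n")

def flush_queue(upload) -> None:
    # Call when bandwidth is available; `upload` is any callable that
    # transmits one record (an HTTP POST in a real deployment).
    if not os.path.exists(QUEUE_PATH):
        return
    with open(QUEUE_PATH, encoding="utf-8") as f:
        for line in f:
            upload(json.loads(line))
    os.remove(QUEUE_PATH)
```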
Furthermore, multilingual and cross-cultural validity is becoming a focus. As Pérez and Cheung (2025) point out, many existing AI tools have poor generalization beyond English or neurotypical data. SLPs and OTs across Europe are beginning to collaborate with developers to improve training datasets, ensuring tools work equitably across languages, dialects, and developmental profiles.
Accountability and Oversight in Action
Traceability—a core principle of the EU AI Act—is being operationalized via updated clinical documentation. In a multidisciplinary clinic in Munich, therapists now log every AI-assisted decision, whether in assessment, goal-setting, or therapy delivery. This includes:
- The tool used and its intended function.
- Whether its recommendation was accepted, rejected, or modified.
- A clinical rationale for the decision.
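Captured consistently, such entries become structured data rather than free-text notes. Below is a minimal sketch of one entry; the field names are illustrative assumptions, not drawn from the Munich clinic's actual system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionLogEntry:
    tool_name: str           # the tool used
    intended_function: str   # assessment, goal-setting, or therapy delivery
    ai_recommendation: str   # what the system suggested
    clinician_decision: str  # "accepted", "rejected", or "modified"
    clinical_rationale: str  # why the clinician decided as they did
    logged_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

entry = AIDecisionLogEntry(
    tool_name="ExamplePhonologyScreener",  # hypothetical tool name
    intended_function="assessment",
    ai_recommendation="flag possible phonological delay",
    clinician_decision="rejected",
    clinical_rationale="pattern consistent with typical bilingual development",
)
```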
These records serve not only as legal protection but also as a basis for longitudinal quality review. For instance, if an AI tool routinely flags phonological issues in bilingual children where clinicians find none, the system may require retraining or discontinuation. As Topol (2024) emphasizes, human oversight is not merely a safeguard; it is a necessity.
Emerging Lessons from Early Implementation
From these clinic-led efforts, several themes are emerging:
- Start small: Clinics have more success when introducing one AI tool at a time.
- Build reflection time: Weekly team meetings to share AI-related insights help surface subtle risks or opportunities.
- Stay skeptical: A CE-mark or compliance label doesn’t guarantee clinical utility.
- Document AI use: Not just for liability, but to build evidence and share best practices.
AI as a Partner, Not a Replacement
The EU AI Act is more than a regulatory hurdle; it is a catalyst for ethical, inclusive innovation. By mandating transparency, clinician oversight, and data accountability, it challenges the therapy field to move deliberately and wisely, even amid rapid technological change.
European clinicians are not just adapting to AI—they are shaping it. By speaking up about equity gaps, demanding better training data, and insisting on tools that reflect clinical nuance, therapists are reclaiming their role as co-creators, not passive users. The future of AI in therapy will not be about automation—it will be about augmentation, grounded in clinical judgment and compassionate care.
Coming Next
How clinics are designing therapist-led systems to evaluate and audit AI tools—without overwhelming paperwork or technical complexity.
To find out more, join our AI webinars for therapists! Visit the Courses page for details.
References
- European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union, L series. https://eur-lex.europa.eu/eli/reg/2024/1689/oj
- Schubert, T., Oosterlinck, T., Stevens, R. D., Maxwell, P. H., & van der Schaar, M. (2025). AI education for clinicians. eClinicalMedicine, 79, 102968. https://doi.org/10.1016/j.eclinm.2024.102968
- World Health Organization. (2024). Ethics and governance of artificial intelligence for health: Guidance and tools. WHO Publications. https://www.who.int/publications/i/item/9789240077925
- Topol, E. J. (2024). AI in healthcare: Balancing innovation with responsibility. New England Journal of Medicine, 390(3), 205–213. https://doi.org/10.1056/NEJMp2401002
- Pérez, A., & Cheung, J. (2025). Linguistic fairness in clinical AI: Challenges and strategies for multilingual populations. Journal of Medical AI Ethics, 2(1), 15–27.