
As therapists working daily in teletherapy, we have all felt the shift. AI is no longer something happening “out there” in tech headlines; it is quietly entering our platforms, our workflows, and our clinical decision-making spaces. The question for us has never been whether AI will be used in therapy, but how it can be used without compromising ethics, clinical judgment, or the therapeutic relationship.
Over the past year, we have actively explored, tested, and critically evaluated several AI-driven tools in teletherapy contexts. What stands out most is this: the most useful AI tools are not the loudest ones. They are the ones that reduce friction, cognitive load, and therapist burnout while preserving our role as the clinical authority.
Expanding Access Without Diluting Care
One of the most meaningful developments we’ve seen recently is how AI is being used to expand access to therapy rather than replace it. Platforms such as Constant Therapy have expanded their AI-driven speech and cognitive therapy programs into additional languages, including Spanish and Indian English. This matters clinically. It allows us to assign culturally and linguistically relevant home practice that aligns with what we are targeting in sessions, instead of relying on generic or mismatched materials.
From our experience, this kind of AI-supported practice increases carryover without increasing preparation time, something teletherapy clinicians deeply need.
Conversational AI That Supports Continuity, Not Dependency
Mental health platforms like Wysa, particularly with the introduction of Wysa Copilot, reflect a growing shift toward hybrid models where AI supports therapists rather than attempts to replace them. These systems help structure between-session support, guide reflective exercises, and support homework completion, all while keeping the clinician in the loop.
When we tested similar conversational AI tools, what we valued most was not the chatbot itself, but the continuity. Clients arrived at sessions more regulated, more reflective, and more ready to engage because the therapeutic thread had not been completely paused between sessions.
Speech and Language AI: Practice, Not Diagnosis
Advances in automatic speech recognition have significantly improved the quality of AI-assisted speech practice tools. In articulation and fluency work, we’ve used AI-supported practice platforms to increase repetition, consistency, and feedback during teletherapy homework.
Clinically, we see these tools as structured practice partners, not assessors, and certainly not diagnosticians. They help us gather cleaner data and observe patterns, but interpretation remains firmly in our hands. When used this way, AI becomes an efficiency tool rather than a clinical shortcut.
Voice Biomarkers as Clinical Signals, Not Labels
Another emerging area is the use of voice biomarkers: tools that analyze vocal features to flag possible emotional or mental health risk markers. Tools like Kintsugi and Ellipsis Health are increasingly discussed in clinical AI spaces.
When we explored these tools, we found them useful as conversation starters, not conclusions. In teletherapy, where subtle nonverbal cues can be harder to read, having an additional signal can help us ask better questions earlier in the session. We are very clear, however: these tools support clinical curiosity; they do not replace clinical judgment.
Ethics, Regulation, and Our Responsibility
Not all AI adoption has been smooth, and rightly so. In 2025, several regions introduced restrictions on AI use in psychotherapeutic decision-making. From our perspective, this is not a step backward. It reflects a necessary pause to protect clients, clarify consent, and reinforce professional boundaries.
As therapists, we carry responsibility not just for outcomes, but for process. Any AI tool we use must be transparent, ethically integrated, and clearly secondary to human clinical reasoning.
What We’re Taking Forward Into Our Telepractice
Based on what we’ve tested and observed, these are the principles guiding our use of AI in teletherapy:
We use AI to reduce administrative and cognitive load, not to replace thinking.
We choose tools grounded in clinical logic, not generic productivity hype.
We prioritize transparency with families and clients about how technology is used.
We treat AI outputs as data points, never as decisions.
What feels different about teletherapy in 2026 is not the presence of AI; it’s the maturity of how we engage with it. When AI is positioned as a background support rather than a clinical authority, it allows us to show up more present, more regulated, and more attuned to our clients.
Teletherapy does not need less humanity. It needs its humanity protected. Used responsibly, AI helps us do exactly that.
