
We keep noticing that when clinicians talk about “using AI,” we often mean two very different approaches, even when we’re using the same tool. And the confusion usually shows up around one word: automation. People hear “automation” and imagine a cold replacement of therapy, or they assume it’s basically the same thing as collaboration. In practice, automation in clinical work is simpler and more grounded than that. It is not “AI doing therapy.” It is the clinician delegating repeatable steps in the workflow, then supervising the output the way we would supervise an assistant or a trainee.
In procedural mode, automation at its most literal, AI becomes a substitute for execution. We ask, it answers; we paste, we send. The output is used for efficiency: quicker drafts, quicker wording, quicker structure. That can genuinely reduce load, especially on days when we’re holding multiple cases and still trying to document, plan, and communicate clearly. But procedural mode also has a built-in risk: it can bypass the step where we ask, “What claim did this just make, and do I actually have the clinical data to stand behind it?” In therapy, where work is high-stakes and context-sensitive, skipping that step is never a small thing.
Collaborative mode looks different. Here, AI is treated more like a thinking partner that helps us refine what we already know. We provide context, constraints, and objectives, and we actively evaluate and revise what comes back. The benefit isn’t only speed; it’s quality. As goals become more complex, the work doesn’t disappear; it shifts upward into framing, supervision, and judgment. That shift is the point, because it mirrors what good therapy already is. The core value isn’t “doing tasks.” The core value is choosing what matters, staying accountable to the formulation, and tracking whether what we are doing is actually helping this client in this moment.
With that clarity, the question “where does automation fit?” becomes easier: automation belongs around the session, not inside the relationship. It supports the repetitive work that quietly drains clinicians, so you show up with more focus and presence. In practical terms, this often starts with answering emails: drafting scheduling replies, boundary-setting messages, first-contact responses, follow-ups, and coordination messages with parents or schools. AI can give you a clean draft fast, but the clinician still protects tone, confidentiality, and the therapeutic frame before anything is sent.
Automation can also support assessment workflows, especially the mechanical parts like scoring and report organization. It can help format tables, structure sections consistently, and draft neutral descriptions, saving time without pretending to “interpret.” Similarly, it can help draft questions for you: intake questions, check-in prompts, or between-session reflection questions tailored to your model and the client’s goals. That doesn’t replace clinical judgment; it simply gives you a clearer scaffold for information-gathering and tracking change.
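For clinicians comfortable with a little scripting, the “mechanical” part can be made concrete. The sketch below, in Python, only formats scores into a consistently laid-out report section and adds no interpretation; the battery, subtest names, and numbers are hypothetical placeholders, not a real measure.

```python
# Minimal sketch: format assessment scores into a consistent, neutral report
# section. The measure and subtest names below are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class SubtestScore:
    name: str      # subtest label, e.g. "Working Memory" (placeholder)
    raw: int       # raw score entered by the clinician
    scaled: float  # scaled/standard score taken from the published norms

def format_score_table(measure: str, scores: list[SubtestScore]) -> str:
    """Return a plain-text table for a report; no interpretation is added."""
    header = measure + "\n" + "-" * len(measure)
    rows = [f"{'Subtest':<22}{'Raw':>6}{'Scaled':>8}"]
    for s in scores:
        rows.append(f"{s.name:<22}{s.raw:>6}{s.scaled:>8.1f}")
    return header + "\n" + "\n".join(rows)

if __name__ == "__main__":
    demo = [
        SubtestScore("Working Memory", 34, 9.0),    # illustrative numbers only
        SubtestScore("Processing Speed", 41, 11.0),
    ]
    print(format_score_table("Hypothetical Cognitive Battery", demo))
```

Keeping this step deterministic is deliberate: the numbers are never “improved” by a model, and the published norms tables remain the source of truth.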
Another high-impact area is session preparation. If you provide a brief, non-identifying summary of the last session, AI can help draft a focused plan: key themes to revisit, hypotheses to test, reminders of what was agreed on, and possible questions or interventions that match your orientation. The point is not to “script therapy,” but to reduce the mental load of reconstructing the thread so you can start the session grounded.
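If you like working with templates, the same idea can be sketched as a small helper that assembles a session-prep request from non-identifying material. The field names and the wording of the instructions are illustrative assumptions, not a prescribed format, and whatever comes back is still checked against your own formulation.

```python
# Sketch: assemble a session-preparation prompt from de-identified notes.
# Field names and instruction wording are illustrative assumptions only.

def build_session_prep_prompt(
    orientation: str,           # clinician-chosen label, e.g. "CBT"
    last_session_summary: str,  # brief, de-identified summary written by the clinician
    agreed_tasks: list[str],    # what was agreed on at the end of the last session
) -> str:
    tasks = "\n".join(f"- {t}" for t in agreed_tasks) or "- (none recorded)"
    return (
        f"You are assisting a {orientation} therapist with session preparation.\n"
        "Using only the de-identified summary below, draft a short plan with:\n"
        "1. Key themes to revisit\n"
        "2. Hypotheses to test\n"
        "3. Reminders of what was agreed on\n"
        "4. Possible questions or interventions consistent with the orientation\n"
        "Do not invent clinical facts; flag anything uncertain.\n\n"
        f"Summary of last session:\n{last_session_summary}\n\n"
        f"Agreed tasks:\n{tasks}\n"
    )
```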
More sensitive, but sometimes very helpful, is using automation around session recording and documentation (only with explicit consent and a privacy-safe system). AI can assist with transcripts, highlight themes, and draft a note structure or summary. Still, this must remain supervised: AI can miss nuance, misinterpret meaning, or phrase things too strongly. In clinical documentation, accuracy and accountability matter more than speed, so the clinician always verifies what’s written, especially around risk, safety planning, and any diagnostic or medical claim.
Finally, automation can support what many clinicians want but struggle to do consistently: progress comparison over time. Whether you use outcome measures, session ratings, goals, homework follow-through, or narrative markers, AI can help summarize shifts from baseline, spot patterns across sessions, and draft a short “what’s improving / what’s stuck / what to adjust next” reflection. The tool organizes and surfaces patterns; you decide what it means and what the next clinical step is.
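As a rough illustration, a few lines are enough to summarize the shift from baseline on a repeated measure. The scale name and scores below are invented, and whether a change of this size is clinically meaningful stays with you, not with the script.

```python
# Sketch: summarize change from baseline on a repeated outcome measure.
# The measure name and scores are invented; interpretation stays clinical.

def summarize_progress(measure: str, scores: list[int]) -> str:
    """Compare the most recent administration against baseline."""
    if len(scores) < 2:
        return f"{measure}: not enough data points to compare yet."
    baseline, latest = scores[0], scores[-1]
    change = latest - baseline
    if change == 0:
        return (f"{measure}: unchanged from baseline ({baseline}) "
                f"across {len(scores)} administrations.")
    direction = "lower" if change < 0 else "higher"
    return (f"{measure}: baseline {baseline}, latest {latest} "
            f"({abs(change)} points {direction} across {len(scores)} administrations).")

if __name__ == "__main__":
    # Illustrative numbers for a symptom scale where lower is better.
    print(summarize_progress("Hypothetical symptom scale", [18, 16, 15, 11]))
```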
All of this only works if we pay attention to data and privacy. We avoid entering identifying information unless we are using an approved, privacy-compliant system. We do not treat AI output as truth, especially for diagnosis, risk assessment, medication-related topics, or any medical claim. And we keep the clinician role explicit: AI can generate language, options, and structure, but we provide judgment, ethics, and accountability. This is also why many clinicians are drawn to running a private generative model locally on their laptop, offline, so data does not leave the device. Even then, strong device security and clear consent practices still matter, but the direction is sound: protect client information first, then build workflow support around it.
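For those curious what that local setup looks like in practice, here is a minimal sketch assuming a model served entirely on the clinician’s own machine through Ollama’s default local endpoint; the endpoint URL, model name, and prompt wording are assumptions about one particular setup, not a recommendation. The request never leaves the laptop, and the draft is still read and edited before anything reaches a client.

```python
# Sketch: send a drafting request to a model running entirely on this machine.
# Assumes a local Ollama server at its default address; the model name is a
# placeholder for whatever is installed locally.

import requests

OLLAMA_CHAT_URL = "http://localhost:11434/api/chat"  # default local endpoint

def draft_locally(prompt: str, model: str = "llama3.1") -> str:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # return one complete response instead of a token stream
    }
    resp = requests.post(OLLAMA_CHAT_URL, json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()["message"]["content"]

if __name__ == "__main__":
    draft = draft_locally(
        "Draft a brief, warm reply to a scheduling request, offering two "
        "alternative times next week. Do not include any names or clinical detail."
    )
    print(draft)  # the clinician edits and approves before anything is sent
```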
When we use AI with this mindset, the payoff is real. We gain time and mental space for what cannot be automated: attunement, formulation, pacing, rupture-repair, and the relationship. The tool handles parts of the scaffolding and we protect the heart of therapy, which is slow, context-sensitive, and deeply human work.
