When Law Meets AI: Illinois Bans AI Therapy—Here’s What It Means for Clinical Practice

AI is advancing faster than regulation can keep up, and mental health is now at the heart of this debate. In August 2025, Illinois became the third U.S. state (after Utah and Nevada) to ban the use of AI in therapy decision-making. The law prohibits licensed therapists from using AI for diagnosis, treatment planning, or direct client communication. Companies are also barred from marketing “AI therapy” services that bypass licensed professionals (Washington Post, 2025; NY Post, 2025).

This move reflects growing concerns about “AI psychosis,” misinformation, and the lack of accountability when vulnerable people turn to chatbots for therapy.

Why This Matters for Therapists Everywhere

Even if you don’t practice in Illinois, the ripple effects are significant. Regulations often start locally before spreading nationally—or globally. The law raises key questions for all of us:

  • Where is the line between acceptable AI use (e.g., admin support, note-taking, scheduling) and impermissible clinical use?
  • How do we ensure transparency when using AI-generated resources?
  • What safeguards protect our clients from assuming AI tools are substitutes for licensed care?

What’s Still Allowed

Importantly, the Illinois law doesn’t ban AI altogether. Therapists may still use AI for:

  • Administrative tasks (scheduling, billing, documentation).
  • Session preparation (creating therapy materials, handouts, or worksheets).
  • Professional support (summarizing research, drafting progress notes, brainstorming ideas).

What’s explicitly prohibited is letting AI act as the therapist. This distinction reinforces what many of us already believe: AI can support our work—but empathy, relational attunement, and clinical reasoning cannot be automated.

Therapist Responsibility: Transparency and Boundaries

With or without regulation, therapists should:

  1. Be transparent with clients. If AI is used to draft a worksheet or generate suggestions, let families know. Transparency builds trust.
  2. Maintain professional review. Always edit, refine, and clinically validate any AI-generated output before bringing it into therapy.
  3. Stay informed. Laws are changing quickly. Being proactive about compliance protects your license and your clients.

The Bigger Picture: Advocacy and Ethics

While some view bans as overly restrictive, they reflect real concerns about client safety and misinformation. Rather than rejecting AI outright, therapists can play an advocacy role—helping shape policies that strike a balance between innovation and protection.

We can imagine a future where regulators, clinicians, and developers collaborate to define “safe zones” for AI use in therapy. For example, AI could continue to support therapists with data organization, early screening cues, and progress tracking—while humans remain the ultimate decision-makers.

Takeaway Roadmap: “Using AI Without Crossing the Line”

Here’s a simple three-step check-in for ethical AI use:

  • Ask: Am I using AI to replace or support my professional role?
  • Check: Could this output cause harm or mislead my client if not reviewed?
  • Communicate: Have I disclosed the role of AI in creating this material or plan?

Final Thoughts

The Illinois ban isn’t about shutting down technology—it’s about drawing clearer boundaries to protect vulnerable clients. For therapists, the message is simple: AI can be a tool, but never the therapist. As the legal landscape evolves, staying proactive, transparent, and ethical will ensure we keep both innovation and humanity at the heart of our practice.
