
Cybersecurity isn’t a niche IT problem anymore. It’s a condition of modern life: banking, education, government services, and healthcare all run on fragile layers of software we rarely see. Most people only notice this fragility when something goes wrong—an outage, a breach, a locked account, a system that suddenly can’t be trusted. The risk is broad, but the consequences aren’t evenly distributed.
In healthcare, that unevenness is felt fast. When digital systems fail, care doesn’t politely pause until things come back online. People still show up distressed, unsafe, or mid-crisis, and clinicians still have to hold decisions with incomplete information. The “technical incident” becomes a human one, often within minutes.
That’s why therapists should care even when the conversation sounds far away from our day-to-day work. In therapy, cybersecurity rarely announces itself as “cyber.” It shows up as a session abruptly canceled because scheduling is down, a telehealth link that fails at the last moment, or a clinic suddenly unable to access notes. It also shows up as a client asking, quietly but directly, whether their messages are truly private.
Against that background, Anthropic’s April 7, 2026, announcement of Project Glasswing is more than tech news. The company described an unreleased model, Claude Mythos Preview, and emphasized that it will not be made generally available. Instead, it’s being routed through a restricted program framed around defensive use. When an AI lab decides a model is too capable to release, that’s a signal about where the threat landscape is heading.
The key reason given for the lockdown is simple and unsettling: Anthropic presents Mythos Preview as able to find serious vulnerabilities with very little human steering. In plain terms, it can reportedly spot weak points in software faster and more autonomously than earlier systems. Even if the intention is defense, the capability itself matters, because capabilities tend to spread, and because attackers also adapt.
Anthropic’s examples are the kind that make non-technical people uneasy for good reason. They highlight weaknesses in widely used foundational software and describe cases where issues persisted for years, even decades, without being caught. That’s the uncomfortable truth about digital infrastructure: many systems we treat as stable are stitched together from codebases with long histories, uneven maintenance, and hidden complexity.
If that still feels abstract, bring it back to the tools we actually use. Telehealth platforms rely on browsers, operating systems, servers, and third-party libraries. Scheduling systems and patient portals depend on integrations and APIs that can quietly multiply risk. A vulnerability “somewhere upstream” can become downtime, data exposure, or service disruption right where clients meet care.
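If it helps to see that layering concretely, here is a minimal sketch in Python, using only the standard library, of how many third-party packages a single application environment quietly trusts. Nothing in it is specific to any telehealth product; it simply enumerates whatever happens to be installed and the dependencies each package declares.

```python
# A minimal sketch: enumerate installed packages and the dependencies
# each one declares. Uses only the Python standard library.
from importlib.metadata import distributions

deps = {}
for dist in distributions():
    name = dist.metadata["Name"]
    # "requires" lists the packages this one pulls in; each is more
    # upstream code the application implicitly trusts.
    deps[name] = dist.requires or []

total_links = sum(len(reqs) for reqs in deps.values())
print(f"{len(deps)} installed packages declare {total_links} dependency links.")

# The five most dependency-heavy packages in this environment.
for name, reqs in sorted(deps.items(), key=lambda kv: -len(kv[1]))[:5]:
    print(f"{name}: {len(reqs)} declared dependencies")
```

Even a modest environment typically prints dozens of packages and hundreds of dependency links, and every link is a place where a flaw “somewhere upstream” can surface downstream.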
There’s also a structural question that matters for healthcare: who gets access to the strongest protective tools, and when? Restricting a high-capability model may reduce immediate misuse, but it also concentrates power and expertise in a small set of organizations. Smaller clinics and vendors can end up dependent on security timelines, priorities, and disclosure decisions they can’t easily see or influence. That gap between ethical expectations and technical realities can become a trust problem.
Practically, this pushes us toward a more explicit, system-level view of clinical risk. We can’t patch operating systems, but we can treat cybersecurity maturity as part of quality of care. That means asking better procurement questions, requiring clear incident response commitments from vendors, and maintaining downtime protocols that protect continuity. It also means reducing “shadow tools” and unmanaged AI add-ons that expand the attack surface without oversight.
Ethically, the goal isn’t panic; it’s to insist on defensible trust. In clinical contexts, “trustworthy” should mean there are decision trails we can explain: what system was used, what data moved, what safeguards existed, what logging and auditing were in place, and how errors or incidents will be corrected and disclosed. Clients shouldn’t have to rely on invisible infrastructure and hope for the best; they deserve care systems built to fail safely.
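One concrete way to picture a decision trail is a tamper-evident audit log, where each record is chained to the one before it so quiet alteration is detectable. The sketch below is illustrative only, assuming a simple in-memory log; the field names (actor, system, action) are hypothetical, not any particular platform’s schema.

```python
# An illustrative tamper-evident audit trail: each entry records who
# acted, on which system, doing what, and is hash-chained to the
# previous entry so past records can't be silently altered.
import hashlib
import json
import time

def append_event(log, actor, system, action):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "ts": time.time(),   # when it happened
        "actor": actor,      # who or what acted
        "system": system,    # which system was used
        "action": action,    # what was done / what data moved
        "prev": prev_hash,   # link to the previous entry
    }
    # Hashing the entry together with the previous hash forms a chain:
    # changing any earlier record breaks every hash after it.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

def verify(log):
    """Recompute the chain; returns False if any record was altered."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
append_event(log, "clinician_17", "telehealth", "session_started")
append_event(log, "scheduler", "portal", "appointment_rescheduled")
print("audit trail intact:", verify(log))
```

Real systems add access controls, durable storage, and independent review on top of this, but the principle is the same: a trail that can be checked, not just asserted.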
Project Glasswing is a preview of a new phase: AI is not only changing clinical tools, but also changing the security environment those tools sit inside. Patient trust depends on confidentiality, integrity, and availability, and those depend on infrastructure now being stress-tested by increasingly autonomous systems. For therapists, the task is to keep the clinical frame intact as the technical frame accelerates: protect continuity, protect privacy, and advocate for systems we can actually stand behind.
