Anthropic’s recent disclosure of an AI bug‑finding model deemed too “dangerous” for public release is forcing companies to rethink their cybersecurity programs and operations. Instead of releasing the model broadly, the company has convened a closed group of industry leaders across cloud infrastructure, operating systems, networking and finance to deploy its Claude Mythos Preview model in controlled testing of critical software. A week after Anthropic’s announcement, OpenAI likewise revealed that it had given a select group access to its own unreleased model for cyber testing and remediation.

As AI accelerates vulnerability discovery and exploitation, the Cybersecurity Law Report discussed the implications with legal and cybersecurity leaders from Akin, Alston & Bird, A&O Shearman, Cloud Security Alliance, Cyber Threat Alliance, Debevoise and Paul Weiss. This article examines how Mythos-class models may alter expectations for cyber programs and put pressure on existing vulnerability-sharing frameworks. It also outlines concrete steps CISOs, GCs and boards should consider as AI compresses vulnerability discovery and exploitation timelines.

See our two-part series on AI agent security: “Companies See Rogue Incidents but Lag on Controls” (Mar. 18, 2026), and “What CISOs and GCs Need to Know to Defend the Enterprise” (Mar. 25, 2026).
