By Bloo · 13 min read

Project Glasswing: The New Disclosure Architecture

Glasswing limits Mythos to 12 vetted partners. Learn what this signals, and why coordinated disclosure can't survive AI-scale discovery.

When Anthropic announced Claude Mythos Preview in April 2026, the model came with an unusual companion announcement. Rather than releasing the model commercially or restricting it to internal use, Anthropic established a coalition called Project Glasswing: twelve organizations that would receive controlled access to Mythos for defensive purposes. The members are AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, Palo Alto Networks, and Anthropic itself.

This is not a typical product launch. It is closer to a treaty. The structure of Glasswing, who is in, who is not, what they can do, what they cannot, represents an early answer to a question the industry has been avoiding: how do you govern AI capability that is too dangerous to release broadly but too valuable to lock away entirely?

The answer Anthropic has constructed is interesting, deliberate, and almost certainly temporary. This article unpacks what Glasswing is, what disclosure problem it tries to solve, and why the architecture it represents will need to evolve dramatically as AI vulnerability discovery proliferates beyond the coalition's walls.

What Project Glasswing is and who it includes

Glasswing is a defensive-use-only coalition with controlled access to Claude Mythos Preview. The structure has several distinctive features.

Membership is small and curated. Twelve organizations, chosen for two qualities. First, they own or maintain enough critical software infrastructure that broad zero-day discovery in their products serves the public good: cloud platforms (AWS, Google, Apple), operating system vendors (Microsoft, Apple, the Linux Foundation indirectly), browser vendors (Google, Apple), security infrastructure providers (CrowdStrike, Palo Alto Networks), networking vendors (Cisco, Broadcom), GPU and AI infrastructure (NVIDIA), and a major financial institution (JPMorgan Chase) representing the regulated industries most exposed to AI-era threats. Second, they have the institutional discipline to handle vulnerability disclosure responsibly: track records of coordinated disclosure programs, internal security teams capable of triaging at scale, and relationships with the broader security community.

Access is defensive-only. Coalition members can use Mythos to find vulnerabilities in their own products and in critical infrastructure they steward. They cannot use it offensively. They cannot sell access to others. They cannot publish the model, fine-tune it, or extract its capabilities into other systems.

The economics are non-commercial. Anthropic is providing $100 million in usage credits to the coalition rather than selling access. This is significant. It removes commercial incentives that would otherwise distort how members use the capability. It also signals that Anthropic views Mythos-class capability as too consequential to operate as a normal product line.

Coordinated disclosure is the dominant operating model. Vulnerabilities discovered by coalition members get reported to affected vendors through standard coordinated disclosure channels, with patches typically released before public CVE assignment. The thousands of vulnerabilities Mythos has already found will work their way through this process over the coming months.

The structure is a deliberate firebreak. It keeps Mythos-class offensive capability concentrated in a small set of organizations with strong defensive incentives, while still allowing the capability to do useful work in the world.

The disclosure problem it is trying to solve

To understand why Anthropic constructed Glasswing this way, you have to understand the disclosure problem that AI vulnerability discovery creates, which is genuinely new in some ways and an acute version of an old problem in others.

The old problem: any individual researcher who finds a serious vulnerability faces a choice about disclosure. Sell to the highest bidder, publish for credit, report quietly to the vendor. The norms developed over the last twenty years, coordinated disclosure with reasonable timelines, bug bounty programs to align incentives, security conference talks to spread knowledge, work tolerably well when the discoverer is a human researcher and the discovery rate is bounded by human time.

The new problem: when an AI system can find thousands of zero-days in weeks, the disclosure problem changes shape. The volume overwhelms standard coordinated disclosure infrastructure. Vendors cannot patch fast enough. Researchers cannot responsibly sit on stockpiles. The disclosure timeline arithmetic, 90 days from report to public disclosure, assumes a finite, manageable inflow that AI discovery breaks.

There is also a more specific problem: dual use. Mythos can find vulnerabilities to enable defense. Mythos can find the same vulnerabilities to enable attack. The model itself has no inherent commitment to either side. The disclosure problem becomes a distribution problem, who gets the capability, under what constraints, with what accountability.

Glasswing is one possible answer. Restrict the capability to a small set of vetted defensive actors. Channel the discoveries through coordinated disclosure to affected vendors. Buy time for the broader software ecosystem to absorb the wave of findings before similar capability is in attacker hands. The goal is not to prevent attacker access permanently; that is impossible. The goal is to give defenders a head start.

Why traditional coordinated disclosure cannot scale to AI volumes

Coordinated disclosure as practiced for the last twenty years has a specific operational shape. A researcher reports a vulnerability to the vendor. The vendor acknowledges, validates, and develops a patch. The patch ships, often coordinated with the researcher to allow simultaneous public disclosure. CVE assignment, advisory publication, and customer notification follow. Disclosure timelines are typically 60-120 days, with the researcher and vendor negotiating extensions if patching is genuinely complex.

This model works because the volume is bounded. Major operating system vendors handle perhaps a few hundred coordinated disclosures per year per product. Browser vendors handle similar volumes. Open-source projects handle far fewer. The vendor security teams that triage and patch these reports are sized for this volume. The CVE infrastructure is sized for this volume. The customer notification systems are sized for this volume.

The Glasswing coalition members are about to test what happens when a single research source produces thousands of high-quality zero-day reports against a single vendor in a few months. The answer, in some cases, is that the coordinated disclosure infrastructure will struggle. Vendors will need to dramatically scale their security response teams. Patch release cadences will need to accelerate. Customer notification systems will need to handle volume that prior architecture choices did not anticipate.
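The scale mismatch is easy to see with back-of-envelope arithmetic. The numbers below are illustrative assumptions, not figures from the Glasswing program: an AI-driven inflow far above a security team's triage-and-patch throughput produces a backlog that grows without bound, and the 90-day disclosure clock becomes meaningless.

```python
# Back-of-envelope sketch. All numbers are illustrative assumptions,
# not measured figures from any vendor or from the Glasswing program.

WEEKS = 26                      # a six-month discovery wave
ai_reports_per_week = 150       # assumed AI-discovered inflow per vendor
triage_capacity_per_week = 20   # assumed team throughput (validate + patch)

backlog = 0
for week in range(WEEKS):
    backlog += ai_reports_per_week
    backlog -= min(backlog, triage_capacity_per_week)  # team works at capacity

# How long it would take to clear the queue if discovery stopped today.
weeks_to_clear = backlog / triage_capacity_per_week
print(f"Unpatched backlog after {WEEKS} weeks: {backlog}")
print(f"Weeks to clear at current capacity: {weeks_to_clear:.0f}")
```

Under these assumptions the backlog reaches 3,380 unpatched reports in six months and would take over three years to clear, which is the arithmetic behind "vendors cannot patch fast enough."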

Coalition vendors are sophisticated organizations that can probably absorb this stress with effort. The harder problem is what happens at the next layer down. Open-source maintainers receiving AI-discovered vulnerability reports against their projects are not similarly resourced. A popular open-source library maintained by two volunteers cannot absorb a hundred AI-discovered zero-day reports without breaking. The disclosure infrastructure assumes the receiving end has roughly bounded capacity. AI discovery does not respect that assumption.

The Glasswing coalition is, among other things, an attempt to handle this problem by concentrating discovery in organizations with the institutional capacity to manage their own ecosystem of disclosures responsibly. Microsoft can patch its own products at scale. Microsoft can also help downstream maintainers in its ecosystem cope. The same logic applies to Apple, Google, and the Linux Foundation. The coalition's institutional weight is a significant feature, not just a credentialing mechanism.

The firebreak: how long will Mythos-class restraint hold?

The honest assessment is that Anthropic's restraint with Mythos is real, important, and temporary.

Real, because Anthropic has explicitly committed to not releasing Mythos commercially. The company has built a model with significant offensive capability and is voluntarily forgoing the revenue that broader release would generate. This is unusual behavior in a competitive AI market and deserves to be acknowledged as such.

Important, because the restraint creates a window, measured in months to years, during which the broader software ecosystem can absorb the defensive benefits of Mythos discoveries before similar capability appears in less restrained hands. Patches get shipped. Defensive infrastructure gets upgraded. Architectural responses get planned and implemented. The longer the firebreak holds, the more defensive work gets done before the offensive capability proliferates.

Temporary, because the underlying capability is not unique to Anthropic. Other frontier AI labs are training models with similar capabilities. Open-weights models are advancing rapidly, and while they currently lag commercial frontier models by some margin, the gap has been narrowing consistently. Academic research is producing techniques that approximate aspects of Mythos-class capability without requiring frontier-scale models. Within 18 to 24 months, capability of this class will exist outside the Glasswing coalition. Some of that capability will be in defensive hands. Some will be in offensive hands. The firebreak is a window, not a permanent barrier.

The right way to think about Glasswing is as a model for the architecture that will need to exist after the firebreak fails. Coalition-based access control, defensive-use commitments, structured disclosure pathways: these patterns are likely to be replicated and adapted as the broader AI security ecosystem matures. The specific Glasswing implementation is one experiment. The general approach is going to be the template for governing dual-use AI capabilities going forward.

When the firebreak fails: what wider access will mean

The interesting strategic question is what happens at the moment when Mythos-class capability becomes broadly available. There is no precise prediction, but the rough shape is foreseeable.

Three populations will gain access in roughly this order. First, other AI labs will release commercially competitive models with similar capabilities under their own restrictions. These restrictions will be commercial rather than coalition-based: paid access, terms of service prohibiting offensive use, monitoring for misuse. They will be partially effective and frequently violated.

Second, open-weights models will reach approximately Mythos-class capability and will be available without restriction. Once a model is downloadable, the only constraints on its use are those imposed by the user's own organization, which means effectively no constraints in many cases.

Third, open-source projects will replicate enough of the Mythos architecture in distilled or specialized form that anyone with modest compute resources can run vulnerability discovery against arbitrary code. This is the equivalent of the Metasploit moment for AI vulnerability discovery, the moment when sophisticated capability becomes available to a much broader audience through open tooling.

When all three populations exist, the offensive landscape will look meaningfully different from today. Mid-tier ransomware crews will have AI vulnerability discovery capability. Hacktivist collectives will have it. State-sponsored actors below the top tier will have it. Insider threats with technical sophistication will have it. The defensive question is no longer "is the capability available to attackers" (yes, obviously) but "how does the defensive ecosystem operate when offensive capability is broadly distributed?"

The Glasswing structure is not a long-term answer to this question. It is a short-term mechanism for getting through the transition period with as much defensive infrastructure built as possible.

Policy implications: regulators, liability, and the SBOM era

Glasswing also has policy implications that go beyond the immediate question of vulnerability disclosure. Three are worth highlighting.

Regulatory expectations are about to shift. Financial regulators (the SEC, OCC, and FFIEC in the US, and the supervisors enforcing DORA in the EU) and critical infrastructure regulators are paying attention to AI vulnerability discovery. Expect explicit regulatory guidance on AI-discovered vulnerability handling, disclosure timelines, and AI agent governance within 12 months. Financial services will be first because the regulatory infrastructure is most mature. Other regulated sectors will follow.

The specific shape of the regulation is hard to predict, but the questions are predictable. How fast must disclosed vulnerabilities be patched in regulated systems? What disclosure obligations exist when an institution discovers AI-found vulnerabilities in its own infrastructure? What governance must exist around AI agents deployed in security operations? What documentation must exist for retrospective queries against historical telemetry? Each of these questions has a regulatory answer that is currently undefined and will be defined through some combination of agency rulemaking, enforcement actions, and industry standards over the next 18 months.

Software liability shields are getting weaker. The implicit "security is hard, bugs happen, no warranty expressed or implied" defense that has protected the software industry for forty years is becoming politically untenable when AI demonstrates that most bugs were findable all along. The EU is likely to lead on aggressive software liability legislation. US sector-specific liability regimes are likely to follow in financial services, healthcare, and critical infrastructure. The era of broad blanket liability immunity for software vendors is probably ending.

This will reshape vendor incentives in ways that are mostly good for security but disruptive for the software industry. Vendors with credible security investment programs will be able to defend their liability posture. Vendors without will face increasing pressure. The market will reward security investment in ways it has not historically.

SBOM mandates were the warm-up. The Software Bill of Materials mandates of 2023-2025 were the regulatory pre-positioning for the AI vulnerability discovery era. SBOMs only matter as raw material for actually doing something with the dependency information, and AI vulnerability discovery is exactly what makes SBOMs operationally valuable. Expect SBOM requirements to extend, deepen, and start carrying enforcement teeth. Expect the gap between organizations with comprehensive SBOM programs and those without to widen rapidly.
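The operational value is concrete: an SBOM turns "which of our systems are exposed?" into a set intersection. The sketch below cross-references a minimal CycloneDX-style SBOM against a list of affected packages; the SBOM snippet, package names, and advisory entries are invented for illustration, not real data.

```python
import json

# Minimal CycloneDX-style SBOM fragment. Component names and versions
# here are hypothetical, invented purely for illustration.
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "libexample", "version": "2.4.1"},
    {"name": "fastparse",  "version": "1.0.9"},
    {"name": "netstack",   "version": "0.7.3"}
  ]
}
"""

# Hypothetical AI-discovered advisories: (package, affected version) pairs.
advisories = {("fastparse", "1.0.9"), ("netstack", "0.6.0")}

sbom = json.loads(sbom_json)
deployed = {(c["name"], c["version"]) for c in sbom["components"]}

# Which deployed components match an advisory.
exposed = deployed & advisories
print(sorted(exposed))  # → [('fastparse', '1.0.9')]
```

A real pipeline would match version ranges against a feed like the NVD or OSV rather than exact pairs, but the shape is the same: without the machine-readable inventory, the intersection cannot be computed at all.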

What enterprises should do before Mythos-class capability proliferates

The window between today and broad availability of Mythos-class capability is the most defensively valuable time the industry will have for the foreseeable future. Three things enterprises should do with that window.

Brief executives on the changing threat model. The conversation needs to happen at the board and CEO level, not just within the security organization. The Glasswing announcement is a useful reference point because the board has probably already heard about it from financial press coverage and the CrowdStrike/Palo Alto stock movements. Use the moment.

Make the architectural commitments now. The substrate work (full-fidelity telemetry retention, machine-readable history, predictable economics) takes time to plan and implement. The enterprises that start now will be substantially ahead of the ones that wait until the first major incident forces the conversation.

Engage with the disclosure ecosystem. If you operate critical infrastructure, you may already be in conversation with vendors about their AI-discovered vulnerability programs. If you are not, you should be. Vendor security posture in the AI era is going to be a meaningful procurement criterion, and the enterprises that develop sophisticated views on it early will make better vendor selection decisions.

The Glasswing coalition is operating in a window the rest of the industry can use. The window will not stay open indefinitely. The work to use it productively starts now.

