12 min read · By Bloo

Patch Window Collapsed: AI-Native Incident Response Now

The patch window between disclosure and exploitation has collapsed to hours. Learn why traditional IR breaks, and what replaces it now.

Anthropic's Claude Mythos Preview has, in a few weeks, autonomously discovered thousands of zero-day vulnerabilities across every major operating system and browser, including a 27-year-old bug in OpenBSD that survived decades of human review. Project Glasswing keeps the model in responsible hands for now: AWS, Apple, Google, Microsoft, JPMorgan, the Linux Foundation, a handful of others. That firebreak holds until it doesn't. Mythos-class capability will be in attacker hands inside two years, probably less.

The industry response so far has been predictable and insufficient: buy more exposure management, shorten patch cycles, run more scans. All true. All inadequate. The harder truth is that the entire architecture of enterprise incident response was built for a world where vulnerability discovery took weeks and exploit development took longer. That world ended this month.

This article is the operational defender's playbook for the world that replaced it. We cover the three things that have to change in incident response, the substrate they have to run on, and the 90-day plan for getting there.

How long the patch window used to be, and why that world ended

For most of the last twenty years, the time between public CVE disclosure and observed in-the-wild exploitation followed a roughly predictable distribution. Critical vulnerabilities in widely deployed software might see exploits within days. Less attractive targets might see months pass before any practical exploitation. The median, depending on which study you trust, sat somewhere between 22 and 50 days.

This window, the n-day exploit gap, was the foundation underneath every patch SLA in the enterprise world. Microsoft's Patch Tuesday cadence works because most n-days take longer than a month to weaponize. Vulnerability management programs that prioritize criticals within 7 days work because the n-day window typically gives you that 7 days. The whole concept of "exploitation likelihood" scoring in CVSS depends on the assumption that exploits take time and effort to develop.

That assumption no longer holds. Mythos and the next generation of similar models can generate working exploits autonomously in hours from a CVE description, sometimes from the patch diff alone. Anthropic's published research is explicit about this. The model writes n-day exploits well enough that the company itself is warning defenders that their patch cycles need to compress dramatically.

The window did not narrow gradually. It collapsed. A CVE published on Tuesday morning may have a working exploit before the end of business that day. The implications are not subtle: if your patch SLA assumes 7 days, you are now operating with a 7-day exposure window in a world where the threat window is 7 hours.

Why Anthropic itself is telling defenders to tighten patch cycles

There is something unusual about the current situation: the company that built the offensive capability is telling defenders how to respond to it. Anthropic's published guidance to defenders is unambiguous and worth reading carefully.

Tighten the patch enforcement window. Enable auto-update wherever it is operationally tolerable. Treat dependency bumps that carry CVE fixes as urgent rather than routine maintenance. Drive down time-to-deploy for security updates as a top operational priority.
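One way to operationalize "dependency bumps that carry CVE fixes are urgent" is a CI gate that compares pinned versions against an advisory feed. The sketch below is a minimal illustration with a hypothetical in-memory feed (`ADVISORIES`); in practice the data would come from a real vulnerability database, and version comparison would need a proper parser rather than the naive dotted-tuple comparison shown here.

```python
# Sketch: flag pinned dependencies whose version predates a CVE fix, so
# the bump can be escalated as urgent rather than routine maintenance.
# ADVISORIES is a hypothetical feed (package -> first fixed version).
ADVISORIES = {
    "libfoo": (1, 4, 2),   # CVE fixed in 1.4.2
    "barlib": (2, 0, 0),   # CVE fixed in 2.0.0
}

def parse_version(v: str) -> tuple[int, ...]:
    """Parse a simple dotted version string into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def urgent_bumps(pinned: dict) -> list:
    """Return the packages whose pinned version predates a CVE fix."""
    return [
        pkg for pkg, version in pinned.items()
        if pkg in ADVISORIES and parse_version(version) < ADVISORIES[pkg]
    ]

if __name__ == "__main__":
    pinned = {"libfoo": "1.3.9", "barlib": "2.1.0", "bazkit": "0.7.1"}
    print(urgent_bumps(pinned))  # libfoo is behind the fix; barlib is not
```

Wiring a check like this into CI, and failing the build on a non-empty result, is what turns the guidance from a policy statement into an enforced window.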

This is the company that built Mythos telling enterprises that the n-day exploit window has collapsed. It is not marketing. It is technical guidance from the people who measured the capability. When the model maker is publishing defensive recommendations alongside the model announcement, you should treat those recommendations with more weight than the average vendor advisory.

The implication for security operations: every patch process inside an enterprise (change management approvals, maintenance windows, staged rollouts, regression testing) needs to be re-examined against an n-day window measured in hours. Not because every CVE will be exploited that fast, but because the worst-case window now sits inside operational lead times that used to be considered fast.

The real bottleneck moves upstream

Here is what the conventional response is missing. The first reaction across the industry has been "patch faster, scan more." That reaction is correct as far as it goes. It does not go far enough.

When CVE volume goes up by an order of magnitude (and it will, as AI vulnerability discovery proliferates), the bottleneck in incident response stops being detection. Detection becomes commoditized. Every vendor has signatures, every scanner has updated rules, every endpoint agent flags the new CVE within hours of disclosure. The bottleneck moves upstream to a different question.

Every fresh zero-day disclosure now triggers the same retrospective question, and you have minutes to answer it: did this vulnerability touch my environment in the last six months, twelve months, three years? Which workloads were affected? Which identities? Which data flows? Was it being exploited before it was disclosed?

This is the question that distinguishes operational incident response from theatrical incident response. Theater answers "are we vulnerable now?" Operational IR answers "were we exploited already?" The first question can be answered by a scanner. The second question can only be answered by reasoning over historical telemetry.
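The second question is, at bottom, a query over retained history. The sketch below makes that concrete: given indicators for a newly disclosed CVE, scan retained telemetry for matches inside a lookback window. The event shape and the indicator field are illustrative assumptions, not any particular product's schema; the point is that the answer only exists if the telemetry does.

```python
# Sketch of "were we exploited already?" as a query over retained
# telemetry. Event fields and indicators are illustrative assumptions.
from datetime import datetime, timedelta

def retrospective_hits(events, indicators, lookback_days, now):
    """Return events matching any indicator within the lookback window."""
    cutoff = now - timedelta(days=lookback_days)
    return [
        e for e in events
        if e["timestamp"] >= cutoff and e["process"] in indicators
    ]

now = datetime(2026, 2, 3, 11, 0)
events = [
    {"timestamp": now - timedelta(days=200), "host": "web-01", "process": "exploit_stager"},
    {"timestamp": now - timedelta(days=10),  "host": "db-02",  "process": "sshd"},
]
# The 200-day-old hit is only findable if 200 days of telemetry were kept.
hits = retrospective_hits(events, {"exploit_stager"}, lookback_days=365, now=now)
print([e["host"] for e in hits])  # ['web-01']
```

Note the dependency: with a 90-day retention policy, the same query runs cleanly and returns nothing, which is the most dangerous kind of wrong answer.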

Most security stacks cannot answer the second question. Not because the data was never collected, but because it was dropped, sampled, or tiered to cold storage where retrieval takes hours and costs more than the answer is worth. SIEMs built on per-GB ingestion economics actively penalize the retention depth that AI-era IR demands. Cold storage is fine for compliance theater. It is useless when an autonomous attacker is already three steps ahead and you have an hour to determine blast radius.

This is the unglamorous shift the industry is missing. The headline is "AI finds bugs faster." The structural reality is that retention depth and machine-reasonable history have just become the constraint on whether you can respond at all.

Three requirements for AI-native incident response

The shift from human-paced to AI-paced incident response forces three architectural changes. They are not optional. Each addresses a specific failure mode that legacy IR architectures cannot survive in the new threat environment.

Full-fidelity retention measured in years, not days. Your AI agents (and you will have AI agents in your SOC within twelve months, whether you plan for them or not) cannot look backward across the full window in which a newly disclosed vulnerability could have been exploited if that data is missing. Sampled data produces sampled answers. Dropped data produces no answers. The retention horizon for AI-era IR is the longest plausible exploitation horizon for vulnerabilities that might be disclosed tomorrow. That is a horizon measured in years, not days.

Telemetry structured for machine consumption, not human dashboards. The agentic SOC tools entering the market right now (Dropzone, Prophet, Crogl, Cogent, and a dozen others) are reasoning engines that sit on top of whatever substrate you give them. Give them entity-resolved, cross-domain history and they produce real answers. Give them raw log fragments scattered across seven tools and you have automated the wrong thing faster. The structural property that matters for machine reasoning is not "the data exists somewhere." It is "the data exists in a form an agent can reason over without spending its first ten reasoning steps trying to figure out what it is looking at."
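"Entity-resolved" has a concrete meaning here: the same principal shows up under different identifiers in endpoint, network, and cloud telemetry, and an agent can only reason across domains if those identifiers collapse to one canonical record. The sketch below illustrates the idea; the field names, the identity map, and the example identifiers are all assumptions for illustration, not a real schema.

```python
# Sketch of entity resolution across telemetry domains: map
# source-specific identity fields onto one canonical principal.
# IDENTITY_MAP and all field names are illustrative assumptions.
IDENTITY_MAP = {
    "j.doe": "user:jane.doe",
    "jane.doe@example.com": "user:jane.doe",
    "arn:aws:iam::111122223333:user/jdoe": "user:jane.doe",
}

def normalize(event: dict) -> dict:
    """Flatten a source-specific event into an entity-resolved record."""
    raw_identity = (
        event.get("user")            # endpoint agent field
        or event.get("src_user")     # network sensor field
        or event.get("principal")    # cloud audit log field
    )
    return {
        "principal": IDENTITY_MAP.get(raw_identity, raw_identity),
        "action": event.get("action", "unknown"),
        "source": event["source"],
    }

endpoint = {"source": "endpoint", "user": "j.doe", "action": "process_start"}
cloud = {"source": "cloud", "action": "assume_role",
         "principal": "arn:aws:iam::111122223333:user/jdoe"}
print({normalize(e)["principal"] for e in (endpoint, cloud)})  # {'user:jane.doe'}
```

Without this normalization, an agent asked "which identities were affected?" spends its reasoning budget on string matching instead of blast radius.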

Predictable economics that don't punish you for keeping the data. The perverse incentive at the heart of legacy SIEM economics (pay more to retain more, so retain less) is the single biggest defensive liability in an AI vulnerability era. Every time an organization decides to drop a high-volume telemetry source to control SIEM ingestion costs, it is making a future blast radius determination harder. The organizations that win the next 24 months will be the ones whose architecture rewards keeping everything.

What most security stacks cannot do when a Mythos-class CVE drops

Run this thought experiment for your own environment. A new critical CVE drops at 9 AM Tuesday. By 11 AM Tuesday, exploit code is circulating. The board, the CISO, and the audit committee all want the same answer by end of day: were we exploited? Are we still being exploited? What is the blast radius?

In most enterprises today, the answer to that question takes days, not hours. The reasons are predictable.

Telemetry from the affected workloads has been tiered to cold storage after 30 days, and pulling it back into a queryable state takes hours. The data exists in three different systems (endpoint, network, cloud), and reconciling identity across them requires manual analyst work. The SIEM was throttled six months ago to control costs, so the highest-volume signals were dropped or sampled. The retention policy, written for compliance rather than IR, kept logs for 90 days when the question requires looking back 12 months. The agents that could theoretically answer the question have to be hand-fed the right data.

Each of these failure modes is individually understandable and collectively catastrophic. Together they mean that the most critical question in modern incident response, did this touch us, and when, cannot be answered fast enough to matter.

The organizations that have already addressed this (usually by accident, because they happened to retain comprehensive telemetry for other reasons) are dramatically better positioned. The organizations that haven't are about to have an uncomfortable year.

The board question coming in the next 90 days

Within ninety days, your board will ask some version of this question: what is our exposure to AI-discovered vulnerabilities, and how fast can we determine blast radius when one drops?

The honest answer for most enterprises today is "weeks, and we would be guessing." That answer is about to become unacceptable. Not because of a new regulation, though SEC, OCC, FFIEC, and DORA scrutiny is coming, but because the threat model has changed and the answer to the question is now a leading indicator of organizational competence.

The boards that ask this question and get a "weeks, and we would be guessing" answer will start asking why. The CISOs who can articulate the architectural shift required (full-fidelity telemetry, machine-readable history, predictable economics) will get the budget to build the response. The CISOs who try to answer with "we need more SIEM licenses" will not.

This is the conversation worth preparing for now. The Mythos news cycle gives you a concrete reference point your board has already heard about. The architectural argument lands more cleanly when the news has primed the audience for it.

Action steps: a 30-60-90 plan for CISOs

Days 0–30: Audit and triage.

Inventory current patch SLAs against the new threat model. For criticals, the target needs to compress toward hours, not days. Identify the operational obstacles (change management bureaucracy, regression testing gaps, downtime tolerance) that prevent it. Audit auto-update posture across the environment. Identify systems where auto-update is technically possible but disabled for organizational reasons. Make a list of the highest-risk legacy code and dependencies in production. These are the most likely targets for the AI-discovered CVE wave.

Audit telemetry retention. Document, for each major data source, how long data is kept in queryable storage versus cold archive. Calculate the lookback window your IR team can practically achieve when answering "did this CVE touch us in the last X months." If the answer is shorter than 12 months, you have a structural exposure to the new threat model.
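The retention audit reduces to a simple calculation once the inventory exists: for each source, how many days sit in queryable ("hot") storage, and which sources fall short of the 12-month target. The sketch below shows the shape of that check; the source names and retention figures are illustrative assumptions, and a real audit would pull them from storage configuration rather than a hand-written dict.

```python
# Sketch of the retention audit: compute the practically queryable
# lookback per telemetry source and flag sources below a 12-month
# target. Source names and day counts are illustrative assumptions.
TARGET_DAYS = 365

# source -> days in queryable ("hot") storage vs. cold archive
retention = {
    "endpoint":   {"hot_days": 30,  "cold_days": 365},
    "cloudtrail": {"hot_days": 400, "cold_days": 0},
    "netflow":    {"hot_days": 7,   "cold_days": 90},
}

def structural_gaps(retention, target_days=TARGET_DAYS):
    """Return sources whose queryable window misses the target.

    Cold-archive days are deliberately excluded: data that takes hours
    to rehydrate does not count toward the IR lookback window.
    """
    return sorted(
        src for src, r in retention.items() if r["hot_days"] < target_days
    )

print(structural_gaps(retention))  # ['endpoint', 'netflow']
```

Excluding cold storage from the count is the point of the exercise: the question is not what was retained, but what can be queried before the answer stops mattering.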

Days 31–60: Compress and prepare.

Pilot tightened patch cycles on a defined subset of the environment. Measure the operational friction, what breaks, what blocks, what needs change management reform. Use the pilot to make the case for the broader rollout in days 60–90.

Begin the architectural conversation about telemetry substrate. The right time to evaluate alternatives to your current SIEM data layer is before the next budget cycle, not after a board incident. Get the architecture team and the security team aligned on what AI-era IR actually requires. We cover the architectural specifics in AI-Native Incident Response Needs Full-Fidelity History.

Brief the board. Not as a request for budget, but as a heads-up about a changing threat model. Frame it concretely: here is the question that will be asked in the next 90 days, here is what we can answer today, here is what we need to be able to answer.

Days 61–90: Commit and execute.

Move from pilot to production on tightened patch cycles. The friction will be real. The answer is not to back down from the targets but to fix the operational obstacles. Most of those obstacles (change management cycles, manual approvals, fragile regression testing) are themselves products of the old threat model. They were calibrated for n-day windows that no longer exist.

Make the architectural commitment on the substrate. If your current telemetry layer cannot deliver full-fidelity retention at predictable cost, plan the migration. The migration takes time. The threat does not wait.

The shift no one wants to name

The next 24 months will produce a clear divide in enterprise security. Organizations that treat AI vulnerability discovery as a tooling problem will buy more tools and stay one cycle behind every disclosure. Organizations that treat it as an architecture problem will rebuild the substrate and spend the next decade with a structural advantage.

The pillar piece in this series, What Claude Mythos Means for the Future of Cybersecurity, covers the strategic case. This piece is the operational corollary. The patch window collapsed. The board question is coming. The architecture has to change. The plan above is what the work actually looks like.

The right time to start was last quarter. The next-best time is now.

