How to Prepare for the AI-Discovered CVE Wave
AI-discovered CVE volume is about to surge. Learn the architectural and operational changes enterprises need now, before the firehose hits production.
Public CVE volume has been rising steadily for years: somewhere between 25,000 and 30,000 CVEs are published annually, and the number is still climbing. AI vulnerability discovery is about to bend that curve sharply upward. The Mythos coalition partners alone are sitting on thousands of zero-days that will progress through coordinated disclosure over the coming year. That is just the responsible-disclosure half of the wave. The other half, vulnerabilities discovered by AI tools in the hands of attackers, will not be cataloged at all until the exploits are observed in the wild.
Most enterprise vulnerability management programs are already behind. CVE backlogs of months are common. The ratio of published critical and high CVEs to available analyst time was already structurally adverse before AI vulnerability discovery accelerated the rate of disclosure. The honest assessment is that an order-of-magnitude increase in CVE volume will break most current VM programs unless the operational model changes.
This article is the practical 90-day readiness plan for that change. It is not a discussion of theory. It is what to do, in what order, with what resources.
Why CVE volume is about to increase by an order of magnitude
The conventional CVE generation pipeline involves human security researchers finding bugs through manual review, fuzzing, and tooling, then either disclosing them through coordinated vendor channels or publishing them through bug bounty programs and security conferences. The total throughput is gated by the population of researchers doing this work and the time each researcher spends per bug.
AI vulnerability discovery removes both gates. The "researcher" is software that runs in parallel at compute cost. The "time per bug" collapses from weeks to hours. The throughput multiplier, conservatively, is somewhere between 10x and 100x for the categories of bugs AI discovery handles well, which appears to include most memory safety vulnerabilities, many logic flaws, and a meaningful fraction of protocol and parser issues.
Three populations of AI users will drive the volume increase.
- Restricted defensive coalitions like Project Glasswing, discovering thousands of zero-days in critical infrastructure software and disclosing through coordinated channels, producing a steady stream of high-severity CVEs over the next 12-18 months. The disclosure timing is roughly predictable.
- Vendors using AI on their own codebases, finding bugs in their own products and patching them, often disclosing CVEs in the process. This stream is harder to predict because vendors choose their own disclosure cadence, but it is large.
- Security researchers using AI as a force multiplier: bug bounty hunters, academic researchers, independent security consultants. This population will drive a long-tail increase in disclosed CVEs across third-party software, libraries, and dependencies.
Each population produces CVEs that flow into your vulnerability management process the same way. None of them care about your team's capacity. The aggregate volume increase, sustained over the next 18 months, is going to test every assumption built into current VM programs.
The triage problem: most VM programs are already behind
Before the AI vulnerability discovery wave hits, walk through the current state of vulnerability management in most enterprises. The pattern is consistent:
A scanning infrastructure that produces some volume of findings. A ticketing system that holds the findings. An analyst team that triages findings, prioritizes some, escalates some, accepts the rest as "low risk." A patching pipeline that handles a fraction of the prioritized findings within SLA, with the rest aging into backlog. A reporting layer that produces metrics for the CISO and the board, usually showing improvement quarter-over-quarter on whichever dimension is currently being measured.
The structural problems are well-known. Triage is the bottleneck. Analyst time is the binding constraint. The backlog grows because new findings arrive faster than old findings get resolved. Prioritization heuristics (CVSS score, EPSS, asset criticality) help at the margin but do not solve the fundamental volume mismatch.
Now imagine that finding volume goes 10x. The triage queue fills 10x faster. The backlog grows 10x faster. The patching pipeline, which was already a bottleneck, becomes a structural impossibility. The metrics start showing degradation that no amount of analyst overtime can reverse. Within a quarter, the program looks broken. Within two quarters, it is broken.
The conventional response is "add more analysts" or "buy better prioritization." Both responses are inadequate to the scale of the change. You cannot hire your way out of a 10x volume increase. You cannot prioritize your way to ignoring the bottom 90% of findings when some of those findings are critical zero-days disclosed to the Glasswing coalition six months ago.
The actual response requires changing the operational model. Five steps, in order.
Step one: continuous asset and dependency inventory
The first step is the unglamorous one. Before you can triage CVEs against your environment, you have to know what your environment actually contains.
Most enterprises have an asset inventory that is somewhere between "stale" and "fictional." Cloud assets that were spun up by individual teams without going through asset management. Software dependencies buried inside containers that were last rebuilt eighteen months ago. Third-party services whose actual code is not under enterprise control but whose vulnerabilities flow into the enterprise risk profile. Shadow IT. Forgotten subsidiaries. Acquired companies whose environments were never fully integrated.
In a low-volume CVE world, inventory gaps are tolerable because the analyst team can manually research the population of affected assets per CVE. In a high-volume world, this is impossible. You need a programmatically-queryable inventory that an AI agent can use to answer "which of our systems are affected by this CVE" automatically. If the answer requires a human to walk down the hall and ask the platform team, you are not ready for the wave.
The inventory needs three properties:
- Comprehensiveness: it has to include cloud workloads, on-premise systems, SaaS dependencies, and the dependency graphs of the software running in each.
- Currency: it has to update continuously, ideally driven by automated discovery rather than manual entry.
- Queryability: it has to be accessible to automated tooling, not locked inside a CMDB that requires login credentials and SQL queries to use.
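What "programmatically queryable" means in practice can be shown in miniature. Here is a minimal sketch, assuming a flat inventory of (host, package, version) rows and a CVE record that carries the first fixed version; every name, version, and the CVE identifier below is invented for illustration:

```python
# Minimal sketch: answer "which of our systems are affected by this CVE?"
# against a programmatically queryable inventory. All data is hypothetical.

def parse_version(v: str) -> tuple:
    """Parse a dotted version string into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

# Hypothetical inventory rows, one per (host, package) pair, ideally
# produced by automated discovery rather than manual entry.
inventory = [
    {"host": "web-01",   "package": "libxml2", "version": "2.9.10"},
    {"host": "web-02",   "package": "libxml2", "version": "2.9.14"},
    {"host": "batch-07", "package": "openssl", "version": "3.0.8"},
]

# Hypothetical CVE record: affected package and first fixed version.
cve = {"id": "CVE-XXXX-YYYY", "package": "libxml2", "fixed_in": "2.9.14"}

def affected_assets(inventory, cve):
    """Return hosts running a version of the package older than the fix."""
    fixed = parse_version(cve["fixed_in"])
    return [
        row["host"]
        for row in inventory
        if row["package"] == cve["package"]
        and parse_version(row["version"]) < fixed
    ]

print(affected_assets(inventory, cve))  # only web-01 is below the fix
```

If answering this question for a new CVE takes a query rather than a meeting, the inventory layer is ready; real version-range matching is messier than a tuple comparison, but the shape of the capability is the same.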
Most enterprises will need to invest substantially in this layer before the rest of the readiness plan becomes operational. There is no shortcut. The good news: this investment pays for itself across many other operational requirements. SBOM compliance, third-party risk management, and incident scoping all benefit from the same inventory infrastructure.
Step two: historical lookback: did this CVE touch us before disclosure?
This is the question that distinguishes operational readiness from theatrical readiness. When a critical CVE is disclosed, the immediate question is "are we vulnerable now." The harder, more important question is "were we exploited before this was disclosed."
Vulnerabilities discovered by AI tools in the hands of responsible actors get coordinated disclosure timelines. Vulnerabilities discovered by AI tools in the hands of attackers do not. They get exploited silently for months before they are observed and cataloged. By the time the CVE is published, the exploitation has already happened. The question for the SOC is no longer "patch fast"; it is "look back across our telemetry and tell me whether this got us already."
Most enterprises cannot answer that question fast enough to matter. Telemetry from the affected workloads has been tiered to cold storage after 30 or 90 days. Pulling it back into queryable form takes hours and costs money. The data is fragmented across endpoint, network, identity, and cloud control plane systems with no shared schema. The retention horizon was set by compliance policy, which only requires keeping certain log types for certain durations, not the comprehensive multi-domain telemetry that retrospective threat hunting requires.
Building the lookback capability is the highest-leverage defensive investment available in 2026. It requires three properties:
- Retention horizon measured in years. The window of "vulnerabilities that might be disclosed tomorrow that could have been exploited at some past point" is not 90 days. It is closer to 18-36 months for any vulnerability class where pre-disclosure exploitation is plausible.
- Full fidelity, not sampled. Sampling makes the math easier on storage costs and breaks the math on retrospective threat hunting. If 1% of events are missing, you cannot definitively answer "did this attack pattern occur in our environment over the past year." You can only answer "we did not see it in the events we sampled."
- Queryable in minutes, not hours. The economic and operational shift is from "data is in cold storage, we can pull it back if we need to" to "data is in queryable storage continuously." This changes both architecture and budget assumptions.
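To make the lookback question concrete, here is a toy sketch of a retrospective hunt, assuming telemetry is already in queryable storage under a shared schema. The event fields, the indicator hash, and the roughly 18-month horizon are all illustrative:

```python
# Sketch: retrospective hunt over long-horizon telemetry. Given an
# indicator published alongside a new CVE, ask whether it appeared
# anywhere in the environment within the horizon. Data is hypothetical.
from datetime import datetime, timedelta

events = [
    {"ts": datetime(2025, 3, 2),  "host": "web-01", "process_hash": "aa11"},
    {"ts": datetime(2025, 9, 14), "host": "db-03",  "process_hash": "bb22"},
    {"ts": datetime(2026, 1, 5),  "host": "web-01", "process_hash": "cc33"},
]

def lookback(events, indicator_hash, now, horizon_days=548):
    """Return every matching event within roughly 18 months of `now`."""
    cutoff = now - timedelta(days=horizon_days)
    return [
        e for e in events
        if e["ts"] >= cutoff and e["process_hash"] == indicator_hash
    ]

# Any hit means "this got us before disclosure": escalate to IR.
hits = lookback(events, "bb22", now=datetime(2026, 2, 1))
print([h["host"] for h in hits])
```

The sketch only works because the events are all still queryable. With 90-day retention, the September event is gone, and the honest answer to the board becomes "we cannot know."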
We cover the architectural specifics in AI-Native Incident Response Needs Full-Fidelity History.
Step three: patch pipeline compression and auto-update posture
The triage and lookback layers determine what you need to act on. The patching layer determines whether you can act on it fast enough to matter.
In a world where n-day exploits ship within hours of CVE publication, the patching pipeline has to compress correspondingly. Auto-update everywhere it is operationally tolerable. Treat dependency bumps that carry CVE fixes as P0 incidents, not routine maintenance. Eliminate change management bureaucracy that adds days of latency without reducing risk.
The honest assessment is that most enterprise patch processes are calibrated for an n-day window measured in days or weeks. A 7-day SLA for criticals presupposed that exploits took longer than 7 days to develop. That presupposition is dead. The processes have to be re-engineered around new latency targets, measured in hours for criticals, measured in days for highs, with the bar continuing to compress as the threat environment continues to evolve.
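The new latency targets can be written down as executable policy rather than as a slide. A sketch follows; the specific SLA values are illustrative, not prescriptions:

```python
# Sketch: compressed patch SLAs as executable policy. The targets
# below are illustrative; tune them to your own threat model.
from datetime import datetime, timedelta

SLA = {
    "critical": timedelta(hours=24),  # hours, not days
    "high":     timedelta(days=3),
    "medium":   timedelta(days=14),
}

def patch_deadline(published: datetime, severity: str) -> datetime:
    """Deadline for remediation, counted from CVE publication."""
    return published + SLA[severity]

def is_overdue(published: datetime, severity: str, now: datetime) -> bool:
    """True once the window has closed without remediation."""
    return now > patch_deadline(published, severity)

published = datetime(2026, 2, 1, 9, 0)
print(is_overdue(published, "critical", datetime(2026, 2, 2, 10, 0)))
```

Encoding the targets this way makes SLA breaches measurable per finding, which is what the pilot in the 90-day plan needs to produce as evidence.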
The work to compress patching is operationally hard. Change management cycles exist for reasons. Regression testing protects against operational outages. Maintenance windows accommodate business processes that cannot tolerate unplanned downtime. None of these reasons go away. The job is to engineer around them rather than abandon them: staged rollouts that compress timelines while preserving safety, automated regression testing that validates patches without human gating, and change management exceptions for security patches that meet defined criteria.
This work also requires executive sponsorship. The friction is mostly organizational, not technical. The CISO needs the CIO and the COO aligned on the new latency targets, with explicit acceptance that some operational disruption is the cost of the new threat model. We cover the operational specifics in Patch Window Collapsed: AI-Native Incident Response Now.
Step four: AI-native AppSec: find your own bugs first
The wave of AI-discovered CVEs that hits public disclosure is the wave you can prepare for. The wave that hits your specific environment as a custom-targeted exploit, never to appear in any public CVE database, is the wave you cannot. The defense against that wave is to find your own bugs first, before someone with worse intentions finds them for you.
This is what AI-native AppSec means. Not running scanners on a quarterly cadence. Not pen testing once a year. Continuous AI-powered discovery against your own codebases, your own infrastructure, your own custom integrations and APIs. Finding the bugs that an attacker with Mythos-class capability would find, and patching them before that attacker arrives.
This is a meaningful budget commitment. The AI vulnerability discovery tooling market is in its early days, but it is moving fast. Bishop Fox, Checkmarx, and others are building products in this space. The pricing is not yet predictable. The capability is real and improving quickly. The question for security leaders is when, not whether, to make the commitment.
A reasonable framing: pre-disclosure discovery shifts from "nice to have" to "strategic capability." The enterprises that get good at it over the next 18 months will have a structural advantage in the AI threat era. The ones that delay because the budget conversation is hard will discover the cost of delay when an unpatched zero-day they could have found themselves shows up in the wild.
Step five: telemetry substrate: the layer that makes the above possible
The first four steps share a common dependency. Continuous inventory requires telemetry. Historical lookback requires telemetry. Patch effectiveness measurement requires telemetry. AI-native AppSec produces telemetry that has to be reasoned over. Every step in the readiness plan rests on having a telemetry substrate that supports the work.
The substrate requirements are specific and they break most existing security data architectures. Full-fidelity retention measured in years. Entity-resolved data that can be reasoned over by AI agents. Predictable economics that do not punish you for keeping the data. Cross-domain integration so that endpoint, network, identity, cloud, and application telemetry can be queried as a coherent whole rather than as disconnected silos.
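Mechanically, "queried as a coherent whole" implies a shared entity key across domains. A toy sketch, assuming upstream entity resolution has already mapped hostnames, usernames, and instance IDs to one `entity_id`; the sources and field names are invented:

```python
# Sketch: merging cross-domain telemetry into one entity timeline.
# Assumes entity resolution has already produced a shared entity_id.
# All events below are invented for illustration.

endpoint = [{"ts": 3, "entity_id": "srv-42", "event": "process_start"}]
network  = [{"ts": 1, "entity_id": "srv-42", "event": "outbound_conn"}]
identity = [{"ts": 2, "entity_id": "srv-42", "event": "token_issued"}]

def entity_timeline(entity_id, *sources):
    """One time-ordered view of an entity across all telemetry domains."""
    merged = [e for src in sources for e in src if e["entity_id"] == entity_id]
    return sorted(merged, key=lambda e: e["ts"])

timeline = entity_timeline("srv-42", endpoint, network, identity)
print([e["event"] for e in timeline])
```

Without the shared key, this join becomes a per-incident research project across silos; with it, the join is a query, which is what makes the substrate usable by automated reasoning rather than only by analysts.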
This is the architectural commitment that most enterprises need to make in the next 12 months and that most are putting off because it is expensive and it is not the urgent fire. The window in which this work is optional is closing. The enterprises that wait until their first major AI-discovered CVE incident to make the substrate commitment will be doing the work under board pressure rather than as deliberate strategy.
The category (telemetry substrate, system of record for enterprise telemetry, machine-reasonable telemetry intelligence) is being defined now. Bloo exists in this category for exactly this set of requirements. We cover the architectural specifics in Telemetry Intelligence: The Next Layer of Enterprise Infrastructure and Bloo: The System of Record for Enterprise Telemetry.
A 90-day readiness checklist
If this article had to compress to a single page on a CISO's desk, it would look like this:
- Days 0–30: inventory and lookback audit. Document the current asset inventory's completeness, currency, and queryability. Document the current telemetry retention horizon and how long retrospective queries actually take. Identify the gaps. Brief the CISO on the readiness state with concrete numbers.
- Days 31–60: patch pipeline compression pilot. Pick a defined subset of the environment. Compress the patch SLA to the new threat model targets. Measure the operational friction. Identify which obstacles are technical (real) and which are organizational (negotiable). Build the case for organization-wide compression based on pilot data.
- Days 61–90: substrate decision and budget commitment. Evaluate telemetry substrate options against the requirements: full-fidelity retention, year-scale lookback, machine-reasonable structure, predictable economics. Make the architectural decision. Lock the budget commitment in the next planning cycle. Begin the migration plan.
- Across all 90 days: board and executive alignment. The AI-discovered CVE wave is a board-level threat model change. Use the news cycle to get the conversation onto the agenda. Frame the readiness plan in terms the board can act on: here is the question that will be asked in 90-180 days, here is what we can answer today, here is what we are doing to be able to answer better.
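The day 61–90 substrate evaluation can be run as a simple weighted scorecard against the four requirements. A sketch, where the weights and the example ratings are placeholders to adapt, not recommendations:

```python
# Sketch: weighted scorecard for the substrate decision. Criteria
# weights and the sample option's ratings are placeholders.

CRITERIA = {  # weight per requirement, summing to 1.0
    "full_fidelity_retention": 0.30,
    "year_scale_lookback":     0.25,
    "machine_reasonable":      0.25,
    "predictable_economics":   0.20,
}

def score(option_ratings: dict) -> float:
    """Weighted score in [0, 5] given per-criterion ratings of 0-5."""
    return sum(CRITERIA[c] * option_ratings[c] for c in CRITERIA)

# Hypothetical ratings for one candidate substrate.
option_a = {"full_fidelity_retention": 5, "year_scale_lookback": 4,
            "machine_reasonable": 3, "predictable_economics": 4}
print(round(score(option_a), 2))
```

The value of writing the scorecard down is less the arithmetic than the forcing function: it makes the evaluation criteria explicit before vendor conversations start shaping them.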
The 90 days do not get you to fully prepared. They get you to a credible plan, a concrete inventory of gaps, and the executive alignment needed to fund the multi-quarter work that follows. The enterprises that do this work proactively in 2026 will have meaningful structural advantages by 2027. The enterprises that wait will spend 2027 doing the same work under crisis conditions, at higher cost and lower quality.
The time to start is now.
Related reading
- What Claude Mythos Means for the Future of Cybersecurity. The pillar piece on the broader strategic shift.
- AI Vulnerability Discovery: The New Defender Economics. The economic model behind the wave.
- Patch Window Collapsed: AI-Native Incident Response Now. The patch compression playbook.
- AI-Native Incident Response Needs Full-Fidelity History. The substrate the readiness plan depends on.
- Inside the Zero-Days Claude Mythos Discovered. What the wave actually looks like.
- Project Glasswing: The New Disclosure Architecture. How disclosure flows shape the wave.