
Software Liability Is Finally Coming. AI Made It Inevitable.

The software industry has operated under one of the most generous liability regimes of any major industry for forty years. The implicit defense has always been some version of: security is hard, bugs are unavoidable, no warranty express or implied, and any claim that a vendor "should have caught" a vulnerability runs into the wall of "well, nobody else caught it either, so what reasonable standard of care could you possibly hold us to?"

That defense just got fragile in a specific way. AI vulnerability discovery is demonstrating, in public, that vast numbers of long-standing bugs in production software were findable all along. The 27-year-old OpenBSD bug. The 16-year-old FFmpeg flaw. Thousands of zero-days across every major operating system and browser, found in weeks. The implication that hangs over every disclosure is uncomfortable: if Mythos can find this in hours, why didn't the vendor find it in years?

The honest answer is that the vendor's security investment was bounded by economic rationality under the old threat model. Allocating ten more security engineers to a codebase to find one more obscure bug per year was not a defensible business decision. The cost-benefit math gave permission to leave many findable bugs undiscovered.

That math is about to be rewritten in a courtroom. Or, more precisely, in a legislature. The EU is going to lead. The Cyber Resilience Act and the Product Liability Directive updates already in motion are pre-positioning for exactly this conversation. Once AI-era discoverability becomes the implicit standard against which vendor security investment is measured, the legal and regulatory pressure to require vendors to either match that standard or accept liability for the gap becomes very hard to resist politically.

US sector-specific regulation will follow, building on the SEC cybersecurity disclosure rules adopted in 2023. Financial services first, because the regulatory infrastructure is most mature and the political appetite for vendor liability in the wake of cyber incidents is highest. Healthcare next, because medical device security has been a slow-motion crisis for a decade and AI-era vulnerability discovery accelerates it. Critical infrastructure after that, because the national security framing makes the politics easier than in commercial markets.

The blanket immunity that has protected software vendors since the 1980s is not going to evaporate overnight. It will erode unevenly, jurisdiction by jurisdiction, sector by sector, often through enforcement actions and case law rather than clean legislative reform. But the direction of the erosion is now set, and the slope is going to steepen.

The SBOM mandates of 2023-2025 were the warm-up. Software Bills of Materials only matter if someone actually does something with the dependency information they catalog, and AI vulnerability discovery is exactly what makes SBOMs operationally valuable. Expect SBOM requirements to extend, deepen, and start carrying enforcement teeth. Expect the gap between vendors with comprehensive SBOM programs and those without to widen rapidly, and to start showing up in liability allocation when something goes wrong.
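The operational value is mechanical: an SBOM enumerates shipped components, and those components can be cross-referenced against a vulnerability feed. A minimal sketch in Python, assuming a CycloneDX-style component list; the component names, versions, and advisory IDs here are invented for illustration:

```python
# Illustrative sketch: checking a minimal CycloneDX-style SBOM against a
# vulnerability index. All names, versions, and advisory IDs are hypothetical.

sbom = {
    "bomFormat": "CycloneDX",
    "components": [
        {"name": "libexample", "version": "1.4.2"},
        {"name": "parserlib", "version": "0.9.1"},
    ],
}

# Hypothetical feed mapping (name, version) pairs to known advisories.
vuln_index = {
    ("libexample", "1.4.2"): ["EXAMPLE-2026-0001"],
}

def affected_components(sbom, vuln_index):
    """Return (component name, advisories) pairs for components with known issues."""
    hits = []
    for comp in sbom.get("components", []):
        key = (comp["name"], comp["version"])
        if key in vuln_index:
            hits.append((comp["name"], vuln_index[key]))
    return hits

print(affected_components(sbom, vuln_index))
# -> [('libexample', ['EXAMPLE-2026-0001'])]
```

The point of the sketch is that the lookup is trivial once the inventory exists; the hard part, and the part liability regimes will scrutinize, is whether the vendor maintained the inventory at all.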

For enterprise buyers, this changes vendor selection in three ways:

Vendor security posture becomes a procurement criterion. The question "what is your strategy for AI-discovered vulnerabilities in your product" is going to become standard in RFPs by Q4 2026. Vendors with credible answers will win deals against vendors without.

Indemnification language gets re-examined. The standard software contract limits vendor liability for security failures to direct damages, often capped at a fraction of license fees. That language was acceptable when the vendor's security investment was bounded by reasonable cost-benefit math. It looks much less acceptable when AI tooling makes the gap between "what was found" and "what was findable" obviously enormous.

Open-source dependencies become a sharper governance question. When vendor liability shifts toward "you should have caught more," vendors are going to push that liability downstream toward the open-source projects whose code they ship. The whole conversation about funding open-source security is about to get much more serious, very quickly.

The era when software was a special category of product, exempt from the liability standards that apply to physical goods, is ending. AI did not cause that ending; political pressure on software security has been building for years. AI removed the strongest defense the industry had against that pressure.

Read the deep dive: Project Glasswing: The New Disclosure Architecture covers the regulatory and liability implications of AI-discovered vulnerabilities in detail, including SBOM mandates and sector-specific regulation timelines.

