A comforting story is forming in industry coverage of AI vulnerability discovery. It goes like this: yes, attackers will get AI tools, but defenders will get the same tools; the asymmetry will balance out; the net effect will be neutral or even slightly favorable to defenders, because we have more resources. Don't worry about it.
This story is wrong in a specific and important way that almost nobody is naming clearly.
Defenders use AI within bureaucratic, audited, governance-bound constraints. The list is long, and each item on it is individually defensible. Model risk committees that evaluate whether the AI agent's behavior is predictable enough to deploy. SOC 2 audits that document what the agent has access to and how it is monitored. Change management cycles that require sign-off before the agent's behavior changes. Procurement processes that take quarters to evaluate, contract, and deploy a new tool. Legal review of indemnification and data handling. Compliance review of regulatory implications. Privacy review of what the agent sees. Each of these layers exists for good reasons. Together they add up to deployment friction measured in months.
Attackers operate under none of these constraints. A criminal group can spin up Mythos-class capability the moment such capability becomes available outside the Glasswing coalition, with no governance, no audit, no policy review, no procurement cycle. The AI agent that takes a Fortune 500 enterprise nine months to deploy through the proper channels takes an attacker an afternoon. The same capability. Different deployment friction. Wildly different effective speed.
This is the asymmetry no one wants to name in a quarterly earnings call. It is the structural reality underneath the cheerful "AI helps both sides" narrative. The same capability is force-multiplied for offense and friction-multiplied for defense, by the entirely reasonable governance overhead that responsible enterprises operate under.
The asymmetry compounds in a second way that is even worse. Attackers iterate fast. They try things, see what works, throw away what doesn't, and improve continuously. Defenders cannot iterate fast on AI agents that touch production security infrastructure, because the failure modes are uncomfortable. An AI agent that incorrectly auto-quarantines a production database is a major incident. An AI agent that incorrectly tells the SOC "no unusual activity detected" while a breach is in progress is a catastrophic incident. The error tolerance for defensive AI deployment is much lower than the error tolerance for offensive AI deployment, which means the deployment cycle is slower, which means defenders are always behind the curve in real-time capability.
There are honest responses to this asymmetry, and the industry is mostly avoiding them. The first is to stop pretending it doesn't exist. Strategy that depends on "AI helps both sides" is bad strategy because it is built on a false premise. The strategy needs to be designed around the reality that defenders operate under friction attackers don't.
The second is to architect around the asymmetry rather than try to overcome it. The defensive layers that work against AI-enabled attackers are mostly architectural rather than agent-based. Network segmentation that limits blast radius. Identity hygiene that prevents lateral movement. Comprehensive telemetry that makes retrospective analysis possible. Immutable infrastructure that makes recovery faster than re-securing. These are the layers that hold up regardless of how fast the attacker's AI moves, because they are properties of the environment rather than properties of the defender's response speed.
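To make "properties of the environment" concrete, here is a minimal policy-as-code sketch of default-deny segmentation. The zone names, the allow-list, and the Flow type are hypothetical illustrations, not any particular product's API; the point is only that the deny holds regardless of how fast either side's agents move.

```python
# Illustrative sketch: default-deny segmentation between network zones.
# Zone names and the policy table are hypothetical, not a real product's API.
from dataclasses import dataclass

# Explicit allow-list of (source_zone, dest_zone, port). Anything absent is denied.
ALLOWED_FLOWS = {
    ("web", "app", 8443),
    ("app", "db", 5432),
    ("soc-telemetry", "siem", 6514),
}

@dataclass(frozen=True)
class Flow:
    source_zone: str
    dest_zone: str
    port: int

def is_allowed(flow: Flow) -> bool:
    """Default-deny: a flow is permitted only if explicitly listed."""
    return (flow.source_zone, flow.dest_zone, flow.port) in ALLOWED_FLOWS

if __name__ == "__main__":
    # A compromised web host trying to reach the database directly is blocked,
    # no matter how quickly the attacker's agent found the foothold.
    print(is_allowed(Flow("web", "db", 5432)))  # False: lateral movement denied
    print(is_allowed(Flow("app", "db", 5432)))  # True: the sanctioned path
```

The deny is enforced by the environment itself, not by how quickly a defender, human or agent, reacts to an alert.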
The third response, which will take longer to mature, is to figure out how to govern defensive AI deployment in ways that preserve the necessary oversight while reducing the friction. SOC 2-style audits that approve agent behavior categories rather than individual deployments. Change management that distinguishes "agent updating its detection rules" from "agent taking a destructive action." Model risk frameworks that allow AI agents to operate within bounded autonomy without requiring committee review of every decision. The work to build these frameworks is starting now. It will not be finished in time for the first wave of AI-enabled attacks, but it will matter for the second wave.
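One way to picture bounded autonomy is an action gate that approves behavior categories once, lets the agent operate freely inside them, and escalates only destructive actions to a human. A minimal sketch, with hypothetical category and action names:

```python
# Illustrative sketch of bounded autonomy: agent actions are gated by category,
# not reviewed individually. Categories and action names are hypothetical.
from enum import Enum, auto

class ActionCategory(Enum):
    OBSERVE = auto()      # read-only: query telemetry, correlate alerts
    TUNE = auto()         # reversible: update detection rules, adjust thresholds
    DESTRUCTIVE = auto()  # irreversible: quarantine hosts, revoke credentials

# Governance approves categories once; the agent then acts freely inside them.
AUTONOMOUS_CATEGORIES = {ActionCategory.OBSERVE, ActionCategory.TUNE}

def requires_approval(category: ActionCategory) -> bool:
    """Destructive actions escalate to a human; everything else runs autonomously."""
    return category not in AUTONOMOUS_CATEGORIES

def dispatch(action_name: str, category: ActionCategory) -> str:
    if requires_approval(category):
        return f"QUEUED for human approval: {action_name}"
    return f"EXECUTED autonomously: {action_name}"

if __name__ == "__main__":
    print(dispatch("update_detection_rule:beacon-traffic", ActionCategory.TUNE))
    print(dispatch("quarantine_host:db-prod-03", ActionCategory.DESTRUCTIVE))
```

The design choice mirrors the change-management distinction above: the review burden attaches to the category, which changes rarely, rather than to each decision, which happens constantly.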
The honest message to CISOs: assume the asymmetry exists, plan around it, and do not let your strategy depend on the AI agents you deploy keeping pace with the AI agents your attackers deploy. They will not, structurally, for a long time.
Read the deep dive, AI Vulnerability Discovery: The New Defender Economics, for the full analysis of why the end of scarcity changes everything, including the asymmetry problem and what to architect around it.