
Nullcon 2026: What Day Zero and the CXO Track Signal for Detection Engineering

Siddhant

Threat Researcher

  • Bloo Team Networking at Nullcon 2026

I attended Nullcon Goa 2026 across Day Zero and the CXO track, representing Bloo Systems. What stood out wasn’t a single “hot” exploit or a single vendor pitch – it was a consistent convergence: leaders and practitioners are no longer debating whether attacks are sophisticated; they’re debating whether our defense organizations are fast, instrumented, and governed enough to keep up.

Two personal moments anchored the week for me:

  • Speaking with Col. Tarun Uppal on cybercrime tracking reinforced a simple truth: defensive automation has to operate at machine speed. In 2026, “human-speed” defense is a strategic liability.
  • In the CXO track, connecting with Dr. Sanjay Bahl was a reminder that resilience is not a product feature. You can’t build advanced detection on a shaky foundation – the fundamentals decide how far you can scale.

So what does this mean for Detection Engineering, Threat Research, and the vendor ecosystem in 2026?

1. Day Zero reframed “vulnerabilities” into “executive decision-making”

Nullcon’s Day Zero positioning is explicit: it’s about “where zero-day exploits meet real-world executive decision-making,” translating attacker-first discoveries into strategy, governance, investment priorities, and boardroom conversations.

For detection teams, this is a signal that the boundary between “research” and “operations” is collapsing. Detection engineering can’t be a backlog of rules; it has to be a risk translation layer that connects exploit mechanics to business exposure.

What changes for Detection Engineering

  • From CVE bingo to exploit-aware telemetry: “Is this vuln on our stack?” becomes “Do we have the telemetry to observe exploit primitives?” (process lineage, token changes, unusual module loads, suspicious API call patterns, identity misuse).
  • Weak signals become first-class citizens: Day Zero’s emphasis on “distinguishing weak signals from noise” is a detection mandate. It pushes teams to build detections that are explainable, evidence-backed, and designed for triage speed – not just clever correlations.
  • Tabletops should include detection design: If leadership tabletop exercises simulate CISO/Legal/Business decisions, detection teams should simulate the other side: What evidence can we produce in 15 minutes? In 60? What can we not prove at all?
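One way to picture exploit-aware telemetry is to key detections on observable primitives like process lineage rather than CVE identifiers. The sketch below is illustrative only: the event fields and the lineage pairs are assumptions for the example, not a specific EDR schema or a complete rule set.

```python
# Hypothetical sketch: flag exploit-primitive signals from process-creation
# telemetry by checking parent/child lineage instead of matching CVE lists.
# The (parent, child) event shape and the pairs below are example assumptions.

SUSPICIOUS_LINEAGE = {
    # Office apps spawning script hosts is a classic post-exploit primitive
    ("winword.exe", "powershell.exe"),
    ("excel.exe", "cmd.exe"),
    ("outlook.exe", "wscript.exe"),
}

def flag_lineage(events):
    """Return events whose (parent, child) pair matches a known exploit primitive."""
    hits = []
    for ev in events:
        pair = (ev["parent"].lower(), ev["child"].lower())
        if pair in SUSPICIOUS_LINEAGE:
            hits.append(ev)
    return hits

events = [
    {"parent": "explorer.exe", "child": "chrome.exe"},
    {"parent": "WINWORD.EXE", "child": "powershell.exe"},
]
print(flag_lineage(events))  # flags the Word -> PowerShell spawn
```

The point of the shape, not the specific pairs: the detection fires on what the exploit has to *do*, which survives across individual vulnerabilities.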

2. The CXO track made “resilience” measurable: speed, clarity, and coordination

The CXO agenda reads like a set of constraints the detection function must satisfy:

  • Tight reporting timelines, legal exposure, and board communication pressure
  • AI-generated deception eroding identity and trust
  • Dependency risk (SaaS, open-source, third parties) becoming the primary battleground
  • Adversaries winning with “legitimate” techniques: credential abuse, APIs, admin tools, and low-noise chains
  • Critical infrastructure digitization and vendor ecosystems expanding the blast radius

The big shift: incidents are no longer a purely technical story. They are a coordination story under time pressure.

What changes for Threat Research

Threat research that can’t be operationalized quickly becomes commentary. Research has to ship with:

  • Detection hypotheses (what should be observable)
  • Telemetry requirements (what must be collected and retained)
  • Investigation paths (what evidence proves or falsifies the story)
  • Response-safe triggers (what’s safe to automate, what needs a human gate)

In other words: research must arrive “pre-wired” for engineering. We have been walking this path with our webinars, where we cover the latest industry-specific attack patterns alongside raw-telemetry-backed detection strategies that complete the detection story.
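The four deliverables above can be made concrete as a checklist that research has to satisfy before it ships. This schema is purely an illustration of the idea, not an internal Bloo format:

```python
# Hedged sketch: a "pre-wired" research package whose fields mirror the four
# deliverables (hypotheses, telemetry, investigation paths, response triggers).
# The class and field names are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class ResearchPackage:
    technique: str
    detection_hypotheses: list      # what should be observable
    telemetry_requirements: list    # what must be collected and retained
    investigation_paths: list       # evidence that proves or falsifies the story
    auto_safe_triggers: list = field(default_factory=list)    # safe to automate
    human_gated_triggers: list = field(default_factory=list)  # needs a human gate

    def is_shippable(self) -> bool:
        # Research missing hypotheses, telemetry, or investigation paths
        # is commentary, not an engineering input.
        return bool(self.detection_hypotheses
                    and self.telemetry_requirements
                    and self.investigation_paths)
```

A package that fails `is_shippable()` goes back to research; one that passes can enter the detection backlog with its evidence trail already defined.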

3. AI deception isn’t “an AI problem.” It’s a trust-and-identity detection problem

One CXO session theme was blunt: AI-generated deception is weaponizing human trust at scale – deepfakes, synthetic voice, hyper-personalized social engineering. For detection engineering, this expands “identity” from a user account into a system of trust signals.

What detection teams should do next

  • Model “trust events” as telemetry: identity proofing changes, out-of-band verification steps, executive comms anomalies, unusual helpdesk resets, MFA method changes, device enrollment spikes.
  • Hunt the post-deception chain: deepfakes are an entry technique; the detection value is often in the follow-on actions – OAuth consent abuse, mailbox rules, token theft, privilege escalation, data staging.
  • Build “prove it” investigations: if a detection asserts “impersonation,” it should ship with the raw breadcrumbs needed to validate quickly – audio/video artifacts might be outside the SIEM, but the identity and access trail is not.
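Hunting the post-deception chain is, mechanically, a correlation over trust events. The sketch below is a minimal example, assuming a hypothetical event log and a 30-minute window: an MFA method change followed shortly by a new mailbox rule on the same account is the kind of follow-on pair worth surfacing.

```python
# Illustrative sketch only: correlate "trust events" into a post-deception
# chain. The event names, log shape, and 30-minute window are assumptions
# chosen for the example, not a production rule.

from datetime import datetime, timedelta

CHAIN = ("mfa_method_changed", "mailbox_rule_created")
WINDOW = timedelta(minutes=30)

def find_chains(events):
    """events: list of (timestamp, user, action) tuples, sorted by timestamp."""
    hits = []
    for i, (t1, user, action) in enumerate(events):
        if action != CHAIN[0]:
            continue
        for t2, user2, action2 in events[i + 1:]:
            # Same account, second stage of the chain, inside the window
            if user2 == user and action2 == CHAIN[1] and t2 - t1 <= WINDOW:
                hits.append((user, t1, t2))
    return hits

log = [
    (datetime(2026, 3, 5, 10, 0), "ceo@corp", "mfa_method_changed"),
    (datetime(2026, 3, 5, 10, 12), "ceo@corp", "mailbox_rule_created"),
]
print(find_chains(log))
```

Each hit carries both timestamps, so the alert ships with the identity-and-access breadcrumbs an investigator needs to validate the impersonation claim.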

4. Supply chain + vendor ecosystems are now a detection surface, not a procurement checkbox

The CXO sessions repeatedly stress that attackers are weaponizing dependencies: SaaS providers, open-source libraries, cloud partners, contractors, and misconfigurations. This means detection programs must expand past “our endpoints” into our relationships.

What changes for engineering and operations

  • Continuous awareness beats periodic audits: detection should surface vendor and SaaS behavior drift (new admin grants, new integrations, impossible travel, API key creation, anomalous data pulls).
  • Dependency mapping becomes a hunting primitive: you can’t triage what you can’t enumerate. Asset and identity inventory isn’t paperwork; it’s your ability to draw a blast radius in minutes.
  • Demand forensic readiness from vendors: “we detected suspicious activity” is not enough. The question becomes: can the platform produce evidence you can defend? Logs, lineage, and attribution matter.
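Surfacing vendor and SaaS behavior drift can start as simply as diffing periodic snapshots of privileged state against a baseline. The snapshot categories below are hypothetical examples, not a real SaaS provider’s API:

```python
# Minimal sketch, assuming a snapshot model: compare the current vendor/SaaS
# state against a baseline and surface drift such as new admin grants or new
# API keys. Category names and values are illustrative assumptions.

def diff_snapshots(baseline, current):
    """Return, per category, the items present now but absent from the baseline."""
    drift = {}
    for category in current:
        new_items = set(current[category]) - set(baseline.get(category, []))
        if new_items:
            drift[category] = new_items
    return drift

baseline = {"admin_grants": ["alice"], "api_keys": ["key-1"]}
current = {
    "admin_grants": ["alice", "contractor-7"],
    "api_keys": ["key-1", "key-9"],
}
print(diff_snapshots(baseline, current))
# surfaces contractor-7 and key-9 as drift worth triaging
```

Run continuously, this kind of diff is what turns “periodic audit” into the continuous awareness the CXO sessions called for; the hard part is the inventory that feeds it.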

5. The 2026 operating model: machine-speed automation on top of human-grade fundamentals

If I compress Nullcon 2026 into a single sentence, it’s this:

Automation is mandatory – but only after you’ve built a foundation that makes automation safe.

That’s where the two personal takeaways connect:

  • Col. Tarun Uppal Sir’s point pushes us toward machine-speed defensive action.
  • Dr. Sanjay Bahl Sir’s point forces us to confront whether our fundamentals (telemetry, coverage, evidence, governance) are strong enough to trust that speed.

For detection engineering leaders, the path forward is not “more alerts” and not “more AI.” It’s:

  • Fewer, more defensible detections
  • Shorter time-to-evidence
  • Clearer ownership of automated decisions
  • Telemetry coverage that follows attacker behavior (identity + APIs + LOTL), not product categories

Now it’s back to the lab: turning conference signals into research that ships, and detections that hold up under pressure.

