By Platform Engineering · 6 min read

SIEM Pricing Models Compared: What Breaks at Scale

From Splunk to Sentinel to Google SecOps, every SIEM pricing model has tradeoffs. Learn which models punish growth and which work at scale.

SIEM pricing has become one of the most complex and consequential line items in enterprise security budgets. The pricing model a vendor uses determines not just what you pay; it determines what telemetry you can afford to collect, how long you can retain it, and whether growth in data volume becomes a business risk.

This guide breaks down the four main SIEM pricing models, compares them at enterprise scale, and explains what a fundamentally different economic model looks like.

The four main SIEM pricing models explained

Enterprise SIEM platforms use one of four pricing approaches, each with distinct cost dynamics.

Per-GB ingestion pricing charges based on the volume of data ingested into the platform daily or monthly. This is the most common model, used by Splunk (in its traditional licensing), Microsoft Sentinel, and others. Cost is directly proportional to data volume. The advantage is simplicity: usage is easy to measure. The disadvantage is that cost scales linearly (or worse) with the growth in enterprise telemetry, and that growth is rarely optional.

Workload or compute-based pricing charges based on the processing resources consumed rather than the raw data volume. Splunk's SVC (Splunk Virtual Compute) model is the most prominent example. The advantage is that organizations that process data efficiently pay less per byte. The disadvantage is that cost is harder to predict, because it depends on query patterns, detection rule complexity, and search behavior, not just volume.

Events-per-second (EPS) pricing charges based on the rate of events processed. IBM QRadar uses this model. Cost is tied to throughput rather than volume, which decouples pricing from raw data size. But the mapping between actual telemetry volume and EPS is not linear; event density varies by source, which makes budget forecasting difficult.

Annual flat-rate or tiered commitment pricing offers a fixed annual cost for a defined capacity tier. Google SecOps (formerly Chronicle) popularized this approach. The advantage is budget predictability within the contracted tier. The disadvantage emerges when volumes exceed the commitment: additional capacity typically costs more per unit, and the tier structure may not align with the organization's actual growth trajectory.

Ingestion-based pricing: how it works and where it breaks

Per-GB ingestion pricing dominates the SIEM market. Its mechanics are simple: every gigabyte of telemetry sent to the platform is counted and billed.

At moderate volumes (500 GB to 2 TB per day), ingestion pricing is manageable for most enterprise budgets. The cost is predictable enough to plan, and the total is within the range of typical security infrastructure spending.

The model breaks at scale. Enterprise telemetry volumes grow 20-30% annually, driven by cloud migration, endpoint expansion, SaaS adoption, and identity proliferation. At 5 TB/day, an organization paying $2-4 per GB ingested is spending $3.6 million to $7.3 million per year on ingestion alone, before storage, compute, or staffing. At 10 TB/day, the math becomes untenable for most budgets.
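The arithmetic behind these figures is easy to verify. A minimal sketch, using the illustrative $2-4/GB range and volumes cited above rather than any vendor's actual quote:

```python
def annual_ingest_cost(gb_per_day: float, rate_per_gb: float) -> float:
    """Annual ingestion cost under a per-GB pricing model."""
    return gb_per_day * rate_per_gb * 365

# 5 TB/day (5,000 GB) at the $2-4/GB range:
low = annual_ingest_cost(5_000, 2.00)   # ≈ $3.65M/year
high = annual_ingest_cost(5_000, 4.00)  # ≈ $7.3M/year

# 20-30% annual telemetry growth compounds the problem:
# volume after two years of 25% growth, priced at a midpoint $3/GB
year_3 = annual_ingest_cost(5_000 * 1.25**2, 3.00)
```

Because cost is a pure linear function of volume, every new data source and every year of telemetry growth flows straight through to the invoice.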

Organizations respond by managing the data, not the threat. They implement pre-ingestion filtering, exclude high-volume sources, reduce telemetry resolution, and build complex routing pipelines to minimize what reaches the SIEM. Each of these actions reduces cost. Each also reduces the completeness of the security record.

The structural problem is not the price per gigabyte; it is that the pricing model makes completeness expensive. And completeness is exactly what detection, investigation, compliance, and AI-driven operations require.

Workload and compute pricing: hidden complexity

Compute-based pricing, where the cost is tied to processing resources rather than raw data volume, was introduced to address the limitations of per-GB models. In theory, it allows organizations to ingest more data if they process it efficiently.

In practice, compute-based pricing introduces a different kind of unpredictability. Cost depends on how data is queried, how many detection rules execute, how frequently dashboards refresh, and how search-heavy the analyst workflow is. Two organizations ingesting the same volume can pay dramatically different amounts based on their usage patterns.

This makes budgeting harder, not easier. An incident investigation that requires extensive historical search can spike compute costs unexpectedly. A new detection rule that runs across a broad data set can increase steady-state costs without any change in data volume. And the relationship between data volume and cost, while less direct than per-GB pricing, is still present: more data means more processing, more indexing, and more storage.

Flat-rate and tiered commitment models: what the fine print says

Flat-rate pricing appears to solve the predictability problem. Pay a fixed annual fee, ingest as much as the tier allows, and budget without volume anxiety.

The reality is more nuanced. Flat-rate models are typically structured as tiered commitments with defined capacity bands. An organization that grows beyond its committed tier faces one of two outcomes: paying for the next tier up (which may represent a significant step increase) or accepting volume limits that reintroduce the filtering and exclusion behaviors of per-GB models.
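The step-increase dynamic can be sketched with a toy tier schedule. The capacities and prices below are invented for illustration and do not reflect any vendor's actual tiers:

```python
# Hypothetical commitment tiers: (capacity in GB/day, annual price).
TIERS = [
    (2_000, 1_500_000),
    (5_000, 3_000_000),
    (10_000, 5_500_000),
]

def committed_annual_price(gb_per_day: float) -> int:
    """Price of the smallest tier that covers the given daily volume."""
    for capacity, price in TIERS:
        if gb_per_day <= capacity:
            return price
    raise ValueError("volume exceeds the largest tier; custom pricing applies")

# Crossing a tier boundary by 4% of volume raises cost by ~83%:
at_boundary = committed_annual_price(4_900)  # 3,000,000
over_boundary = committed_annual_price(5_100)  # 5,500,000
```

The discontinuity is the point: under this model, a small volume overage does not cost a small amount, it triggers the full step to the next tier.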

Flat-rate models also tend to bundle capabilities differently. The base tier may not include extended retention, advanced analytics, or the full feature set. Each capability layer adds cost. The "flat rate" is flat for a defined scope, and scope expansion is where costs grow.

For organizations with stable, predictable telemetry volumes, tiered commitments offer genuine value. For organizations in growth phases (cloud migration, acquisition integration, new business unit onboarding), the rigidity of commitment tiers can be a constraint.

A real-world cost comparison: 5 TB/day across major vendors

At 5 TB/day, a common volume for mid-to-large enterprises, the cost differences between pricing models are significant.

Splunk (per-GB model): At estimated rates of $2-4 per GB ingested, 5 TB/day translates to approximately $3.6 million to $7.3 million per year in ingestion costs alone. Infrastructure, storage, and staffing add another 30-50%.

Microsoft Sentinel (per-GB with commitment tiers): Sentinel's pay-as-you-go rate is approximately $2.76 per GB. Commitment tiers reduce this to roughly $1.50-2.00 per GB at the 500 GB/day level, with further discounts at higher tiers. At 5 TB/day with optimal tier commitment, annual cost is approximately $2.7 million to $3.6 million for ingestion. Azure infrastructure costs are additional.
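Using the rates quoted above, the gap between pay-as-you-go and a commitment tier at 5 TB/day is straightforward to compute. These rates are this article's estimates, not official price-list figures:

```python
GB_PER_DAY = 5_000  # 5 TB/day

payg = GB_PER_DAY * 2.76 * 365         # pay-as-you-go: ≈ $5.04M/year
commit_low = GB_PER_DAY * 1.50 * 365   # best-case commitment rate: ≈ $2.74M/year
commit_high = GB_PER_DAY * 2.00 * 365  # worst-case commitment rate: ≈ $3.65M/year

savings = payg - commit_low            # ≈ $2.3M/year from tier commitment alone
```

The spread shows why tier selection matters as much as vendor selection: at this volume, the difference between the pay-as-you-go rate and the best commitment rate is larger than many teams' entire tooling budget.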

Google SecOps (flat-rate): Pricing is negotiated and not publicly disclosed at scale. Published estimates suggest annual costs of $1 million to $3 million for mid-size deployments, but organizations consistently report that enterprise-scale pricing approaches that of other platforms once retention, support, and feature tiers are included.

Bloo (predictable, volume-independent): Bloo's cost scales with time, not volume. At 5 TB/day, the cost does not increase proportionally with each additional terabyte. Full-fidelity retention in hot, searchable storage is included. There is no separate storage tier, no cold archive surcharge, and no ingestion penalty.

The comparison is most meaningful when measured as total cost of ownership, including the value of the telemetry you can now retain that was previously excluded.

What predictable telemetry economics actually looks like

Predictable pricing for enterprise telemetry means that cost does not scale with data volume. It means that the decision to collect a new data source, increase telemetry fidelity, or extend retention is not a budgetary event. It means that growth in data sources, volume, and retention requirements does not introduce cost uncertainty.

Bloo implements this model architecturally. By deploying inside the customer's cloud and optimizing for storage efficiency rather than ingestion throughput, Bloo's economics are tied to the infrastructure footprint, not to the data volume flowing through it. Organizations pay for the system, not for each byte it processes.

This changes the relationship between security and budget in a fundamental way. Instead of asking "how much telemetry can we afford to collect?" organizations ask "how much telemetry do we need?" The answer, consistently, is "all of it."

How to use this guide in a vendor negotiation

If you are currently negotiating a SIEM contract renewal or evaluating new platforms, this framework helps structure the conversation.

First, calculate your actual daily volume across all telemetry sources, including the ones you currently exclude from SIEM. This is your true volume requirement, not your SIEM-constrained volume.

Second, apply each vendor's pricing model to the true volume, not the filtered volume. This reveals the real cost of full coverage under each model.

Third, factor in retention. Most SIEM pricing covers 30-90 days of hot retention. Add the cost of extending to 12 months, two years, or whatever your compliance requirements mandate.

Fourth, include operational costs: the staff and infrastructure required to manage data pipelines, tune rules within volume constraints, and maintain the SIEM environment.

Finally, compare the total cost against an architecture where Bloo handles collection, retention, and structuring, and the SIEM operates as a detection and alerting layer on top of Bloo's enriched data feed. The comparison is not SIEM vs. Bloo. It is SIEM-as-everything vs. SIEM-on-substrate. The economics differ substantially.
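The five steps above can be rolled into a single total-cost function. Every input value below is a hypothetical placeholder; substitute your own true volumes, negotiated rates, and retention requirements:

```python
def siem_tco(true_gb_per_day: float, rate_per_gb: float,
             hot_days_included: int, required_retention_days: int,
             storage_rate_per_gb_month: float, annual_ops_cost: float) -> float:
    """Annual total cost of ownership under a per-GB SIEM model."""
    # Steps 1-2: apply the per-GB rate to the TRUE volume, not the filtered one.
    ingestion = true_gb_per_day * rate_per_gb * 365
    # Step 3: extended retention beyond the included hot window.
    extra_days = max(0, required_retention_days - hot_days_included)
    retained_gb = true_gb_per_day * extra_days  # steady-state extra data held
    retention = retained_gb * storage_rate_per_gb_month * 12
    # Step 4: pipeline engineering, rule tuning, platform upkeep.
    return ingestion + retention + annual_ops_cost

# Placeholder inputs: 5 TB/day true volume, $2/GB, 90 hot days included,
# 12-month retention requirement, $0.03/GB-month storage, $500k/year operations.
total = siem_tco(5_000, 2.00, 90, 365, 0.03, 500_000)
```

Running the function over each vendor's model, with the same true volume and retention inputs, is what makes the final comparison apples-to-apples.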
