140x cheaper than Datadog: why storing observability data on-prem makes sense

I’ve heard this story many times from production engineers: ‘We use tools like Datadog and New Relic, but to keep costs from skyrocketing, we’re only monitoring our most critical services. We’re storing just 10% of our logs and traces, and only the metrics we consider essential.’

It’s a frustrating situation. Engineers want full visibility across their systems, but cloud storage costs make it impossible to monitor everything. They’re forced to make tough choices, often sacrificing data that could be key to resolving issues quickly and keeping services reliable. And as systems grow more complex, this compromise becomes even harder to manage.

Let’s put aside the debate about vendor pricing and look at this from an engineering perspective. Why do we choose a cloud monitoring platform in the first place? Mostly, it’s to avoid the hassle of managing observability tools and databases ourselves, so we can focus on our core business. But with today’s infrastructure tools, managing observability in-house is much easier than it used to be. So, is relying on a cloud platform still worth it?

If your infrastructure is already in the cloud, you have all the building blocks needed to manage observability yourself—like Kubernetes, scalable storage, and autoscalers. Imagine a complete observability platform that could store all your data within your own setup, giving you full visibility into your applications. (Spoiler: it exists, and it’s called Coroot 😎)

With this setup, let’s dive into the cost comparison. By storing and processing data locally, how much could you actually save? Could managing observability on-premise be the key to slashing your observability costs while gaining unlimited insights into your systems? Let’s break down the numbers to see the impact.

To explore, we’ll start with a small-scale setup. At Coroot, we run a demo application on an EKS cluster with five c6a.xlarge EC2 nodes (each with 4 CPU cores and 8GB of memory). This setup includes 15 services and 5 databases that constantly interact, handling around 300 requests per second. Our applications are fully instrumented with Coroot and OpenTelemetry, capturing everything needed to troubleshoot issues: traces, metrics, logs, and continuous profiling data.

Let’s take a closer look at the data volume collected over 30 days. Below is a breakdown of data types, quantities, ingestion volumes, and storage requirements:

| Data Type | Quantity | Data Ingested (30 days) | Data Stored (30 days) |
|---|---|---|---|
| Metrics | 15,000 metrics at 15-second resolution | 3 GB | 2 GB |
| Logs | 1.6B events | 400 GB | 64 GB (compressed, replicated) |
| Traces | 7B spans | 2 TB | 900 GB (compressed, replicated) |
| Profiles | - | 500 GB | 100 GB (compressed, replicated) |

Now that we know the data volume, let’s calculate what it would cost to store it all in Datadog. With Datadog, observability costs go beyond service fees: we also need to account for the AWS egress charges incurred by shipping the data out of our cloud.

| Cost Component | Quantity | Datadog Cost (monthly) | AWS Egress Cost (monthly) |
|---|---|---|---|
| Infrastructure Monitoring | 5 hosts | $75 | - |
| Universal Service Monitoring | 5 hosts | $65 | ? |
| Log Management | 1.6B events, 400 GB | $4,040 ($40 ingestion + $4,000 indexing) | $36 |
| APM (Traces) | 7B spans, 2 TB | $17,767 ($155 APM hosts + $125 span ingestion + $17,487 span indexing for 30-day retention) | $180 |
| Continuous Profiler | 5 hosts, 500 GB | $95 | $45 |
| Total | | $22,042 | $261 |
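
To see where the money goes, the table’s biggest line items can be back-derived into per-unit prices. The rates below are inferred from the table’s own numbers, not quoted from Datadog’s price list, so treat them as approximations:

```python
# Back-derive the per-unit prices implied by the cost table above.
log_ingested_gb = 400        # GB of logs ingested over 30 days
log_ingestion_cost = 40      # $, "Ingestion" line item
log_events_m = 1_600         # 1.6B events, in millions
log_indexing_cost = 4_000    # $, "Indexing" line item (30-day retention)
spans_m = 7_000              # 7B spans, in millions
span_indexing_cost = 17_487  # $, "Span indexing" line item

print(f"log ingestion: ${log_ingestion_cost / log_ingested_gb:.2f} per GB")    # $0.10/GB
print(f"log indexing:  ${log_indexing_cost / log_events_m:.2f} per M events")  # $2.50/M
print(f"span indexing: ${span_indexing_cost / spans_m:.2f} per M spans")       # $2.50/M
```

In other words, indexing dominates: at these implied rates, every billion indexed events or spans costs on the order of $2,500 per month.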

In total, you’d be paying an extra $22,303 per month just for observability, while the base cloud infrastructure costs only $680 per month. It’s ridiculous to spend so much on monitoring a small-scale app!

Datadog often comes up in discussions about high observability costs, but it’s not alone. With any cloud-based observability service, the price can quickly become substantial. Let’s calculate the cost for the same setup using Grafana Cloud to see how it compares.

| Cost Component | Quantity | Grafana Cloud Cost (monthly) | AWS Egress Cost (monthly) |
|---|---|---|---|
| Metrics | 15,000 active series | $120 | $0.30 |
| Logs | 1.6B events, 400 GB | $200 | $36 |
| Traces | 7B spans, 2 TB | $1,000 | $180 |
| Profiles | 500 GB | $250 | $45 |
| User access | 3 active users | $24 | - |
| Total | | $1,594 | $261 |

For our small setup, using Grafana Cloud would cost about $1,855 per month, including egress fees. While that’s far cheaper than Datadog, it’s still nearly three times the cost of the underlying infrastructure.

So, how can we drastically reduce these observability costs? The first step is to avoid transferring data outside our infrastructure, which eliminates expensive egress fees. The second step is to apply data compression: logs, traces, and profiles can all be compressed significantly, reducing storage needs and costs.

From our samples, we’ve seen impressive compression ratios:

  • Logs: 12x compression
  • Traces: 5.5x compression
  • Profiles: 12x compression
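
As a rough sketch, on-disk size can be estimated from ingested volume, compression ratio, and replication. The replication factor of 2 is an assumption on my part (the figures above only say “compressed, replicated”), and real on-disk numbers also depend on codec settings and data shape, so expect some deviation from the table:

```python
def stored_gb(ingested_gb: float, compression_ratio: float, replicas: int = 2) -> float:
    """Estimate on-disk size after compression and replication."""
    return ingested_gb / compression_ratio * replicas

print(f"logs:     ~{stored_gb(400, 12):.0f} GB")    # table: 64 GB
print(f"traces:   ~{stored_gb(2000, 5.5):.0f} GB")  # table: 900 GB
print(f"profiles: ~{stored_gb(500, 12):.0f} GB")    # table: 100 GB
```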


Now, let’s look at how the costs change when we store all observability data locally within our own infrastructure. To do this, we need an additional 1.1 TB of storage (including replication) and some compute resources to run the monitoring components:

  • Prometheus: 0.2 CPU core, 200 MB of memory
  • ClickHouse (2 shards, 2 replicas): 0.8 CPU core, 4 GB of memory
  • Coroot: 0.2 CPU core, 1 GB of memory


With Coroot, there are two options for on-premise observability:

  1. Coroot Community Edition (Free): Includes all core observability features, ideal for small teams.
  2. Coroot Enterprise Edition ($1 per CPU core per month): Adds advanced features on top of the core set. For a setup with 5 nodes (4 CPU cores each), this totals $20 per month.


Here’s the cost breakdown for using Coroot on AWS:

| Cost Component | Quantity | AWS Cost (monthly) | Coroot Cost (monthly) |
|---|---|---|---|
| Storage (EBS gp3) | 1.1 TB | $88 | - |
| Compute | 2 vCPUs and 8 GB of RAM, distributed across multiple nodes for high availability | $54 (equivalent to 1 t3a.large instance) | - |
| Coroot Enterprise Subscription | - | - | $0 for Community Edition, $20 for Enterprise Edition |
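
These AWS figures can be reproduced from public list prices. A sketch assuming us-east-1 on-demand pricing at the time of writing (EBS gp3 at $0.08/GB-month, t3a.large at $0.0752/hour); prices vary by region and change over time:

```python
storage_gb = 1100                        # 1.1 TB, replication included
storage_cost = storage_gb * 0.08         # EBS gp3 at $0.08/GB-month -> $88

hours_per_month = 730
compute_cost = 0.0752 * hours_per_month  # t3a.large (2 vCPU, 8 GB) -> ~$55

ee_subscription = 5 * 4 * 1              # 5 nodes x 4 cores x $1/core -> $20

total = storage_cost + compute_cost + ee_subscription  # ~$163/month
```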

By choosing Coroot’s on-premise setup, your total monthly cost is $142 with the Community Edition or $162 with the Enterprise Edition, compared to $22,303 with Datadog and $1,855 with Grafana Cloud.

Achieving a 140x cost reduction compared to Datadog (and 11x compared to Grafana Cloud) might seem like magic or a calculation error, but it’s not. We’re still storing the same amount of data and using similar resources for processing. The difference comes from avoiding the extra costs cloud-based observability platforms impose, especially the high storage premiums, data egress fees, and per-metric charges. By managing observability within our own infrastructure, we eliminate these expenses and benefit from predictable, straightforward pricing.
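
The headline multipliers follow directly from the monthly totals:

```python
datadog = 22_042 + 261  # service fees + AWS egress
grafana = 1_594 + 261
coroot = 88 + 54 + 20   # storage + compute + Enterprise subscription

print(f"vs Datadog: {datadog / coroot:.0f}x")        # ~138x, roughly the 140x headline
print(f"vs Grafana Cloud: {grafana / coroot:.0f}x")  # ~11x
```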

I was planning to explore observability costs for larger environments, but seeing $22k per month for a small app was shocking enough. I don’t even want to imagine the costs at a bigger scale. As engineers, you can see the difference from this example alone.

At Coroot, we’re rethinking observability from both a usability and cost perspective. Our goal is to help users find root causes instantly. However, the high costs of storing and processing telemetry data make this challenging. For accurate insights, we need to analyze large amounts of detailed data, but if storage costs are too high, we’re forced to compromise on visibility.

That’s why we’ve come to believe that storing data locally is the most effective way to cut observability costs. By keeping data in-house, we avoid the high premiums of cloud storage, making full visibility affordable and sustainable, without sacrificing data quality or depth.

In line with this approach, we’ve also decided not to charge based on the amount of ingested data for on-prem solutions. Instead, our pricing depends only on the infrastructure size, specifically the number of CPU cores. This lets teams gather all the data they need without worrying about fluctuating costs when data volumes increase.

Ready to explore affordable, full-featured observability? Try Coroot Community Edition for free, or start a free trial of Coroot Enterprise Edition for advanced capabilities.