Datadog is the new Oracle

When I was working in the database space in the early and mid 2000s, Oracle was the "big bad wolf": it kept a very tight grip on its customers and did not hesitate to squeeze more juice out of them at every opportunity. This, of course, was bad for innovation. As we got more and more data to store and process, doing so was in many cases prohibitively costly, thanks to Oracle.

When there is pain, alternatives tend to arise. MySQL and PostgreSQL were born and went on to underpin "Web 2.0", allowing companies like Facebook and Twitter to store and process data far more cost-effectively than was ever possible with proprietary solutions.

As I look at the Observability space, I'm having déjà vu about the relationship the industry has with Datadog. There is a fair number of scary stories on the internet about obscene observability bills.

These days Observability is well understood as a mission-critical function: if you want to keep your application up, performant, and secure, you need great observability. That is a good thing. Yet for many, this level of "good observability" remains unreachable, because it is just too damn expensive.

Over the years I have spoken to countless folks who have to limit their observability to fit their bill, either by not monitoring all nodes and services or by drastically filtering the logs and traces they store, rather than getting the observability they actually want.

The good news, just as with databases a couple of decades ago, is that a true Open Source revolution is coming to the Observability space. There are already many Open Source observability building blocks, such as Prometheus, Grafana, and OpenTelemetry (OTel), yet complete observability platforms such as Datadog, Dynatrace, or New Relic remain mostly proprietary and very expensive.

Yet things are changing: over the last few years we have seen a number of Open Source (Open Core) observability startups striving to become Open Source alternatives to Datadog. SigNoz, Netdata, and of course Coroot are among the companies working to make it happen.

Over the next 5 years, I expect Open Source observability platforms to become much more mature and take a significant portion of the market. Even more importantly, they will empower even more developers to have the observability they want and deserve, without breaking the bank!

So how does this host-name-to-IP resolution process work? There are basically two ways: you can provide a static name-to-address mapping, for example through the /etc/hosts file, which is very fast and reliable but of course does not scale, or you can rely on DNS (the Domain Name System) to resolve host names into addresses.
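As a minimal sketch (not Coroot's code), here is what resolution looks like from an application's point of view, using Python's standard library. The system resolver behind `getaddrinfo` consults /etc/hosts first and then falls back to DNS; on Linux the order is governed by /etc/nsswitch.conf.

```python
import socket

def resolve(hostname):
    """Resolve a host name to its IP addresses via the system resolver,
    which checks static mappings (/etc/hosts) before querying DNS."""
    infos = socket.getaddrinfo(hostname, None, proto=socket.IPPROTO_TCP)
    # Each entry is (family, type, proto, canonname, sockaddr);
    # the address string is the first element of sockaddr.
    return sorted({info[4][0] for info in infos})

print(resolve("localhost"))
```

Because "localhost" is a static entry in /etc/hosts on virtually every system, this call succeeds quickly even with no DNS server reachable, which is exactly the trade-off described above: static mappings are fast and reliable but do not scale beyond a handful of hosts.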

In environments that rely on DNS for name resolution for some of their services, which is the majority of environments, DNS is mission critical: if it goes down or malfunctions, many things break.

DNS latency adds latency to many types of user-facing requests, such as HTTP, but it can easily remain unseen when only the latency of the actual HTTP request is measured.
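To see why this happens, consider a rough sketch (an illustration, not how any particular observability platform measures it) that times the resolution step on its own. An HTTP client library typically resolves the name, connects, and then starts the request timer, so the resolution cost below would never show up in the measured request latency.

```python
import socket
import time

def dns_latency(hostname, port=443):
    """Time only the name-resolution step for a host.
    The HTTP request itself would add TCP connect, TLS handshake,
    and server processing time on top of this."""
    start = time.perf_counter()
    socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    return time.perf_counter() - start

# "localhost" resolves from /etc/hosts, so this should be near-instant;
# a cold lookup of a real domain over DNS can take tens of milliseconds.
print(f"resolution took {dns_latency('localhost') * 1000:.3f} ms")
```

Note that `getaddrinfo` may hit a local cache (nscd, systemd-resolved, or the resolver's own cache), so repeated calls understate the latency a first-time client would experience.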

Even though DNS is mission critical, it is often not covered very well by observability platforms, which tend to focus on interactions after a connection has been established. Coroot was no exception… that is, until now!
