TSL - Time Series Language

TSL: a developer-friendly Time Series query language for all our metrics

At the Metrics team, we have been working on time series for several years. In our experience, the data analytics capabilities of a Time Series Database (TSDB) platform are a key factor in creating value from your metrics. And these analytics capabilities are mostly defined by the query languages it supports.

TSL stands for Time Series Language. In a few words, TSL is an abstraction layer, in the form of an HTTP proxy, that generates queries for different TSDB backends. It currently supports Warp 10's WarpScript and Prometheus' PromQL query languages, but we aim to extend the support to other major TSDBs.
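The core idea can be sketched as follows: a single abstract query description, rendered into each backend's native language. This is a hypothetical illustration of the pattern, not TSL's actual API; the function and parameter names are invented for the example.

```python
# Hypothetical sketch of the abstraction behind TSL: one query
# description (metric, labels, time window) rendered into two
# backend query languages. Not TSL's real API.

def to_promql(metric, labels, minutes):
    # PromQL range selector: metric{label="value"}[5m]
    selector = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return f"{metric}{{{selector}}}[{minutes}m]"

def to_warpscript(metric, labels, minutes):
    # WarpScript FETCH: [ token class labels end span ] FETCH
    label_map = " ".join(f"'{k}' '{v}'" for k, v in sorted(labels.items()))
    return f"[ $token '{metric}' {{ {label_map} }} NOW {minutes} m ] FETCH"

# The same abstract query, two backend renderings:
print(to_promql("http_requests_total", {"host": "web01"}, 5))
print(to_warpscript("http.requests", {"host": "web01"}, 5))
```

A proxy built on this pattern can expose one query language to users while dispatching the generated query to whichever TSDB actually stores the data.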

To better understand why we created TSL, we will review some of the TSDB query languages supported on the OVH Metrics Data Platform. While implementing them, we learnt the good, the bad and the ugly of each one. In the end, we decided to build TSL to simplify querying on our platform, before open-sourcing it for use with any TSDB solution.

Why did we decide to invest some of our time in such a proxy? Let me tell you the story of the OVH metrics protocol!

Upgrading vSphere

How we’ve updated 850 vCenters in 4 weeks

Handling release management on enterprise software isn’t an easy job: updating infrastructures, coping with the fear of not being supported by the software editor, upgrading licenses to be compatible with new versions, and taking all precautions to rollback if something isn’t working as expected…

With OVH Private Cloud, we take this responsibility off your shoulders. We manage this time-consuming and stressful aspect, allowing you to concentrate on your business and your production.

But that doesn’t mean it isn’t a challenge for us, either.

OVH & Apache Flink

Handling OVH’s alerts with Apache Flink

OVH relies extensively on metrics to effectively monitor its entire stack. Whether they are low-level or business-centric, they allow teams to gain insight into how our services are operating on a daily basis. The need to store millions of datapoints per second led us to create a dedicated team to build and operate a product capable of handling that load: the Metrics Data Platform. By relying on Apache HBase, Apache Kafka and Warp 10, we succeeded in creating a fully distributed platform that handles all our metrics… and yours!

After building the platform to deal with all those metrics, our next challenge was to build one of the most needed features for Metrics: alerting.