Using OpenEBS as TSDB for Prometheus

    Prometheus is the most widely used application for scraping cloud native application metrics. Prometheus and OpenEBS together provide a complete open source stack for monitoring. In this solution, OpenEBS is used as the Prometheus TSDB, where all the metrics are stored permanently on the local Kubernetes cluster.

    When using OpenEBS as the TSDB, the advantages are:

    • All the data is stored locally and managed natively in Kubernetes

    • No need for externally managed Prometheus storage

    • Start with a small volume and expand the size of the TSDB on the fly as needed

    • Back up the Prometheus metrics periodically to S3 or any other object storage, so that the same metrics can be restored to the same or another Kubernetes cluster

    Deployment model

    As shown above, OpenEBS volumes need to be configured with three replicas for high availability. This configuration works well when the nodes (and hence the cStor pools) are deployed across Kubernetes zones.

    1. Install OpenEBS : If OpenEBS is not already installed on the Kubernetes cluster, start by installing OpenEBS on all or some of the cluster nodes. If OpenEBS is already installed, go to step 2.
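      One common way to install OpenEBS is via the operator manifest, sketched below. This assumes `kubectl` access with cluster-admin privileges; see the OpenEBS documentation for Helm-based installs and node-selection options.

      ```shell
      # Install the OpenEBS control plane cluster-wide
      kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml

      # Verify that the OpenEBS control-plane pods come up in the openebs namespace
      kubectl get pods -n openebs
      ```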

    2. Create Storage Class :

      StorageClass is the interface through which most of the OpenEBS storage policies are defined. See Prometheus Storage Class section below.

    3. Configure PVC : Prometheus needs only one volume to store the data. See PVC example spec below.

    4. Launch and test Prometheus:

      Verify that Prometheus is running. For more information on configuring additional services to be monitored, see the Prometheus documentation.
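      The fragment below sketches how the Prometheus Deployment consumes the OpenEBS-backed PVC; the claim name and mount path are illustrative assumptions, not taken from the original spec files.

      ```yaml
      # Fragment of a Prometheus Deployment pod template (names are illustrative).
      # The PVC must exist in the same namespace before the pod starts.
              volumeMounts:
              - name: prometheus-storage
                mountPath: /prometheus     # default Prometheus TSDB data directory
            volumes:
            - name: prometheus-storage
              persistentVolumeClaim:
                claimName: prometheus-storage-claim
      ```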

    Reference

    A live deployment of Prometheus using OpenEBS volumes as highly available TSDB storage can be seen at www.openebs.ci

    Deployment YAML spec files for Prometheus and OpenEBS resources can be found in the Sample YAML specs section below.

    OpenEBS-CI dashboard of Prometheus

    Monitor OpenEBS Volume size

    Increasing the size of a cStor volume is not yet seamless (refer to the roadmap item). Hence, it is recommended to allocate sufficient storage during the initial configuration.

    Monitor cStor Pool size

    In most cases, the cStor pool may not be dedicated to Prometheus alone. It is recommended to watch the pool capacity and add more disks to the pool before it reaches the 80% threshold.

    Maintain volume replica quorum during node upgrades

    cStor volume replicas need to be in quorum when the Prometheus application is deployed and the cStor volume is configured with three replicas. Node reboots may be common during a Kubernetes upgrade. Maintain volume replica quorum in such instances. See here for more details.


    Sample YAML specs

    Sample cStor Pool spec
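      A sketch of a cStor pool claim is shown below. The pool name is an assumption, and the blockDeviceList entries are placeholders that must be replaced with the block devices actually discovered by NDM on your nodes (three devices on three nodes, to match the three-replica setup above).

      ```yaml
      # Hypothetical StoragePoolClaim; device names below are placeholders.
      apiVersion: openebs.io/v1alpha1
      kind: StoragePoolClaim
      metadata:
        name: cstor-disk-pool
      spec:
        name: cstor-disk-pool
        type: disk
        poolSpec:
          poolType: striped
        blockDevices:
          blockDeviceList:
          - blockdevice-example-node1
          - blockdevice-example-node2
          - blockdevice-example-node3
      ```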

    Prometheus StorageClass
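      A minimal StorageClass sketch for the cStor case is shown below; the class name is an assumption, and the StoragePoolClaim value must match the name of your cStor pool claim. The ReplicaCount of 3 reflects the high-availability configuration described above.

      ```yaml
      # Hypothetical StorageClass for Prometheus on cStor.
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: openebs-prometheus-sc
        annotations:
          openebs.io/cas-type: cstor
          cas.openebs.io/config: |
            - name: StoragePoolClaim
              value: "cstor-disk-pool"   # must match your pool claim name
            - name: ReplicaCount
              value: "3"                 # three replicas for high availability
      provisioner: openebs.io/provisioner-iscsi
      ```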

    PVC spec for Prometheus
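      Prometheus needs only one volume, so a single PVC suffices. The claim name, storage class name, and requested size below are illustrative assumptions; size the request generously, since cStor volume expansion is not yet seamless (see the note above).

      ```yaml
      # Hypothetical PVC for the Prometheus TSDB.
      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: prometheus-storage-claim
      spec:
        storageClassName: openebs-prometheus-sc   # the cStor StorageClass
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi   # allocate generously up front
      ```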

    See the sample spec files for Grafana using cStor here.