How-To: Set up Fluentd, Elasticsearch and Kibana in Kubernetes

    Install Elasticsearch and Kibana

    1. Create a Kubernetes namespace for monitoring tools
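
      The dapr-monitoring namespace is the one referenced by every command below; it can be created with:

      kubectl create namespace dapr-monitoring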

    2. Add the Helm repo for Elasticsearch

      helm repo add elastic https://helm.elastic.co
      helm repo update
    3. Install Elasticsearch using Helm

      By default, the chart creates 3 replicas, which must be scheduled on different nodes. If your cluster has fewer than 3 nodes, specify a smaller number of replicas. For example, the following sets the number of replicas to 1:

      helm install elasticsearch elastic/elasticsearch -n dapr-monitoring --set replicas=1

      Otherwise:

      helm install elasticsearch elastic/elasticsearch -n dapr-monitoring

      If you are using minikube or simply want to disable persistent volumes for development purposes, you can do so by using the following command:
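
      For example, assuming the chart exposes a persistence.enabled value (the official Elastic chart does), the following is a minimal sketch:

      helm install elasticsearch elastic/elasticsearch -n dapr-monitoring --set persistence.enabled=false,replicas=1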

    4. Install Kibana

      helm install kibana elastic/kibana -n dapr-monitoring
    5. Ensure that Elasticsearch and Kibana are running in your Kubernetes cluster

      $ kubectl get pods -n dapr-monitoring
      NAME                            READY   STATUS    RESTARTS   AGE
      elasticsearch-master-0          1/1     Running   0          6m58s
      kibana-kibana-95bc54b89-zqdrk   1/1     Running   0          4m21s
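
      If you prefer a scripted readiness check, kubectl wait can block until a pod reports Ready (pod name taken from the output above):

      kubectl wait --for=condition=ready pod/elasticsearch-master-0 -n dapr-monitoring --timeout=300s
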
    Install Fluentd

    1. Install the ConfigMap and Fluentd as a DaemonSet

      Download these config files: fluentd-config-map.yaml and fluentd-dapr-with-rbac.yaml.

      Apply the configurations to your cluster:

      kubectl apply -f ./fluentd-config-map.yaml
      kubectl apply -f ./fluentd-dapr-with-rbac.yaml
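
      The namespace the DaemonSet lands in depends on the downloaded manifest, so a namespace-agnostic check confirms it was created:

      kubectl get daemonset --all-namespaces | grep fluentd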

    Install Dapr with JSON-formatted logs

    1. Install Dapr with JSON-formatted logs enabled

      helm repo add dapr https://dapr.github.io/helm-charts/
      helm repo update
      helm install dapr dapr/dapr --namespace dapr-system --set global.logAsJson=true
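
      To confirm the Dapr control plane came up in the dapr-system namespace used above, list its pods:

      kubectl get pods -n dapr-system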
    2. Enable JSON-formatted logs in the Dapr sidecar

      Add the dapr.io/log-as-json: "true" annotation to your deployment YAML. For example:

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: pythonapp
        namespace: default
        labels:
          app: python
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: python
        template:
          metadata:
            labels:
              app: python
            annotations:
              dapr.io/enabled: "true"
              dapr.io/app-id: "pythonapp"
              dapr.io/log-as-json: "true"
      ...
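
      Assuming the manifest above is saved as pythonapp.yaml (a placeholder filename), apply it and confirm the annotations are present on the pod template:

      kubectl apply -f ./pythonapp.yaml
      kubectl get deployment pythonapp -o jsonpath='{.spec.template.metadata.annotations}'
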
    Search logs

    1. Port-forward from localhost to svc/kibana-kibana

      $ kubectl port-forward svc/kibana-kibana 5601 -n dapr-monitoring
      Forwarding from 127.0.0.1:5601 -> 5601
      Forwarding from [::1]:5601 -> 5601
      Handling connection for 5601
      Handling connection for 5601
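
      While the port-forward is running, Kibana's status API can be queried from a second terminal as a quick health check:

      curl http://localhost:5601/api/status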
    2. Browse to http://localhost:5601

    3. Expand the drop-down menu and click Management → Stack Management

    4. On the Stack Management page, select Data → Index Management and wait until dapr-* is indexed.

      Index Management view on Kibana Stack Management page

    5. Once dapr-* is indexed, click on Kibana → Index Patterns and then the Create index pattern button.

    6. Define a new index pattern by entering dapr* in the Index pattern name field, then click the Next step button to continue.

      Kibana define an index pattern page

    7. Configure the primary time field to use with the new index pattern by selecting the @timestamp option from the Time field drop-down. Click the Create index pattern button to complete creation of the index pattern.

    8. The newly created index pattern should be shown. Confirm that the fields of interest such as scope, type, app_id, level, etc. are being indexed by using the search box in the Fields tab.

      View of created Kibana index pattern

    9. To explore the indexed data, expand the drop-down menu and click Analytics → Discover.

    10. In the search box, type in a query string such as scope:* and click the Refresh button to view the results.

      Using the search box in the Kibana Analytics Discover page
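
      A couple of additional example query strings (Kibana query syntax), using fields from the index pattern step; the app_id value comes from the sample pythonapp deployment above:

      app_id:pythonapp
      app_id:pythonapp AND level:error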
