Example: Add logging and metrics to the PHP / Redis Guestbook example

    Objectives

    • Start up the PHP Guestbook with Redis.
    • Install kube-state-metrics.
    • Create a Kubernetes Secret.
    • Deploy the Beats.
    • View dashboards of your logs and metrics.

    Before you begin

    You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using minikube, or you can use one of the Kubernetes playgrounds.

    To check the version, enter kubectl version.

    Additionally you need:

    • A running instance of the PHP Guestbook with Redis
    • Elasticsearch and Kibana
    • Filebeat
    • Metricbeat
    • Packetbeat

    Start up the PHP Guestbook with Redis

    This tutorial builds on the PHP Guestbook with Redis tutorial. If you have the guestbook application running, then you can monitor that. If you do not have it running, then follow the instructions to deploy the guestbook, and do not perform the Cleanup steps. Come back to this page when you have the guestbook running.

    Add a Cluster role binding

    Create a cluster level role binding so that you can deploy kube-state-metrics and the Beats at the cluster level (in kube-system).
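    For example (a sketch; substitute the user name your cloud provider associates with your account):

      kubectl create clusterrolebinding cluster-admin-binding \
       --clusterrole=cluster-admin \
       --user=<your email associated with the Kubernetes provider account>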

    Install kube-state-metrics

    Kubernetes kube-state-metrics is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects. Metricbeat reports these metrics. Add kube-state-metrics to the Kubernetes cluster that the guestbook is running in.

    Check out kube-state-metrics:

      git clone https://github.com/kubernetes/kube-state-metrics.git kube-state-metrics
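    Then deploy kube-state-metrics from the manifests in the cloned repository (the path shown, examples/standard, matches recent releases; older releases shipped the manifests in a kubernetes/ directory instead):

      kubectl apply -f kube-state-metrics/examples/standard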
    Check that kube-state-metrics is running:

      kubectl get pods --namespace=kube-system -l app.kubernetes.io/name=kube-state-metrics

    Output:

      NAME                                 READY   STATUS    RESTARTS   AGE
      kube-state-metrics-89d656bf8-vdthm   1/1     Running   0          21s

    Clone the Elastic examples GitHub repo

      git clone https://github.com/elastic/examples.git

    The rest of the commands reference files in the examples/beats-k8s-send-anywhere directory, so change directory there:

      cd examples/beats-k8s-send-anywhere

    Create a Kubernetes Secret

    A Kubernetes Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a Pod specification or in an image; putting it in a Secret object allows for more control over how it is used, and reduces the risk of accidental exposure.

    Self managed

    Switch to the Managed service tab if you are connecting to Elasticsearch Service in Elastic Cloud.

    Set the credentials

    There are four files to edit to create a k8s secret when you are connecting to self managed Elasticsearch and Kibana (self managed is effectively anything other than the managed Elasticsearch Service in Elastic Cloud). The files are:

    1. ELASTICSEARCH_HOSTS
    2. ELASTICSEARCH_PASSWORD
    3. ELASTICSEARCH_USERNAME
    4. KIBANA_HOST

    Set these with the information for your Elasticsearch cluster and your Kibana host. Here are some examples:

    ELASTICSEARCH_HOSTS

    1. A nodeGroup from the Elastic Elasticsearch Helm Chart:

      ["http://elasticsearch-master.default.svc.cluster.local:9200"]

    2. A single Elasticsearch node running on a Mac where your Beats are running in Docker for Mac:

      ["http://host.docker.internal:9200"]

    3. Two Elasticsearch nodes running in VMs or on physical hardware:

      ["http://host1.example.com:9200", "http://host2.example.com:9200"]

    Edit ELASTICSEARCH_HOSTS:

      vi ELASTICSEARCH_HOSTS

    ELASTICSEARCH_PASSWORD

    Just the password; no whitespace, quotes, < or >:

      <yoursecretpassword>

    Edit ELASTICSEARCH_PASSWORD:

      vi ELASTICSEARCH_PASSWORD

    ELASTICSEARCH_USERNAME

    Just the username; no whitespace, quotes, < or >:

      <your ingest username for Elasticsearch>

    Edit ELASTICSEARCH_USERNAME:

      vi ELASTICSEARCH_USERNAME

    KIBANA_HOST

    1. The Kibana instance from the Elastic Kibana Helm Chart. The subdomain default refers to the default namespace. If you have deployed the Helm Chart using a different namespace, then your subdomain will be different:

      "kibana-kibana.default.svc.cluster.local:5601"

    2. A Kibana instance running on a Mac where your Beats are running in Docker for Mac:

      "host.docker.internal:5601"

    3. A Kibana instance running in a VM or on physical hardware:

      "host1.example.com:5601"

    Edit KIBANA_HOST:

      vi KIBANA_HOST

    Create a Kubernetes Secret

    This command creates a Secret in the Kubernetes system level namespace (kube-system) based on the files you just edited:

      kubectl create secret generic dynamic-logging \
        --from-file=./ELASTICSEARCH_HOSTS \
        --from-file=./ELASTICSEARCH_PASSWORD \
        --from-file=./ELASTICSEARCH_USERNAME \
        --from-file=./KIBANA_HOST \
        --namespace=kube-system
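    You can verify that the Secret exists and contains the four keys (kubectl describe prints the key names and sizes, not the values):

      kubectl describe secret dynamic-logging -n kube-system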

    Managed service

    This tab is for Elasticsearch Service in Elastic Cloud only. If you have already created a Secret for a self managed Elasticsearch and Kibana deployment, continue with Deploy the Beats.

    There are two files to edit to create a Kubernetes Secret when you are connecting to the managed Elasticsearch Service in Elastic Cloud. The files are:

    1. ELASTIC_CLOUD_AUTH
    2. ELASTIC_CLOUD_ID

    Set these with the information provided to you from the Elasticsearch Service console when you created the deployment. Here are some examples:

    ELASTIC_CLOUD_ID

      devk8s:ABC123def456ghi789jkl123mno456pqr789stu123vwx456yza789bcd012efg345hijj678klm901nop345zEwOTJjMTc5YWQ0YzQ5OThlN2U5MjAwYTg4NTIzZQ==

    ELASTIC_CLOUD_AUTH

    Just the username, a colon (:), and the password, no whitespace or quotes:

      elastic:VFxJJf9Tjwer90wnfTghsn8w

    Edit the required files:

      vi ELASTIC_CLOUD_ID
      vi ELASTIC_CLOUD_AUTH

    Create a Kubernetes Secret

    This command creates a Secret in the Kubernetes system level namespace (kube-system) based on the files you just edited:

      kubectl create secret generic dynamic-logging \
        --from-file=./ELASTIC_CLOUD_ID \
        --from-file=./ELASTIC_CLOUD_AUTH \
        --namespace=kube-system

    Deploy the Beats

    Manifest files are provided for each Beat. These manifest files use the Secret created earlier to configure the Beats to connect to your Elasticsearch and Kibana servers.

    About Filebeat

    Filebeat collects logs from the Kubernetes nodes and from the containers running in each Pod on those nodes. Filebeat is deployed as a Kubernetes DaemonSet. Filebeat can autodiscover applications running in your Kubernetes cluster. At startup Filebeat scans existing containers and launches the proper configurations for them, then it watches for new start/stop events. Here is the autodiscover configuration that enables Filebeat to locate and parse Redis logs from the Redis containers deployed with the guestbook. This configuration is in the file filebeat-kubernetes.yaml:

      - condition.contains:
          kubernetes.labels.app: redis
        config:
          - module: redis
            log:
              input:
                type: docker
                containers.ids:
                  - ${data.kubernetes.container.id}
            slowlog:
              enabled: true
              var.hosts: ["${data.host}:${data.port}"]

    This configures Filebeat to apply the Filebeat module redis when a container is detected with a label app containing the string redis. The redis module has the ability to collect the log stream from the container by using the docker input type (reading the file on the Kubernetes node associated with the STDOUT stream from this Redis container). Additionally, the module has the ability to collect Redis slowlog entries by connecting to the proper pod host and port, which is provided in the container metadata.

    Deploy Filebeat:

      kubectl create -f filebeat-kubernetes.yaml

    Verify

      kubectl get pods -n kube-system -l k8s-app=filebeat-dynamic

    About Metricbeat

    Metricbeat autodiscover is configured in the same way as Filebeat. Here is the Metricbeat autodiscover configuration for the Redis containers. This configuration is in the file metricbeat-kubernetes.yaml:
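    The relevant section looks like this (a sketch matching the description below; the collection period shown is illustrative):

      # Apply the Metricbeat redis module to containers with the label tier: backend
      - condition.equals:
          kubernetes.labels.tier: backend
        config:
          - module: redis
            metricsets: ["info", "keyspace"]
            period: 10s   # illustrative collection interval
            hosts: ["${data.host}:${data.port}"]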

    This configures Metricbeat to apply the Metricbeat module redis when a container is detected with a label tier equal to the string backend. The module has the ability to collect the info and keyspace metrics from the container by connecting to the proper pod host and port, which is provided in the container metadata.

    Deploy Metricbeat

      kubectl create -f metricbeat-kubernetes.yaml

    Verify

      kubectl get pods -n kube-system -l k8s-app=metricbeat

    About Packetbeat

    Packetbeat configuration is different from Filebeat and Metricbeat. Rather than specifying patterns to match against container labels, the configuration is based on the protocols and port numbers involved. Shown below is a subset of the port numbers.

    Note: If you are running a service on a non-standard port, add that port number to the appropriate type in the Packetbeat configuration (in packetbeat-kubernetes.yaml) and delete/create the Packetbeat DaemonSet.

      packetbeat.interfaces.device: any
      packetbeat.protocols:
      - type: dns
        ports: [53]
        include_authorities: true
        include_additionals: true
      - type: http
        ports: [80, 8000, 8080, 9200]
      - type: mysql
        ports: [3306]
      - type: redis
        ports: [6379]
      packetbeat.flows:
        timeout: 30s
        period: 10s

    Deploy Packetbeat

      kubectl create -f packetbeat-kubernetes.yaml

    Verify

      kubectl get pods -n kube-system -l k8s-app=packetbeat-dynamic

    View in Kibana

    Open Kibana in your browser and then open the Dashboard application. In the search bar type Kubernetes and click on the Metricbeat dashboard for Kubernetes. This dashboard reports on the state of your Nodes, deployments, etc.

    Search for Packetbeat on the Dashboard page, and view the Packetbeat overview.

    Similarly, view dashboards for Apache and Redis. You will see dashboards for logs and metrics for each. The Apache Metricbeat dashboard will be blank. Look at the Apache Filebeat dashboard and scroll to the bottom to view the Apache error logs. This will tell you why there are no metrics available for Apache.

    To enable Metricbeat to retrieve the Apache metrics, enable server-status by adding a ConfigMap including a mod-status configuration file and re-deploy the guestbook.
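    A minimal sketch of such a ConfigMap, assuming an Apache httpd based frontend image and that the file gets mounted into Apache's configuration directory (the name, namespace, and access rules here are illustrative):

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: apache-server-status   # illustrative name
        namespace: default
      data:
        status.conf: |
          # Enable mod_status so the Metricbeat apache module can scrape metrics
          <IfModule mod_status.c>
            ExtendedStatus On
            <Location /server-status>
              SetHandler server-status
              # Restrict access to local and in-cluster clients (adjust the CIDR)
              Require local
              Require ip 10.0.0.0/8
            </Location>
          </IfModule>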

    Scale your Deployments and see new pods being monitored

    List the existing Deployments:

      kubectl get deployments

    The output:

      NAME           READY   UP-TO-DATE   AVAILABLE   AGE
      frontend       3/3     3            3           3h27m
      redis-master   1/1     1            1           3h27m
      redis-slave    2/2     2            2           3h27m

    Scale the frontend down to two pods:

      kubectl scale --replicas=2 deployment/frontend

    The output:

      deployment.extensions/frontend scaled

    Scale the frontend back up to three pods:

      kubectl scale --replicas=3 deployment/frontend
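    Listing the frontend Pods again should show one Pod with a much lower age than the others (the selector assumes the labels app=guestbook and tier=frontend from the guestbook tutorial):

      kubectl get pods -l app=guestbook,tier=frontend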

    View the changes in Kibana

    See the screenshot; add the indicated filters and then add the columns to the view. You can see the marked ScalingReplicaSet entry; following from there to the top of the list of events shows the image being pulled, the volumes mounted, the Pod starting, and so on.

    Cleaning up

    Deleting the Deployments and Services also deletes any running Pods. Use labels to delete multiple resources with one command.

    Run the following commands to delete all Pods, Deployments, and Services:

      kubectl delete deployment -l app=redis
      kubectl delete service -l app=redis
      kubectl delete deployment -l app=guestbook
      kubectl delete service -l app=guestbook
      kubectl delete -f filebeat-kubernetes.yaml
      kubectl delete -f metricbeat-kubernetes.yaml
      kubectl delete -f packetbeat-kubernetes.yaml
      kubectl delete secret dynamic-logging -n kube-system


    What’s next