Moving logging subsystem resources with node selectors

    You can configure the Cluster Logging Operator to deploy the pods for logging subsystem components, such as Elasticsearch and Kibana, to different nodes. You cannot move the Cluster Logging Operator pod from its installed location.

    For example, you can move the Elasticsearch pods to a separate node because of high CPU, memory, and disk requirements.
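
    The example CR in this procedure assumes that the target node carries the node-role.kubernetes.io/infra: '' label and, because the CR also sets tolerations, matching taints. As a minimal sketch, where <node-name> is a placeholder for your infrastructure node:

      # Label the node; the trailing '=' sets an empty label value
      $ oc label node <node-name> node-role.kubernetes.io/infra=
      # Optionally taint the node so that only pods with matching tolerations schedule onto it
      $ oc adm taint nodes <node-name> node-role.kubernetes.io/infra=reserved:NoSchedule
      $ oc adm taint nodes <node-name> node-role.kubernetes.io/infra=reserved:NoExecute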

    Prerequisites

    • The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. These features are not installed by default.
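
    One way to confirm that the Operators are installed is to list the ClusterServiceVersions that are visible in the openshift-logging project:

      $ oc get csv -n openshift-logging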

    Procedure

    1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:
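
      For example, assuming the CR uses the default name instance:

      $ oc edit ClusterLogging instance

      Add or edit the nodeSelector and tolerations stanzas for the components you want to move, as in the following example: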

      apiVersion: logging.openshift.io/v1
      kind: ClusterLogging
      ...
      spec:
        collection:
          logs:
            fluentd:
              resources: null
            type: fluentd
        logStore:
          elasticsearch:
            nodeCount: 3
            nodeSelector: (1)
              node-role.kubernetes.io/infra: ''
            tolerations:
            - effect: NoSchedule
              key: node-role.kubernetes.io/infra
              value: reserved
            - effect: NoExecute
              key: node-role.kubernetes.io/infra
              value: reserved
            redundancyPolicy: SingleRedundancy
            resources:
              limits:
                cpu: 500m
                memory: 16Gi
              requests:
                cpu: 500m
                memory: 16Gi
            storage: {}
          type: elasticsearch
        visualization:
          kibana:
            nodeSelector: (1)
              node-role.kubernetes.io/infra: ''
            tolerations:
            - effect: NoSchedule
              key: node-role.kubernetes.io/infra
              value: reserved
            - effect: NoExecute
              key: node-role.kubernetes.io/infra
              value: reserved
            proxy:
              resources: null
            replicas: 1
            resources: null
          type: kibana
      ...
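
    (1) Add a nodeSelector parameter with the appropriate <key>: <value> pair to the component you want to move. The pair must match a label on the target node; this example uses the node-role.kubernetes.io/infra: '' label. If you added taints to the target node, also add matching tolerations, as shown here for both the Elasticsearch and Kibana components.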

    Verification

    To verify that a component has moved, you can use the oc get pod -o wide command. For example:

    • You want to move the Kibana pod from the ip-10-0-147-79.us-east-2.compute.internal node:

      $ oc get pod kibana-5b8bdf44f9-ccpq9 -o wide

      Example output

      NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
      kibana-5b8bdf44f9-ccpq9   2/2     Running   0          27s   10.129.2.18   ip-10-0-147-79.us-east-2.compute.internal   <none>           <none>
    • You want to move the Kibana pod to the ip-10-0-139-48.us-east-2.compute.internal node, a dedicated infrastructure node:

      $ oc get nodes

      Note that the node has a node-role.kubernetes.io/infra: '' label:

      $ oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml

      Example output
      kind: Node
      apiVersion: v1
      metadata:
        name: ip-10-0-139-48.us-east-2.compute.internal
        selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal
        uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751
        resourceVersion: '39083'
        creationTimestamp: '2020-04-13T19:07:55Z'
        labels:
          node-role.kubernetes.io/infra: ''
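
      You can also list only the labeled nodes with an existence label selector, which matches any node that has the node-role.kubernetes.io/infra label key:

      $ oc get nodes -l node-role.kubernetes.io/infra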
    • After you save the CR, the current Kibana pod is terminated and a new pod is deployed:

      $ oc get pods

      The output lists both Kibana pods while the replacement rolls out: the original pod is terminating and the new pod is starting.

    • The new pod is on the ip-10-0-139-48.us-east-2.compute.internal node:

      $ oc get pod kibana-7d85dcffc8-bfpfp -o wide

      Example output

      NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
      kibana-7d85dcffc8-bfpfp   2/2     Running   0          43s   10.131.0.22   ip-10-0-139-48.us-east-2.compute.internal   <none>           <none>
    • After a few moments, the original Kibana pod is removed:

      $ oc get pods

      Example output

      NAME                                            READY   STATUS    RESTARTS   AGE
      cluster-logging-operator-84d98649c4-zb9g7       1/1     Running   0          30m
      elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg   2/2     Running   0          29m
      elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj   2/2     Running   0          29m
      elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78    2/2     Running   0          29m
      fluentd-42dzz                                   1/1     Running   0          29m
      fluentd-d74rq                                   1/1     Running   0          29m
      fluentd-m5vr9                                   1/1     Running   0          29m
      fluentd-nkxl7                                   1/1     Running   0          29m
      fluentd-pdvqb                                   1/1     Running   0          29m
      kibana-7d85dcffc8-bfpfp                         2/2     Running   0          62s
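
    If you prefer to watch the replacement roll out rather than polling, oc get supports the standard --watch flag:

      $ oc get pods -n openshift-logging --watch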