Observability — Logging FAQ

    If you are using the KubeSphere internal Elasticsearch and want to switch to your own external Elasticsearch, follow the steps below. If you haven’t enabled the logging system, refer to KubeSphere Logging System to set up your external Elasticsearch directly.

    1. First, you need to update the KubeKey configuration. Execute the following command:
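
      kubectl edit cc -n kubesphere-system ks-installer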

    2. Comment out es.elasticsearchDataXXX, es.elasticsearchMasterXXX, and status.logging, then set es.externalElasticsearchUrl to the address of your Elasticsearch and es.externalElasticsearchPort to its port number. Below is an example for your reference.

      apiVersion: installer.kubesphere.io/v1alpha1
      kind: ClusterConfiguration
      metadata:
        name: ks-installer
        namespace: kubesphere-system
        ...
      spec:
        ...
        common:
          es:
            # elasticsearchDataReplicas: 1
            # elasticsearchDataVolumeSize: 20Gi
            # elasticsearchMasterReplicas: 1
            # elasticsearchMasterVolumeSize: 4Gi
            elkPrefix: logstash
            externalElasticsearchUrl: <192.168.0.2>
            externalElasticsearchPort: <9200>
        ...
      status:
        ...
        # logging:
        #   enabledTime: 2020-08-10T02:05:13UTC
        #   status: enabled
        ...
    3. Rerun ks-installer.

      kubectl rollout restart deploy -n kubesphere-system ks-installer
    4. Remove the internal Elasticsearch by running the following command. Please make sure you have backed up its data first.
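
      Assuming the internal Elasticsearch was installed as the Helm release elasticsearch-logging in the kubesphere-logging-system namespace (the default for the KubeSphere logging stack), the command would look like the sketch below; verify the release name first with helm list -n kubesphere-logging-system.

      # Assumes the default release name used by the KubeSphere logging stack.
      helm uninstall -n kubesphere-logging-system elasticsearch-logging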

    How to modify the log data retention period

    You need to update the KubeKey configuration and rerun ks-installer.

    1. Execute the following command:

      kubectl edit cc -n kubesphere-system ks-installer
    2. Comment out status.logging and set es.logMaxAge to the desired retention period in days (7 by default).

      kind: ClusterConfiguration
      metadata:
        namespace: kubesphere-system
        ...
      spec:
        ...
        common:
          es:
            ...
            logMaxAge: <7>
        ...
      status:
        ...
        # logging:
        #   enabledTime: 2020-08-10T02:05:13UTC
        #   status: enabled
        ...
    3. Rerun ks-installer.
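
      As in the previous procedure, rerun ks-installer by restarting its Deployment:

      kubectl rollout restart deploy -n kubesphere-system ks-installer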

    I cannot find logs from workloads on some nodes using Toolbox

    If you deployed KubeSphere across multiple nodes and are using symbolic links for the Docker root directory, make sure all nodes follow the same symbolic links. Logging agents are deployed as DaemonSets onto nodes, so any discrepancy in container log paths may cause collection failures on the affected node. You can check the Docker root directory on each node with the following command:

    docker info -f '{{.DockerRootDir}}'

    The log search page in Toolbox is stuck when loading

    If the log search page is stuck when loading, check the storage system you are using. For example, a misconfigured NFS storage system may cause this issue.
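
    As a quick first check, you can verify that the log store's volumes were provisioned and bound, assuming the default kubesphere-logging-system namespace:

    kubectl get pvc -n kubesphere-logging-system

    A PersistentVolumeClaim stuck in Pending usually points to a storage provisioning problem rather than a logging problem.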

    Toolbox shows no log record today

    Check if your log volume exceeds the storage limit of Elasticsearch. If so, you need to increase the Elasticsearch disk volume.
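
    One way to inspect disk usage is Elasticsearch's _cat allocation API; here <es-host> and <9200> are placeholders for your Elasticsearch address and port:

    curl -s http://<es-host>:<9200>/_cat/allocation?v

    The disk.used and disk.avail columns show how much space is left on each data node.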

    I see an Internal Server Error when viewing logs in Toolbox

    There can be several reasons for this issue (see the check after the list):

    • Network partition
    • Invalid Elasticsearch host and port
    • The Elasticsearch health status is red
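
    To rule out the last two causes, you can query the Elasticsearch cluster health API from a machine that can reach the log store; <es-host> and <9200> are placeholders for your configured address and port:

    curl -s http://<es-host>:<9200>/_cluster/health?pretty

    An unreachable endpoint suggests one of the first two causes, while "status": "red" in the response confirms the third.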

    How to make KubeSphere only collect logs from specified workloads

    The KubeSphere logging agent is powered by Fluent Bit. You need to update the Fluent Bit configuration to exclude certain workload logs. To modify the Fluent Bit input configuration, run the following command:

    kubectl edit input -n kubesphere-logging-system tail

    Update the field Input.Spec.Tail.ExcludePath to a path pattern that matches the logs you want to drop, such as the logs of system components.
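
    For reference, here is a sketch of what the edited object might look like. The apiVersion and the excludePath value (a glob matching kube-system container logs) are illustrative; the exact fields depend on the Fluent Bit operator version shipped with your KubeSphere release:

    # Illustrative sketch; verify the apiVersion and fields against your cluster.
    apiVersion: logging.kubesphere.io/v1alpha2
    kind: Input
    metadata:
      name: tail
      namespace: kubesphere-logging-system
    spec:
      tail:
        ...
        # Example value: drop logs from Pods in the kube-system namespace.
        excludePath: /var/log/containers/*_kube-system_*.log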