Observability — Logging FAQ
- How to change the log store to an external Elasticsearch and shut down the internal Elasticsearch
- How to change the log store to Elasticsearch with X-Pack Security enabled
- How to modify the log data retention period
- I cannot find logs from workloads on some nodes using Toolbox
- The log search page in Toolbox gets stuck when loading
- Toolbox shows no log record today
- I see Internal Server Error when viewing logs in Toolbox
- How to make KubeSphere only collect logs from specified workloads
How to change the log store to an external Elasticsearch and shut down the internal Elasticsearch
If you are using the KubeSphere internal Elasticsearch and want to switch to an external alternative, follow the steps below. If you haven't enabled the logging system yet, refer to KubeSphere Logging System to set up your external Elasticsearch directly.
First, you need to update the KubeKey configuration. Execute the following command:
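```bash
kubectl edit cc -n kubesphere-system ks-installer
```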
Comment out `es.elasticsearchDataXXX`, `es.elasticsearchMasterXXX`, and `status.logging`, then set `es.externalElasticsearchUrl` to the address of your Elasticsearch and `es.externalElasticsearchPort` to its port number. Below is an example for your reference.

```yaml
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  ...
spec:
  ...
  common:
    es:
      # elasticsearchDataReplicas: 1
      # elasticsearchDataVolumeSize: 20Gi
      # elasticsearchMasterReplicas: 1
      # elasticsearchMasterVolumeSize: 4Gi
      elkPrefix: logstash
      externalElasticsearchUrl: <192.168.0.2>
      externalElasticsearchPort: <9200>
  ...
status:
  ...
  # logging:
  #   enabledTime: 2020-08-10T02:05:13UTC
  #   status: enabled
  ...
```
Rerun `ks-installer`:

```bash
kubectl rollout restart deploy -n kubesphere-system ks-installer
```
Remove the internal Elasticsearch by running the following command. Please make sure you have backed up data in the internal Elasticsearch.
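If the internal Elasticsearch was installed as the default Helm release in the `kubesphere-logging-system` namespace, a sketch of the removal (the release name `elasticsearch-logging` is an assumption; confirm it first):

```bash
# Confirm the name of the internal Elasticsearch release.
helm list -n kubesphere-logging-system

# Uninstall the internal Elasticsearch release (name assumed above).
helm uninstall -n kubesphere-logging-system elasticsearch-logging
```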
How to change the log store to Elasticsearch with X-Pack Security enabled
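Recent KubeSphere releases expose basic-auth settings for the external log store under `common.es` in the ClusterConfiguration. A minimal sketch, assuming your version includes the `es.basicAuth` fields (verify with `kubectl edit cc -n kubesphere-system ks-installer`); the credentials are placeholders for your X-Pack user:

```yaml
spec:
  common:
    es:
      basicAuth:
        enabled: true
        username: <username>   # placeholder: your X-Pack user
        password: <password>   # placeholder: your X-Pack password
      externalElasticsearchUrl: <192.168.0.2>
      externalElasticsearchPort: <9200>
```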
How to modify the log data retention period
You need to update the KubeKey configuration and rerun `ks-installer`.
Execute the following command:
```bash
kubectl edit cc -n kubesphere-system ks-installer
```
Comment out `status.logging` and set a desired retention period as the value of `es.logMaxAge` (`7` by default).

```yaml
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  ...
spec:
  ...
  common:
    es:
      ...
      logMaxAge: <7>
  ...
status:
  ...
  # logging:
  #   enabledTime: 2020-08-10T02:05:13UTC
  #   status: enabled
  ...
```
Rerun `ks-installer`:
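```bash
kubectl rollout restart deploy -n kubesphere-system ks-installer
```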
I cannot find logs from workloads on some nodes using Toolbox
If you deployed KubeSphere through multi-node installation and are using symbolic links for the docker root directory, make sure all nodes follow the same symbolic links. Logging agents are deployed in DaemonSets onto nodes, so any discrepancy in container log paths may cause collection failures on a node.

To find out the docker root directory path on a node, run the command below. Make sure the same value applies to all nodes.

```bash
docker info -f '{{.DockerRootDir}}'
```
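To compare the value across all nodes in one pass, a sketch assuming SSH access and hypothetical node names `node1` and `node2`:

```bash
# Print the docker root directory reported by each node; all outputs should match.
for node in node1 node2; do
  echo -n "$node: "
  ssh "$node" docker info -f '{{.DockerRootDir}}'
done
```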
The log search page in Toolbox gets stuck when loading
If the log search page gets stuck when loading, check the storage system you are using. For example, a misconfigured NFS storage system may cause this issue.
Toolbox shows no log record today
Check if your log volume exceeds the storage limit of Elasticsearch. If so, you need to increase the Elasticsearch disk volume.
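To see how much disk space the Elasticsearch nodes are using, a sketch assuming the Elasticsearch HTTP endpoint is reachable at a hypothetical `<es-host>:9200`:

```bash
# Show per-node disk usage and remaining space for the Elasticsearch cluster.
curl -s "http://<es-host>:9200/_cat/allocation?v"
```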
I see Internal Server Error when viewing logs in Toolbox
There can be several reasons for this issue (quick checks for the last two follow the list):
- Network partition
- Invalid Elasticsearch host and port
- The Elasticsearch health status is red
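To rule out the last two causes, a sketch assuming the Elasticsearch endpoint configured in KubeSphere is a hypothetical `<es-host>:9200`:

```bash
# Verify the host and port are reachable and Elasticsearch responds.
curl -s "http://<es-host>:9200"

# Check the cluster health; a "red" status means primary shards are unassigned.
curl -s "http://<es-host>:9200/_cluster/health?pretty"
```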
How to make KubeSphere only collect logs from specified workloads
The KubeSphere logging agent is powered by Fluent Bit. You need to update the Fluent Bit configuration to exclude certain workload logs. To modify the Fluent Bit input configuration, run the following command:
```bash
kubectl edit input -n kubesphere-logging-system tail
```
Update the field `Input.Spec.Tail.ExcludePath`. For example, setting the path to a glob such as `/var/log/containers/*_kube*-system_*.log` (an illustrative pattern matching system-component containers) excludes any log from system components.
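In the CRD form used by KubeSphere's Fluent Bit setup, the edit looks roughly like the sketch below, assuming the `logging.kubesphere.io/v1alpha2` Input schema; fields other than `excludePath` are omitted:

```yaml
apiVersion: logging.kubesphere.io/v1alpha2
kind: Input
metadata:
  name: tail
  namespace: kubesphere-logging-system
spec:
  tail:
    # Glob of container log files to skip; illustrative pattern for
    # system-component namespaces such as kube-system.
    excludePath: /var/log/containers/*_kube*-system_*.log
```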