Configuring the logging collector

    You can configure the CPU and memory limits for the log collector and move the log collector pods to specific nodes. All supported modifications to the log collector can be performed through the spec.collection.logs.fluentd stanza in the ClusterLogging custom resource (CR).

    The supported way of configuring OpenShift Logging is by using the options described in this documentation. Do not use other configurations, as they are unsupported. Configuration paradigms might change across OKD releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will disappear because the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator reconcile any differences. The Operators revert everything to the defined state by default and by design.

    You can view the Fluentd logging collector pods and the corresponding nodes that they are running on. The Fluentd logging collector pods run only in the openshift-logging project.

    Procedure

    • Run the following command in the openshift-logging project to view the Fluentd logging collector pods and their details:

      $ oc get pods --selector component=fluentd -o wide -n openshift-logging

    Example output

      NAME            READY   STATUS    RESTARTS   AGE    IP            NODE                  NOMINATED NODE   READINESS GATES
      fluentd-8d69v   1/1     Running   0          134m   10.130.2.30   master1.example.com   <none>           <none>
      fluentd-bd225   1/1     Running   0          134m   10.131.1.11   master2.example.com   <none>           <none>
      fluentd-cvrzs   1/1     Running   0          134m   10.130.0.21   master3.example.com   <none>           <none>
      fluentd-gpqg2   1/1     Running   0          134m   10.128.2.27   worker1.example.com   <none>           <none>
      fluentd-l9j7j   1/1     Running   0          134m   10.129.2.31   worker2.example.com   <none>           <none>
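
    Because the collector runs as a daemon set, one collector pod is scheduled on each eligible node. As a quick cross-check of the pod count against the node count, you can inspect the daemon set status (this assumes the default daemon set name, fluentd):

      $ oc get daemonset fluentd -n openshift-logging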

    You can adjust the CPU and memory limits for the log collector.

    Procedure

    1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

      $ oc -n openshift-logging edit ClusterLogging instance

      apiVersion: "logging.openshift.io/v1"
      kind: "ClusterLogging"
      metadata:
        name: "instance"
        namespace: openshift-logging
      ...
      spec:
        collection:
          logs:
            fluentd:
              resources:
                limits: (1)
                  memory: 736Mi
                requests:
                  cpu: 100m
                  memory: 736Mi

      (1) Specify the CPU and memory limits and requests as needed. The values shown are the default values.
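
      After you save the CR, the Red Hat OpenShift Logging Operator redeploys the collector pods with the new settings. If you want to confirm what was applied, one approach (a sketch; fluentd-8d69v is an example pod name, and this assumes the collector is the first container in the pod) is to read the resources back from a running pod:

      $ oc get pod fluentd-8d69v -n openshift-logging \
          -o jsonpath='{.spec.containers[0].resources}'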

    OpenShift Logging includes multiple Fluentd parameters that you can use for tuning the performance of the Fluentd log forwarder. With these parameters, you can change the following Fluentd behaviors:

    • the size of Fluentd chunks and chunk buffer

    • the Fluentd chunk flushing behavior

    • the Fluentd chunk forwarding retry behavior

    Fluentd collects log data in a single blob called a chunk. When Fluentd creates a chunk, the chunk is considered to be in the stage, where the chunk gets filled with data. When the chunk is full, Fluentd moves the chunk to the queue, where chunks are held before being flushed, or written out to their destination. Fluentd can fail to flush a chunk for a number of reasons, such as network issues or capacity issues at the destination. If a chunk cannot be flushed, Fluentd retries flushing as configured.

    By default in OKD, Fluentd uses the exponential backoff method to retry flushing, where Fluentd doubles the time it waits between retries, which helps reduce connection requests to the destination. You can disable exponential backoff and use the periodic retry method instead, which retries flushing the chunks at a specified interval. By default, Fluentd retries chunk flushing indefinitely. In OKD, you cannot change the indefinite retry behavior.
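
    For example, with the default retryWait of 1s, exponential backoff retries a failed flush after approximately 1s, then 2s, 4s, 8s, and so on, until the wait reaches the retryMaxInterval cap (300s by default), after which Fluentd keeps retrying at that fixed interval.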

    These parameters can help you determine the trade-offs between latency and throughput.

    • To optimize Fluentd for throughput, you could use these parameters to reduce network packet count by configuring larger buffers and queues, delaying flushes, and setting longer times between retries. Be aware that larger buffers require more space on the node file system.

    • To optimize for low latency, you could use the parameters to send data as soon as possible, avoid the build-up of batches, use shorter queues and buffers, and flush and retry more frequently. Both directions are illustrated in the sketch after this list.
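
    As a rough illustration of the two directions, the following ClusterLogging CR fragments contrast a throughput-oriented tuning with a latency-oriented one. The values are arbitrary examples chosen to show the direction of each change, not recommendations; the parameters themselves are described in Table 1 below.

      # Throughput-oriented sketch: larger chunks and buffers, less frequent flushes
      spec:
        forwarder:
          fluentd:
            buffer:
              chunkLimitSize: 32m      # larger chunks mean fewer network packets
              totalLimitSize: 16G      # larger buffer; uses more node file system space
              flushInterval: 30s       # delay flushes so batches build up
              retryMaxInterval: 600s   # longer maximum wait between retries

      # Latency-oriented sketch: small chunks, immediate flushes
      spec:
        forwarder:
          fluentd:
            buffer:
              chunkLimitSize: 1m       # small chunks leave the node quickly
              flushMode: immediate     # flush as soon as data is added to a chunk
              flushThreadCount: 4      # more threads keep flushes from queueing
              retryWait: 1s            # retry failed flushes quickly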

    You can configure the chunking and flushing behavior using the following parameters in the ClusterLogging custom resource (CR). The parameters are then automatically added to the Fluentd config map for use by Fluentd.

    Table 1. Advanced Fluentd Configuration Parameters

    chunkLimitSize
      The maximum size of each chunk. Fluentd stops writing data to a chunk when it reaches this size, sends the chunk to the queue, and opens a new chunk.
      Default: 8m

    totalLimitSize
      The maximum size of the buffer, which is the total size of the stage and the queue. If the buffer size exceeds this value, Fluentd stops adding data to chunks and fails with an error. All data not in chunks is lost.
      Default: 8G

    flushInterval
      The interval between chunk flushes. You can use s (seconds), m (minutes), h (hours), or d (days).
      Default: 1s

    flushMode
      The method to perform flushes:
      • lazy: Flush chunks based on the timekey parameter. You cannot modify the timekey parameter.
      • interval: Flush chunks based on the flushInterval parameter.
      • immediate: Flush chunks immediately after data is added to a chunk.
      Default: interval

    flushThreadCount
      The number of threads that perform chunk flushing. Increasing the number of threads improves the flush throughput, which hides network latency.
      Default: 2

    overflowAction
      The chunking behavior when the queue is full:
      • throw_exception: Raise an exception to show in the log.
      • block: Stop data chunking until the full buffer issue is resolved.
      • drop_oldest_chunk: Drop the oldest chunk to accept new incoming chunks. Older chunks have less value than newer chunks.
      Default: block

    retryMaxInterval
      The maximum time in seconds for the exponential_backoff retry method.
      Default: 300s

    retryType
      The retry method when flushing fails:
      • exponential_backoff: Increase the time between flush retries. Fluentd doubles the time it waits until the next retry, until the retry_max_interval parameter is reached.
      • periodic: Retries flushes periodically, based on the retryWait parameter.
      Default: exponential_backoff

    retryWait
      The time in seconds before the next chunk flush.
      Default: 1s

    For more information on the Fluentd chunk lifecycle, see the Fluentd documentation.

    Procedure

    1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

      $ oc edit ClusterLogging instance

    2. Add or modify any of the following parameters, which are described in Table 1:

      apiVersion: logging.openshift.io/v1
      kind: ClusterLogging
      metadata:
        name: instance
        namespace: openshift-logging
      spec:
        forwarder:
          fluentd:
            buffer:
              chunkLimitSize: 8m
              flushInterval: 5s
              flushMode: interval
              flushThreadCount: 3
              overflowAction: throw_exception
              retryMaxInterval: "300s"
              retryType: periodic
              retryWait: 1s
              totalLimitSize: 32m
      ...
    3. Verify that the Fluentd pods are redeployed:

      $ oc get pods -n openshift-logging

    4. Check that the new values are in the fluentd config map:

      $ oc extract configmap/fluentd --confirm

      Example fluentd.conf
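
      The extracted file reflects your settings, so the exact contents vary. The following is a sketch of what the <buffer> section might look like with the values above (the file path is an example, and the surrounding output plugin configuration is omitted):

      <buffer>
        @type file
        path '/var/lib/fluentd/default'   # example buffer path on the node
        flush_mode interval
        flush_interval 5s
        flush_thread_count 3
        retry_type periodic
        retry_wait 1s
        retry_max_interval 300s
        total_limit_size 32m
        chunk_limit_size 8m
        overflow_action throw_exception
      </buffer>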

    As an administrator, in the rare case that you forward logs to a third-party log store and do not use the default Elasticsearch log store, you can remove several unused components from your logging cluster. Specifically, you can remove the internal Elasticsearch logStore and Kibana visualization components from the ClusterLogging custom resource (CR). Removing these components is optional but saves resources.

    Prerequisites

    • Verify that your log forwarder does not send log data to the default internal Elasticsearch cluster. Inspect the ClusterLogForwarder CR YAML file that you used to configure log forwarding. Verify that it does not have an outputRefs element that specifies default. For example:

      outputRefs:
      - default
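
      One quick way to check, assuming your ClusterLogForwarder CR is named instance in the openshift-logging project, is to print the CR and look for a default output reference:

      $ oc get clusterlogforwarder instance -n openshift-logging -o yaml | grep -B1 -A2 outputRefs

      If default appears under any outputRefs list, the forwarder still sends log data to the internal store.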

    If the ClusterLogForwarder CR forwards log data to the internal Elasticsearch cluster and you remove the logStore component from the ClusterLogging CR, the internal Elasticsearch cluster will not be present to store the log data. This absence can cause data loss.

    Procedure

    1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

      $ oc edit ClusterLogging instance
    2. If they are present, remove the logStore and visualization stanzas from the ClusterLogging CR.

    3. Preserve the collection stanza of the ClusterLogging CR. The result should look similar to the following example:

      apiVersion: "logging.openshift.io/v1"
      kind: "ClusterLogging"
      metadata:
        name: "instance"
        namespace: "openshift-logging"
      spec:
        managementState: "Managed"
        collection:
          logs:
            type: "fluentd"
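
    4. Optionally, confirm that the unused components were removed by listing the pods in the project. This step is a sketch; the exact set of remaining pods depends on your deployment, but after removal no Elasticsearch or Kibana pods should be listed, only the collector pods (fluentd-*) and the Operator pods:

      $ oc get pods -n openshift-logging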
