# Kubernetes
When Fluent Bit is deployed in Kubernetes as a DaemonSet and configured to read the log files from the containers (using tail or systemd input plugins), this filter aims to perform the following operations:
- Analyze the Tag and extract the following metadata:
  - Pod Name
  - Namespace
  - Container ID
- Query the Kubernetes API Server to obtain extra metadata for the Pod in question:
  - Pod ID
  - Labels
  - Annotations
The data is cached locally in memory and appended to each record.
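As a sketch, an enriched record then carries a kubernetes map with the fields listed above (the concrete values here are illustrative, and the exact set of fields depends on your configuration):

```json
{
  "log": "192.168.1.1 - - [01/Jan/2024:12:00:00 +0000] \"GET / HTTP/1.1\" 200 1234",
  "kubernetes": {
    "pod_name": "apache-logs-annotated",
    "namespace_name": "default",
    "pod_id": "<pod-uid>",
    "labels": {
      "app": "apache-logs"
    },
    "annotations": {
      "fluentbit.io/parser": "apache"
    }
  }
}
```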
The plugin supports a number of configuration parameters. The ones referenced in this section are:

Parameter | Description | Default |
---|---|---|
K8S-Logging.Parser | Allow Kubernetes Pods to suggest a pre-defined parser through the fluentbit.io/parser annotation. | Off |
K8S-Logging.Exclude | Allow Kubernetes Pods to exclude their logs from the log processor through the fluentbit.io/exclude annotation. | Off |
Kube_Tag_Prefix | Prefix expected in front of the Tag, appended by the Tail input plugin; the filter trims it before extracting the Pod name and namespace. | kube.var.log.containers. |
Regex_Parser | Set an alternative parser to process the record Tag and extract the metadata, instead of the internal hard-coded regular expression (see Custom Regex below). | |
A flexible feature of the Fluent Bit Kubernetes filter is that it allows Kubernetes Pods to suggest certain behaviors for the log processor pipeline when processing the records. At the moment it supports:
- Suggest a pre-defined parser
- Request to exclude logs
The following annotations are available:
Annotation | Description | Default |
---|---|---|
fluentbit.io/parser[_stream][-container] | Suggest a pre-defined parser. The parser must already be registered in Fluent Bit. This option is only processed if the Fluent Bit configuration (Kubernetes Filter) has enabled the option K8S-Logging.Parser. If the optional _stream suffix (stdout or stderr) is present, the parser is restricted to that specific stream. If the optional -container suffix is present, the parser applies only to that specific container in the Pod. | |
fluentbit.io/exclude | Request that Fluent Bit exclude the logs generated by the Pod. This option is only processed if the Fluent Bit configuration (Kubernetes Filter) has enabled the option K8S-Logging.Exclude. | False |
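For example, a Pod whose container writes Apache access logs to stdout but JSON diagnostics to stderr could combine the suffixes as follows (a sketch; the json parser name is an assumption and must already be registered):

```yaml
metadata:
  annotations:
    fluentbit.io/parser: apache        # applied to all streams by default
    fluentbit.io/parser_stderr: json   # hypothetical override for the stderr stream only
```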
## Suggest a parser
The following Pod definition runs a Pod that emits Apache logs to the standard output. In the annotations it suggests that the data should be processed using the pre-defined parser called apache:
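```yaml
apiVersion: v1
kind: Pod
metadata:
  name: apache-logs
  labels:
    app: apache-logs
  annotations:
    fluentbit.io/parser: apache
spec:
  containers:
  - name: apache
    image: edsiper/apache_logs
```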
## Request to exclude logs
There are certain situations where the user would like to request that the log processor simply skip the logs from the Pod in question:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: apache-logs
  labels:
    app: apache-logs
  annotations:
    fluentbit.io/exclude: "true"
spec:
  containers:
  - name: apache
    image: edsiper/apache_logs
```
Kubernetes Filter depends on either Tail or Systemd input plugins to process and enrich records with Kubernetes metadata. Here we will explain the workflow of Tail and how its configuration is correlated with the Kubernetes filter. Consider the following configuration example (just for demo purposes, not production):
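A minimal sketch of such a configuration might look like this (the Kube_* values are assumptions matching the common in-cluster service account defaults):

```
[INPUT]
    Name    tail
    Tag     kube.*
    Path    /var/log/containers/*.log
    Parser  docker

[FILTER]
    Name             kubernetes
    Match            kube.*
    Kube_URL         https://kubernetes.default.svc:443
    Kube_CA_File     /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    Kube_Token_File  /var/run/secrets/kubernetes.io/serviceaccount/token
```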
In the input section, the Tail plugin will monitor all files ending in .log in path /var/log/containers/. For every file it will read every line and apply the docker parser. Then the records are emitted to the next step with an expanded tag.
Tail supports Tag expansion: if a Tag contains a star character (*), it will be replaced with the absolute path of the monitored file. So if your file name and path is:

```
/var/log/containers/apache-logs-annotated_default_apache-aeeccc7a9f00f6e4e066aeff0434cf80621215071f1b20a51e8340aa7c35eac6.log
```
then the Tag for every record of that file becomes:

```
kube.var.log.containers.apache-logs-annotated_default_apache-aeeccc7a9f00f6e4e066aeff0434cf80621215071f1b20a51e8340aa7c35eac6.log
```

Note that the slashes of the file path were replaced with dots.

When Kubernetes Filter runs, it will try to match all records that start with kube. (note the ending dot), so records from the file mentioned above will hit the matching rule and the filter will try to enrich them.

Kubernetes Filter does not care where the logs come from, but it does care about the absolute name of the monitored file, because that information contains the Pod name and namespace name that are used to retrieve the associated metadata for the running Pod from the Kubernetes Master/API Server.

Using the Kube_Tag_Prefix value (kube.var.log.containers. by default), the filter trims the prefix that the Input section prepended, reducing the Tag above to:

```
apache-logs-annotated_default_apache-aeeccc7a9f00f6e4e066aeff0434cf80621215071f1b20a51e8340aa7c35eac6.log
```
That new value is used by the filter to look up the Pod name and namespace. For that purpose it uses an internal regular expression.
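As a sketch (the authoritative pattern lives in the Fluent Bit source), the built-in expression captures the fields with named groups along these lines:

```
(?<pod_name>[a-z0-9](?:[-a-z0-9]*[a-z0-9])?(?:\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace>[^_]+)_(?<container_name>.+)-(?<docker_id>[a-z0-9]{64})\.log$
```

Applied to the Tag above, pod_name captures apache-logs-annotated, namespace captures default, container_name captures apache and docker_id captures the 64-character container ID.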
You can use the Rubular.com web site to see how this kind of expression matches a sample Tag.
## Custom Regex
Under certain and not common conditions, a user may want to alter that hard-coded regular expression. For that purpose, the option Regex_Parser can be used (documented above).
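A minimal sketch of that setup, assuming a hypothetical parser named custom-kube-tag whose regular expression has been adapted to your file naming scheme (shown here identical to the default one):

```
[PARSER]
    # Registered in the parsers file loaded via Parsers_File
    Name    custom-kube-tag
    Format  regex
    Regex   (?<pod_name>[a-z0-9](?:[-a-z0-9]*[a-z0-9])?(?:\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace>[^_]+)_(?<container_name>.+)-(?<docker_id>[a-z0-9]{64})\.log$

[FILTER]
    Name          kubernetes
    Match         kube.*
    # Point the filter at the custom parser instead of the built-in expression
    Regex_Parser  custom-kube-tag
```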
## Final Comments
So at this point the filter is able to gather the values of pod_name and namespace. With that information it checks the local cache (an internal hash table) for metadata associated with that key pair: if an entry exists, it enriches the record with the cached metadata; otherwise it connects to the Kubernetes Master/API Server to retrieve the information.