Log Collection

    tip

    This section introduces how to collect and analyze logs. If you just want to view real-time application logs for debugging, you can use the vela logs command or check them on the UI console provided by the velaux addon.

    We need to enable the loki and grafana addons for this capability.

    The loki addon can be enabled in two modes:

    • Collecting application logs by specifying traits; this is the default behavior when enabling log collection.
    • Collecting all application logs from container stdout.

    Collecting logs by traits

    To make this mode work, you need to enable the loki addon with the parameter agent=vector:

    vela addon enable loki agent=vector

    caution

    If you enable the loki addon without the agent parameter, no log collector will be deployed; only the loki service itself is installed.

    After the addon is enabled with agent=vector, a loki service will be deployed in the control plane as the log store, and the vector log collection agent will be deployed as a daemon on every node of each currently managed cluster.

    note

    If you want to limit collection to specific clusters, you can set the clusters parameter when enabling the addon. When a new cluster joins, you need to enable this addon again to install the agent on the newly joined cluster.

    Finally, you will get the following traits for log collection.

    • file-logs
    • stdout-logs

    Collecting all STDOUT logs automatically

    vela addon enable loki agent=vector stdout=all

    After the addon is enabled with stdout=all, the vector agent will automatically collect the stdout logs of all application pods. No additional traits need to be configured. The collected logs are delivered to the loki service in the control plane cluster for storage.
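    For example, with stdout=all enabled, a component's stdout is collected without attaching any log trait. A minimal sketch (the application and component names here are illustrative, modeled on the trait-based example later in this section):

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: app-auto-stdout        # illustrative name
  namespace: default
spec:
  components:
    - type: webservice
      name: auto-stdout-comp   # illustrative name
      properties:
        image: busybox
      traits:
        - type: command
          properties:
            command:
              - sh
              - -c
              - |
                while :
                do
                  echo "stdout: $(date +"%T")"
                  sleep 10
                done
        # note: no stdout-logs trait is required in stdout=all mode
```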

    caution

    The main advantage of this mode is that configuration is simple and automatic, while the disadvantages are:

    1. Collecting logs from all running pods puts a lot of pressure on the loki service when there are many applications. On the one hand, not all logs are needed, so a lot of disk storage is wasted. On the other hand, the vector agents in each cluster must transmit the collected logs to the control plane cluster, which consumes a lot of network bandwidth.
    2. The full collection mode can only collect logs in a unified way; no application-specific log parsing can be done.
    To view and analyze the collected logs and events in dashboards, you also need to enable the grafana addon:

    vela addon enable grafana

    caution

    Even if you have already enabled the grafana addon, you still need to re-enable it to register the loki data source to grafana.

    After the loki addon is enabled, a component will be installed in each cluster that collects Kubernetes events, converts them to logs, and transmits them to loki. You can view and analyze the system's events through the Kubernetes events dashboard in the grafana addon.

    (Screenshot: KubeVela Events dashboard)

    As mentioned above, if you have not enabled the stdout full collection mode, you can collect stdout logs by specifying a trait.

    Configure the stdout-logs trait in the component, as follows:

    apiVersion: core.oam.dev/v1beta1
    kind: Application
    metadata:
      name: app-stdout-log
      namespace: default
    spec:
      components:
        - type: webservice
          name: comp-stdout-log
          properties:
            image: busybox
          traits:
            - type: command
              properties:
                command:
                  - sh
                  - -c
                  - |
                    while :
                    do
                      now=$(date +"%T")
                      echo "stdout: $now"
                      sleep 10
                    done
            - type: stdout-logs

    If your application is an nginx gateway, the stdout-logs trait provides the capability to parse nginx-format logs into json format, as follows:

    apiVersion: core.oam.dev/v1beta1
    kind: Application
    metadata:
      name: nginx-app-2
    spec:
      components:
        - name: nginx-comp
          type: webservice
          properties:
            image: nginx:1.14.2
            ports:
              - port: 80
                expose: true
          traits:
            - type: stdout-logs
              properties:
                parser: nginx

    Then a special nginx access log analysis dashboard will be generated as follows:

    (Screenshot: KubeVela nginx application dashboard)

    You can also set a customized parsing configuration for your application logs in this trait, as follows:

    apiVersion: core.oam.dev/v1beta1
    kind: Application
    metadata:
      name: nginx-app-2
    spec:
      components:
        - name: nginx-comp
          type: webservice
          properties:
            image: nginx:1.14.2
            ports:
              - port: 80
                expose: true
          traits:
            - type: stdout-logs
              properties:
                parser: customize
                VRL: |
                  .message = parse_nginx_log!(.message, "combined")
                  .new_field = "new value"

    In this example, we transform nginx combined format logs into json format and add a new_field key with the value new value to each log. Please refer to the vector VRL documentation for how to write VRL.
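    For instance, the VRL transform above reshapes a collected log event roughly like this (the sample log line follows the standard nginx combined format; the exact field names emitted by parse_nginx_log! may differ slightly from this sketch):

```yaml
# before: the raw event as collected by vector (illustrative)
before:
  message: '127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET / HTTP/1.0" 200 2326 "-" "curl/7.64"'

# after: .message parsed into structured fields, plus the added key
after:
  message:
    client: 127.0.0.1
    user: frank
    timestamp: 10/Oct/2000:13:55:36 -0700
    request: GET / HTTP/1.0
    status: 200
    size: 2326
    referer: "-"
    agent: curl/7.64
  new_field: new value
```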

    If you have a special log analysis dashboard for this processing method, you can refer to the dashboard documentation to import it into grafana.

    The loki addon also supports collecting file logs from containers. This works regardless of which mode you enabled the loki addon in. Use the trait as follows:

    apiVersion: core.oam.dev/v1beta1
    kind: Application
    metadata:
      name: app-file
      namespace: default
    spec:
      components:
        - type: webservice
          name: file-log-comp
          properties:
            image: busybox
          traits:
            - type: command
              properties:
                command:
                  - sh
                  - -c
                  - |
                    while :
                    do
                      now=$(date +"%T")
                      echo "file: $now" >> /root/verbose.log
                      sleep 10
                    done
            - type: file-logs
              properties:
                path: /root/verbose.log

    In this example, the business log of the file-log-comp component is written to the /root/verbose.log path in the container. After the application is created, you can view the corresponding file log results through the dashboard.

    Note that the log files to be collected must not be located directly in the container's root directory, otherwise the container may fail to start.