Injecting Faults

    The books demo is a great way to show off this behavior. The overall topology looks like:

    [Topology diagram of the books demo]

    In this guide, you will split some of the requests from webapp to books. Most requests will end up at the correct books destination; however, some of them will be redirected to a faulty backend. This backend will return 500s for every request, injecting faults into the webapp service. No code changes are required, and because this method is configuration driven, it is a process that can be added to integration tests and CI pipelines. If you are really living the chaos engineering lifestyle, fault injection could even be used in production.

    First, add the books sample application to your cluster:
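
    kubectl create ns booksapp && \
      linkerd inject https://run.linkerd.io/booksapp.yml | \
      kubectl -n booksapp apply -f -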

    As this manifest is used as a demo elsewhere, it has been configured with an error rate. To show how fault injection works, that error rate needs to be removed so that there is a reliable baseline. To bring the success rate for booksapp up to 100%, run the following command, which removes the error-rate environment variable from the authors deployment:

    kubectl -n booksapp patch deploy authors \
      --type='json' \
      -p='[{"op":"remove", "path":"/spec/template/spec/containers/0/env/2"}]'

    After a little while, the stats will show a 100% success rate. You can verify this by running:

    linkerd viz -n booksapp stat deploy

    The output will end up looking a little like a table of deployments, each showing a 100% success rate.

    To inject faults into booksapp, you need a backend that is guaranteed to return errors. NGINX works well for this, configured to return a 500 for every request. Create the error-injecting backend, consisting of a ConfigMap, a Deployment, and a Service, by running:

    cat <<EOF | linkerd inject - | kubectl apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: error-injector
      namespace: booksapp
    data:
      nginx.conf: |-
        events {}
        http {
          server {
            listen 8080;
            location / {
              return 500;
            }
          }
        }
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: error-injector
      namespace: booksapp
      labels:
        app: error-injector
    spec:
      selector:
        matchLabels:
          app: error-injector
      replicas: 1
      template:
        metadata:
          labels:
            app: error-injector
        spec:
          containers:
          - name: nginx
            image: nginx:alpine
            volumeMounts:
            - name: nginx-config
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
          volumes:
          - name: nginx-config
            configMap:
              name: error-injector
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: error-injector
      namespace: booksapp
    spec:
      ports:
      - name: service
        port: 8080
      selector:
        app: error-injector
    EOF
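
    Before splitting any traffic to it, you can wait for the error injector to finish rolling out:

    kubectl -n booksapp rollout status deploy/error-injector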

    With booksapp and NGINX running, it is now time to partially split the traffic between the existing backend, books, and the newly created error-injector. This is done by applying a TrafficSplit resource to your cluster:

    cat <<EOF | kubectl apply -f -
    apiVersion: split.smi-spec.io/v1alpha1
    kind: TrafficSplit
    metadata:
      name: error-split
      namespace: booksapp
    spec:
      service: books
      backends:
      - service: books
        weight: 900m
      - service: error-injector
        weight: 100m
    EOF
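
    You can confirm that the TrafficSplit was created by running:

    kubectl -n booksapp get trafficsplit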

    When Linkerd sees traffic going to the books service, it will send 9/10 requests to the original service and 1/10 to the error injector (the 900m and 100m weights are milli-units, i.e. 0.9 and 0.1). You can see what this looks like by running stat and filtering explicitly to just the requests from webapp:
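
    linkerd viz -n booksapp stat deploy --from deploy/webapp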

    Unlike the previous stat command, which only looks at the requests received by servers, this routes command filters to all the requests being issued by webapp that are destined for the books service itself:
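
    linkerd viz -n booksapp routes deploy/webapp --to service/books

    The output should show a success rate of about 90%: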

    ROUTE       SERVICE   SUCCESS   RPS   LATENCY_P50   LATENCY_P95   LATENCY_P99

    Note

    To remove everything in this guide from your cluster, run:

    kubectl delete ns booksapp