Canary Release based on Ingress-Nginx

As demonstrated in Managing Canary Release of Microservice App based on Istio, you can use KubeSphere to implement grayscale release in your project based on Istio. However, many users are not using Istio, and most of their projects are simple enough that a lighter-weight solution is needed for this case.

Ingress-Nginx brings a Canary feature that can be used at the gateway as a load balancer. The canary annotations enable an Ingress to act as an alternative service, routing requests to it according to the applied rules and controlling the traffic split. The built-in gateway of each KubeSphere project supports the Canary feature of Ingress-Nginx.

We have elaborated on grayscale release scenarios in the Istio Bookinfo guide. In this document, we demonstrate how to use the KubeSphere Route and Gateway, namely Ingress and Ingress Controller, to implement grayscale release.

Based on the Nginx Ingress Controller, KubeSphere implements a gateway in each project (namely, Kubernetes namespace) that serves as the traffic entry and reverse proxy of the project. Nginx annotations support the following rules once nginx.ingress.kubernetes.io/canary: "true" is set on an Ingress. Please refer to Nginx Annotations for further explanation.

    • nginx.ingress.kubernetes.io/canary-by-header
    • nginx.ingress.kubernetes.io/canary-by-header-value
    • nginx.ingress.kubernetes.io/canary-weight
    • nginx.ingress.kubernetes.io/canary-by-cookie

Note: Canary rules are evaluated in order of precedence. Precedence is as follows: canary-by-header -> canary-by-cookie -> canary-weight.

    The four annotation rules above can be generally divided into the following two categories:

    • The canary rules based on the weight

    • The canary rules based on the user request
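For example, a canary Ingress carries the marker annotation nginx.ingress.kubernetes.io/canary: "true" together with one of the rules above in its metadata. A minimal sketch (the weight value here is only illustrative):

metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "30"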

    User-Based Canary

    • You need to complete all steps in .

1.1. Log in to KubeSphere with the project-admin account and create a project named ingress-demo under the workspace demo-workspace. Go to Project Settings → Advanced Settings, click Set Gateway, and click Save to enable the gateway for this project. Note that the gateway access method defaults to NodePort.

1.2. We are going to use the command line to create the resources defined in the following YAML files. Log in to KubeSphere with the admin account, open the web kubectl from the Toolbox at the bottom-right corner of the console, then use the following command to create the production Deployment and Service.
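With the file below saved as production.yaml, the command and its typical output look like this:

$ kubectl apply -f production.yaml -n ingress-demo
deployment.apps/production created
service/production created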

    The file is as follows:

    production.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: production
  labels:
    app: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: production
  template:
    metadata:
      labels:
        app: production
    spec:
      containers:
      - name: production
        image: mirrorgooglecontainers/echoserver:1.10
        ports:
        - containerPort: 8080   # echoserver:1.10 listens on port 8080
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
---
apiVersion: v1
kind: Service
metadata:
  name: production
  labels:
    app: production
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: production
Next, create the Ingress (Route) for the production application:

$ kubectl apply -f production.ingress -n ingress-demo
ingress.extensions/production created

    The file is as follows:

    production.ingress
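A minimal version of production.ingress, assuming the demo domain kubesphere.io and the production Service on port 80 created above, looks like this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: production
spec:
  rules:
  - host: kubesphere.io
    http:
      paths:
      - backend:
          serviceName: production
          servicePort: 80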

    You can verify each resource by navigating to the corresponding lists from the console.

Deployment

Service

Route (Ingress)

Use the following command to access the production application. The example resolves the demo domain kubesphere.io to the node IP 192.168.0.88 and uses the gateway NodePort 30205; adjust these values to your environment.

$ curl --resolve kubesphere.io:30205:192.168.0.88 kubesphere.io:30205

Hostname: production-6b4bb8d58d-7r889

Pod Information:
  pod name: production-6b4bb8d58d-7r889
  pod namespace: ingress-demo
  pod IP: 10.233.87.165

Server values:
  server_version=nginx: 1.12.2 - lua: 10010

Request Information:
  client_address=10.233.87.225
  method=GET
  real path=/
  query=
  request_version=1.1
  request_scheme=http

Request Headers:
  accept=*/*
  host=kubesphere.io:30205
  user-agent=curl/7.29.0
  x-forwarded-for=192.168.0.88
  x-forwarded-host=kubesphere.io:30205
  x-forwarded-port=80
  x-forwarded-proto=http
  x-original-uri=/
  x-real-ip=192.168.0.88
  x-request-id=9596df96e994ea05bece2ebbe689a2cc
  x-scheme=http

Request Body:
  -no body in request-

Similar to the above, refer to the YAML files used for production to create the canary version of the application, including its Deployment and Service; you only need to replace every occurrence of production with canary in those files, for example as shown below.
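A quick way to do this from the web kubectl, assuming the new file is named canary.yaml:

$ sed 's/production/canary/g' production.yaml > canary.yaml
$ kubectl apply -f canary.yaml -n ingress-demo
deployment.apps/canary created
service/canary created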

    Set Canary Release based on Weight

A typical scenario for the weight-based rule is blue-green deployment. You can set the weight anywhere from 0 to 100 to implement this kind of release. At any time, only one of the environments serves as production; in this example, green is production and blue is the canary. Initially, the weight of the canary is set to 0, which means no traffic is forwarded to this release. You then introduce a small portion of traffic to the blue version step by step, and test and verify it. If everything is OK, you can shift all requests from green to blue by setting the weight of blue to 100, which makes blue the new production release. In short, such a canary release process lets the application be upgraded smoothly.

4.1. Now create a canary Ingress. The following file uses the canary-weight annotation to route 30% of all traffic to the canary version.

$ kubectl apply -f weighted-canary.ingress -n ingress-demo
ingress.extensions/canary created

    The yaml file is as follows.
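A minimal sketch of weighted-canary.ingress, assuming the same host and the canary Service created above; the two canary annotations are the important part:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "30"
spec:
  rules:
  - host: kubesphere.io
    http:
      paths:
      - backend:
          serviceName: canary
          servicePort: 80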

    4.2. Verify the Weighted Canary Release

Note: Although we route 30% of the traffic to the canary, the observed ratio may fluctuate slightly.

$ for i in $(seq 1 10); do curl -s --resolve kubesphere.io:30205:192.168.0.88 kubesphere.io:30205 | grep "Hostname"; done
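To make the split easier to read over a larger sample, you can tally the hostnames; this is simply a convenience on top of the loop above:

$ for i in $(seq 1 100); do curl -s --resolve kubesphere.io:30205:192.168.0.88 kubesphere.io:30205 | grep "Hostname"; done | sort | uniq -c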

    Set Canary Release based on Request Header

4.3. Go to Application Workloads → Routes, open the detail page of the route canary, then go to More → Edit Annotations. Following the screenshot below, add an annotation nginx.ingress.kubernetes.io/canary-by-header: canary to the canary Ingress created above. This header tells the Ingress to route a matching request to the service specified in the canary Ingress.
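If you prefer the command line, the same annotation can be added with kubectl (the Ingress is named canary, as created above):

$ kubectl annotate ingress canary -n ingress-demo nginx.ingress.kubernetes.io/canary-by-header=canary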

4.4. Add different header values to the request and access the application domain name. More specifically:

    • When the request header is set to always, it will be routed to the canary.
    • When the header is set to never, it will never be routed to the canary.

Note: For any other value, the header will be ignored and the request will be compared against the other canary rules by precedence.

$ for i in $(seq 1 10); do curl -s -H "canary: never" --resolve kubesphere.io:30205:192.168.0.88 kubesphere.io:30205 | grep "Hostname"; done

    Request Header

Here we set canary: other-value in the header; the header is ignored, so the request falls back to the other canary rules and the canary Ingress with the 30% weight takes effect.
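Conversely, setting the header to always routes every request to the canary version; you can confirm this with the same loop:

$ for i in $(seq 1 10); do curl -s -H "canary: always" --resolve kubesphere.io:30205:192.168.0.88 kubesphere.io:30205 | grep "Hostname"; done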

4.5. Now we can add another annotation, nginx.ingress.kubernetes.io/canary-by-header-value: user-value, which notifies the Ingress to route a request to the service specified in the canary Ingress when the canary header carries exactly this value.

    Canary by Header Value
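As with the previous annotation, this can also be added from the command line instead of the console:

$ kubectl annotate ingress canary -n ingress-demo nginx.ingress.kubernetes.io/canary-by-header-value=user-value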

4.6. Access the domain name as follows. When the request header canary is set to user-value, the request is routed to the canary version. For any other header value, the header is ignored and the request is compared against the other canary rules by precedence.

$ for i in $(seq 1 10); do curl -s -H "canary: user-value" --resolve kubesphere.io:30205:192.168.0.88 kubesphere.io:30205 | grep "Hostname"; done

Set Canary Release based on Cookie
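The cookie rule works the same way as the header rule: the nginx.ingress.kubernetes.io/canary-by-cookie annotation names a cookie; when that cookie is set to always the request is routed to the canary version, when it is set to never it is never routed there, and any other value is ignored so the request falls through to the remaining rules. A sketch using an illustrative cookie name canary_user (a request that does not match the header rules from the previous steps falls through to the cookie rule):

$ kubectl annotate ingress canary -n ingress-demo nginx.ingress.kubernetes.io/canary-by-cookie=canary_user
$ for i in $(seq 1 10); do curl -s --cookie "canary_user=always" --resolve kubesphere.io:30205:192.168.0.88 kubesphere.io:30205 | grep "Hostname"; done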

Grayscale release helps ensure overall system stability: you can find problems and make adjustments while the new version only receives a small share of traffic, minimizing the impact. We have demonstrated the four annotation rules of Ingress-Nginx, which offer a convenient and lightweight way to implement grayscale release without Istio.