Advanced Deployment Strategies

    Blue-Green Deployment

    Blue-green deployments involve running two versions of an application at the same time and moving traffic from the in-production version (the green version) to the newer version (the blue version). You can use a rolling strategy or switch services in a route.

    Blue-green deployments use two deployment configurations, each exposed through a different service. Both versions run at the same time, and the one in production is whichever service the route currently specifies. You can create a new route to the new version and test it. When ready, change the service in the production route to point to the new service, and the new (blue) version is live.

    If necessary, you can roll back to the older (green) version by switching the service back to the previous version.
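
    For illustration, here is a minimal sketch of the only route field that changes during the cutover, using the names from the walkthrough below (the YAML shape is the standard OpenShift route spec):

      apiVersion: route.openshift.io/v1
      kind: Route
      metadata:
        name: bluegreen-example
      spec:
        to:
          kind: Service
          name: example-green   # change to example-blue to cut over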

    Using a Route and Two Services

    This example sets up two deployment configurations: one for the stable version (the green version) and the other for the newer version (the blue version).

    A route points to a service, and can be changed to point to a different service at any time. As a developer, you can test the new version of your code by connecting to the new service before your production traffic is routed to it.

    Routes are intended for web (HTTP and HTTPS) traffic, so this technique is best suited for web applications.

    1. Create two copies of the example application:

      1. $ oc new-app openshift/deployment-example:v1 --name=example-green
      2. $ oc new-app openshift/deployment-example:v2 --name=example-blue

      This creates two independent application components: one running the v1 image under the example-green service, and one running the v2 image under the example-blue service.

    2. Create a route that points to the old service:

      1. $ oc expose svc/example-green --name=bluegreen-example
    3. Browse to the application at bluegreen-example.<project>.<router_domain> to verify you see the v1 image.

    4. Edit the route and change the service name to example-blue:

      1. $ oc patch route/bluegreen-example -p '{"spec":{"to":{"name":"example-blue"}}}'

    5. To verify that the route has changed, refresh the browser until you see the v2 image.

    A/B Deployment

    The A/B deployment strategy lets you try a new version of the application in a limited way in the production environment. You can specify that the production version gets most of the user requests while a limited fraction of requests go to the new version. Since you control the portion of requests to each version, as testing progresses you can increase the fraction of requests to the new version and ultimately stop using the previous version. As you adjust the request load on each version, the number of pods in each service may need to be scaled as well to provide the expected performance.

    In addition to upgrading software, you can use this feature to experiment with versions of the user interface. Since some users get the old version and some the new, you can evaluate the user’s reaction to the different versions to inform design decisions.

    For this to be effective, both the old and new versions need to be similar enough that both can run at the same time. This is common with bug fix releases and when new features do not interfere with the old. The versions need to properly work together.

    OKD supports the A/B deployment strategy through the web console as well as the command line interface.

    Load Balancing for A/B Testing

    The user sets up a route with multiple services. Each service handles a version of the application.

    Each service is assigned a weight and the portion of requests to each service is the service_weight divided by the sum_of_weights. The weight for each service is distributed to the service’s endpoints so that the sum of the endpoint weights is the service weight.
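
    For example, if service A has weight 198 and service B has weight 2, the sum of weights is 200, so A receives 198/200 = 99% of requests and B receives 2/200 = 1%.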

    The route can have up to four services. The weight for each service can be between 0 and 256. When the weight is 0, the service does not participate in load balancing but continues to serve existing persistent connections. When the service weight is not 0, each endpoint has a minimum weight of 1. Because of this, a service with many endpoints can end up with a higher weight than desired. In this case, reduce the number of pods to get the desired load-balancing weight. See the Alternate Backends and Weights section for more information.
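
    As a concrete illustration of the endpoint minimum: a service assigned weight 2 but running 10 pods ends up with an effective weight of at least 10, because each of its 10 endpoints receives the minimum weight of 1; scaling the service down to 2 pods restores the intended weight of 2.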

    The web console allows users to set the weighting and shows the balance between services. To set up the A/B environment:

    1. Create the two applications and give them different names. Each creates a deployment configuration. The applications are versions of the same program; one is usually the current production version and the other the proposed new version:

      1. $ oc new-app openshift/deployment-example --name=ab-example-a
      2. $ oc new-app openshift/deployment-example:v2 --name=ab-example-b
    2. Expose the deployment configuration to create a service:

      1. $ oc expose dc/ab-example-a --name=ab-example-A
      2. $ oc expose dc/ab-example-b --name=ab-example-B

      At this point, both applications are deployed, running, and exposed as services.

    3. Make the application available externally via a route. You can expose either service at this point; it may be convenient to expose the current production version first and modify the route later to add the new version.

      1. $ oc expose svc/ab-example-A --name=ab-example

      Browse to the application at ab-example.<project>.<router_domain> to verify that you see the desired version.

    4. When you deploy the route, the router balances traffic according to the weights specified for the services. At this point, there is a single service with default weight=1, so all requests go to it. Adding the other service as an alternateBackends entry and adjusting the weights brings the A/B setup to life. You can do this with the oc set route-backends command or by editing the route.

      Setting a service's weight to 0 with oc set route-backends means the service does not participate in load balancing but continues to serve existing persistent connections.

      To edit the route, run:

      1. $ oc edit route <route-name>
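
      In the editor, the relevant fields look roughly like the following sketch, assuming the route is named web and uses the weights from the CLI example below:

        spec:
          to:
            kind: Service
            name: ab-example-A
            weight: 198
          alternateBackends:
          - kind: Service
            name: ab-example-B
            weight: 2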

    Managing Weights Using the Web Console

    1. Navigate to the Route details page (Applications/Routes).

    2. Select Edit from the Actions menu.

    3. Check Split traffic across multiple services.

    4. The Service Weights slider sets the percentage of traffic sent to each service.

      Service Weights

      For traffic split between more than two services, the relative weights are specified by integers between 0 and 256 for each service.

      Traffic weightings are shown on the Overview in the expanded rows of the applications between which traffic is split.

    Managing Weights Using the CLI

    This command manages the services and corresponding weights load balanced by the route.

    1. $ oc set route-backends ROUTENAME [--zero|--equal] [--adjust] SERVICE=WEIGHT[%] [...] [options]

    For example, the following sets ab-example-A as the primary service with weight=198 and ab-example-B as the first alternate service with weight=2:

      1. $ oc set route-backends web ab-example-A=198 ab-example-B=2

      This means 99% of traffic will be sent to service ab-example-A and 1% to service ab-example-B.

      This command does not scale the deployment configurations. You might need to scale them so that there are enough pods to handle the request load.
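
      For example, to scale the primary service's deployment configuration (the replica count here is illustrative only):

      1. $ oc scale dc/ab-example-a --replicas=4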

      The command with no flags displays the current configuration.

      1. $ oc set route-backends web
      2. NAME        KIND     TO            WEIGHT
      3. routes/web  Service  ab-example-A  198 (99%)
      4. routes/web  Service  ab-example-B  2 (1%)

      The --adjust flag allows you to alter the weight of an individual service relative to itself or to the primary service. Specifying a percentage adjusts the service relative to either the primary or the first alternate (if you specify the primary). If there are other backends, their weights are kept proportional to the changed one.

      1. $ oc set route-backends web --adjust ab-example-B=5%
      2. $ oc set route-backends web --adjust ab-example-B=+15%
      The --equal flag sets the weight of all services to 100:

      1. $ oc set route-backends web --equal

      The --zero flag sets the weight of all services to 0. All requests will return with a 503 error.
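
      For example, using the web route from above:

      1. $ oc set route-backends web --zero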

      One Service, Multiple Deployment Configurations

      Create the first shard, setting a unique value for the version it serves:

        1. $ oc new-app openshift/deployment-example --name=ab-example-a --labels=ab-example=true SUBTITLE="shard A"

      If you have the router installed, make the application available via a route (or use the service IP directly):

        1. $ oc expose svc/ab-example-a --name=ab-example

      Browse to the application at ab-example.<project>.<router_domain> to verify you see the v1 image.

      3. Create a second shard based on the same source image as the first shard but a different tagged version, and set a unique value:

        1. $ oc new-app openshift/deployment-example:v2 --name=ab-example-b --labels=ab-example=true SUBTITLE="shard B" COLOR="red"
      4. Edit the newly created shard to set a label ab-example=true that will be common to all shards:

        1. $ oc edit dc/ab-example-b

        In the editor, add the line ab-example: "true" underneath spec.selector and spec.template.metadata.labels alongside the existing deploymentconfig=ab-example-b label. Save and exit the editor.
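
        After saving, the relevant fragment of the deployment configuration should look roughly like this sketch:

          spec:
            selector:
              ab-example: "true"
              deploymentconfig: ab-example-b
            template:
              metadata:
                labels:
                  ab-example: "true"
                  deploymentconfig: ab-example-b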

      5. Trigger a re-deployment of the second shard to pick up the new labels:

        1. $ oc rollout latest dc/ab-example-b
      6. At this point, both sets of pods are being served under the route. However, because both browsers (by leaving a connection open) and the router (by default, through a cookie) attempt to preserve your connection to a back-end server, you might not see both shards being returned to you. To force your browser to one or the other shard, use the scale command:

        1. $ oc scale dc/ab-example-a --replicas=0

          Refreshing your browser should show v2 and shard B (in red).

        2. $ oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0

          Refreshing your browser should show v1 and shard A (in blue).

        If you trigger a deployment on either shard, only the pods in that shard are affected. You can trigger a deployment by changing the SUBTITLE environment variable in either deployment configuration (oc edit dc/ab-example-a or oc edit dc/ab-example-b). You can add additional shards by repeating steps 3 to 5.

        Proxy Shard / Traffic Splitter

        In production environments, you can precisely control the distribution of traffic that lands on a particular shard. When dealing with large numbers of instances, you can use the relative scale of individual shards to implement percentage-based traffic. That combines well with a proxy shard, which forwards or splits the traffic it receives to a separate service or application running elsewhere.

        In the simplest configuration, the proxy forwards requests unchanged. In more complex setups, you can duplicate incoming requests and send them both to a separate cluster and to a local instance of the application, and compare the results. Other patterns include keeping the caches of a disaster recovery (DR) installation warm, or sampling incoming traffic for analysis purposes.

        While an implementation is beyond the scope of this example, any TCP (or UDP) proxy could be run under the desired shard. Use the oc scale command to alter the relative number of instances serving requests under the proxy shard. For more complex traffic management, consider customizing the OKD router with proportional balancing capabilities.
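
        As one illustration (not part of the original example), a generic forwarder such as socat could serve as the proxy process; the listening port and backend address here are hypothetical:

        1. $ socat TCP-LISTEN:8080,fork,reuseaddr TCP:ab-example-b.<project>.svc.cluster.local:8080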

        N-1 Compatibility

        Applications that have new code and old code running at the same time must be careful to ensure that data written by the new code can be read and handled (or gracefully ignored) by the old version of the code. This is sometimes called schema evolution and is a complex problem.

        This can take many forms: data stored on disk, in a database, in a temporary cache, or as part of a user's browser session. While most web applications can support rolling deployments, it is important to test and design your application to handle it.

        For some applications, the period of time that old code and new code is running side by side is short, so bugs or some failed user transactions are acceptable. For others, the failure pattern may result in the entire application becoming non-functional.

        One way to validate N-1 compatibility is to use an A/B deployment: run the old code and new code at the same time in a controlled way in a test environment, and verify that traffic that flows to the new deployment does not cause failures in the old deployment.

        Graceful Termination

        OKD and Kubernetes give application instances time to shut down before removing them from load balancing rotations. However, applications must ensure they cleanly terminate user connections as well before they exit.

        On shutdown, OKD will send a TERM signal to the processes in the container. Application code, on receiving SIGTERM, should stop accepting new connections. This will ensure that load balancers route traffic to other active instances. The application code should then wait until all open connections are closed (or gracefully terminate individual connections at the next opportunity) before exiting.
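
        As a minimal, self-contained sketch (not from the original) of that shutdown sequence in a container entrypoint:

          #!/bin/sh
          # On SIGTERM, stop accepting new work; the current loop iteration
          # stands in for draining open connections, then the script exits.
          shutting_down=0
          trap 'shutting_down=1' TERM
          while [ "$shutting_down" -eq 0 ]; do
            sleep 1   # stand-in for serving one request
          done
          exit 0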

        After the graceful termination period expires, a process that has not exited will be sent the KILL signal, which immediately ends the process. The terminationGracePeriodSeconds attribute of a pod or pod template controls the graceful termination period (default 30 seconds) and may be customized per application as necessary.
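
        For example, a pod spec fragment extending the period (the value shown is illustrative):

          apiVersion: v1
          kind: Pod
          metadata:
            name: example
          spec:
            terminationGracePeriodSeconds: 120   # default is 30 seconds
            containers:
            - name: app
              image: openshift/deployment-example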