Ingress Operator in OKD

    The Ingress Operator makes it possible for external clients to access your service by deploying and managing one or more HAProxy-based Ingress Controllers to handle routing. You can use the Ingress Operator to route traffic by specifying OKD Route and Kubernetes Ingress resources. Ingress Controller configuration fields, such as the endpointPublishingStrategy type and internal load balancing, provide ways to publish Ingress Controller endpoints.

    The Ingress configuration asset

    The installation program generates an asset with an Ingress resource in the config.openshift.io API group, cluster-ingress-02-config.yml.

    YAML Definition of the Ingress resource

    The installation program stores this asset in the cluster-ingress-02-config.yml file in the manifests/ directory. This Ingress resource defines the cluster-wide configuration for Ingress. This Ingress configuration is used as follows:
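
    The asset resembles the following; the domain value shown is illustrative and reflects your cluster's base domain:

    ```yaml
    apiVersion: config.openshift.io/v1
    kind: Ingress
    metadata:
      name: cluster
    spec:
      # Cluster-wide Ingress domain; example value
      domain: apps.openshiftdemos.com
    ```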

    • The Ingress Operator uses the domain from the cluster Ingress configuration as the domain for the default Ingress Controller.

    • The OpenShift API Server Operator uses the domain from the cluster Ingress configuration. This domain is also used when generating a default host for a Route resource that does not specify an explicit host.

    Ingress controller configuration parameters

    The ingresscontrollers.operator.openshift.io resource offers the following configuration parameters.

    All parameters are optional.

    Understanding TLS security profiles

    TLS security profiles provide a way for servers to regulate which ciphers a connecting client can use when connecting to the server.

    You can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are required by various OKD components. The OKD TLS security profiles are based on Mozilla recommended configurations.

    You can specify one of the following TLS security profiles for each component:

    Table 1. TLS security profiles

    Profile      Description

    Old

    This profile is intended for use with legacy clients or libraries. The profile is based on the Old backward compatibility recommended configuration.

    The Old profile requires a minimum TLS version of 1.0.

    For the Ingress Controller, the minimum TLS version is converted from 1.0 to 1.1.

    Intermediate

    This profile is the recommended configuration for the majority of clients. It is the default TLS security profile for the Ingress Controller and control plane. The profile is based on the Intermediate compatibility recommended configuration.

    The Intermediate profile requires a minimum TLS version of 1.2.

    Modern

    This profile is intended for use with modern clients that have no need for backwards compatibility. This profile is based on the Modern compatibility recommended configuration.

    The Modern profile requires a minimum TLS version of 1.3.

    The Modern profile is currently not supported.

    Custom

    This profile allows you to define the TLS version and ciphers to use.

    The OKD router enables the default set of TLS 1.3 cipher suites from the Red Hat distribution of OpenSSL. As a result, your cluster might accept TLS 1.3 connections and cipher suites even though TLS 1.3 is unsupported in OKD 4.6, 4.7, and 4.8.

    When using one of the predefined profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 might cause a new profile configuration to be applied, resulting in a rollout.

    Configuring the TLS security profile for the Ingress Controller

    To configure a TLS security profile for an Ingress Controller, edit the IngressController custom resource (CR) to specify a predefined or custom TLS security profile. If a TLS security profile is not configured, the default value is based on the TLS security profile set for the API server.

    Sample IngressController CR that configures the TLS security profile

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    ...
    spec:
      tlsSecurityProfile:
        old: {}
        type: Old
    ...

    The TLS security profile defines the minimum TLS version and the TLS ciphers for TLS connections for Ingress Controllers.

    You can see the ciphers and the minimum TLS version of the configured TLS security profile in the IngressController custom resource (CR) under Status.Tls Profile and the configured TLS security profile under Spec.Tls Security Profile. For the Custom TLS security profile, the specific ciphers and minimum TLS version are listed under both parameters.

    The HAProxy Ingress Controller image does not support TLS 1.3 and because the Modern profile requires TLS 1.3, it is not supported. The Ingress Operator converts the Modern profile to Intermediate. The Ingress Operator also converts the TLS 1.0 of an Old or Custom profile to 1.1, and TLS 1.3 of a Custom profile to 1.2.

    Prerequisites

    • You have access to the cluster as a user with the cluster-admin role.

    Procedure

    1. Edit the IngressController CR in the openshift-ingress-operator project to configure the TLS security profile:

      $ oc edit IngressController default -n openshift-ingress-operator
    2. Add the spec.tlsSecurityProfile field:

      Sample IngressController CR for a Custom profile

      apiVersion: operator.openshift.io/v1
      kind: IngressController
      ...
      spec:
        tlsSecurityProfile:
          type: Custom (1)
          custom: (2)
            ciphers: (3)
            - ECDHE-ECDSA-CHACHA20-POLY1305
            - ECDHE-RSA-CHACHA20-POLY1305
            - ECDHE-RSA-AES128-GCM-SHA256
            - ECDHE-ECDSA-AES128-GCM-SHA256
            minTLSVersion: VersionTLS11
      ...
      (1) Specify the TLS security profile type (Old, Intermediate, or Custom). The default is Intermediate.
      (2) Specify the appropriate field for the selected type:

      • old: {}

      • intermediate: {}

      • custom:

      (3) For the custom type, specify a list of TLS ciphers and the minimum accepted TLS version.
    3. Save the file to apply the changes.

    Verification

    • Verify that the profile is set in the IngressController CR:

      $ oc describe IngressController default -n openshift-ingress-operator

      Example output

      Name:         default
      Namespace:    openshift-ingress-operator
      Labels:       <none>
      Annotations:  <none>
      API Version:  operator.openshift.io/v1
      Kind:         IngressController
      ...
      Spec:
      ...
        Tls Security Profile:
          Custom:
            Ciphers:
              ECDHE-RSA-CHACHA20-POLY1305
              ECDHE-RSA-AES128-GCM-SHA256
              ECDHE-ECDSA-AES128-GCM-SHA256
            Min TLS Version:  VersionTLS11
          Type:               Custom
      ...

    Ingress controller endpoint publishing strategy

    NodePortService endpoint publishing strategy

    The NodePortService endpoint publishing strategy publishes the Ingress Controller using a Kubernetes NodePort service.

    In this configuration, the Ingress Controller deployment uses container networking. A NodePortService is created to publish the deployment. The specific node ports are dynamically allocated by OKD; however, to support static port allocations, your changes to the node port field of the managed NodePortService are preserved.

    The Ingress Operator ignores any updates to .spec.ports[].nodePort fields of the service.

    By default, ports are allocated automatically and you can access the port allocations for integrations. However, sometimes static port allocations are necessary to integrate with existing infrastructure which may not be easily reconfigured in response to dynamic ports. To achieve integrations with static node ports, you can update the managed service resource directly.
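
    As a sketch, assuming the managed service for the default Ingress Controller is named router-nodeport-default in the openshift-ingress namespace, a static port assignment might look like this fragment of the Service spec (the service name and port numbers are assumptions):

    ```yaml
    # Fragment of the managed NodePortService spec with statically pinned ports.
    # The service name (router-nodeport-default) and nodePort values are examples.
    spec:
      ports:
      - name: http
        port: 80
        nodePort: 30080   # statically chosen; the Ingress Operator does not revert it
      - name: https
        port: 443
        nodePort: 30443
    ```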

    For more information, see the Kubernetes Services documentation on NodePort.

    HostNetwork endpoint publishing strategy

    The HostNetwork endpoint publishing strategy publishes the Ingress Controller on node ports where the Ingress Controller is deployed.

    An Ingress controller with the HostNetwork endpoint publishing strategy can have only one pod replica per node. If you want n replicas, you must use at least n nodes where those replicas can be scheduled. Because each pod replica requests ports 80 and 443 on the node host where it is scheduled, a replica cannot be scheduled to a node if another pod on the same node is using those ports.
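
    A minimal sketch of an Ingress Controller that uses this strategy; the name and domain values are placeholders:

    ```yaml
    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: internal                       # placeholder name
      namespace: openshift-ingress-operator
    spec:
      domain: apps.internal.example.com    # placeholder domain
      replicas: 2                          # requires at least two schedulable nodes
      endpointPublishingStrategy:
        type: HostNetwork
    ```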

    View the default Ingress Controller

    The Ingress Operator is a core feature of OKD and is enabled out of the box.

    Every new OKD installation has an ingresscontroller named default. It can be supplemented with additional Ingress Controllers. If the default ingresscontroller is deleted, the Ingress Operator will automatically recreate it within a minute.

    Procedure

    • View the default Ingress Controller:

      $ oc describe --namespace=openshift-ingress-operator ingresscontroller/default

    View Ingress Operator status

    You can view and inspect the status of your Ingress Operator.

    Procedure

    • View your Ingress Operator status:

      $ oc describe clusteroperators/ingress

    View Ingress Controller logs

    You can view your Ingress Controller logs.

    Procedure

    • View your Ingress Controller logs:

      $ oc logs --namespace=openshift-ingress-operator deployments/ingress-operator

    View Ingress Controller status

    You can view the status of a particular Ingress Controller.

    Procedure

    • View the status of an Ingress Controller:

      $ oc describe --namespace=openshift-ingress-operator ingresscontroller/<name>

    Configuring the Ingress Controller

    Setting a custom default certificate

    As an administrator, you can configure an Ingress Controller to use a custom certificate by creating a Secret resource and editing the IngressController custom resource (CR).

    Prerequisites

    • You must have a certificate/key pair in PEM-encoded files, where the certificate is signed by a trusted certificate authority or by a private trusted certificate authority that you configured in a custom PKI.

    • Your certificate meets the following requirements:

      • The certificate is valid for the ingress domain.

      • The certificate uses the subjectAltName extension to specify a wildcard domain, such as *.apps.ocp4.example.com.

    • You must have an IngressController CR. You may use the default one:

      $ oc --namespace openshift-ingress-operator get ingresscontrollers

      Example output

      NAME      AGE
      default   10m

    If you have intermediate certificates, they must be included in the tls.crt file of the secret containing a custom default certificate. Order matters when specifying a certificate; list your intermediate certificate(s) after any server certificate(s).

    Procedure

    The following assumes that the custom certificate and key pair are in the tls.crt and tls.key files in the current working directory. Substitute the actual path names for tls.crt and tls.key. You also may substitute another name for custom-certs-default when creating the Secret resource and referencing it in the IngressController CR.

    This action will cause the Ingress Controller to be redeployed, using a rolling deployment strategy.

    1. Create a Secret resource containing the custom certificate in the openshift-ingress namespace using the tls.crt and tls.key files.

      $ oc --namespace openshift-ingress create secret tls custom-certs-default --cert=tls.crt --key=tls.key
    2. Update the IngressController CR to reference the new certificate secret:

      $ oc patch --type=merge --namespace openshift-ingress-operator ingresscontrollers/default \
        --patch '{"spec":{"defaultCertificate":{"name":"custom-certs-default"}}}'
    3. Verify the update was effective:

      $ echo Q |\
        openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null |\
        openssl x509 -noout -subject -issuer -enddate

      where:

      <domain>

      Specifies the base domain name for your cluster.

      The command output shows the subject, issuer, and expiry date of the certificate now being served; these should match your custom certificate. The certificate secret name should match the value used to update the CR.

    Once the IngressController CR has been modified, the Ingress Operator updates the Ingress Controller’s deployment to use the custom certificate.

    Removing a custom default certificate

    As an administrator, you can remove a custom certificate that you configured an Ingress Controller to use.

    Prerequisites

    • You have access to the cluster as a user with the cluster-admin role.

    • You have installed the OpenShift CLI (oc).

    • You previously configured a custom default certificate for the Ingress Controller.

    Procedure

    • To remove the custom certificate and restore the certificate that ships with OKD, enter the following command:

      $ oc patch -n openshift-ingress-operator ingresscontrollers/default \
        --type json -p $'- op: remove\n path: /spec/defaultCertificate'

      There can be a delay while the cluster reconciles the new certificate configuration.

    Verification

    • To confirm that the original cluster certificate is restored, enter the following command:

      $ echo Q | \
        openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null | \
        openssl x509 -noout -subject -issuer -enddate

      where:

      <domain>

      Specifies the base domain name for your cluster.

      Example output

      subject=CN = *.apps.<domain>
      issuer=CN = ingress-operator@1620633373
      notAfter=May 10 10:44:36 2023 GMT

    Scaling an Ingress Controller

    Manually scale an Ingress Controller to meet routing performance or availability requirements, such as the need to increase throughput. oc commands are used to scale the IngressController resource. The following procedure provides an example for scaling up the default IngressController.

    Procedure

    1. View the current number of available replicas for the default IngressController:

      $ oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{$.status.availableReplicas}'

      Example output

      2
    2. Scale the default IngressController to the desired number of replicas using the oc patch command. The following example scales the default IngressController to 3 replicas:

      $ oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"replicas": 3}}' --type=merge

      Example output

      ingresscontroller.operator.openshift.io/default patched
    3. Verify that the default IngressController scaled to the number of replicas that you specified:

      $ oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{$.status.availableReplicas}'

      Example output

      3

    Scaling is not an immediate action, as it takes time to create the desired number of replicas.

    Configuring Ingress access logging

    You can configure the Ingress Controller to enable access logs. If you have clusters that do not receive much traffic, then you can log to a sidecar. If you have high traffic clusters, to avoid exceeding the capacity of the logging stack or to integrate with a logging infrastructure outside of OKD, you can forward logs to a custom syslog endpoint. You can also specify the format for access logs.

    Syslog is needed for high-traffic clusters where access logs could exceed the capacity of the cluster logging stack, or for environments where a logging solution must integrate with an existing Syslog infrastructure. These use cases can overlap.

    Prerequisites

    • Log in as a user with cluster-admin privileges.

    Procedure

    Configure Ingress access logging to a sidecar.

    • To configure Ingress access logging, you must specify a destination using spec.logging.access.destination. To specify logging to a sidecar container, you must specify Container for spec.logging.access.destination.type. The following example is an Ingress Controller definition that logs to a Container destination:

      apiVersion: operator.openshift.io/v1
      kind: IngressController
      metadata:
        name: default
        namespace: openshift-ingress-operator
      spec:
        replicas: 2
        endpointPublishingStrategy:
          type: NodePortService
        logging:
          access:
            destination:
              type: Container
    • When you configure the Ingress Controller to log to a sidecar, the operator creates a container named logs inside the Ingress Controller Pod:

      $ oc -n openshift-ingress logs deployment.apps/router-default -c logs

      Example output

      2020-05-11T19:11:50.135710+00:00 router-default-57dfc6cd95-bpmk6 router-default-57dfc6cd95-bpmk6 haproxy[108]: 174.19.21.82:39654 [11/May/2020:19:11:50.133] public be_http:hello-openshift:hello-openshift/pod:hello-openshift:hello-openshift:10.128.2.12:8080 0/0/1/0/1 200 142 - - --NI 1/1/0/0/0 0/0 "GET / HTTP/1.1"

    Configure Ingress access logging to a Syslog endpoint.

    • To configure Ingress access logging, you must specify a destination using spec.logging.access.destination. To specify logging to a Syslog endpoint destination, you must specify Syslog for spec.logging.access.destination.type. If the destination type is Syslog, you must also specify a destination endpoint using spec.logging.access.destination.syslog.endpoint and you can specify a facility using spec.logging.access.destination.syslog.facility. The following example is an Ingress Controller definition that logs to a Syslog destination:

      apiVersion: operator.openshift.io/v1
      kind: IngressController
      metadata:
        name: default
      spec:
        replicas: 2
        endpointPublishingStrategy:
          type: NodePortService
        logging:
          access:
            destination:
              type: Syslog
              syslog:
                address: 1.2.3.4
                port: 10514

      The syslog destination port must be a UDP port.

    Configure Ingress access logging with a specific log format.

    • You can specify spec.logging.access.httpLogFormat to customize the log format. The following example is an Ingress Controller definition that logs to a syslog endpoint with IP address 1.2.3.4 and port 10514:

      apiVersion: operator.openshift.io/v1
      kind: IngressController
      metadata:
        name: default
        namespace: openshift-ingress-operator
      spec:
        replicas: 2
        endpointPublishingStrategy:
          type: NodePortService
        logging:
          access:
            destination:
              type: Syslog
              syslog:
                address: 1.2.3.4
                port: 10514
            httpLogFormat: '%ci:%cp [%t] %ft %b/%s %B %bq %HM %HU %HV'

    Disable Ingress access logging.

    • To disable Ingress access logging, leave spec.logging or spec.logging.access empty:

      apiVersion: operator.openshift.io/v1
      kind: IngressController
      metadata:
        name: default
        namespace: openshift-ingress-operator
      spec:
        replicas: 2
        endpointPublishingStrategy:
          type: NodePortService
        logging:
          access: null

    Ingress Controller sharding

    As the primary mechanism for traffic to enter the cluster, the demands on the Ingress Controller, or router, can be significant. As a cluster administrator, you can shard the routes to:

    • Balance Ingress Controllers, or routers, with several routes to speed up responses to changes.

    • Allocate certain routes to have different reliability guarantees than other routes.

    • Allow certain Ingress Controllers to have different policies defined.

    • Allow only specific routes to use additional features.

    • Expose different routes on different addresses so that internal and external users can see different routes, for example.

    An Ingress Controller can use either route labels or namespace labels as a sharding method.

    Configuring Ingress Controller sharding by using route labels

    Ingress Controller sharding by using route labels means that the Ingress Controller serves any route in any namespace that is selected by the route selector.

    Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another.

    Procedure

    1. Edit the router-internal.yaml file:
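
      The file content is not shown in this section; as a sketch, router-internal.yaml for route-label sharding might contain an IngressController with a routeSelector that matches the label type: sharded (the domain is a placeholder):

      ```yaml
      apiVersion: operator.openshift.io/v1
      kind: IngressController
      metadata:
        name: sharded
        namespace: openshift-ingress-operator
      spec:
        domain: <apps-sharded.basedomain.example.net>
        nodePlacement:
          nodeSelector:
            matchLabels:
              node-role.kubernetes.io/worker: ""
        routeSelector:        # selects routes by label, rather than namespaces
          matchLabels:
            type: sharded
      ```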

    2. Apply the Ingress Controller router-internal.yaml file:

      # oc apply -f router-internal.yaml

      The Ingress Controller selects routes in any namespace that have the label type: sharded.

    Configuring Ingress Controller sharding by using namespace labels

    Ingress Controller sharding by using namespace labels means that the Ingress Controller serves any route in any namespace that is selected by the namespace selector.

    Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another.

    Procedure

    1. Edit the router-internal.yaml file:

      # cat router-internal.yaml

      Example output

      apiVersion: v1
      items:
      - apiVersion: operator.openshift.io/v1
        kind: IngressController
        metadata:
          name: sharded
          namespace: openshift-ingress-operator
        spec:
          domain: <apps-sharded.basedomain.example.net>
          nodePlacement:
            nodeSelector:
              matchLabels:
                node-role.kubernetes.io/worker: ""
          namespaceSelector:
            matchLabels:
              type: sharded
        status: {}
      kind: List
      metadata:
        resourceVersion: ""
        selfLink: ""
    2. Apply the Ingress Controller router-internal.yaml file:

      # oc apply -f router-internal.yaml

      The Ingress Controller selects routes in any namespace that is selected by the namespace selector, that is, any namespace that has the label type: sharded.

    Configuring an Ingress Controller to use an internal load balancer

    When creating an Ingress Controller on cloud platforms, the Ingress Controller is published by a public cloud load balancer by default. As an administrator, you can create an Ingress Controller that uses an internal cloud load balancer.

    If your cloud provider is Microsoft Azure, you must have at least one public load balancer that points to your nodes. If you do not, all of your nodes will lose egress connectivity to the internet.

    If you want to change the scope for an IngressController object, you must delete and then recreate that IngressController object. You cannot change the .spec.endpointPublishingStrategy.loadBalancer.scope parameter after the custom resource (CR) is created.


    Prerequisites

    • Install the OpenShift CLI (oc).

    • Log in as a user with cluster-admin privileges.

    Procedure

    1. Create an IngressController custom resource (CR) in a file named <name>-ingress-controller.yaml, such as in the following example:

      apiVersion: operator.openshift.io/v1
      kind: IngressController
      metadata:
        namespace: openshift-ingress-operator
        name: <name> (1)
      spec:
        domain: <domain> (2)
        endpointPublishingStrategy:
          type: LoadBalancerService
          loadBalancer:
            scope: Internal (3)
      (1) Replace <name> with a name for the IngressController object.
      (2) Specify the domain for the application published by the controller.
      (3) Specify a value of Internal to use an internal load balancer.
    2. Create the Ingress Controller defined in the previous step by running the following command:

      $ oc create -f <name>-ingress-controller.yaml (1)
      (1) Replace <name> with the name of the IngressController object.
    3. Optional: Confirm that the Ingress Controller was created by running the following command:

      $ oc --all-namespaces=true get ingresscontrollers

    Configuring the default Ingress Controller for your cluster to be internal

    You can configure the default Ingress Controller for your cluster to be internal by deleting and recreating it.

    If your cloud provider is Microsoft Azure, you must have at least one public load balancer that points to your nodes. If you do not, all of your nodes will lose egress connectivity to the internet.

    If you want to change the scope for an IngressController object, you must delete and then recreate that IngressController object. You cannot change the .spec.endpointPublishingStrategy.loadBalancer.scope parameter after the custom resource (CR) is created.

    Prerequisites

    • Install the OpenShift CLI (oc).

    • Log in as a user with cluster-admin privileges.

    Procedure

    1. Configure the default Ingress Controller for your cluster to be internal by deleting and recreating it.

      $ oc replace --force --wait --filename - <<EOF
      apiVersion: operator.openshift.io/v1
      kind: IngressController
      metadata:
        namespace: openshift-ingress-operator
        name: default
      spec:
        endpointPublishingStrategy:
          type: LoadBalancerService
          loadBalancer:
            scope: Internal
      EOF

    Configuring the route admission policy

    Administrators and application developers can run applications in multiple namespaces with the same domain name. This can be useful for organizations where multiple teams develop microservices that are exposed on the same hostname.

    Allowing claims across namespaces should only be enabled for clusters with trust between namespaces, otherwise a malicious user could take over a hostname. For this reason, the default admission policy disallows hostname claims across namespaces.

    Prerequisites

    • Cluster administrator privileges.

    Procedure

    • Edit the .spec.routeAdmission field of the ingresscontroller resource by using the following command:

      $ oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{"spec":{"routeAdmission":{"namespaceOwnership":"InterNamespaceAllowed"}}}' --type=merge

      Sample Ingress Controller configuration

      spec:
        routeAdmission:
          namespaceOwnership: InterNamespaceAllowed
      ...
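
      With InterNamespaceAllowed, routes in different namespaces may claim non-overlapping paths of the same hostname. A hedged sketch, in which the namespaces, service names, and hostname are all examples:

      ```yaml
      # Route in namespace team-a claims www.example.com/
      apiVersion: route.openshift.io/v1
      kind: Route
      metadata:
        name: frontend
        namespace: team-a
      spec:
        host: www.example.com
        path: /
        to:
          kind: Service
          name: frontend
      ---
      # Route in namespace team-b claims www.example.com/api on the same host
      apiVersion: route.openshift.io/v1
      kind: Route
      metadata:
        name: api
        namespace: team-b
      spec:
        host: www.example.com
        path: /api
        to:
          kind: Service
          name: api
      ```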

    Using wildcard routes

    The HAProxy Ingress Controller has support for wildcard routes. The Ingress Operator uses wildcardPolicy to configure the ROUTER_ALLOW_WILDCARD_ROUTES environment variable of the Ingress Controller.

    The default behavior of the Ingress Controller is to admit routes with a wildcard policy of None, which is backwards compatible with existing IngressController resources.

    Procedure

    1. Configure the wildcard policy.

      1. Use the following command to edit the IngressController resource:

        $ oc edit IngressController
      2. Under spec, set the wildcardPolicy field to WildcardsDisallowed or WildcardsAllowed:

        spec:
          routeAdmission:
            wildcardPolicy: WildcardsDisallowed # or WildcardsAllowed

    Using X-Forwarded headers

    You can configure the HAProxy Ingress Controller to specify a policy for how to handle HTTP headers, including Forwarded and X-Forwarded-For. The Ingress Operator uses the HTTPHeaders field to configure the ROUTER_SET_FORWARDED_HEADERS environment variable of the Ingress Controller.

    Procedure

    1. Configure the HTTPHeaders field for the Ingress Controller.

      1. Use the following command to edit the IngressController resource:

        $ oc edit IngressController
      2. Under spec, set the HTTPHeaders policy field to Append, Replace, IfNone, or Never:

        apiVersion: operator.openshift.io/v1
        kind: IngressController
        metadata:
          name: default
          namespace: openshift-ingress-operator
        spec:
          httpHeaders:
            forwardedHeaderPolicy: Append

    Example use cases

    As a cluster administrator, you can:

    • Configure an external proxy that injects the X-Forwarded-For header into each request before forwarding it to an Ingress Controller.

      To configure the Ingress Controller to pass the header through unmodified, you specify the never policy. The Ingress Controller then never sets the headers, and applications receive only the headers that the external proxy provides.

    • Configure the Ingress Controller to pass the X-Forwarded-For header that your external proxy sets on external cluster requests through unmodified.

      To configure the Ingress Controller to set the X-Forwarded-For header on internal cluster requests, which do not go through the external proxy, specify the if-none policy. If an HTTP request already has the header set through the external proxy, then the Ingress Controller preserves it. If the header is absent because the request did not come through the proxy, then the Ingress Controller adds the header.

    As an application developer, you can:

    • Configure an application-specific external proxy that injects the X-Forwarded-For header.

      To configure an Ingress Controller to pass the header through unmodified for an application’s Route, without affecting the policy for other Routes, add an annotation haproxy.router.openshift.io/set-forwarded-headers: if-none or haproxy.router.openshift.io/set-forwarded-headers: never on the Route for the application.
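
      For example, a Route carrying this annotation is handled with the if-none policy regardless of the Ingress Controller's cluster-wide setting; the route and service names here are placeholders:

      ```yaml
      apiVersion: route.openshift.io/v1
      kind: Route
      metadata:
        name: example-app              # placeholder
        annotations:
          haproxy.router.openshift.io/set-forwarded-headers: if-none
      spec:
        to:
          kind: Service
          name: example-app            # placeholder
      ```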

    Enabling HTTP/2 Ingress connectivity

    You can enable transparent end-to-end HTTP/2 connectivity in HAProxy. It allows application owners to make use of HTTP/2 protocol capabilities, including single connection, header compression, binary streams, and more.

    You can enable HTTP/2 connectivity for an individual Ingress Controller or for the entire cluster.

    To enable the use of HTTP/2 for the connection from the client to HAProxy, a route must specify a custom certificate. A route that uses the default certificate cannot use HTTP/2. This restriction is necessary to avoid problems from connection coalescing, where the client re-uses a connection for different routes that use the same certificate.

    The connection from HAProxy to the application pod can use HTTP/2 only for re-encrypt routes and not for edge-terminated or insecure routes. This restriction is because HAProxy uses Application-Level Protocol Negotiation (ALPN), which is a TLS extension, to negotiate the use of HTTP/2 with the back-end. The implication is that end-to-end HTTP/2 is possible with passthrough and re-encrypt and not with insecure or edge-terminated routes.
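
    As an illustration, a re-encrypt route of roughly the following shape (the names and host are placeholders) can carry HTTP/2 to the pod once HTTP/2 is enabled, because ALPN can be negotiated on the TLS connection to the back-end; as noted above, a custom certificate on the route is also required for the client-facing connection:

    ```yaml
    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: http2-app                      # placeholder
    spec:
      host: http2-app.apps.example.com     # placeholder
      tls:
        termination: reencrypt   # TLS to the pod, so ALPN/HTTP-2 can be negotiated
      to:
        kind: Service
        name: http2-app                    # placeholder
    ```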

    For non-passthrough routes, the Ingress Controller negotiates its connection to the application independently of the connection from the client. This means a client may connect to the Ingress Controller and negotiate HTTP/1.1, and the Ingress Controller may then connect to the application, negotiate HTTP/2, and forward the request from the client HTTP/1.1 connection using the HTTP/2 connection to the application. This poses a problem if the client subsequently tries to upgrade its connection from HTTP/1.1 to the WebSocket protocol, because the Ingress Controller cannot forward WebSocket to HTTP/2 and cannot upgrade its HTTP/2 connection to WebSocket. Consequently, if you have an application that is intended to accept WebSocket connections, it must not allow negotiating the HTTP/2 protocol or else clients will fail to upgrade to the WebSocket protocol.

    Procedure

    Enable HTTP/2 on a single Ingress Controller.

    • To enable HTTP/2 on an Ingress Controller, enter the oc annotate command:

      $ oc -n openshift-ingress-operator annotate ingresscontrollers/<ingresscontroller_name> ingress.operator.openshift.io/default-enable-http2=true

      Replace <ingresscontroller_name> with the name of the Ingress Controller to annotate.

    Enable HTTP/2 on the entire cluster.

    • To enable HTTP/2 for the entire cluster, enter the oc annotate command:

      $ oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=true

    Additional resources