Configuring a private cluster

    By default, OKD is provisioned using publicly accessible DNS and endpoints. You can set the DNS, Ingress Controller, and API server to private after you deploy your cluster.

    If you install OKD on installer-provisioned infrastructure, the installation program creates records in a pre-existing public zone and, where possible, creates a private zone for the cluster’s own DNS resolution. In both the public zone and the private zone, the installation program or cluster creates DNS entries for *.apps, for the Ingress object, and api, for the API server.

    The *.apps records in the public and private zone are identical, so when you delete the public zone, the private zone seamlessly provides all DNS resolution for the cluster.
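    For example, with a hypothetical cluster named mycluster under the base domain example.com, both zones answer for the same names:

    ```shell
    # Hypothetical names for illustration; substitute your cluster name and base domain.
    # The api record resolves the API server; any name under *.apps resolves
    # through the wildcard record that serves the Ingress Controller.
    dig +short api.mycluster.example.com
    dig +short console-openshift-console.apps.mycluster.example.com
    ```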

    Because the default Ingress object is created as public, the load balancer is internet-facing and in the public subnets. You can replace the default Ingress Controller with an internal one.

    By default, the installation program creates appropriate network load balancers for the API server to use for both internal and external traffic.

    On Amazon Web Services (AWS), separate public and private load balancers are created. The load balancers are identical except that an additional port is available on the internal one for use within the cluster. Although the installation program automatically creates or destroys the load balancer based on API server requirements, the cluster does not manage or maintain them. As long as you preserve the cluster’s access to the API server, you can manually modify or move the load balancers. For the public load balancer, port 6443 is open and the health check is configured for HTTPS against the /readyz path.
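    The health check described above can be exercised by hand; a sketch, assuming a hypothetical cluster domain:

    ```shell
    # Query the same endpoint the load balancer health check probes.
    # api.mycluster.example.com is a placeholder; -k skips TLS verification
    # because the endpoint may be signed by an internal CA.
    curl -k https://api.mycluster.example.com:6443/readyz
    ```

    A ready API server responds with HTTP 200.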

    On Google Cloud Platform, a single load balancer is created to manage both internal and external API traffic, so you do not need to modify the load balancer.

    On Microsoft Azure, both public and private load balancers are created. However, because of limitations in the current implementation, you retain both load balancers in a private cluster.

    After you deploy a cluster, you can modify its DNS to use only a private zone.

    Procedure

    1. Review the DNS custom resource for your cluster by running the following command:

      $ oc get dnses.config.openshift.io/cluster -o yaml

      Example output

      apiVersion: config.openshift.io/v1
      kind: DNS
      metadata:
        creationTimestamp: "2019-10-25T18:27:09Z"
        generation: 2
        name: cluster
        resourceVersion: "37966"
        selfLink: /apis/config.openshift.io/v1/dnses/cluster
        uid: 0e714746-f755-11f9-9cb1-02ff55d8f976
      spec:
        baseDomain: <base_domain>
        privateZone:
          tags:
            Name: <infrastructure_id>-int
            kubernetes.io/cluster/<infrastructure_id>: owned
        publicZone:
          id: Z2XXXXXXXXXXA4
      status: {}

      Note that the spec section contains both a private and a public zone.

    2. Patch the DNS custom resource to remove the public zone:

      $ oc patch dnses.config.openshift.io/cluster --type=merge --patch='{"spec": {"publicZone": null}}'

      Example output

      dns.config.openshift.io/cluster patched

      Because the Ingress Controller consults the DNS definition when it creates objects, only private records are created when you create or modify Ingress objects.

      DNS records for the existing Ingress objects are not modified when you remove the public zone.
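      To see which DNS records the Ingress Operator currently manages, and which zones they target, you can inspect its DNSRecord resources:

      ```shell
      # DNSRecord custom resources live in the Ingress Operator's namespace.
      # Each record's spec and status show the zones it is published to.
      oc -n openshift-ingress-operator get dnsrecords -o yaml
      ```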

      Example output

        apiVersion: config.openshift.io/v1
        kind: DNS
        metadata:
          creationTimestamp: "2019-10-25T18:27:09Z"
          generation: 2
          name: cluster
          resourceVersion: "37966"
          selfLink: /apis/config.openshift.io/v1/dnses/cluster
          uid: 0e714746-f755-11f9-9cb1-02ff55d8f976
        spec:
          baseDomain: <base_domain>
          privateZone:
            tags:
              Name: <infrastructure_id>-int
              kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned
        status: {}

      After you deploy a cluster, you can modify its Ingress Controller to use only a private zone.

      Procedure

      1. Modify the default Ingress Controller to use only an internal endpoint:

        $ oc replace --force --wait --filename - <<EOF
        apiVersion: operator.openshift.io/v1
        kind: IngressController
        metadata:
          namespace: openshift-ingress-operator
          name: default
        spec:
          endpointPublishingStrategy:
            type: LoadBalancerService
            loadBalancer:
              scope: Internal
        EOF

        Example output

        ingresscontroller.operator.openshift.io "default" deleted
        ingresscontroller.operator.openshift.io/default replaced

        The public DNS entry is removed, and the private zone entry is updated.

      After you deploy a cluster to Amazon Web Services (AWS) or Microsoft Azure, you can reconfigure the API server to use only the private zone.

      Prerequisites

      • Install the OpenShift CLI (oc).

      • Have access to the web console as a user with admin privileges.

      Procedure

      1. In the web portal or console for your cloud provider, take the following actions:

        1. Locate and delete the appropriate load balancer component:

          • For AWS, delete the external load balancer. The API DNS entry in the private zone already points to the internal load balancer, which uses an identical configuration, so you do not need to modify the internal load balancer.

          • For Azure, delete the api-internal rule for the load balancer.

        2. Delete the api.$clustername.$yourdomain DNS entry in the public zone.
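          On AWS, for example, you can confirm the record in the public zone before removing it; a sketch, with a placeholder hosted zone ID:

          ```shell
          # List the api record set in the public hosted zone (placeholder zone ID).
          # Delete the matching record set afterwards in the console or with
          # 'aws route53 change-resource-record-sets' and a DELETE change batch.
          aws route53 list-resource-record-sets \
            --hosted-zone-id Z2XXXXXXXXXXA4 \
            --query "ResourceRecordSets[?starts_with(Name, 'api.')]"
          ```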

      2. Remove the external load balancers:

        • If your cluster uses a control plane machine set, delete the lines that describe the external load balancer from the control plane machine set custom resource. The lines to delete are marked (1) in the following example:

          providerSpec:
            value:
              loadBalancers:
              - name: lk4pj-ext (1)
                type: network (1)
              - name: lk4pj-int
                type: network

          (1) Delete this line.
        • If your cluster does not use a control plane machine set, you must delete the external load balancers from each control plane machine.

          1. From your terminal, list the cluster machines by running the following command:

            $ oc get machine -n openshift-machine-api

            Example output

            NAME                            STATE     TYPE        REGION      ZONE         AGE
            lk4pj-master-0                  running   m4.xlarge   us-east-1   us-east-1a   17m
            lk4pj-master-1                  running   m4.xlarge   us-east-1   us-east-1b   17m
            lk4pj-master-2                  running   m4.xlarge   us-east-1   us-east-1a   17m
            lk4pj-worker-us-east-1a-5fzfj   running   m4.xlarge   us-east-1   us-east-1a   15m
            lk4pj-worker-us-east-1a-vbghs   running   m4.xlarge   us-east-1   us-east-1a   15m
            lk4pj-worker-us-east-1b-zgpzg   running   m4.xlarge   us-east-1   us-east-1b   15m

            The control plane machines contain master in the name.
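            Rather than scanning names, you can filter with a label selector; the label below is the one the machine API conventionally applies, so verify it against your cluster:

            ```shell
            # List only the control plane machines.
            # The label is an assumption; confirm it with 'oc get machines --show-labels'.
            oc get machines -n openshift-machine-api \
              -l machine.openshift.io/cluster-api-machine-role=master
            ```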

          2. Remove the external load balancer from each control plane machine:

            1. Edit a control plane machine object by running the following command:

              $ oc edit machine -n openshift-machine-api <control_plane_machine_name>

            2. Remove the lines that describe the external load balancer, which are marked in the following example:

              providerSpec:
                value:
                  loadBalancers:
                  - name: lk4pj-ext (1)
                    type: network (1)
                  - name: lk4pj-int
                    type: network

              (1) Delete this line.
            3. Save your changes and exit the object specification.

            4. Repeat this process for each of the control plane machines.

      When a cluster administrator installs a new cluster without specifying that the cluster is private, the default Ingress Controller is created with a scope set to External. Cluster administrators can change an External scoped Ingress Controller to Internal.

      Prerequisites

      • You installed the oc CLI.

      Procedure

      • To change an External scoped Ingress Controller to Internal, enter the following command:

        $ oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"scope":"Internal"}}}}'
      • To check the status of the Ingress Controller, enter the following command:

        $ oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml
        • The Progressing status condition indicates whether you must take further action. For example, the status condition can indicate that you need to delete the service by entering the following command:

            $ oc -n openshift-ingress delete services/router-default

            If you delete the service, the Ingress Operator recreates it with the Internal scope.
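        To read just the Progressing condition rather than the full object, a jsonpath query can help:

        ```shell
        # Print the message of the Progressing condition, which states any
        # follow-up action that is still required.
        oc -n openshift-ingress-operator get ingresscontrollers/default \
          -o jsonpath='{.status.conditions[?(@.type=="Progressing")].message}'
        ```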