Multi-user, auth-enabled Kubeflow with kfctl_existing_arrikto

    Follow these instructions if you want to install Kubeflow on an existing Kubernetes cluster.

    This installation of Kubeflow is maintained by Arrikto. It is geared towards existing Kubernetes clusters and does not depend on any cloud-specific feature.

    In this reference architecture, we use Dex and Istio for vendor-neutral authentication.

    This deployment works well for on-prem installations, where companies/organizations need LDAP/AD integration for multi-user authentication, and they don’t want to depend on any cloud-specific feature.

    Read the relevant blog post for more info about this architecture.

    The instructions below assume that you have an existing Kubernetes cluster.

    Prepare your environment

    Follow these steps to download the kfctl binary for the Kubeflow CLI and set some handy environment variables:

    • Create environment variables to make the deployment process easier:

        # Add the kfctl binary to your path:
        export PATH=$PATH:"<path-to-kfctl>"

        # Set the following kfctl configuration file:
        export CONFIG_URI="https://raw.githubusercontent.com/kubeflow/manifests/b37bad9eded2c47c54ce1150eb9e6edbfb47ceda/kfdef/kfctl_existing_arrikto.0.7.1.yaml"

        # Set KF_NAME to the name of your Kubeflow deployment. You also use this
        # value as directory name when creating your configuration directory.
        # Use only alphanumeric characters or - in the directory name.
        # For example, your deployment name can be 'my-kubeflow' or 'kf-test'.
        export KF_NAME=<your choice of name for the Kubeflow deployment>

        # Set the path to the base directory where you want to store one or more
        # Kubeflow deployments. For example, /opt.
        # Then set the Kubeflow application directory for this deployment.
        export BASE_DIR=<path to a base directory>
        export KF_DIR=${BASE_DIR}/${KF_NAME}

    Notes:

    • ${KF_NAME} - The name of your Kubeflow deployment. If you want a custom deployment name, specify that name here. For example, my-kubeflow or kf-test. The value of KF_NAME must consist of lower case alphanumeric characters or ‘-’, and must start and end with an alphanumeric character. The value of this variable cannot be greater than 25 characters. It must contain just a name, not a directory path. You also use this value as directory name when creating the directory where your Kubeflow configurations are stored, that is, the Kubeflow application directory (see the sketch after these notes).

    • ${KF_DIR} - The full path to your Kubeflow application directory.

    • ${CONFIG_URI} - The GitHub address of the configuration YAML file that you want to use to deploy Kubeflow. The URI used in this guide points to the kfctl_existing_arrikto.0.7.1.yaml file shown above. When you run kfctl apply or kfctl build (see the next step), kfctl creates a local version of the configuration YAML file which you can further customize if necessary.
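    If you want to catch a bad KF_NAME before deploying, a minimal shell sketch like the following can help. It is illustrative only, not part of kfctl; the regular expression simply encodes the naming rules above:

        # Hypothetical sanity check (not part of kfctl): lower case alphanumerics
        # or '-', must start and end alphanumeric, at most 25 characters.
        re='^[a-z0-9]([a-z0-9-]{0,23}[a-z0-9])?$'
        if [[ ! "${KF_NAME}" =~ $re ]]; then
          echo "KF_NAME '${KF_NAME}' violates the naming rules" >&2
        fi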

    Set up and deploy Kubeflow

    To set up and deploy Kubeflow using the default settings, run the kfctl apply command:

        mkdir -p ${KF_DIR}
        cd ${KF_DIR}
        # Download the config file and change the default login credentials.
        wget -O kfctl_existing_arrikto.yaml $CONFIG_URI
        export CONFIG_FILE=${KF_DIR}/kfctl_existing_arrikto.yaml
        # Credentials for the default user are admin@kubeflow.org:12341234
        # To change them, please edit the dex-auth application parameters
        # inside the KfDef file.
        vim $CONFIG_FILE
        kfctl apply -V -f ${CONFIG_FILE}
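    Once kfctl apply returns, the pods take a few minutes to start. A quick way to watch the deployment converge (all pods in the kubeflow namespace should eventually reach Running) is:

        kubectl get pods -n kubeflow -w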

    If you want to customize your configuration before deploying Kubeflow, you can set up your configuration files first, then edit the configuration, then deploy Kubeflow:

    • Run the kfctl build command to set up your configuration:

        mkdir -p ${KF_DIR}
        cd ${KF_DIR}
        kfctl build -V -f ${CONFIG_URI}

    • Set an environment variable pointing to your local configuration file:

        export CONFIG_FILE=${KF_DIR}/kfctl_existing_arrikto.yaml

    • Run the kfctl apply command to deploy Kubeflow:

        kfctl apply -V -f ${CONFIG_FILE}

    Accessing Kubeflow

    The default way of accessing Kubeflow is via port-forward. This enables you to get started quickly without imposing any requirements on your environment.

        # Kubeflow will be available at localhost:8080
        kubectl port-forward svc/istio-ingressgateway -n istio-system 8080:80

    The credentials are the ones you specified in the KfDef file, or the default (admin@kubeflow.org:12341234). It is highly recommended to change the default credentials. To add static users or change the existing one, see the Add static users for basic auth section below.

    When you’re ready, you can expose your Kubeflow deployment with a LoadBalancer Service or an Ingress. For more information, see the Expose Kubeflow section below.

    Add static users for basic auth

    To add users to basic auth, you just have to edit the Dex ConfigMap under the key staticPasswords.

        # Download the dex config
        kubectl get configmap dex -n auth -o jsonpath='{.data.config\.yaml}' > dex-config.yaml
        # Edit the dex config with extra users.
        # The password must be hashed with bcrypt with a difficulty level of at least 10.
        # You can use an online tool like: https://passwordhashing.com/BCrypt
        # After editing the config, update the ConfigMap
        kubectl create configmap dex --from-file=config.yaml=dex-config.yaml -n auth --dry-run -oyaml | kubectl apply -f -
        # Restart Dex to pick up the changes in the ConfigMap
        kubectl rollout restart deployment dex -n auth
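    If you’d rather not paste passwords into an online tool, you can generate the bcrypt hash locally. This is a sketch assuming you have either htpasswd (from apache2-utils) or the Python bcrypt package available; neither is required by Kubeflow itself:

        # bcrypt hash with htpasswd: -B selects bcrypt, -C 10 sets cost 10
        htpasswd -bnBC 10 "" <your-password> | tr -d ':\n'

        # Or with Python (assumes the 'bcrypt' package is installed)
        python3 -c 'import bcrypt; print(bcrypt.hashpw(b"<your-password>", bcrypt.gensalt(rounds=10)).decode())'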

    Authentication with LDAP / Active Directory

    As you saw in the overview, we use Dex for providing user authentication. Dex supports several authentication methods:

    • Static users, as described above
    • LDAP / Active Directory
    • External Identity Provider (IdP) (for example Google, LinkedIn, GitHub, …)

    This section focuses on setting up Dex to authenticate with an existing LDAP database.

    • (Optional) If you don’t have an LDAP database, you can set one up following these instructions:

      • Deploy a new LDAP Server as a StatefulSet. This also deploys phpLDAPadmin, a GUI for interacting with your LDAP Server.

    LDAP Server Manifest

        apiVersion: v1
        kind: Service
        metadata:
          labels:
            app: ldap
          name: ldap-service
          namespace: kubeflow
        spec:
          type: ClusterIP
          clusterIP: None
          ports:
          - port: 389
          selector:
            app: ldap
        ---
        apiVersion: apps/v1
        kind: StatefulSet
        metadata:
          name: ldap
          namespace: kubeflow
          labels:
            app: ldap
        spec:
          serviceName: ldap-service
          replicas: 1
          selector:
            matchLabels:
              app: ldap
          template:
            metadata:
              labels:
                app: ldap
            spec:
              containers:
              - name: ldap
                image: osixia/openldap:1.2.4
                volumeMounts:
                - name: ldap-data
                  mountPath: /var/lib/ldap
                - name: ldap-config
                  mountPath: /etc/ldap/slapd.d
                ports:
                - containerPort: 389
                  name: openldap
                env:
                - name: LDAP_LOG_LEVEL
                  value: "256"
                - name: LDAP_ORGANISATION
                  value: "Example"
                - name: LDAP_DOMAIN
                  value: "example.com"
                - name: LDAP_ADMIN_PASSWORD
                  value: "admin"
                - name: LDAP_CONFIG_PASSWORD
                  value: "config"
                - name: LDAP_BACKEND
                  value: "mdb"
                - name: LDAP_TLS
                  value: "false"
                - name: LDAP_REPLICATION
                  value: "false"
                - name: KEEP_EXISTING_CONFIG
                  value: "false"
                - name: LDAP_REMOVE_CONFIG_AFTER_SETUP
                  value: "true"
              volumes:
              - name: ldap-config
                emptyDir: {}
          volumeClaimTemplates:
          - metadata:
              name: ldap-data
            spec:
              accessModes: [ "ReadWriteOnce" ]
              resources:
                requests:
                  storage: 10Gi
        ---
        apiVersion: v1
        kind: Service
        metadata:
          labels:
            app: phpldapadmin
          name: phpldapadmin-service
          namespace: kubeflow
        spec:
          type: ClusterIP
          ports:
          - port: 80
          selector:
            app: phpldapadmin
        ---
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: phpldapadmin
          namespace: kubeflow
          labels:
            app: phpldapadmin
        spec:
          replicas: 1
          selector:
            matchLabels:
              app: phpldapadmin
          template:
            metadata:
              labels:
                app: phpldapadmin
            spec:
              containers:
              - name: phpldapadmin
                image: osixia/phpldapadmin:0.8.0
                ports:
                - name: http-server
                  containerPort: 80
                env:
                - name: PHPLDAPADMIN_HTTPS
                  value: "false"
                - name: PHPLDAPADMIN_LDAP_HOSTS
                  value: "#PYTHON2BASH:[{'ldap-service.kubeflow.svc.cluster.local': [{'server': [{'tls': False}]},{'login': [ {'bind_id': 'cn=admin,dc=example,dc=com'}]}]}]"
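    Assuming you save the manifest above as ldap.yaml (the filename is arbitrary), apply it and wait for the workloads to become ready:

        kubectl apply -f ldap.yaml
        kubectl rollout status -n kubeflow statefulset/ldap
        kubectl rollout status -n kubeflow deployment/phpldapadmin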
    • Seed the LDAP database with new entries.

        kubectl exec -it -n kubeflow ldap-0 -- bash
        ldapadd -x -D "cn=admin,dc=example,dc=com" -W
        # Enter password "admin".
        # Press Ctrl+D to complete after pasting the snippet below.

    LDAP Seed Users and Groups

        # If you used the OpenLDAP Server deployment in step 1,
        # then this object already exists.
        # If it doesn't, uncomment this.
        #dn: dc=example,dc=com
        #objectClass: dcObject
        #objectClass: organization
        #o: Example
        #dc: example

        dn: ou=People,dc=example,dc=com
        objectClass: organizationalUnit
        ou: People

        dn: cn=Nick Kiliadis,ou=People,dc=example,dc=com
        objectClass: person
        objectClass: inetOrgPerson
        givenName: Nick
        sn: Kiliadis
        cn: Nick Kiliadis
        uid: nkili
        mail: nkili@example.com
        userpassword: 12341234

        dn: cn=Robin Spanakopita,ou=People,dc=example,dc=com
        objectClass: person
        objectClass: inetOrgPerson
        givenName: Robin
        sn: Spanakopita
        cn: Robin Spanakopita
        uid: rspanakopita
        mail: rspanakopita@example.com
        userpassword: 43214321

        # Group definitions.
        dn: ou=Groups,dc=example,dc=com
        objectClass: organizationalUnit
        ou: Groups

        dn: cn=admins,ou=Groups,dc=example,dc=com
        objectClass: groupOfNames
        cn: admins
        member: cn=Nick Kiliadis,ou=People,dc=example,dc=com

        dn: cn=developers,ou=Groups,dc=example,dc=com
        objectClass: groupOfNames
        cn: developers
        member: cn=Nick Kiliadis,ou=People,dc=example,dc=com
        member: cn=Robin Spanakopita,ou=People,dc=example,dc=com
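    To sanity-check the seed step, you can query the directory from inside the pod. This is a sketch; it assumes the example base DN and admin password used above:

        kubectl exec -it -n kubeflow ldap-0 -- \
          ldapsearch -x -H ldap://localhost -D "cn=admin,dc=example,dc=com" -w admin \
          -b "ou=People,dc=example,dc=com" "(objectClass=inetOrgPerson)" cn mail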
    • To use your LDAP/AD server with Dex, you have to edit the Dex config. To edit the ConfigMap containing the Dex config, follow these steps:

      • Get the current Dex config from the corresponding ConfigMap:

        kubectl get configmap dex -n auth -o jsonpath='{.data.config\.yaml}' > dex-config.yaml
    • Add the LDAP-specific options. Here is an example to help you out. It is configured to work with the example LDAP Server you set up previously.

    Dex LDAP Config Section

        connectors:
        - type: ldap
          # Required field for connector id.
          id: ldap
          # Required field for connector name.
          name: LDAP
          config:
            # Host and optional port of the LDAP server in the form "host:port".
            # If the port is not supplied, it will be guessed based on "insecureNoSSL",
            # and "startTLS" flags. 389 for insecure or StartTLS connections, 636
            # otherwise.
            host: ldap-service.kubeflow.svc.cluster.local:389

            # Following field is required if the LDAP host is not using TLS (port 389).
            # Because this option inherently leaks passwords to anyone on the same network
            # as dex, THIS OPTION MAY BE REMOVED WITHOUT WARNING IN A FUTURE RELEASE.
            insecureNoSSL: true

            # If a custom certificate isn't provided, this option can be used to turn off
            # TLS certificate checks. As noted, it is insecure and shouldn't be used outside
            # of explorative phases.
            insecureSkipVerify: true

            # When connecting to the server, connect using the ldap:// protocol then issue
            # a StartTLS command. If unspecified, connections will use the ldaps:// protocol.
            startTLS: false

            # Path to a trusted root certificate file. Default: use the host's root CA.
            # rootCA: /etc/dex/ldap.ca
            # clientCert: /etc/dex/ldap.cert
            # clientKey: /etc/dex/ldap.key

            # A raw certificate file can also be provided inline.
            # rootCAData: ( base64 encoded PEM file )

            # The DN and password for an application service account. The connector uses
            # these credentials to search for users and groups. Not required if the LDAP
            # server provides access for anonymous auth.
            # Please note that if the bind password contains a `$`, it has to be saved in an
            # environment variable which should be given as the value to `bindPW`.
            bindDN: cn=admin,dc=example,dc=com
            bindPW: admin

            # The attribute to display in the provided password prompt. If unset, will
            # display "Username".
            usernamePrompt: username

            # User search maps a username and password entered by a user to a LDAP entry.
            userSearch:
              # BaseDN to start the search from. It will translate to the query
              # "(&(objectClass=person)(uid=<username>))".
              baseDN: ou=People,dc=example,dc=com
              # Optional filter to apply when searching the directory.
              filter: "(objectClass=inetOrgPerson)"
              # username attribute used for comparing user entries. This will be translated
              # and combined with the other filter as "(<attr>=<username>)".
              username: uid
              # The following three fields are direct mappings of attributes on the user entry.
              # String representation of the user.
              idAttr: DN
              # Required. Attribute to map to Email.
              emailAttr: mail
              # Maps to display name of users. No default value.
              nameAttr: givenName

            # Group search queries for groups given a user entry.
            groupSearch:
              # BaseDN to start the search from. It will translate to the query
              # "(&(objectClass=group)(member=<user uid>))".
              baseDN: ou=Groups,dc=example,dc=com
              # Optional filter to apply when searching the directory.
              filter: "(objectClass=groupOfNames)"
              # Following two fields are used to match a user to a group. It adds an additional
              # requirement to the filter that an attribute in the group must match the user's
              # attribute value.
              userAttr: DN
              groupAttr: member
              # Represents group name.
              nameAttr: cn
    • Append the LDAP config section to the Dex config you downloaded and save the result as dex-config-final.yaml (a minimal way to do this is sketched after this list).
    • Apply the new config:

        kubectl create configmap dex --from-file=config.yaml=dex-config-final.yaml -n auth --dry-run -oyaml | kubectl apply -f -

    • Restart the Dex deployment: kubectl rollout restart deployment dex -n auth
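    A minimal sketch of the append step, assuming you saved the connector snippet above as ldap-connector.yaml and that the downloaded config does not already contain a top-level connectors key:

        cp dex-config.yaml dex-config-final.yaml
        cat ldap-connector.yaml >> dex-config-final.yaml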

    Expose Kubeflow

    While port-forward is a great way to get started, it is not a long-term, production-ready solution. In this section, we explore the process of exposing your cluster to the outside world.

    NOTE: It is highly recommended to change the default credentials before exposing your Kubeflow cluster. See the Add static users for basic auth section above for how to edit Dex static users.

    Secure with HTTPS

    Since we are exposing our cluster to the outside world, it’s important to secure it with HTTPS. Here we will configure automatic self-signed certificates.

    Edit the Istio Gateway Object and expose port 443 with HTTPS. In addition, make port 80 redirect to 443:

        kubectl edit -n kubeflow gateways.networking.istio.io kubeflow-gateway

    The Gateway Spec should look like the following:

        spec:
          selector:
            istio: ingressgateway
          servers:
          - hosts:
            - '*'
            port:
              name: http
              number: 80
              protocol: HTTP
            # Upgrade HTTP to HTTPS
            tls:
              httpsRedirect: true
          - hosts:
            - '*'
            port:
              name: https
              number: 443
              protocol: HTTPS
            tls:
              mode: SIMPLE
              privateKey: /etc/istio/ingressgateway-certs/tls.key
              serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
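    Once the Gateway is saved, you can confirm the HTTP-to-HTTPS redirect is in effect. A quick check, where <kubeflow address> stands in for however you reach the ingress gateway:

        curl -s -o /dev/null -w '%{http_code} %{redirect_url}\n' http://<kubeflow address>/
        # Expect a 301 pointing at the https:// URL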

    Expose with a LoadBalancer

    If you don’t have support for LoadBalancer on your cluster, please follow the instructions below to deploy MetalLB in Layer 2 mode. (You can read more about Layer 2 mode in the MetalLB docs.)

    MetalLB deployment

    Deploy MetalLB:

    • Apply the manifest:

        kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.1/manifests/metallb.yaml
    • Allocate a pool of addresses on your local network for MetalLB to use. You need at least one address for the Istio Gateway. This example assumes addresses 10.0.0.100-10.0.0.110. You must modify these addresses based on your environment.
        cat <<EOF | kubectl apply -f -
        apiVersion: v1
        kind: ConfigMap
        metadata:
          namespace: metallb-system
          name: config
        data:
          config: |
            address-pools:
            - name: default
              protocol: layer2
              addresses:
              - 10.0.0.100-10.0.0.110
        EOF
    • Create a dummy service:

        kubectl create service loadbalancer nginx --tcp=80:80
        service/nginx created

    • Ensure that MetalLB has allocated an IP address for the service:

        kubectl describe service nginx
        ...
        Events:
          Type    Reason       Age  From                Message
          ----    ------       ---- ----                -------
          Normal  IPAllocated  69s  metallb-controller  Assigned IP "10.0.0.101"

    • Check the corresponding MetalLB logs:

        kubectl logs -n metallb-system -l component=controller
        ...
        {"caller":"service.go:98","event":"ipAllocated","ip":"10.0.0.101","msg":"IP address assigned by controller","service":"default/nginx","ts":"2019-08-09T15:12:09.376779263Z"}

    • Create a pod that will be exposed with the service:

        kubectl run nginx --image nginx --restart=Never -l app=nginx
        pod/nginx created

    • Ensure that MetalLB has assigned a node to announce the allocated IP address:

        kubectl describe service nginx
        ...
        Events:
          Type    Reason        Age  From             Message
          ----    ------        ---- ----             -------
          Normal  nodeAssigned  4s   metallb-speaker  announcing from node "node-2"

    • Check the corresponding MetalLB logs:

        kubectl logs -n metallb-system -l component=speaker
        ...
        {"caller":"main.go:246","event":"serviceAnnounced","ip":"10.0.0.101","msg":"service has IP, announcing","pool":"default","protocol":"layer2","service":"default/nginx","ts":"2019-08-09T15:14:02.433876894Z"}

    • Check that MetalLB responds to ARP requests for the allocated IP address:

        arping -I eth0 10.0.0.101
        ...
        ARPING 10.0.0.101 from 10.0.0.204 eth0
        Unicast reply from 10.0.0.101 [6A:13:5A:D2:65:CB] 2.619ms
    • Check the corresponding MetalLB logs:

        kubectl logs -n metallb-system -l component=speaker
    • Verify that everything works as expected:

        curl http://10.0.0.101
        ...
        <p><em>Thank you for using nginx.</em></p>
        ...

    • Clean up:

        kubectl delete service nginx
        kubectl delete pod nginx

    To expose Kubeflow with a LoadBalancer Service, just change the type of the istio-ingressgateway Service to LoadBalancer.

        kubectl patch service -n istio-system istio-ingressgateway -p '{"spec": {"type": "LoadBalancer"}}'
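    Depending on your environment, it can take a moment for an address to be assigned; you can watch the Service until the EXTERNAL-IP column is populated:

        kubectl get svc -n istio-system istio-ingressgateway -w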

    After that, get the LoadBalancer’s IP or Hostname from its status and create the necessary certificate.

        kubectl get svc -n istio-system istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0]}'

    Create the Certificate with cert-manager:

        apiVersion: cert-manager.io/v1alpha2
        kind: Certificate
        metadata:
          name: istio-ingressgateway-certs
          namespace: istio-system
        spec:
          commonName: istio-ingressgateway.istio-system.svc
          # Use ipAddresses if your LoadBalancer issues an IP
          ipAddresses:
          - <LoadBalancer IP>
          # Use dnsNames if your LoadBalancer issues a hostname (eg on AWS)
          dnsNames:
          - <LoadBalancer HostName>
          isCA: true
          issuerRef:
            kind: ClusterIssuer
            name: kubeflow-self-signing-issuer
          secretName: istio-ingressgateway-certs
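    Assuming you save the manifest as ingressgateway-certificate.yaml (the filename is arbitrary), apply it and check that the Certificate becomes ready:

        kubectl apply -f ingressgateway-certificate.yaml
        kubectl get certificate -n istio-system istio-ingressgateway-certs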

    After applying the above Certificate, cert-manager will generate the TLS certificate inside the istio-ingressgateway-certs secret. The istio-ingressgateway-certs secret is mounted on the istio-ingressgateway deployment and used to serve HTTPS.

    Navigate to https://<LoadBalancer Address>/ and start using Kubeflow.

    Troubleshooting

    If the Kubeflow dashboard is not available at https://<kubeflow address>, ensure that:

    • the virtual services have been created:

        kubectl get virtualservices -n kubeflow
        kubectl get virtualservices -n kubeflow centraldashboard -o yaml

    If not, then kfctl has aborted for some reason and not completed successfully.

    • the OIDC auth service redirects you to Dex:

        curl -k https://<kubeflow address>/ -v
        ...
        < HTTP/2 302
        < content-type: text/html; charset=utf-8
        < location: /dex/auth?client_id=kubeflow-authservice-oidc&redirect_uri=%2Flogin%2Foidc&response_type=code&scope=openid+profile+email+groups&state=vSCMnJ2D
        < date: Fri, 09 Aug 2019 14:33:21 GMT
        < content-length: 181
        < x-envoy-upstream-service-time: 0
        < server: istio-envoy

    Please join the Kubeflow Slack to report any issues, request help, and give us feedback on this config.

    Some additional debugging information:

    OIDC AuthService logs:

        kubectl logs -n istio-system -l app=authservice

    Dex logs:

        kubectl logs -n auth -l app=dex

    Istio ingress-gateway logs:

        kubectl logs -n istio-system -l istio=ingressgateway

    Istio ingressgateway service:

        kubectl get service -n istio-system istio-ingressgateway -o yaml

    MetalLB logs:

        kubectl logs -n metallb-system -l component=speaker
        ...
        {"caller":"arp.go:102","interface":"br100","ip":"10.0.0.100","msg":"got ARP request for service IP, sending response","responseMAC":"62:41:bd:5f:cc:0d","senderIP":"10.0.0.204","senderMAC":"9a:1f:7c:95:ca:dc","ts":"2019-07-31T13:19:19.7082836Z"}

    Delete Kubeflow

    Run the following commands to delete your deployment and reclaim all resources:

        cd ${KF_DIR}
        # If you want to delete all the resources, run:
        kfctl delete -f ${CONFIG_FILE}

    The kfctl deployment process includes the following commands:

    • kfctl build - (Optional) Creates configuration files defining the various resources in your deployment. You only need to run kfctl build if you want to edit the resources before running kfctl apply.
    • kfctl apply - Creates or updates the resources.
    • kfctl delete - Deletes the resources.

    Application layout

    Your Kubeflow application directory ${KF_DIR} contains the following files and directories:

    • ${CONFIG_FILE} is a YAML file that defines configurations related to your Kubeflow deployment.

    • kustomize is a directory that contains the kustomize packages for Kubeflow applications. See the Kubeflow documentation on how Kubeflow uses kustomize.

      • The directory is created when you run kfctl build or kfctl apply.
      • You can customize the Kubernetes resources by modifying the manifests and running kfctl apply again.

    Next steps

    • Get started with your new Kubeflow deployment.