Elastic Kubernetes Service (EKS)

    LocalStack Pro allows you to use the EKS API to create Kubernetes clusters and easily deploy containerized apps locally.

    There are two modes for creating EKS clusters on LocalStack:

    • spinning up an embedded kube cluster in your local Docker engine (preferred, simpler), or
• using an existing Kubernetes installation you can access from your local machine (defined in $HOME/.kube/config)

    The default method for creating Kubernetes clusters via the local EKS API is to spin up an embedded kube cluster within Docker. LocalStack handles the download and installation transparently - on most systems the installation is performed automatically, and no customizations should be required.

A new cluster can be created using the following command:

$ awslocal eks create-cluster --name cluster1 --role-arn "r1" --resources-vpc-config "{}"

    You should then see some Docker containers getting started, e.g.:

$ docker ps
CONTAINER ID   IMAGE                          COMMAND                  CREATED        STATUS        PORTS                                           NAMES
b335f7f089e4   rancher/k3d-proxy:5.0.1-rc.1   "/bin/sh -c nginx-pr…"   1 minute ago   Up 1 minute   0.0.0.0:8081->80/tcp, 0.0.0.0:44959->6443/tcp   k3d-cluster1-serverlb
f05770ec8523   rancher/k3s:v1.21.5-k3s2       "/bin/k3s server --t…"   1 minute ago   Up 1 minute

    Once the cluster has been created and initialized, we can determine the server endpoint:

$ awslocal eks describe-cluster --name cluster1
{
    "cluster": {
        "name": "cluster1",
        "status": "ACTIVE",
        "endpoint": "https://localhost.localstack.cloud:4513",
        ...
    }
}
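Cluster creation can take a little while. Instead of polling describe-cluster manually, you can also use the standard AWS CLI waiter (assuming your AWS CLI version ships the eks wait command; via awslocal it runs against LocalStack):

$ awslocal eks wait cluster-active --name cluster1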

    We can then configure the kubectl command line to interact with the new cluster endpoint:

$ awslocal eks update-kubeconfig --name cluster1
Updated context arn:aws:eks:us-east-1:000000000000:cluster/cluster1 in ~/.kube/config
$ kubectl config use-context arn:aws:eks:us-east-1:000000000000:cluster/cluster1
Switched to context "arn:aws:eks:us-east-1:000000000000:cluster/cluster1".
$ kubectl get services
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.43.0.1    <none>        443/TCP   70s

In this section, we will use an example to explore how ECR images can be used inside EKS.

    Initial configuration

    You can use the variable HOSTNAME_EXTERNAL to modify the return value of the resource URIs for most services, including ECR. By default, ECR will return a repositoryUri starting with localhost, like: localhost:<port>/<repository-name>. If we set the HOSTNAME_EXTERNAL to localhost.localstack.cloud, ECR will return a repositoryUri like localhost.localstack.cloud:<port>/<repository_name>.
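For example, when running LocalStack via docker-compose, the variable can be set in the environment section of the service definition (a minimal sketch showing only the relevant lines):

services:
  localstack:
    environment:
      - HOSTNAME_EXTERNAL=localhost.localstack.cloud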

    Notes

In this section, we assume that localhost.localstack.cloud resolves in your environment and that LocalStack is connected to a non-default bridge network. See the related documentation article to learn more. If this domain does not resolve on your host, you can also leave HOSTNAME_EXTERNAL unset; in that case, still use localhost.localstack.cloud as the registry in your pod configuration. LocalStack takes care of the DNS resolution of localhost.localstack.cloud within ECR itself, and you can use the localhost:<port>/<repository_name> URI for tagging and pushing the image on your host.

With this configuration in place, you can use your ECR image in EKS as expected.

    Deploying a sample application from an ECR image

First, let us create an ECR repository:

$ awslocal ecr create-repository --repository-name "fancier-nginx"
{
    "repository": {
        "repositoryArn": "arn:aws:ecr:us-east-1:000000000000:repository/fancier-nginx",
        "registryId": "c75fd0e2",
        "repositoryName": "fancier-nginx",
        "repositoryUri": "localhost.localstack.cloud:4510/fancier-nginx",
        "createdAt": "2022-04-13T14:22:47+02:00",
        "imageTagMutability": "MUTABLE",
        "imageScanningConfiguration": {
            "scanOnPush": false
        },
        "encryptionConfiguration": {
            "encryptionType": "AES256"
        }
    }
}

Note: When creating an ECR repository, a port from LocalStack's external service port range is dynamically selected. Therefore, the port can differ from the 4510 used in the samples below. Make sure to use the correct URL / port by using the repositoryUri returned by the create-repository call.
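Since the port is assigned dynamically, it can be convenient to look up the repositoryUri programmatically instead of hard-coding it. A small sketch using the standard describe-repositories query (the REPO_URI variable name is just an example):

$ REPO_URI=$(awslocal ecr describe-repositories --repository-names fancier-nginx --query "repositories[0].repositoryUri" --output text)
$ echo $REPO_URI
localhost.localstack.cloud:4510/fancier-nginx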

    Now let us pull the nginx image:

$ docker pull nginx

    … tag it to our repository name:

$ docker tag nginx localhost.localstack.cloud:4510/fancier-nginx

… and push it to ECR:

$ docker push localhost.localstack.cloud:4510/fancier-nginx

    Now, let us set up the EKS cluster using the image pushed to local ECR.

$ awslocal eks create-cluster --name fancier-cluster --role-arn "r1" --resources-vpc-config "{}"
{
    "cluster": {
        "name": "fancier-cluster",
        "arn": "arn:aws:eks:us-east-1:000000000000:cluster/fancier-cluster",
        "createdAt": "2022-04-13T16:38:24.850000+02:00",
        "roleArn": "r1",
        "resourcesVpcConfig": {},
        "identity": {
            "oidc": {
                "issuer": "https://localhost.localstack.cloud/eks-oidc"
            }
        },
        "status": "CREATING",
        "clientRequestToken": "cbdf2bb6-fd3b-42b1-afe0-3c70980b5959"
    }
}

    Once the cluster status is “ACTIVE”:

$ awslocal eks describe-cluster --name "fancier-cluster"
{
    "cluster": {
        "name": "fancier-cluster",
        "arn": "arn:aws:eks:us-east-1:000000000000:cluster/fancier-cluster",
        "createdAt": "2022-04-13T17:12:39.738000+02:00",
        "endpoint": "https://localhost.localstack.cloud:4511",
        "roleArn": "r1",
        "resourcesVpcConfig": {},
        "identity": {
            "oidc": {
                "issuer": "https://localhost.localstack.cloud/eks-oidc"
            }
        },
        "status": "ACTIVE",
        "certificateAuthority": {
            "data": "..."
        },
        "clientRequestToken": "d188f578-b353-416b-b309-5d8c76ecc4e2"
    }
}

    … we will configure kubectl:

$ awslocal eks update-kubeconfig --name fancier-cluster && kubectl config use-context arn:aws:eks:us-east-1:000000000000:cluster/fancier-cluster
Added new context arn:aws:eks:us-east-1:000000000000:cluster/fancier-cluster to /home/localstack/.kube/config
Switched to context "arn:aws:eks:us-east-1:000000000000:cluster/fancier-cluster".

    … and add a deployment configuration:

$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fancier-nginx
  labels:
    app: fancier-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fancier-nginx
  template:
    metadata:
      labels:
        app: fancier-nginx
    spec:
      containers:
      - name: fancier-nginx
        image: localhost.localstack.cloud:4510/fancier-nginx:latest
        ports:
        - containerPort: 80
EOF
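Before inspecting the pod, you can wait for the deployment to become ready, e.g., with the standard rollout status command:

$ kubectl rollout status deployment/fancier-nginx
deployment "fancier-nginx" successfully rolled out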

    Now, if we describe the pod:

$ kubectl describe pod fancier-nginx

    … we can see, in the events, that the pull from ECR was successful:

Normal   Pulled   10s   kubelet   Successfully pulled image "localhost.localstack.cloud:4510/fancier-nginx:latest" in 2.412775896s

    Configuring an Ingress for your services

    In order to make an EKS service externally accessible, we need to create an Ingress configuration that exposes the service on a certain path to the load balancer.
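Note that the Ingress below routes traffic to a Service named nginx, which this walkthrough has not created explicitly. Assuming the fancier-nginx deployment from the previous section, one way to create a matching Service is to expose the deployment under that name:

$ kubectl expose deployment fancier-nginx --name=nginx --port=80
service/nginx exposed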

    Now use the following ingress configuration to expose the nginx service on path /test123:

$ cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /test123
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
EOF

We should then be able to send a request to the nginx service via the load balancer port 8081 from the host:

$ curl http://localhost:8081/test123
<html>
...
<hr><center>nginx/1.21.6</center>
...

    Note

    You can customize the load balancer port by configuring EKS_LOADBALANCER_PORT in your environment.
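For example, when starting LocalStack through the CLI, the variable can be passed via the environment (a sketch; 8085 is an arbitrary example value):

$ EKS_LOADBALANCER_PORT=8085 localstack start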

You can also use the EKS API with an existing local Kubernetes installation. This works by mounting the $HOME/.kube/config file into the LocalStack container - e.g., when using docker-compose.yml:

volumes:
  - "${HOME}/.kube/config:/root/.kube/config"

In recent versions of Docker, you can simply enable Kubernetes as an embedded service running inside Docker. The option can be found in the Docker settings under Kubernetes on macOS (similar configurations apply for Linux/Windows). By default, it is assumed that the Kubernetes API runs on the local TCP port 6443.
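To verify that the embedded Kubernetes API is reachable, you can switch to the docker-desktop context (the context name Docker Desktop creates by default) and query the cluster:

$ kubectl config use-context docker-desktop
$ kubectl cluster-info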

    The example below illustrates how to create an EKS cluster configuration (assuming you have awslocal installed):

$ awslocal eks create-cluster --name cluster1 --role-arn r1 --resources-vpc-config '{}'
{
    "cluster": {
        "name": "cluster1",
        "arn": "arn:aws:eks:eu-central-1:000000000000:cluster/cluster1",
        "createdAt": "Sat, 05 Oct 2019 12:29:26 GMT",
        "endpoint": "https://172.17.0.1:6443",
        "status": "ACTIVE",
        ...
    }
}
$ awslocal eks list-clusters
{
    "clusters": [
        "cluster1"
    ]
}

    Simply configure your Kubernetes client (e.g., kubectl or other SDK) to point to the endpoint specified in the create-cluster output above. Depending on whether you’re calling the Kubernetes API from the local machine or from within a Lambda, you may have to use different endpoint URLs (https://localhost:6443 vs https://172.17.0.1:6443).
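If you prefer to set up the client manually rather than via update-kubeconfig, a minimal sketch with standard kubectl commands (the cluster and context name local-eks is an arbitrary example, and TLS verification is skipped for the local endpoint):

$ kubectl config set-cluster local-eks --server=https://localhost:6443 --insecure-skip-tls-verify=true
$ kubectl config set-context local-eks --cluster=local-eks
$ kubectl config use-context local-eks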

If you have specific directories that you want to mount from your local dev machine into one of your pods, you can do this in two simple steps:

    First, make sure to create your cluster with the special tag __k3d_volume_mount__, specifying how you want to mount a volume from your dev machine to the cluster nodes:

$ awslocal eks create-cluster --name cluster1 --role-arn r1 --resources-vpc-config '{}' --tags '{"__k3d_volume_mount__":"/path/on/host:/path/on/node"}'
{
    "cluster": {
        "name": "cluster1",
        "arn": "arn:aws:eks:eu-central-1:000000000000:cluster/cluster1",
        "createdAt": "Sat, 05 Oct 2019 12:29:26 GMT",
        "endpoint": "https://172.17.0.1:6443",
        "status": "ACTIVE",
        "tags": {
            "__k3d_volume_mount__": "/path/on/host:/path/on/node"
        },
        ...
    }
}

Then, you can create your pod with volume mounts as usual, with a configuration similar to this:

apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  volumes:
  - name: example-volume
    hostPath:
      path: /path/on/node
  containers:
  - image: alpine:3.12
    command: ["/bin/sh", "-c"]
    args:
    - echo "Starting the update command";
      apk update;
      echo "Adding the openssh command";
      apk add openssh;
      echo "openssh completed";
      sleep 240m;
    imagePullPolicy: IfNotPresent
    name: alpine
    volumeMounts:
    - mountPath: "/path/on/pod"
      name: example-volume
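Once the pod is running, you can check that the host directory is actually visible inside the container, e.g.:

$ kubectl exec test -- ls /path/on/pod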