Create a crun demo for KubeEdge

    Install CRI-O

    cat <<EOF | sudo tee /etc/modules-load.d/crio.conf
    overlay
    br_netfilter
    EOF

    sudo modprobe overlay
    sudo modprobe br_netfilter

    # Set up required sysctl params; these persist across reboots.
    cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    EOF
    sudo sysctl --system
    export OS="xUbuntu_20.04"
    export VERSION="1.21"

    cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
    deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /
    EOF
    cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
    deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /
    EOF

    curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
    curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers-cri-o.gpg add -

    sudo apt-get update
    sudo apt-get install cri-o cri-o-runc
    sudo systemctl daemon-reload
    sudo systemctl enable crio --now
    sudo systemctl status crio

    output:

    $ sudo systemctl status crio
    crio.service - Container Runtime Interface for OCI (CRI-O)
         Loaded: loaded (/lib/systemd/system/crio.service; enabled; vendor preset: enabled)
         Active: active (running) since Mon 2021-12-06 13:46:29 UTC; 16h ago
           Docs: https://github.com/cri-o/cri-o
       Main PID: 6868 (crio)
          Tasks: 14
         Memory: 133.2M
         CGroup: /system.slice/crio.service
                 └─6868 /usr/bin/crio

    Dec 07 06:04:13 master crio[6868]: time="2021-12-07 06:04:13.694226800Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.4.1" id=1dbb722e-f031-410c-9f45-5d4b5760163e name=/runtime.v1alpha2.ImageServic>
    Dec 07 06:04:13 master crio[6868]: time="2021-12-07 06:04:13.695739507Z" level=info msg="Image status: &{0xc00047fdc0 map[]}" id=1dbb722e-f031-410c-9f45-5d4b5760163e name=/runtime.v1alpha2.ImageService/ImageSta>
    Dec 07 06:09:13 master crio[6868]: time="2021-12-07 06:09:13.698823984Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.4.1" id=661b754b-48a4-401b-a03f-7f7a553c7eb6 name=/runtime.v1alpha2.ImageServic>
    Dec 07 06:09:13 master crio[6868]: time="2021-12-07 06:09:13.703259157Z" level=info msg="Image status: &{0xc0004d98f0 map[]}" id=661b754b-48a4-401b-a03f-7f7a553c7eb6 name=/runtime.v1alpha2.ImageService/ImageSta>
    Dec 07 06:14:13 master crio[6868]: time="2021-12-07 06:14:13.707778419Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.4.1" id=8c7e4d36-871a-452e-ab55-707053604077 name=/runtime.v1alpha2.ImageServic>
    Dec 07 06:14:13 master crio[6868]: time="2021-12-07 06:14:13.709379469Z" level=info msg="Image status: &{0xc000035030 map[]}" id=8c7e4d36-871a-452e-ab55-707053604077 name=/runtime.v1alpha2.ImageService/ImageSta>
    Dec 07 06:19:13 master crio[6868]: time="2021-12-07 06:19:13.713158978Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.4.1" id=827b6315-f145-4f76-b8da-31653d5892a2 name=/runtime.v1alpha2.ImageServic>
    Dec 07 06:19:13 master crio[6868]: time="2021-12-07 06:19:13.714030148Z" level=info msg="Image status: &{0xc000162bd0 map[]}" id=827b6315-f145-4f76-b8da-31653d5892a2 name=/runtime.v1alpha2.ImageService/ImageSta>
    Dec 07 06:24:13 master crio[6868]: time="2021-12-07 06:24:13.716746612Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.4.1" id=1d53a917-4d98-4723-9ea8-a2951a472cff name=/runtime.v1alpha2.ImageServic>
    Dec 07 06:24:13 master crio[6868]: time="2021-12-07 06:24:13.717381882Z" level=info msg="Image status: &{0xc00042ce00 map[]}" id=1d53a917-4d98-4723-9ea8-a2951a472cff name=/runtime.v1alpha2.ImageService/ImageSta>

    Install kubeadm and create a Kubernetes cluster

    Please see the official Kubernetes documentation on installing kubeadm.

    sudo apt-get install -y apt-transport-https curl
    # Download the Google Cloud public signing key referenced by signed-by below.
    sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
    echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
    sudo apt update
    K_VER="1.21.0-00"
    sudo apt install -y kubelet=${K_VER} kubectl=${K_VER} kubeadm=${K_VER}
    sudo apt-mark hold kubelet kubeadm kubectl

    # The kubelet requires swap to be disabled.
    $ sudo swapoff -a
    # Also comment out the swap entry in /etc/fstab so it stays disabled after reboot.
    $ sudo vim /etc/fstab

    # CRI-O ships a default bridge CNI config; its first IPv4 subnet is used below.
    $ cat /etc/cni/net.d/100-crio-bridge.conf
    {
      "cniVersion": "0.3.1",
      "name": "crio",
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "routes": [
          { "dst": "0.0.0.0/0" },
          { "dst": "1100:200::1/24" }
        ],
        "ranges": [
          [{ "subnet": "10.85.0.0/16" }],
          [{ "subnet": "1100:200::/24" }]
        ]
      }
    }

    $ export CIDR=10.85.0.0/16
    $ sudo kubeadm init --apiserver-advertise-address=192.168.122.160 --pod-network-cidr=$CIDR --cri-socket=/var/run/crio/crio.sock
    $ mkdir -p $HOME/.kube
    $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    $ sudo chown $(id -u):$(id -g) $HOME/.kube/config
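The --pod-network-cidr passed to kubeadm must match the first IPv4 subnet in the CRI-O bridge CNI config. A small sketch of checking that (it assumes python3 is available; the heredoc stands in for /etc/cni/net.d/100-crio-bridge.conf):

```shell
# Write a copy of the CNI config (in practice, read the real file instead).
cat > /tmp/100-crio-bridge.conf <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "crio",
  "type": "bridge",
  "ipam": {
    "type": "host-local",
    "ranges": [
      [{ "subnet": "10.85.0.0/16" }],
      [{ "subnet": "1100:200::/24" }]
    ]
  }
}
EOF

# Extract the first IPv4 subnet, i.e. the value to export as CIDR.
CIDR=$(python3 -c '
import json, sys
cfg = json.load(open(sys.argv[1]))
print(cfg["ipam"]["ranges"][0][0]["subnet"])
' /tmp/100-crio-bridge.conf)
echo "$CIDR"   # 10.85.0.0/16
```

If the two values disagree, pods get addresses the control plane does not expect, so this check is worth doing before kubeadm init.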

    output:

    Your Kubernetes control-plane has initialized successfully!

    To start using your cluster, you need to run the following as a regular user:

      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config

    You should now deploy a Pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      /docs/concepts/cluster-administration/addons/

    You can now join any number of machines by running the following on each node
    as root:

    kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
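The sha256:<hash> in the join command is a digest of the cluster CA's public key. If the printed value is lost, it can be recomputed on the control-plane node with the pipeline from the kubeadm documentation (the path is kubeadm's default CA location):

```shell
# Recompute the discovery-token CA cert hash from the cluster CA certificate.
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```

The output is the 64-character hex string to place after sha256: in kubeadm join.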

    The mkdir, cp, and chown commands shown above make kubectl work for your non-root user; they are also part of the kubeadm init output.

    Please see Deploying using Keadm.

    1. At least one of kubeconfig or master must be configured correctly, so that it can be used to verify the version and other information of the K8s cluster.
    2. Please make sure the edge node can connect to the cloud node using the local IP of the cloud node, or specify the public IP of the cloud node with the --advertise-address flag.
    3. --advertise-address (only works since the 1.3 release) is the address exposed by the cloud side (it will be added to the SANs of the CloudCore certificate); the default value is the local IP.
    wget https://github.com/kubeedge/kubeedge/releases/download/v1.8.0/keadm-v1.8.0-linux-amd64.tar.gz
    tar xzvf keadm-v1.8.0-linux-amd64.tar.gz
    cd keadm-v1.8.0-linux-amd64/keadm/
    sudo ./keadm init --advertise-address=192.168.122.160 --kube-config=/home/${user}/.kube/config

    output:

    Kubernetes version verification passed, KubeEdge installation will start...
    ...
    KubeEdge cloudcore is running, For logs visit: /var/log/kubeedge/cloudcore.log

    You can use the CRI-O install script from the wasmedge-containers-examples repository to install CRI-O and crun on Ubuntu 20.04.

    wget -qO- https://raw.githubusercontent.com/second-state/wasmedge-containers-examples/main/crio/install.sh | bash

    $ wget https://golang.org/dl/go1.17.3.linux-amd64.tar.gz
    # Extracting in the home directory creates /home/${user}/go.
    $ tar xzvf go1.17.3.linux-amd64.tar.gz
    $ export PATH=/home/${user}/go/bin:$PATH
    $ go version
    go version go1.17.3 linux/amd64

    Running keadm gettoken on the cloud side returns the token, which is used when joining edge nodes.

    $ sudo ./keadm gettoken --kube-config=/home/${user}/.kube/config

    Please see Setting different container runtime with CRI.
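The edgecore output below comes from joining the edge node with keadm join. A sketch of that command, using this demo's cloud IP and CRI-O socket (the runtime flags follow the KubeEdge "Setting different container runtime with CRI" guide; substitute your own token from keadm gettoken):

```shell
# Sketch: join the edge node with CRI-O as the remote container runtime.
sudo ./keadm join \
  --cloudcore-ipport=192.168.122.160:10000 \
  --token=<token from keadm gettoken> \
  --remote-runtime-endpoint=unix:///var/run/crio/crio.sock \
  --runtimetype=remote
```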

    Output:

    Host has mosquit+ already installed and running. Hence skipping the installation steps !!!
    ...
    KubeEdge edgecore is running, For logs visit: /var/log/kubeedge/edgecore.log

    kubectl get node
    NAME     STATUS   ROLES                  AGE   VERSION
    master   Ready    control-plane,master   68m   v1.21.0

    Before metrics-server is deployed, the kubectl logs feature must be activated; please see here.

    We can run the WebAssembly-based image from Docker Hub in the Kubernetes cluster.

    Cloud Side

    $ kubectl run -it --restart=Never wasi-demo --image=hydai/wasm-wasi-example:with-wasm-annotation --annotations="module.wasm.image/variant=compat" /wasi_example_main.wasm 50000000
    Random number: -1694733782
    Random bytes: [6, 226, 176, 126, 136, 114, 90, 2, 216, 17, 241, 217, 143, 189, 123, 197, 17, 60, 49, 37, 71, 69, 67, 108, 66, 39, 105, 9, 6, 72, 232, 238, 102, 5, 148, 243, 249, 183, 52, 228, 54, 176, 63, 249, 216, 217, 46, 74, 88, 204, 130, 191, 182, 19, 118, 193, 77, 35, 189, 6, 139, 68, 163, 214, 231, 100, 138, 246, 185, 47, 37, 49, 3, 7, 176, 97, 68, 124, 20, 235, 145, 166, 142, 159, 114, 163, 186, 46, 161, 144, 191, 211, 69, 19, 179, 241, 8, 207, 8, 112, 80, 170, 33, 51, 251, 33, 105, 0, 178, 175, 129, 225, 112, 126, 102, 219, 106, 77, 242, 104, 198, 238, 193, 247, 23, 47, 22, 29]
    Printed from wasi: This is from a main function
    This is from a main function
    The env vars are as follows.
    The args are as follows.
    /wasi_example_main.wasm
    50000000
    File content is This is in a file
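The kubectl run invocation above corresponds roughly to a Pod manifest like the following sketch (all values are taken from the command; the manifest itself is not part of the original demo). The module.wasm.image/variant: compat annotation is what tells crun to run the container as a WebAssembly workload instead of a regular Linux container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: wasi-demo
  annotations:
    # Marks the container as a WebAssembly workload for crun.
    module.wasm.image/variant: compat
spec:
  restartPolicy: Never
  containers:
    - name: wasi-demo
      image: hydai/wasm-wasi-example:with-wasm-annotation
      args: ["/wasi_example_main.wasm", "50000000"]
```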

    The WebAssembly pod was successfully deployed to the edge node.

    $ kubectl describe pod wasi-demo
    Name:         wasi-demo
    Namespace:    default
    Priority:     0
    Node:         edge/192.168.122.229
    Start Time:   Mon, 06 Dec 2021 15:45:34 +0000
    Labels:       run=wasi-demo
    Annotations:  module.wasm.image/variant: compat
    Status:       Succeeded
    IP:
    IPs:          <none>
    Containers:
      wasi-demo:
        Container ID:  cri-o://1ae4d0d7f671050331a17e9b61b5436bf97ad35ad0358bef043ab820aed81069
        Image:         hydai/wasm-wasi-example:with-wasm-annotation
        Image ID:      docker.io/hydai/wasm-wasi-example@sha256:525aab8d6ae8a317fd3e83cdac14b7883b92321c7bec72a545edf276bb2100d6
        Port:          <none>
        Host Port:     <none>
        Args:
          /wasi_example_main.wasm
          50000000
        State:          Terminated
          Reason:       Completed
          Exit Code:    0
          Started:      Mon, 06 Dec 2021 15:45:33 +0000
          Finished:     Mon, 06 Dec 2021 15:45:33 +0000
        Ready:          False
        Restart Count:  0
        Environment:    <none>
        Mounts:
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bhszr (ro)
    Conditions:
      Type              Status
      Initialized       True
      Ready             False
      PodScheduled      True
    Volumes:
      kube-api-access-bhszr:
        Type:                    Projected (a volume that contains injected data from multiple sources)
        TokenExpirationSeconds:  3607
        ConfigMapName:           kube-root-ca.crt
        ConfigMapOptional:       <nil>
        DownwardAPI:             true
    QoS Class:                   BestEffort
    Node-Selectors:              <none>
    Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    Events:
      Type    Reason    Age    From    Message
      ----    ------    ----   ----    -------
    $ sudo crictl ps -a
    CONTAINER       IMAGE                                                              CREATED          STATE    NAME        ATTEMPT   POD ID
    1ae4d0d7f6710   0423b8eb71e312b8aaa09a0f0b6976381ff567d5b1e5729bf9b9aa87bff1c9f3   16 minutes ago   Exited   wasi-demo   0         2bc2ac0c32eda

    That’s it.