3 - vSphere Storage
In order to provision vSphere volumes in a cluster created with the Rancher Kubernetes Engine (RKE), the vSphere cloud provider must be explicitly enabled in the cluster options.
- From the Global view, open the cluster where you want to provide vSphere storage.
- From the main menu, select Storage > Storage Classes. Then click Add Class.
- Enter a Name for the class.
- Under Provisioner, select VMware vSphere Volume.
- Click Save.
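For reference, a minimal command-line sketch of an equivalent storage class is shown below. The class name `vsphere-storage` is an assumption; the provisioner behind the VMware vSphere Volume option is the in-tree `kubernetes.io/vsphere-volume` provisioner.

```bash
# Sketch of an equivalent StorageClass created with kubectl.
# The class name "vsphere-storage" is an assumption; use any name you like.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-storage
provisioner: kubernetes.io/vsphere-volume
EOF
```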
- From the cluster where you configured vSphere storage, begin creating a workload as you normally would.
- For Workload Type, select Stateful set of 1 pod.
- Expand the Volumes section and click Add Volume.
- Choose Add a new persistent volume (claim). This option will implicitly create the claim once you deploy the workload.
- Assign a Name for the claim, e.g. `test-volume`, and select the vSphere storage class created in the previous step.
- Enter the required Capacity for the volume. Then click Define.
- Click Launch to create the workload.
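If you prefer to define the workload declaratively rather than through the UI, the sketch below shows a roughly equivalent StatefulSet with a volume claim template. It is not the exact object Rancher creates: the names (`vsphere-test`, `test-volume`), the `nginx` image, the `/data` mount path, and the 1Gi capacity are assumptions for illustration, and the storage class name must match the one created earlier.

```bash
# Rough declarative equivalent of the UI steps above (a sketch only).
# Names, image, mount path, and size are assumptions.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: vsphere-test
spec:
  serviceName: vsphere-test   # a matching headless Service is normally created too; omitted here
  replicas: 1
  selector:
    matchLabels:
      app: vsphere-test
  template:
    metadata:
      labels:
        app: vsphere-test
    spec:
      containers:
      - name: app
        image: nginx
        volumeMounts:
        - name: test-volume
          mountPath: /data    # the volume mount point inside the container
  volumeClaimTemplates:
  - metadata:
      name: test-volume
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: vsphere-storage   # must match the vSphere storage class
      resources:
        requests:
          storage: 1Gi
EOF
```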
- From the context menu of the workload you just created, click Execute Shell.
- Create a file in the volume, for example by executing `touch /<volumeMountPoint>/test.txt`.
- Close the shell window.
- Click on the name of the workload to reveal detailed information.
- Open the context menu next to the Pod in the Running state.
- Delete the Pod by selecting Delete.
- Observe that the pod is deleted and that a new pod is scheduled to replace it, so that the workload maintains its configured scale of a single stateful pod.
- Once the replacement pod is running, click Execute Shell.
- Inspect the contents of the directory where the volume is mounted by entering `ls -l /<volumeMountPoint>`. Note that the file you created earlier is still present.
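The same persistence check can also be scripted with kubectl instead of the UI shell. The sketch below assumes the names from the StatefulSet sketch earlier (label `app=vsphere-test`, mount path `/data`).

```bash
# Command-line version of the persistence test (a sketch; names are assumptions
# matching the StatefulSet example above).
POD=$(kubectl get pods -l app=vsphere-test -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD" -- touch /data/test.txt        # create a file in the volume
kubectl delete pod "$POD"                          # the StatefulSet schedules a replacement
kubectl rollout status statefulset/vsphere-test    # wait until the new pod is ready
POD=$(kubectl get pods -l app=vsphere-test -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD" -- ls -l /data                 # the file created earlier is still present
```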
You should always use StatefulSets for workloads consuming vSphere storage, as this resource type is designed to address a caveat of VMDK block storage.
Using a Deployment, even with just a single replica, may result in a deadlock while updating it: if the updated pod is scheduled to a different node than the existing pod, it will fail to start because the VMDK is still attached to the other node.