OpenEBS for Percona
Percona is highly scalable and requires the underlying persistent storage to be equally scalable and performant. OpenEBS provides scalable storage for Percona, enabling a simple, RDS-like solution in both on-premise and cloud environments.
Advantages of using OpenEBS for Percona database:
- Storage is highly available. Data is replicated onto three different nodes, even across zones, so node upgrades and node failures do not result in unavailability of persistent data.
- For each Percona database instance, a dedicated OpenEBS workload is allocated so that granular storage policies can be applied. The OpenEBS storage controller can be tuned with resources such as memory, CPU, and the number and type of disks for optimal performance.
Deployment model
As shown above, OpenEBS volumes need to be configured with three replicas for high availability. This configuration works well when the nodes (and hence the cStor pools) are deployed across Kubernetes zones.
Install OpenEBS
If OpenEBS is not installed in your K8s cluster, install it first by following the OpenEBS installation steps. If OpenEBS is already installed, proceed to the next step.
Configure cStor Pool
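No pool configuration is reproduced in this section; as an illustrative sketch only (the claim name, disk type, and pool layout are assumptions, not taken from this document), a cStor pool is typically requested through a StoragePoolClaim along these lines:

```yaml
# Hedged sketch of a cStor StoragePoolClaim; names and poolType are assumptions.
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk-pool        # assumed pool claim name
spec:
  name: cstor-disk-pool
  type: disk                   # build the pool from block devices rather than sparse files
  poolSpec:
    poolType: striped          # striped layout; mirrored is another common choice
```

Refer to the configuration details section and the OpenEBS documentation for the exact pool specification used with this guide.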
Launch and test Percona:
Create a file called percona-openebs-deployment.yaml and add the content given in the configuration details section. Run kubectl apply -f percona-openebs-deployment.yaml to deploy the Percona application. For more information, see the Percona documentation. Alternatively, you can use the stable Percona image with helm to deploy Percona in your cluster.
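The deployment step above can be sketched as a Deployment plus a PersistentVolumeClaim bound to an OpenEBS storage class. This is a hedged outline, not the exact manifest from the configuration details section: the names, storage class, password, and capacity below are illustrative assumptions.

```yaml
# Hedged sketch of percona-openebs-deployment.yaml; all names are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: percona
  labels:
    name: percona
spec:
  replicas: 1
  selector:
    matchLabels:
      name: percona
  template:
    metadata:
      labels:
        name: percona
    spec:
      containers:
      - name: percona
        image: percona
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: k8sDem0          # sample password; replace in real deployments
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: demo-vol1
          mountPath: /var/lib/mysql
      volumes:
      - name: demo-vol1
        persistentVolumeClaim:
          claimName: demo-vol1-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-vol1-claim
spec:
  storageClassName: openebs-cstor-disk   # assumed OpenEBS cStor storage class
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5G
```

The key point is the storageClassName: it is the PVC's storage class, not the Deployment, that routes the MySQL data directory onto an OpenEBS cStor volume.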
Reference at openebs.ci
A live reference deployment of Percona using OpenEBS can be seen at https://openebs.ci.
Sample YAML files for running Percona-MySQL using cStor are provided in the configuration details section.
OpenEBS-CI dashboard of Percona
Increasing the size of a cStor volume is not yet seamless (refer to the roadmap item). Hence, it is recommended to allocate sufficient size during the initial configuration.
Monitor cStor Pool size
In most cases, the cStor pool may not be dedicated to the Percona database alone. It is recommended to watch the pool capacity and add more disks to the pool before it reaches the 80% threshold.
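The 80% guidance above can be turned into a simple check. The sketch below is hypothetical: the way usage is obtained is an assumption (in a real cluster you would read used capacity from the cStor pool custom resources, e.g. via kubectl get csp, whose output columns vary by OpenEBS version), so the used percentage is hard-coded here for illustration.

```shell
# Hypothetical capacity check against the 80% pool threshold.
USED_PCT=72    # hard-coded for illustration; derive from pool metrics in practice
THRESHOLD=80

if [ "$USED_PCT" -ge "$THRESHOLD" ]; then
  echo "cStor pool above ${THRESHOLD}% used - add disks to the pool"
else
  echo "cStor pool capacity ok (${USED_PCT}% used)"
fi
```

Such a check can run as a cron job or be replaced entirely by alerting on pool metrics in your monitoring stack.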
Maintain volume replica quorum during node upgrades
cStor volume replicas need to be in quorum when applications are deployed as a Kubernetes deployment and the cStor volume is configured with three replicas. Node reboots may be common during a Kubernetes upgrade; maintain volume replica quorum in such instances. See here for more details.
Configuration details
openebs-config.yaml
openebs-sc-disk.yaml
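The file contents are not reproduced here; as a hedged sketch of what openebs-sc-disk.yaml might contain (the class name and pool claim name are illustrative assumptions, while the replica count of 3 follows the high-availability guidance above), a cStor StorageClass generally takes this shape:

```yaml
# Hedged sketch of an OpenEBS cStor StorageClass; names are assumptions.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-cstor-disk          # assumed class name, referenced by the PVC
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-disk-pool"    # assumed pool claim name
      - name: ReplicaCount
        value: "3"                  # three replicas, per the deployment model above
provisioner: openebs.io/provisioner-iscsi
```

The ReplicaCount policy in the annotation is what enforces the three-replica configuration for every volume provisioned from this class.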