Using Preemptible VMs and GPUs on GCP

    This document describes how to configure preemptible virtual machines (preemptible VMs) and GPUs on preemptible VM instances (preemptible GPUs) for your workflows running on Kubeflow Pipelines on Google Cloud Platform (GCP).

    Preemptible VMs are Compute Engine VM instances that last a maximum of 24 hours and provide no availability guarantees. The pricing of preemptible VMs is lower than that of standard Compute Engine VMs.

    GPUs attached to preemptible instances (preemptible GPUs) work like normal GPUs but persist only for the life of the instance.

    Using preemptible VMs and GPUs can reduce costs on GCP. In addition to using preemptible VMs, your Google Kubernetes Engine (GKE) cluster can autoscale based on current workloads.

    This guide assumes that you have already deployed Kubeflow Pipelines. If not, follow the guide to deploying Kubeflow on GCP.

    Using preemptible VMs with Kubeflow Pipelines

    In summary, the steps to schedule a pipeline to run on preemptible VMs are as follows:

    • Create a node pool in your cluster that contains preemptible VMs.
    • Configure your pipelines to run on the preemptible VMs.

    The following sections contain more detail about the above steps.

    Use the gcloud command to create a node pool. The following example includes placeholders to illustrate the important configurations:
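
    gcloud container node-pools create PREEMPTIBLE_CPU_POOL \
        --cluster=CLUSTER_NAME \
        --enable-autoscaling --max-nodes=MAX_NODES --min-nodes=MIN_NODES \
        --preemptible \
        --node-taints=preemptible=true:NoSchedule \
        --service-account=DEPLOYMENT_NAME-vm@PROJECT_NAME.iam.gserviceaccount.com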

    Where:

    • PREEMPTIBLE_CPU_POOL is the name of the node pool.
    • CLUSTER_NAME is the name of the GKE cluster.
    • MAX_NODES and MIN_NODES are the maximum and minimum number of nodes for the GKE autoscaling functionality.
    • DEPLOYMENT_NAME is the name of your Kubeflow deployment. If you used the CLI to deploy Kubeflow, this name is the value of the ${KF_NAME} environment variable. If you used the deployment UI, this name is the value you specified as the deployment name.
    • PROJECT_NAME is the name of your GCP project.

    Below is an example of the command:

    gcloud container node-pools create preemptible-cpu-pool \
        --cluster=user-4-18 \
        --enable-autoscaling --max-nodes=4 --min-nodes=0 \
        --preemptible \
        --node-taints=preemptible=true:NoSchedule \
        --service-account=user-4-18-vm@ml-pipeline-project.iam.gserviceaccount.com

    In the DSL code for your pipeline, add the following to the ContainerOp instance:
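
    .apply(gcp.use_preemptible_nodepool())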

    The above function works for both methods of generating a ContainerOp: one constructed directly from kfp.dsl.ContainerOp, and one returned by a task factory function (for example, a factory created by kfp.components.func_to_container_op or loaded by kfp.components.load_component_from_url).
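
    For instance, here is a minimal sketch of the second method using a lightweight Python component (the add function and pipeline are illustrative, not part of the original guide):

    import kfp.dsl as dsl
    import kfp.components as comp
    import kfp.gcp as gcp

    # A lightweight Python function turned into a component (a task factory).
    def add(a: float, b: float) -> float:
        return a + b

    add_op = comp.func_to_container_op(add)

    @dsl.pipeline(
        name='add pipeline',
        description='applies the preemptible setting to a factory-generated op.'
    )
    def add_pipeline(a: float = 1, b: float = 7):
        # The factory call returns a ContainerOp, so apply() works the same way.
        add_task = add_op(a, b).apply(gcp.use_preemptible_nodepool())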

    Note:

    • Call .set_retry(#NUM_RETRY) on your ContainerOp to retry the task after the task is preempted.
    • If you modified the node taint when creating the node pool, pass the same node toleration to the use_preemptible_nodepool() function.
    • use_preemptible_nodepool() also accepts a parameter hard_constraint. When hard_constraint is True, the system will strictly schedule the task on preemptible VMs. When hard_constraint is False, the system will try to schedule the task on preemptible VMs; if it cannot find preemptible VMs, or the preemptible VMs are busy, it will schedule the task on normal VMs. A minimal sketch combining these options follows this list.
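
    For instance (FlipCoinOp is the op defined in the example below; the retry count is illustrative):

    # Retry up to 3 times after preemption, and fall back to normal VMs
    # when no preemptible node is available (hard_constraint=False).
    flip = FlipCoinOp().set_retry(3).apply(
        gcp.use_preemptible_nodepool(hard_constraint=False))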

    For example:

    import kfp.dsl as dsl
    import kfp.gcp as gcp

    class FlipCoinOp(dsl.ContainerOp):
        """Flip a coin and output heads or tails randomly."""

        def __init__(self):
            super(FlipCoinOp, self).__init__(
                name='Flip',
                image='python:alpine3.6',
                command=['sh', '-c'],
                arguments=['python -c "import random; result = \'heads\' if random.randint(0,1) == 0 '
                           'else \'tails\'; print(result)" | tee /tmp/output'],
                file_outputs={'output': '/tmp/output'})

    @dsl.pipeline(
        name='pipeline flip coin',
        description='shows how to use dsl.Condition.'
    )
    def flipcoin():
        flip = FlipCoinOp().apply(gcp.use_preemptible_nodepool())

    if __name__ == '__main__':
        import kfp.compiler as compiler
        compiler.Compiler().compile(flipcoin, __file__ + '.zip')

    Using preemptible GPUs with Kubeflow Pipelines

    This guide assumes that you have already deployed Kubeflow Pipelines. In summary, the steps to schedule a pipeline to run with preemptible GPUs are as follows:

    • Make sure you have enough GPU quota.
    • Create a node pool in your GKE cluster that contains preemptible VMs with preemptible GPUs.
    • Configure your pipelines to run on the preemptible VMs with preemptible GPUs.

    The following sections contain more detail about the above steps.

    Add GPU quota to your GCP project. The GCP documentation lists the availability of GPUs across regions. To check the available quota for resources in your project, go to the Quotas page in the GCP Console.
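
    You can also inspect a region's quotas from the command line. For example (us-central1 is an illustrative region; substitute your own):

    # Describe a region; the output includes its quotas, such as GPU quotas.
    gcloud compute regions describe us-central1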

    Use the gcloud command to create a node pool. The following example includes placeholders to illustrate the important configurations:
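
    gcloud container node-pools create PREEMPTIBLE_GPU_POOL \
        --cluster=CLUSTER_NAME \
        --enable-autoscaling --max-nodes=MAX_NODES --min-nodes=MIN_NODES \
        --preemptible \
        --service-account=DEPLOYMENT_NAME-vm@PROJECT_NAME.iam.gserviceaccount.com \
        --accelerator=type=GPU_TYPE,count=GPU_COUNT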

    Where:

    • PREEMPTIBLE_GPU_POOL is the name of the node pool.
    • CLUSTER_NAME is the name of the GKE cluster.
    • MAX_NODES and MIN_NODES are the maximum and minimum number of nodes for the GKE autoscaling functionality.
    • DEPLOYMENT_NAME is the name of your Kubeflow deployment. If you used the CLI to deploy Kubeflow, this name is the value of the ${KF_NAME} environment variable. If you used the deployment UI, this name is the value you specified as the deployment name.
    • PROJECT_NAME is the name of your GCP project.
    • GPU_TYPE is the type of GPU.
    • GPU_COUNT is the number of GPUs.

    Below is an example of the command:

    gcloud container node-pools create preemptible-gpu-pool \
        --cluster=user-4-18 \
        --enable-autoscaling --max-nodes=4 --min-nodes=0 \
        --preemptible \
        --service-account=user-4-18-vm@ml-pipeline-project.iam.gserviceaccount.com \
        --accelerator=type=nvidia-tesla-t4,count=2
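
    In the DSL code for your pipeline, add the following to the ContainerOp instance:

    .apply(gcp.use_preemptible_nodepool())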

    The above function works for both methods of generating a ContainerOp: one constructed directly from kfp.dsl.ContainerOp, and one returned by a task factory function (for example, a factory created by kfp.components.func_to_container_op or loaded by kfp.components.load_component_from_url).

    Note:

    • Call .set_gpu_limit(#NUM_GPUs, GPU_VENDOR) on your ContainerOp to specify the GPU limit (for example, 1) and vendor (for example, 'nvidia').
    • Call .set_retry(#NUM_RETRY) on your ContainerOp to retry the task after the task is preempted.
    • If you modified the node taint when creating the node pool, pass the same node toleration to the use_preemptible_nodepool() function.
    • use_preemptible_nodepool() also accepts a parameter hard_constraint. When hard_constraint is True, the system will strictly schedule the task on preemptible VMs. When hard_constraint is False, the system will try to schedule the task on preemptible VMs; if it cannot find preemptible VMs, or the preemptible VMs are busy, it will schedule the task on normal VMs.

    For example:

    import kfp.dsl as dsl
    import kfp.gcp as gcp

    class FlipCoinOp(dsl.ContainerOp):
        """Flip a coin and output heads or tails randomly."""

        def __init__(self):
            super(FlipCoinOp, self).__init__(
                name='Flip',
                image='python:alpine3.6',
                command=['sh', '-c'],
                arguments=['python -c "import random; result = \'heads\' if random.randint(0,1) == 0 '
                           'else \'tails\'; print(result)" | tee /tmp/output'],
                file_outputs={'output': '/tmp/output'})

    @dsl.pipeline(
        name='pipeline flip coin',
        description='shows how to use dsl.Condition.'
    )
    def flipcoin():
        flip = FlipCoinOp().set_gpu_limit(1, 'nvidia').apply(gcp.use_preemptible_nodepool())

    if __name__ == '__main__':
        import kfp.compiler as compiler
        compiler.Compiler().compile(flipcoin, __file__ + '.zip')

    Comparison with Cloud AI Platform Training service

    Cloud AI Platform Training is a GCP machine learning (ML) training service that supports distributed training and hyperparameter tuning, and requires no complex GKE configuration. Cloud AI Platform Training charges the Compute Engine costs only for the runtime of the job.

    The table below compares Cloud AI Platform Training with Kubeflow Pipelines running preemptible VMs or GPUs:
