Running an experiment

    This page describes in detail how to configure and run a Katib experiment. The experiment can perform hyperparameter tuning or a neural architecture search (NAS) (alpha), depending on the configuration settings.

    For an overview of the concepts involved, read the introduction to Katib.

    Katib and Kubeflow are Kubernetes-based systems. To use Katib, you must package your training code in a Docker container image and make the image available in a registry. See the Docker documentation.

    To create a hyperparameter tuning or NAS experiment in Katib, you define the experiment in a YAML configuration file. The YAML file defines the range of potential values (the search space) for the parameters that you want to optimize, the objective metric to use when determining optimal values, the search algorithm to use during optimization, and other configurations.

    See the YAML file for the random algorithm example.

    The list below describes the fields in the YAML file for an experiment. The Katib UI offers the corresponding fields. You can choose to configure and run the experiment from the UI or from the command line.

    These are the fields in the experiment configuration spec:

    • parameters: The range of the hyperparameters or other parameters that you want to tune for your ML model. The parameters define the search space, also known as the feasible set or the solution space. In this section of the spec, you define the name and the distribution (discrete or continuous) of every hyperparameter that you need to search. For example, you may provide a minimum and maximum value or a list of allowed values for each hyperparameter. Katib generates hyperparameter combinations in the range based on the hyperparameter tuning algorithm that you specify. See the example spec after this list.

    • algorithm: The search algorithm that you want Katib to use to find the best hyperparameters or neural architecture configuration. Examples include random search, grid search, Bayesian optimization, and more. See the search algorithm details below.

    • trialTemplate: The template that defines the trial. You must package your ML training code into a Docker image, as described above. You must configure the model’s hyperparameters either as command-line arguments or as environment variables, so that Katib can automatically set the values in each trial.

    You can use a Kubernetes Job or one of the Kubeflow job types, such as a TFJob or PyTorchJob, to train your model.

    You can define the job in raw string format or you can use a ConfigMap.

    • parallelTrialCount: The maximum number of hyperparameter sets that Katib should train in parallel.

    • maxTrialCount: The maximum number of trials to run. This is equivalent to the number of hyperparameter sets that Katib should generate to test the model.

    • maxFailedTrialCount: The maximum number of failed trials before Katib should stop the experiment. This is equivalent to the number of failed hyperparameter sets that Katib should test. If the number of failed trials exceeds maxFailedTrialCount, Katib stops the experiment with a status of Failed.

    • metricsCollectorSpec: A specification of how to collect the metrics from each trial, such as the accuracy and loss metrics. See the details of the metrics collector below.

    • nasConfig: The configuration for a neural architecture search (NAS). Note: NAS is currently in alpha with limited support. You can specify the configurations of the neural network design that you want to optimize, including the number of layers in the network, the types of operations, and more. As an example, see the YAML file for the nasjob-example-RL-gpu example. The example aims to show all the possible operations. Due to the large search space, the example is not likely to generate a good result.
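    To show how these fields fit together, here is a minimal sketch of a complete experiment configuration. It assumes the v1alpha3 version of the Katib API that was current when this page was written; the image, metric name, and hyperparameter flags are hypothetical placeholders rather than a runnable training job. The objective section holds the objective metric discussed earlier.

        apiVersion: "kubeflow.org/v1alpha3"
        kind: Experiment
        metadata:
          namespace: kubeflow
          name: random-example
        spec:
          objective:
            type: maximize
            goal: 0.99
            objectiveMetricName: Validation-accuracy    # hypothetical metric name
          algorithm:
            algorithmName: random
          parallelTrialCount: 3
          maxTrialCount: 12
          maxFailedTrialCount: 3
          parameters:
            - name: --lr                                # hypothetical command-line flag
              parameterType: double
              feasibleSpace:
                min: "0.01"
                max: "0.03"
            - name: --optimizer                         # hypothetical command-line flag
              parameterType: categorical
              feasibleSpace:
                list:
                  - sgd
                  - adam
          trialTemplate:
            goTemplate:
              rawTemplate: |-
                apiVersion: batch/v1
                kind: Job
                metadata:
                  name: {{.Trial}}
                  namespace: {{.NameSpace}}
                spec:
                  template:
                    spec:
                      containers:
                        - name: {{.Trial}}
                          image: docker.io/your-org/your-training-image    # hypothetical image
                          command:
                            - "python"
                            - "train.py"
                            {{- with .HyperParameters}}
                            {{- range .}}
                            - "{{.Name}}={{.Value}}"
                            {{- end}}
                            {{- end}}
                      restartPolicy: Never

    Katib renders this template into one Kubernetes Job per trial, appending each hyperparameter to the training command as a name=value argument.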

    Background information about Katib’s Experiment type: In Kubernetes terminology, Katib’s Experiment type is a custom resource (CR). The YAML file that you create for your experiment is the CR specification.

    Katib currently supports several search algorithms. See the AlgorithmSpec type.

    Here’s a list of the search algorithms available in Katib. The links lead to descriptions on this page:

    • Grid search
    • Random search
    • Bayesian optimization
    • HYPERBAND
    • Hyperopt TPE
    • NAS using reinforcement learning

    More algorithms are under development. You can add an algorithm to Katib yourself: see the guide to adding a new algorithm in the developer guide.

    Grid search

    The algorithm name in Katib is grid.

    Grid sampling is useful when all variables are discrete (as opposed to continuous) and the number of possibilities is low. A grid search performs an exhaustive combinatorial search over all possibilities, making the search process extremely long even for medium-sized problems.

    Katib uses the Chocolate optimization framework for its grid search.
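    For illustration, a grid search over two discrete parameters might be declared like this in the experiment spec. This is a sketch; the flag names are hypothetical placeholders:

        algorithm:
          algorithmName: grid
        parameters:
          - name: --num-layers        # hypothetical flag: small discrete range
            parameterType: int
            feasibleSpace:
              min: "2"
              max: "5"
          - name: --optimizer         # hypothetical flag: short list of categories
            parameterType: categorical
            feasibleSpace:
              list:
                - sgd
                - adam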

    Random search

    The algorithm name in Katib is random.

    Random sampling is an alternative to grid search, useful when the number of discrete variables to optimize is large and the time required for each evaluation is long. When all parameters are discrete, random search performs sampling without replacement. Random search is therefore the best algorithm to use when combinatorial exploration is not possible. If the number of continuous variables is high, you should use quasi-random sampling instead.

    Katib uses the hyperopt optimization framework for its random search.

    Katib supports the following algorithm settings:

    • random_state [int]: Set random_state to something other than None for reproducible results. Example: 10

    Bayesian optimization

    The algorithm name in Katib is skopt-bayesian-optimization.

    The method uses Gaussian process regression to model the search space. This technique calculates an estimate of the loss function and the uncertainty of that estimate at every point in the search space. The method is suitable when the number of dimensions in the search space is low. Since the method models both the expected loss and the uncertainty, the search algorithm converges in a few steps, making it a good choice when the time to complete the evaluation of a parameter configuration is long.

    Katib uses the Scikit-Optimize library for its Bayesian search. Scikit-Optimize is also known as skopt.

    Katib supports the following algorithm settings:

    • base_estimator [“GP”, “RF”, “ET”, “GBRT” or sklearn regressor, default=“GP”]: Should inherit from sklearn.base.RegressorMixin. The predict method should have an optional return_std argument, which returns std(Y | x) along with E[Y | x]. If base_estimator is one of [“GP”, “RF”, “ET”, “GBRT”], the system uses a default surrogate model of the corresponding type. See more information in the skopt documentation. Example: GP

    • n_initial_points [int, default=10]: Number of evaluations of func with initialization points before approximating it with base_estimator. Points provided as x0 count as initialization points. If len(x0) < n_initial_points, the system samples additional points at random. See more information in the skopt documentation. Example: 10

    • acq_func [string, default="gp_hedge"]: The function to minimize over the posterior distribution. See more information in the skopt documentation. Example: gp_hedge

    • acq_optimizer [string, “sampling” or “lbfgs”, default=“auto”]: The method to minimize the acquisition function. The system updates the fit model with the optimal value obtained by optimizing acq_func with acq_optimizer. See more information in the skopt documentation. Example: auto

    • random_state [int]: Set random_state to something other than None for reproducible results. Example: 10
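    In the experiment spec, these settings go into the algorithm section as name/value pairs. The following is a sketch using the example values from the list above:

        algorithm:
          algorithmName: skopt-bayesian-optimization
          algorithmSettings:
            - name: base_estimator
              value: GP
            - name: n_initial_points
              value: "10"
            - name: random_state
              value: "10"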

    HYPERBAND

    The algorithm name in Katib is hyperband.

    Katib supports the HYPERBAND optimization framework. Instead of using Bayesian optimization to select configurations, HYPERBAND focuses on early stopping as a strategy for optimizing resource allocation and thus for maximizing the number of configurations that it can evaluate. HYPERBAND also focuses on the speed of the search.

    Hyperopt TPE

    The algorithm name in Katib is tpe.

    Katib uses the Tree of Parzen Estimators (TPE) algorithm in hyperopt. This method provides a sequential model-based optimization (SMBO) approach to the search.

    NAS using reinforcement learning

    Alpha version

    Neural architecture search is currently in alpha with limited support. The Kubeflow team is interested in any feedback you may have, in particular with regards to usability of the feature. You can log issues and comments in the Katib issue tracker.

    The algorithm name in Katib is nasrl.

    For more information, see:

    In the metricsCollectorSpec section of the YAML configuration file, you can define how Katib should collect the metrics from each trial, such as the accuracy and loss metrics.

    Your training code can record the metrics into stdout or into arbitrary output files. Katib collects the metrics using a sidecar container. A sidecar is a utility container that supports the main container in the Kubernetes Pod.

    • Specify the metrics output location in the source field. See the MetricsCollectorSpec type for default values.

    • Write code in your training container to print metrics in the format specified in the metricsCollectorSpec.source.filter.metricsFormat field. The default format is ([\w|-]+)\s=\s((-?\d+)(\.\d+)?). Each element is a regular expression with two subexpressions. The first matched expression is taken as the metric name. The second matched expression is taken as the metric value.
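    As an illustration, a metrics collector that reads metrics from an output file might be configured like the sketch below. The file path is a hypothetical location written by the training code, and the filter shown simply repeats the default format:

        metricsCollectorSpec:
          collector:
            kind: File
          source:
            fileSystemPath:
              path: /var/log/katib/metrics.log    # hypothetical output file
              kind: File
            filter:
              metricsFormat:
                - '([\w|-]+)\s=\s((-?\d+)(\.\d+)?)'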

    For example, using the default metrics format, if the name of your objective metric is loss and the additional metrics are recall and precision, your training code should print the following output:
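        loss = 0.3
        recall = 0.5
        precision = 0.4

    The metric values above are illustrative. Each line matches the default format: a metric name, a whitespace character, an equals sign, a whitespace character, and a numeric value.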

    You can run a Katib experiment from the command line or from the Katib UI.

    You can use kubectl to launch an experiment from the command line:
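        kubectl apply -f <your-path/your-experiment-config.yaml>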

    For example, run the following command to launch an experiment using therandom algorithm example:
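    The following assumes you have saved the random algorithm example YAML to a local file; the file name here is a hypothetical placeholder:

        # assumes the random example YAML was downloaded as random-example.yaml
        kubectl apply -f random-example.yaml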

    Check the experiment status:
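    Assuming Katib is deployed in the kubeflow namespace (adjust the namespace to match your deployment):

        kubectl -n kubeflow describe experiment <experiment-name>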

    For example, to check the status of the random algorithm example:
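        kubectl -n kubeflow describe experiment random-example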

    Instead of using the command line, you can submit an experiment from the Katib UI. The following steps assume you want to run a hyperparameter tuning experiment. If you want to run a neural architecture search, access the NAS section of the UI (instead of the HP section) and then follow a similar sequence of steps.

    To run a hyperparameter tuning experiment from the Katib UI:

    • Follow the getting-started guide to access the Katib UI.
    • Click Hyperparameter Tuning on the Katib home page.
    • Open the Katib menu panel on the left, then open the HP section and click Submit:

    • Click on the right-hand panel to close the menu panel. You should see tabs offering you the following options:

      • YAML file: Choose this option to supply an entire YAML file containing the configuration for the experiment.

    UI tab to paste a YAML configuration file

    • Parameters: Choose this option to enter the configuration values into a form.

    View the results of the experiment in the Katib UI:

    • Open the Katib menu panel on the left, then open the HP section and click Monitor:

    The Katib menu panel

    • Click on the right-hand panel to close the menu panel. You should see the list of experiments:

    • Click the name of your experiment. For example, click random-example.

    • You should see a graph showing the level of accuracy for various combinations of the hyperparameter values. For example, the graph below shows learning rate, number of layers, and optimizer:

    Graph produced by the random example

    • Below the graph is a list of trials that ran within the experiment. Click a trial name to see the trial data.
    • For an overview of the concepts involved in hyperparameter tuning and neural architecture search, read the introduction to Katib.
