Recognition of MNIST Handwritten Digits

    • Configuring the hardware and software environment using the OneFlow interface

    • Defining a model using OneFlow's interface

    • Training a model with the train type

    • How to save/load a model

    • Evaluating a model with the predict type

    • Recognizing images with the predict type

    This article demonstrates the key steps of training a LeNet model on the MNIST dataset using OneFlow. The full example code is attached at the end of the article.

    You can see the effect of each script by running the following commands (the scripts use GPU No. 0 on your machine by default; if you installed the CPU version of OneFlow, they automatically use the CPU for training/evaluation).

    First of all, clone the documentation repository and switch to the corresponding path:
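    A minimal sketch of these commands; the repository URL is an assumption, and the target path is taken from the comment in the download snippet further below:

    git clone https://github.com/Oneflow-Inc/oneflow-documentation.git
    cd oneflow-documentation/en/docs/code/quick_start/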

    • Training model

      python lenet_train.py

      The command above trains a model on the MNIST dataset and saves it.

    Output:

    File mnist.npz already exist, path: ./mnist.npz
    5.9947124
    1.0865117
    0.5317516
    0.20937675
    0.26428983
    0.21764673
    0.23443426
    ...
    You can also download and unzip a trained model instead of training one yourself:

    # change directory to: en/docs/code/quick_start/
    wget https://oneflow-public.oss-cn-beijing.aliyuncs.com/online_document/docs/quick_start/lenet_models_1.zip
    unzip lenet_models_1.zip
    • Evaluation

      python lenet_eval.py

      The command above evaluates the trained model on the MNIST test set and prints the accuracy.

    Output:

    File mnist.npz already exist, path: ./mnist.npz
    accuracy: 99.4%
    • Image recognition

    MNIST is a database of handwritten digits consisting of a training set and a test set. The training set contains 60,000 images together with their corresponding labels. Yann LeCun et al. have normalized all the images and packed them into binary files for downloading: http://yann.lecun.com/exdb/mnist/
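    The scripts read the dataset from a local mnist.npz archive (see the "File mnist.npz already exist" line in the output above). Here is a minimal sketch for inspecting such an archive with numpy, assuming the conventional x_train/y_train/x_test/y_test key names:

    import numpy as np

    # Open the archive and list the arrays it contains.
    with np.load("./mnist.npz") as data:
        print(data.files)  # e.g. ['x_train', 'y_train', 'x_test', 'y_test']
        images, labels = data["x_train"], data["y_train"]
        print(images.shape, labels.shape)  # e.g. (60000, 28, 28) (60000,)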

    Define Training Model

    The oneflow.nn and oneflow.layers modules provide the operators needed to construct the model.

    def lenet(data, train=False):
        initializer = flow.truncated_normal(0.1)
        conv1 = flow.layers.conv2d(
            data,
            32,
            5,
            padding="SAME",
            activation=flow.nn.relu,
            name="conv1",
            kernel_initializer=initializer,
        )
        pool1 = flow.nn.max_pool2d(
            conv1, ksize=2, strides=2, padding="SAME", name="pool1", data_format="NCHW"
        )
        conv2 = flow.layers.conv2d(
            pool1,
            64,
            5,
            padding="SAME",
            activation=flow.nn.relu,
            name="conv2",
            kernel_initializer=initializer,
        )
        pool2 = flow.nn.max_pool2d(
            conv2, ksize=2, strides=2, padding="SAME", name="pool2", data_format="NCHW"
        )
        reshape = flow.reshape(pool2, [pool2.shape[0], -1])
        hidden = flow.layers.dense(
            reshape,
            512,
            activation=flow.nn.relu,
            kernel_initializer=initializer,
            name="dense1",
        )
        if train:
            hidden = flow.nn.dropout(hidden, rate=0.5, name="dropout")
        return flow.layers.dense(hidden, 10, kernel_initializer=initializer, name="dense2")

    The code above builds a LeNet network model.
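    To make the architecture concrete, here is the shape of the activations at each layer for a batch of size B; the sizes follow from the "SAME" padding and stride-2 pooling in the code above:

    # Shape trace (NCHW layout, batch size B):
    #   input    (B,  1, 28, 28)
    #   conv1    (B, 32, 28, 28)   5x5 conv, 32 filters, "SAME" padding
    #   pool1    (B, 32, 14, 14)   2x2 max pool, stride 2
    #   conv2    (B, 64, 14, 14)   5x5 conv, 64 filters, "SAME" padding
    #   pool2    (B, 64,  7,  7)   2x2 max pool, stride 2
    #   reshape  (B, 3136)         64 * 7 * 7 = 3136
    #   dense1   (B, 512)          ReLU, plus dropout when train=True
    #   dense2   (B, 10)           one logit per digit class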

    OneFlow provides a decorator named oneflow.global_function, by which we can convert a Python function into a OneFlow Job Function.

    The decorator takes a type parameter that specifies the type of the job function: type="train" means the job function is for training, and type="predict" means it is for prediction.

    The oneflow.global_function decorator also takes a function_config parameter, which contains configuration related to training.

    @flow.global_function(type="train")
    def train_job(
        images: tp.Numpy.Placeholder((BATCH_SIZE, 1, 28, 28), dtype=flow.float),
        labels: tp.Numpy.Placeholder((BATCH_SIZE,), dtype=flow.int32),
    ) -> tp.Numpy:
        # Implementation of network ...

    tp.Numpy.Placeholder is a placeholder for the input data, and the tp.Numpy annotation on the return type means that the job function returns a numpy object.
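    For instance, a quick smoke test could feed random arrays matching the declared shapes and dtypes; this is a sketch that reuses BATCH_SIZE and train_job from the snippets above:

    import numpy as np

    # Random inputs shaped like the declared placeholders.
    images = np.random.uniform(size=(BATCH_SIZE, 1, 28, 28)).astype(np.float32)
    labels = np.random.randint(0, 10, size=(BATCH_SIZE,)).astype(np.int32)

    loss = train_job(images, labels)  # a numpy.ndarray, per the tp.Numpy annotation
    print(type(loss), loss.shape)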

    Setup Optimizer

    We can use flow.optimizer to specify the parameters needed for optimization. This way, during each training iteration, OneFlow takes the specified object as the optimization goal.

    @flow.global_function(type="train")
    def train_job(
        images: tp.Numpy.Placeholder((BATCH_SIZE, 1, 28, 28), dtype=flow.float),
        labels: tp.Numpy.Placeholder((BATCH_SIZE,), dtype=flow.int32),
    ) -> tp.Numpy:
        with flow.scope.placement("gpu", "0:0"):
            logits = lenet(images, train=True)
            loss = flow.nn.sparse_softmax_cross_entropy_with_logits(
                labels, logits, name="softmax_loss"
            )
        lr_scheduler = flow.optimizer.PiecewiseConstantScheduler([], [0.1])
        flow.optimizer.SGD(lr_scheduler, momentum=0).minimize(loss)
        return loss

    In the code above, we use flow.nn.sparse_softmax_cross_entropy_with_logits to compute the loss and specify it as the optimization goal.

    • lr_scheduler sets the learning rate schedule; [0.1] means the learning rate is a constant 0.1 (see the sketch after this list for a decaying schedule).
    • flow.optimizer.SGD specifies SGD as the optimizer. loss is passed to minimize as the optimization goal; returning the loss from the job function is not required.
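
    As a sketch of a non-constant schedule, assuming PiecewiseConstantScheduler takes a list of step boundaries plus one more learning rate value than boundaries (consistent with its use above with [] and [0.1]):

    # Steps 0-999 use lr 0.1, steps 1000-1999 use 0.01, later steps use 0.001.
    lr_scheduler = flow.optimizer.PiecewiseConstantScheduler(
        [1000, 2000], [0.1, 0.01, 0.001]
    )
    flow.optimizer.SGD(lr_scheduler, momentum=0.9).minimize(loss)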

    Calling the Job Function and Getting Results

    We can start training by invoking the job function.

    The value returned when we call the job function is determined by the return-type annotation in its definition. A job function can return one or multiple results per call.

    Example of a Single Return Value

    The job function in lenet_train.py:

    @flow.global_function(type="train")
    def train_job(
        images: tp.Numpy.Placeholder((BATCH_SIZE, 1, 28, 28), dtype=flow.float),
        labels: tp.Numpy.Placeholder((BATCH_SIZE,), dtype=flow.int32),
    ) -> tp.Numpy:
        with flow.scope.placement("gpu", "0:0"):
            logits = lenet(images, train=True)
            loss = flow.nn.sparse_softmax_cross_entropy_with_logits(
                labels, logits, name="softmax_loss"
            )
        lr_scheduler = flow.optimizer.PiecewiseConstantScheduler([], [0.1])
        flow.optimizer.SGD(lr_scheduler, momentum=0).minimize(loss)
        return loss

    The return type of this job function is tp.Numpy, so each call returns a numpy object:

    for epoch in range(20):
        for i, (images, labels) in enumerate(zip(train_images, train_labels)):
            loss = train_job(images, labels)
            if i % 20 == 0:
                print(loss.mean())

    We call train_job in a loop and print the loss every 20 iterations.

    Example of Multiple Return Values

    In the script lenet_eval.py, the job function returns multiple results (its definition is shown in the evaluation section below). We call it like this:

    for i, (images, labels) in enumerate(zip(test_images, test_labels)):
        labels, logits = eval_job(images, labels)
        acc(labels, logits)

    We call the job function to get labels and logits, and use them to evaluate the model.

    All the code in this article calls job functions synchronously to get their results. In fact, OneFlow can also call job functions asynchronously; for more details, please refer to the documentation on obtaining results from a job function.

    Model Initialization and Saving

    The following example saves the model with flow.checkpoint.save:

    # data loading and training ...
    flow.checkpoint.save("./lenet_models_1")

    When the model is saved, we get a folder named lenet_models_1. This folder contains the directories and files corresponding to the model parameters.
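    A quick way to confirm what was written is to list the folder contents; this is a sketch, and the exact layout of the checkpoint directory depends on the OneFlow version:

    import os

    # Each saved variable ends up somewhere under the checkpoint folder.
    for root, dirs, files in os.walk("./lenet_models_1"):
        for name in files:
            print(os.path.join(root, name))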

    Model Loading

    During prediction, we can load the parameters from disk into memory with flow.checkpoint.get and then assign them to the model with flow.load_variables. For example:

    if __name__ == "__main__":
        flow.load_variables(flow.checkpoint.get("./lenet_models_1"))
        # evaluation process ...

    Evaluation of Model

    The job function for evaluation is basically the same as the one for training. The small difference is that evaluation uses an already-saved model, so initializing the model and updating it during iterations are not needed.

    Job Function for Evaluation

    @flow.global_function(type="predict")
    def eval_job(
        images: tp.Numpy.Placeholder((BATCH_SIZE, 1, 28, 28), dtype=flow.float),
        labels: tp.Numpy.Placeholder((BATCH_SIZE,), dtype=flow.int32),
    ) -> Tuple[tp.Numpy, tp.Numpy]:
        with flow.scope.placement("gpu", "0:0"):
            logits = lenet(images, train=False)
            loss = flow.nn.sparse_softmax_cross_entropy_with_logits(
                labels, logits, name="softmax_loss"
            )
        return (labels, logits)

    The code above implements the evaluation job function; its return type is declared as Tuple[tp.Numpy, tp.Numpy], a tuple of two numpy objects. We call the job function and compute the accuracy from the returned values.

    The acc function counts the total number of samples and the number of correct predictions. We call the job function to get the labels and logits:

    g_total = 0
    g_correct = 0

    def acc(labels, logits):
        global g_total
        global g_correct
        predictions = np.argmax(logits, 1)
        right_count = np.sum(predictions == labels)
        g_total += labels.shape[0]
        g_correct += right_count

    Call the job function for evaluation:
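    A minimal sketch of this step, assembled from the pieces above; load_data and BATCH_SIZE are assumed to be the data-loading helper and batch-size constant from lenet_eval.py:

    if __name__ == "__main__":
        # Load the saved parameters, then run the whole test set through eval_job.
        flow.load_variables(flow.checkpoint.get("./lenet_models_1"))
        (train_images, train_labels), (test_images, test_labels) = load_data(BATCH_SIZE)
        for i, (images, labels) in enumerate(zip(test_images, test_labels)):
            labels, logits = eval_job(images, labels)
            acc(labels, logits)
        print("accuracy: {0:.1f}%".format(g_correct * 100 / g_total))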

    So far, we have called the evaluation job function in a loop and printed the accuracy on the MNIST test set.

    With a few changes, the code above can take its data from raw image files rather than the prepared dataset, letting us use the model to predict the digit in an image.

    def load_image(file):
        # Load the image as grayscale and resize it to the 28x28 input size.
        im = Image.open(file).convert("L")
        im = im.resize((28, 28), Image.ANTIALIAS)
        im = np.array(im).reshape(1, 1, 28, 28).astype(np.float32)
        # Normalize the pixel values.
        im = (im - 128.0) / 255.0
        return im

    def main():
        if len(sys.argv) != 2:
            usage()
            return
        flow.load_variables(flow.checkpoint.get("./lenet_models_1"))
        image = load_image(sys.argv[1])
        logits = test_job(image)
        prediction = np.argmax(logits, 1)
        print("prediction: {}".format(prediction[0]))

    if __name__ == "__main__":
        main()
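    Assuming the script above is saved as lenet_test.py (a hypothetical file name) and you have an image of a handwritten digit, it would be invoked like this; for an image of a 9, the expected output is the predicted digit:

    python lenet_test.py ./9.png
    prediction: 9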

    Code

    Model training

    Script: lenet_train.py

    Model evaluation

    Script: lenet_eval.py

    Saved model: lenet_models_1.zip

    Digits prediction

    Script:

    Saved model: lenet_models_1.zip
