Facial Keypoint Detection

Date: 2021.01

Abstract: This tutorial demonstrates how to implement facial keypoint detection with PaddlePaddle.

1. Introduction

In image processing, a keypoint is essentially a feature: an abstract description of a fixed region or of a spatial, physical relationship, capturing the configuration and context within a neighborhood. A keypoint is more than a single point or position; it represents the relationship between that position and its surrounding context. The goal of keypoint detection is to have a computer locate the coordinates of such points in an image. As a fundamental task in computer vision, keypoint detection is of central importance to higher-level tasks such as recognition and classification.

Keypoint detection methods fall broadly into two categories. The first regresses keypoint coordinates directly; the second models each keypoint as a heatmap, treats detection as a per-pixel classification problem, and recovers the keypoint location from the regressed heatmap distribution. Both are simply different routes to the same goal: finding where these points lie in the image and how they relate to their surroundings.
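
To make the heatmap formulation concrete, here is a minimal, hypothetical sketch (not part of this tutorial's coordinate-regression pipeline): a keypoint is encoded as a Gaussian heatmap, and its location is recovered with an argmax.

    import numpy as np

    def keypoint_to_heatmap(x, y, size=96, sigma=2.0):
        # Gaussian bump centered on the keypoint; heatmap methods regress
        # one such map per keypoint instead of the raw coordinates.
        xs, ys = np.meshgrid(np.arange(size), np.arange(size))
        return np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))

    heatmap = keypoint_to_heatmap(30, 40)
    pred_y, pred_x = np.unravel_index(heatmap.argmax(), heatmap.shape)
    print(pred_x, pred_y)  # 30 40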

Facial keypoint detection is a successful application of these methods. This tutorial briefly shows how to implement it with the open-source PaddlePaddle framework, using the first approach: coordinate regression. We use the Paddle 2.0 API, whose integrated training interface makes model training and prediction straightforward.

2. Environment Setup

This tutorial is written for Paddle 2.0. If your environment is a different version, please first refer to the Paddle 2.0 installation guide on the official website.

    import paddle
    print(paddle.__version__)

    2.0.0

3. Dataset

This tutorial uses the dataset from the Facial Keypoints Detection challenge hosted on Kaggle: https://www.kaggle.com/c/facial-keypoints-detection

The official dataset packs the face images and their annotations into CSV files, which we read with pandas. The dataset contains the following files: training.csv, with the face images and annotated keypoint coordinates for training; test.csv, with the face images for testing, without keypoint annotations; and IdLookupTable.csv, with the names of the test-set keypoint locations to be predicted.

Each image is 96 × 96 pixels, and there are 15 keypoints to detect.

    !unzip -o ./test.zip -d data/data60
    !unzip -o ./training.zip -d data/data60

    unzip: cannot find or open ./test.zip, ./test.zip.zip or ./test.zip.ZIP.
    unzip: cannot find or open ./training.zip, ./training.zip.zip or ./training.zip.ZIP.
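
Once the archives are unpacked, the CSV layout described above can be confirmed with a quick pandas check; this snippet is an illustrative aside (it assumes training.csv sits under ./data/data60), not part of the original notebook.

    import pandas as pd

    df = pd.read_csv('./data/data60/training.csv')
    print(df.shape)                 # rows = samples; columns = 30 coordinates + 'Image'
    print(df.columns[:3].tolist())  # e.g. ['left_eye_center_x', 'left_eye_center_y', ...]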

PaddlePaddle's unified data-loading scheme combines a Dataset (dataset definition) with a DataLoader (multi-process loading); a DataLoader sketch follows the dataset definition below.

First we define the dataset by implementing a new Dataset class: it inherits from paddle.io.Dataset and implements the parent's two abstract methods, __getitem__ and __len__.

    import numpy as np
    import pandas as pd
    import paddle
    from paddle.io import Dataset
    from paddle.vision import transforms

    Train_Dir = './data/data60/training.csv'
    Test_Dir = './data/data60/test.csv'
    lookid_dir = './data/data60/IdLookupTable.csv'

    class ImgTransforms(object):
        """
        Image preprocessing helper: expands a (96, 96) image to (96, 96, 3)
        and transposes the layout from HWC to CHW.
        """
        def __init__(self, fmt):
            self.format = fmt

        def __call__(self, img):
            if len(img.shape) == 2:
                img = np.expand_dims(img, axis=2)
            img = img.transpose(self.format)
            if img.shape[0] == 1:
                img = np.repeat(img, 3, axis=0)
            return img

    class FaceDataset(Dataset):
        def __init__(self, data_path, mode='train', val_split=0.2):
            self.mode = mode
            assert self.mode in ['train', 'val', 'test'], \
                "mode should be 'train', 'val' or 'test', but got {}".format(self.mode)
            self.data_source = pd.read_csv(data_path)
            # Clean the data. Many samples are only partially annotated; two strategies:
            # 1) forward-fill missing keypoints from the previous sample:
            # self.data_source.fillna(method='ffill', inplace=True)
            # 2) drop any sample with missing keypoints (used here):
            self.data_source.dropna(how="any", inplace=True)
            self.data_label_all = self.data_source.drop('Image', axis=1)
            # Split into training and validation sets
            if self.mode in ['train', 'val']:
                np.random.seed(43)
                data_len = len(self.data_source)
                # random split
                shuffled_indices = np.random.permutation(data_len)
                # sequential split
                # shuffled_indices = np.arange(data_len)
                self.shuffled_indices = shuffled_indices
                val_set_size = int(data_len * val_split)
                if self.mode == 'val':
                    val_indices = shuffled_indices[:val_set_size]
                    self.data_img = self.data_source.reindex().iloc[val_indices]
                    self.data_label = self.data_label_all.reindex().iloc[val_indices]
                elif self.mode == 'train':
                    train_indices = shuffled_indices[val_set_size:]
                    self.data_img = self.data_source.reindex().iloc[train_indices]
                    self.data_label = self.data_label_all.reindex().iloc[train_indices]
            elif self.mode == 'test':
                self.data_img = self.data_source
                self.data_label = self.data_label_all
            self.transforms = transforms.Compose([
                ImgTransforms((2, 0, 1))
            ])

        # Return one sample and its label per iteration
        def __getitem__(self, idx):
            img = self.data_img['Image'].iloc[idx].split(' ')
            img = ['0' if x == '' else x for x in img]
            img = np.array(img, dtype='float32').reshape(96, 96)
            img = self.transforms(img)
            label = np.array(self.data_label.iloc[idx, :], dtype='float32') / 96
            return img, label

        # Return the total size of the dataset
        def __len__(self):
            return len(self.data_img)

    # training and validation datasets
    train_dataset = FaceDataset(Train_Dir, mode='train')
    val_dataset = FaceDataset(Train_Dir, mode='val')
    # test dataset
    test_dataset = FaceDataset(Test_Dir, mode='test')
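
The high-level Model API used later builds its data loader internally, but the datasets above can also be consumed directly with paddle.io.DataLoader, as in this minimal sketch (the batch size here is arbitrary):

    from paddle.io import DataLoader

    train_loader = DataLoader(train_dataset, batch_size=256, shuffle=True)
    for batch_id, (img, label) in enumerate(train_loader):
        # each batch: images [N, 3, 96, 96], labels [N, 30]
        print(batch_id, img.shape, label.shape)
        break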

With the Dataset implemented, let's check that it behaves as expected. A Dataset is an iterable class, so we read samples from it in a for loop and display them with matplotlib. The keypoint coordinates are stored normalized, so we multiply them by the image size to restore the original scale and draw the points onto the image with the scatter function.

    import matplotlib.pyplot as plt

    def plot_sample(x, y, axis):
        img = x.reshape(96, 96)
        axis.imshow(img, cmap='gray')
        axis.scatter(y[0::2], y[1::2], marker='x', s=10, color='b')

    fig = plt.figure(figsize=(10, 7))
    fig.subplots_adjust(
        left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
    # show 16 random samples
    for i in range(16):
        axis = fig.add_subplot(4, 4, i + 1, xticks=[], yticks=[])
        idx = np.random.randint(len(train_dataset))
        img, label = train_dataset[idx]
        label = label * 96
        plot_sample(img[0], label, axis)
    plt.show()

4. Model Definition

Here we use the resnet18 network defined in paddle.vision.models. For the ImageNet classification task it maps an image to 1000 classes, so we append a small fully connected head that maps the 1000-dimensional output down to 30 dimensions, corresponding to the x and y coordinates of the 15 keypoints.
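
The FaceNet class used in the next cell was not shown in the extract; the definition below is a minimal reconstruction consistent with the summary output that follows (a resnet18 backbone feeding Linear(1000, 512), ReLU, Dropout, and Linear(512, 30)). The attribute names and the dropout rate are assumptions.

    import paddle
    from paddle.vision.models import resnet18

    class FaceNet(paddle.nn.Layer):
        def __init__(self, num_keypoints, pretrained=False):
            super(FaceNet, self).__init__()
            # resnet18 backbone producing the 1000-dim ImageNet logits
            self.backbone = resnet18(pretrained=pretrained)
            # regression head: 1000 -> 512 -> num_keypoints * 2 coordinates
            self.head = paddle.nn.Sequential(
                paddle.nn.Linear(1000, 512),
                paddle.nn.ReLU(),
                paddle.nn.Dropout(0.1),  # rate assumed
                paddle.nn.Linear(512, num_keypoints * 2))

        def forward(self, inputs):
            out = self.backbone(inputs)
            return self.head(out)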

We then call Paddle's summary API on the assembled model to inspect and confirm its structure and parameter counts.

    from paddle.static import InputSpec

    num_keypoints = 15
    model = paddle.Model(FaceNet(num_keypoints))
    model.summary((1, 3, 96, 96))
    -------------------------------------------------------------------------------
     Layer (type)          Input Shape           Output Shape         Param #
    ===============================================================================
       Conv2D-1          [[1, 3, 96, 96]]      [1, 64, 48, 48]          9,408
     BatchNorm2D-1       [[1, 64, 48, 48]]     [1, 64, 48, 48]            256
        ReLU-1           [[1, 64, 48, 48]]     [1, 64, 48, 48]              0
      MaxPool2D-1        [[1, 64, 48, 48]]     [1, 64, 24, 24]              0
       Conv2D-2          [[1, 64, 24, 24]]     [1, 64, 24, 24]         36,864
     BatchNorm2D-2       [[1, 64, 24, 24]]     [1, 64, 24, 24]            256
        ReLU-2           [[1, 64, 24, 24]]     [1, 64, 24, 24]              0
       Conv2D-3          [[1, 64, 24, 24]]     [1, 64, 24, 24]         36,864
     BatchNorm2D-3       [[1, 64, 24, 24]]     [1, 64, 24, 24]            256
     BasicBlock-1        [[1, 64, 24, 24]]     [1, 64, 24, 24]              0
       Conv2D-4          [[1, 64, 24, 24]]     [1, 64, 24, 24]         36,864
     BatchNorm2D-4       [[1, 64, 24, 24]]     [1, 64, 24, 24]            256
        ReLU-3           [[1, 64, 24, 24]]     [1, 64, 24, 24]              0
       Conv2D-5          [[1, 64, 24, 24]]     [1, 64, 24, 24]         36,864
     BatchNorm2D-5       [[1, 64, 24, 24]]     [1, 64, 24, 24]            256
     BasicBlock-2        [[1, 64, 24, 24]]     [1, 64, 24, 24]              0
       Conv2D-7          [[1, 64, 24, 24]]     [1, 128, 12, 12]        73,728
     BatchNorm2D-7       [[1, 128, 12, 12]]    [1, 128, 12, 12]           512
        ReLU-4           [[1, 128, 12, 12]]    [1, 128, 12, 12]             0
       Conv2D-8          [[1, 128, 12, 12]]    [1, 128, 12, 12]       147,456
     BatchNorm2D-8       [[1, 128, 12, 12]]    [1, 128, 12, 12]           512
       Conv2D-6          [[1, 64, 24, 24]]     [1, 128, 12, 12]         8,192
     BatchNorm2D-6       [[1, 128, 12, 12]]    [1, 128, 12, 12]           512
     BasicBlock-3        [[1, 64, 24, 24]]     [1, 128, 12, 12]             0
       Conv2D-9          [[1, 128, 12, 12]]    [1, 128, 12, 12]       147,456
     BatchNorm2D-9       [[1, 128, 12, 12]]    [1, 128, 12, 12]           512
        ReLU-5           [[1, 128, 12, 12]]    [1, 128, 12, 12]             0
       Conv2D-10         [[1, 128, 12, 12]]    [1, 128, 12, 12]       147,456
     BatchNorm2D-10      [[1, 128, 12, 12]]    [1, 128, 12, 12]           512
     BasicBlock-4        [[1, 128, 12, 12]]    [1, 128, 12, 12]             0
       Conv2D-12         [[1, 128, 12, 12]]    [1, 256, 6, 6]         294,912
     BatchNorm2D-12      [[1, 256, 6, 6]]      [1, 256, 6, 6]           1,024
        ReLU-6           [[1, 256, 6, 6]]      [1, 256, 6, 6]               0
       Conv2D-13         [[1, 256, 6, 6]]      [1, 256, 6, 6]         589,824
     BatchNorm2D-13      [[1, 256, 6, 6]]      [1, 256, 6, 6]           1,024
       Conv2D-11         [[1, 128, 12, 12]]    [1, 256, 6, 6]          32,768
     BatchNorm2D-11      [[1, 256, 6, 6]]      [1, 256, 6, 6]           1,024
     BasicBlock-5        [[1, 128, 12, 12]]    [1, 256, 6, 6]               0
       Conv2D-14         [[1, 256, 6, 6]]      [1, 256, 6, 6]         589,824
     BatchNorm2D-14      [[1, 256, 6, 6]]      [1, 256, 6, 6]           1,024
        ReLU-7           [[1, 256, 6, 6]]      [1, 256, 6, 6]               0
       Conv2D-15         [[1, 256, 6, 6]]      [1, 256, 6, 6]         589,824
     BatchNorm2D-15      [[1, 256, 6, 6]]      [1, 256, 6, 6]           1,024
     BasicBlock-6        [[1, 256, 6, 6]]      [1, 256, 6, 6]               0
       Conv2D-17         [[1, 256, 6, 6]]      [1, 512, 3, 3]       1,179,648
     BatchNorm2D-17      [[1, 512, 3, 3]]      [1, 512, 3, 3]           2,048
        ReLU-8           [[1, 512, 3, 3]]      [1, 512, 3, 3]               0
       Conv2D-18         [[1, 512, 3, 3]]      [1, 512, 3, 3]       2,359,296
     BatchNorm2D-18      [[1, 512, 3, 3]]      [1, 512, 3, 3]           2,048
       Conv2D-16         [[1, 256, 6, 6]]      [1, 512, 3, 3]         131,072
     BatchNorm2D-16      [[1, 512, 3, 3]]      [1, 512, 3, 3]           2,048
     BasicBlock-7        [[1, 256, 6, 6]]      [1, 512, 3, 3]               0
       Conv2D-19         [[1, 512, 3, 3]]      [1, 512, 3, 3]       2,359,296
     BatchNorm2D-19      [[1, 512, 3, 3]]      [1, 512, 3, 3]           2,048
        ReLU-9           [[1, 512, 3, 3]]      [1, 512, 3, 3]               0
       Conv2D-20         [[1, 512, 3, 3]]      [1, 512, 3, 3]       2,359,296
     BatchNorm2D-20      [[1, 512, 3, 3]]      [1, 512, 3, 3]           2,048
     BasicBlock-8        [[1, 512, 3, 3]]      [1, 512, 3, 3]               0
    AdaptiveAvgPool2D-1  [[1, 512, 3, 3]]      [1, 512, 1, 1]              0
       Linear-1          [[1, 512]]            [1, 1000]             513,000
       ResNet-1          [[1, 3, 96, 96]]      [1, 1000]                   0
       Linear-2          [[1, 1000]]           [1, 512]              512,512
        ReLU-10          [[1, 512]]            [1, 512]                    0
      Dropout-1          [[1, 512]]            [1, 512]                    0
       Linear-3          [[1, 512]]            [1, 30]                15,390
    ===============================================================================
    Total params: 12,227,014
    Trainable params: 12,207,814
    Non-trainable params: 19,200
    -------------------------------------------------------------------------------
    Input size (MB): 0.11
    Forward/backward pass size (MB): 10.51
    Params size (MB): 46.64
    Estimated Total Size (MB): 57.26
    -------------------------------------------------------------------------------

    {'total_params': 12227014, 'trainable_params': 12207814}
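
As a sanity check on the Param # column: a fully connected layer has in_features × out_features weights plus out_features biases, so Linear-3 contributes 512 × 30 + 30 = 15,390 parameters, matching the table above.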

5. Model Training

Since this task regresses coordinates, we use the mean squared error (MSE) loss, paddle.nn.MSELoss(); in Paddle 2.0, the loss functions under nn are wrapped as callable classes. We train directly through the paddle.Model high-level API, so we only need to define the dataset, the network model, and the loss function.
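
As a quick sanity check of what this loss computes, paddle.nn.MSELoss averages the squared differences over all elements; a minimal example:

    import paddle

    mse = paddle.nn.MSELoss()
    pred = paddle.to_tensor([[0.2, 0.4]])
    label = paddle.to_tensor([[0.1, 0.5]])
    # mean((pred - label)^2) = (0.1^2 + 0.1^2) / 2 ≈ 0.01
    print(mse(pred, label).numpy())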

We wrap the network in a Model instance and use the prepare interface to configure the optimizer, loss function, and any metrics for training. With this initial configuration done, we call fit to start training, passing in the training dataset, the validation dataset, the number of epochs, and the batch_size.

    optim = paddle.optimizer.Adam(learning_rate=1e-3,
                                  parameters=model.parameters())
    model.prepare(optim, paddle.nn.MSELoss())
    model.fit(train_dataset, val_dataset, epochs=60, batch_size=256)

    Epoch 1/60
    step 7/7 - loss: 0.1134 - 611ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 6.2252 - 502ms/step
    Eval samples: 428
    Epoch 2/60
    step 7/7 - loss: 0.0331 - 591ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.4000 - 506ms/step
    Eval samples: 428
    Epoch 3/60
    step 7/7 - loss: 0.0241 - 592ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0677 - 509ms/step
    Eval samples: 428
    Epoch 4/60
    step 7/7 - loss: 0.0187 - 590ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0171 - 490ms/step
    Eval samples: 428
    Epoch 5/60
    step 7/7 - loss: 0.0153 - 598ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0059 - 508ms/step
    Eval samples: 428
    Epoch 6/60
    step 7/7 - loss: 0.0134 - 593ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0031 - 495ms/step
    Eval samples: 428
    Epoch 7/60
    step 7/7 - loss: 0.0107 - 594ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0023 - 510ms/step
    Eval samples: 428
    Epoch 8/60
    step 7/7 - loss: 0.0100 - 590ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0014 - 503ms/step
    Eval samples: 428
    Epoch 9/60
    step 7/7 - loss: 0.0102 - 595ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0017 - 535ms/step
    Eval samples: 428
    Epoch 10/60
    step 7/7 - loss: 0.0088 - 599ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0029 - 501ms/step
    Eval samples: 428
    Epoch 11/60
    step 7/7 - loss: 0.0090 - 600ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0011 - 505ms/step
    Eval samples: 428
    Epoch 12/60
    step 7/7 - loss: 0.0076 - 597ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0017 - 503ms/step
    Eval samples: 428
    Epoch 13/60
    step 7/7 - loss: 0.0071 - 603ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0028 - 504ms/step
    Eval samples: 428
    Epoch 14/60
    step 7/7 - loss: 0.0077 - 595ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0044 - 501ms/step
    Eval samples: 428
    Epoch 15/60
    step 7/7 - loss: 0.0076 - 600ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0013 - 502ms/step
    Eval samples: 428
    Epoch 16/60
    step 7/7 - loss: 0.0072 - 599ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 9.3609e-04 - 498ms/step
    Eval samples: 428
    Epoch 17/60
    step 7/7 - loss: 0.0076 - 584ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0036 - 482ms/step
    Eval samples: 428
    Epoch 18/60
    step 7/7 - loss: 0.0077 - 566ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0011 - 485ms/step
    Eval samples: 428
    Epoch 19/60
    step 7/7 - loss: 0.0057 - 586ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0019 - 486ms/step
    Eval samples: 428
    Epoch 20/60
    step 7/7 - loss: 0.0061 - 570ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0012 - 485ms/step
    Eval samples: 428
    Epoch 21/60
    step 7/7 - loss: 0.0055 - 591ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0018 - 499ms/step
    Eval samples: 428
    Epoch 22/60
    step 7/7 - loss: 0.0067 - 588ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 8.7753e-04 - 500ms/step
    Eval samples: 428
    Epoch 23/60
    step 7/7 - loss: 0.0056 - 588ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 9.4301e-04 - 511ms/step
    Eval samples: 428
    Epoch 24/60
    step 7/7 - loss: 0.0054 - 598ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0010 - 505ms/step
    Eval samples: 428
    Epoch 25/60
    step 7/7 - loss: 0.0056 - 608ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 8.5451e-04 - 498ms/step
    Eval samples: 428
    Epoch 26/60
    step 7/7 - loss: 0.0286 - 600ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0165 - 505ms/step
    Eval samples: 428
    Epoch 27/60
    step 7/7 - loss: 0.0082 - 610ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0065 - 500ms/step
    Eval samples: 428
    Epoch 28/60
    step 7/7 - loss: 0.0085 - 610ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0021 - 506ms/step
    Eval samples: 428
    Epoch 29/60
    step 7/7 - loss: 0.0048 - 597ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0027 - 496ms/step
    Eval samples: 428
    Epoch 30/60
    step 7/7 - loss: 0.0051 - 604ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0010 - 524ms/step
    Eval samples: 428
    Epoch 31/60
    step 7/7 - loss: 0.0049 - 600ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 7.4699e-04 - 506ms/step
    Eval samples: 428
    Epoch 32/60
    step 7/7 - loss: 0.0051 - 598ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 7.6433e-04 - 505ms/step
    Eval samples: 428
    Epoch 33/60
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0013 - 515ms/step
    Eval samples: 428
    Epoch 34/60
    step 7/7 - loss: 0.0054 - 598ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 7.3304e-04 - 502ms/step
    Eval samples: 428
    Epoch 35/60
    step 7/7 - loss: 0.0044 - 607ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 8.8994e-04 - 494ms/step
    Eval samples: 428
    Epoch 36/60
    step 7/7 - loss: 0.0043 - 629ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0011 - 499ms/step
    Eval samples: 428
    Epoch 37/60
    step 7/7 - loss: 0.0045 - 601ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 7.7268e-04 - 535ms/step
    Eval samples: 428
    Epoch 38/60
    step 7/7 - loss: 0.0045 - 594ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 6.8808e-04 - 506ms/step
    Eval samples: 428
    Epoch 39/60
    step 7/7 - loss: 0.0040 - 590ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 7.0140e-04 - 522ms/step
    Eval samples: 428
    Epoch 40/60
    step 7/7 - loss: 0.0061 - 593ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0029 - 496ms/step
    Eval samples: 428
    Epoch 41/60
    step 7/7 - loss: 0.0046 - 601ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 6.9420e-04 - 573ms/step
    Eval samples: 428
    Epoch 42/60
    step 7/7 - loss: 0.0077 - 590ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0029 - 522ms/step
    Eval samples: 428
    Epoch 43/60
    step 7/7 - loss: 0.0038 - 591ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 7.0032e-04 - 523ms/step
    Eval samples: 428
    Epoch 44/60
    step 7/7 - loss: 0.0042 - 598ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0025 - 519ms/step
    Eval samples: 428
    Epoch 45/60
    step 7/7 - loss: 0.0054 - 616ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 7.9877e-04 - 515ms/step
    Eval samples: 428
    Epoch 46/60
    step 7/7 - loss: 0.0047 - 607ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0021 - 504ms/step
    Eval samples: 428
    Epoch 47/60
    step 7/7 - loss: 0.0047 - 609ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 6.5195e-04 - 559ms/step
    Eval samples: 428
    Epoch 48/60
    step 7/7 - loss: 0.0046 - 626ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0013 - 523ms/step
    Eval samples: 428
    Epoch 49/60
    step 7/7 - loss: 0.0039 - 597ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 6.3211e-04 - 521ms/step
    Eval samples: 428
    Epoch 50/60
    step 7/7 - loss: 0.0035 - 600ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 6.7967e-04 - 514ms/step
    Eval samples: 428
    Epoch 51/60
    step 7/7 - loss: 0.0033 - 605ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 6.4899e-04 - 521ms/step
    Eval samples: 428
    Epoch 52/60
    step 7/7 - loss: 0.0046 - 606ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0017 - 520ms/step
    Eval samples: 428
    Epoch 53/60
    step 7/7 - loss: 0.0036 - 633ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 6.4985e-04 - 524ms/step
    Eval samples: 428
    Epoch 54/60
    step 7/7 - loss: 0.0038 - 601ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0017 - 531ms/step
    Eval samples: 428
    Epoch 55/60
    step 7/7 - loss: 0.0057 - 598ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0032 - 509ms/step
    Eval samples: 428
    Epoch 56/60
    step 7/7 - loss: 0.0042 - 597ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 7.3378e-04 - 514ms/step
    Eval samples: 428
    Epoch 57/60
    step 7/7 - loss: 0.0065 - 609ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 8.6400e-04 - 525ms/step
    Eval samples: 428
    Epoch 58/60
    step 7/7 - loss: 0.0056 - 621ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0013 - 528ms/step
    Eval samples: 428
    Epoch 59/60
    step 7/7 - loss: 0.0040 - 608ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 7.8955e-04 - 507ms/step
    Eval samples: 428
    Epoch 60/60
    step 7/7 - loss: 0.0028 - 603ms/step
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 2/2 - loss: 0.0014 - 516ms/step
    Eval samples: 428

6. Model Prediction

To better examine the results, we visualize two things: the validation-set predictions compared against the annotated points, and the predictions on the unlabeled test set.

6.1 Validation-set results

The red keypoints are the network's predictions; the green keypoints are the annotated ground truth.

    result = model.predict(val_dataset, batch_size=1)

    Predict begin...
    step 428/428 [==============================] - 15ms/step
    Predict samples: 428

    def plot_sample(x, y, axis, gt=[]):
        img = x.reshape(96, 96)
        axis.imshow(img, cmap='gray')
        # predicted keypoints in red
        axis.scatter(y[0::2], y[1::2], marker='x', s=10, color='r')
        # annotated keypoints (if provided) in green
        if len(gt) > 0:
            axis.scatter(gt[0::2], gt[1::2], marker='x', s=10, color='lime')

    fig = plt.figure(figsize=(10, 7))
    fig.subplots_adjust(
        left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
    for i in range(16):
        axis = fig.add_subplot(4, 4, i + 1, xticks=[], yticks=[])
        idx = np.random.randint(len(val_dataset))
        img, gt_label = val_dataset[idx]
        gt_label = gt_label * 96
        # predictions are normalized; scale back to pixel coordinates
        label_pred = result[0][idx].reshape(-1)
        label_pred = label_pred * 96
        plot_sample(img[0], label_pred, axis, gt_label)
    plt.show()

6.2 Test-set results

    result = model.predict(test_dataset, batch_size=1)
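
Since the test set carries no annotations, the predictions can be checked visually with the same plot_sample helper; the sketch below assumes result now holds the test-set predictions from the cell above.

    fig = plt.figure(figsize=(10, 7))
    fig.subplots_adjust(
        left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
    for i in range(16):
        axis = fig.add_subplot(4, 4, i + 1, xticks=[], yticks=[])
        idx = np.random.randint(len(test_dataset))
        img, _ = test_dataset[idx]
        # scale normalized predictions back to pixel coordinates
        label_pred = result[0][idx].reshape(-1) * 96
        plot_sample(img[0], label_pred, axis)
    plt.show()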