6-3 Training a Model on a Single GPU

    The time spent on training comes mainly from two parts: data preparation and parameter iteration.

    When data preparation is still the main bottleneck of training time, we can use more processes to prepare the data; see the sketch below.
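
    One common way to parallelize the input pipeline in TensorFlow is to let tf.data run the per-sample preprocessing with several parallel calls and to prefetch batches. The following is a minimal sketch under that assumption; the parse_example function and the in-memory features/labels arrays are hypothetical placeholders, not code from this section:

    import numpy as np
    import tensorflow as tf

    features = np.random.rand(1000, 28, 28).astype("float32")  # placeholder data
    labels = np.random.randint(0, 10, size=(1000,))             # placeholder labels

    def parse_example(x, y):
        # hypothetical per-sample preprocessing
        return x / 255.0, y

    ds = (tf.data.Dataset.from_tensor_slices((features, labels))
          .map(parse_example, num_parallel_calls=tf.data.experimental.AUTOTUNE)  # prepare samples in parallel
          .batch(32)
          .prefetch(tf.data.experimental.AUTOTUNE))  # overlap data preparation with training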

    When parameter iteration becomes the main bottleneck of training time, the usual approach is to use a GPU, or Google's TPU, for acceleration.

    For details, see "Accelerating Keras Models with a GPU: A Guide to Using Colab's Free GPU".

    Whether you use the built-in fit method or a custom training loop, switching from CPU to single-GPU training is very convenient and requires no code changes. When a GPU is available and no device is explicitly specified, TensorFlow automatically prefers the GPU for creating tensors and executing tensor computations.
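
    A quick way to confirm this default placement is to list the visible GPUs and check the device attribute of a newly created tensor (a small check using standard TensorFlow APIs, not part of the original text):

    import tensorflow as tf

    print(tf.config.list_physical_devices("GPU"))  # the GPUs TensorFlow can see

    x = tf.random.uniform((2, 2))
    print(x.device)  # ends with "GPU:0" when a GPU is available and no device is specified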

    However, on a company or university lab server with multiple GPUs and multiple users, we want to keep a single user's job from occupying all GPU resources and blocking everyone else (by default TensorFlow claims all the memory on every GPU, even though it actually uses only part of a single GPU). So we usually add the following lines at the beginning of the script to control which GPU a job uses and how much memory it may take, so that other users can train their models at the same time.

    In a Colab notebook: select GPU under Edit -> Notebook settings -> Hardware accelerator.

    Note: the following code executes correctly only on Colab.

    https://colab.research.google.com/drive/1r5dLoeJq5z01sU72BX2M5UiNSkuxsEFe

    import tensorflow as tf

    # Print a timestamped separator line
    @tf.function
    def printbar():
        ts = tf.timestamp()
        today_ts = ts % (24 * 60 * 60)

        hour = tf.cast(today_ts // 3600 + 8, tf.int32) % tf.constant(24)  # +8 hours: UTC -> UTC+8
        minute = tf.cast((today_ts % 3600) // 60, tf.int32)
        second = tf.cast(tf.floor(today_ts % 60), tf.int32)

        def timeformat(m):
            if tf.strings.length(tf.strings.format("{}", m)) == 1:
                return tf.strings.format("0{}", m)
            else:
                return tf.strings.format("{}", m)

        timestring = tf.strings.join([timeformat(hour), timeformat(minute),
                                      timeformat(second)], separator=":")
        tf.print("==========" * 8, end="")
        tf.print(timestring)
    gpus = tf.config.list_physical_devices("GPU")

    if gpus:
        gpu0 = gpus[0]  # if there are multiple GPUs, use only GPU 0
        tf.config.experimental.set_memory_growth(gpu0, True)  # allocate GPU memory on demand
        # Alternatively, cap GPU memory at a fixed amount (e.g. 4 GB):
        # tf.config.experimental.set_virtual_device_configuration(gpu0,
        #     [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=4096)])
        tf.config.set_visible_devices([gpu0], "GPU")
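
    To double-check that only GPU 0 is visible after this configuration, one can list the visible and logical devices (a quick sanity check, not part of the original code):

    print(tf.config.get_visible_devices("GPU"))   # should contain only gpu0
    print(tf.config.list_logical_devices("GPU"))  # the logical GPUs TensorFlow will actually use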

    Compare the computation speed of the GPU and the CPU

    printbar()
    with tf.device("/gpu:0"):
        tf.random.set_seed(0)
        a = tf.random.uniform((10000, 100), minval=0, maxval=3.0)
        b = tf.random.uniform((100, 100000), minval=0, maxval=3.0)
        c = a @ b
        tf.print(tf.reduce_sum(tf.reduce_sum(c, axis=0), axis=0))
    printbar()
    ================================================================================17:37:01
    2.24953778e+11
    ================================================================================17:37:01
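
    The CPU-side timing code is not included in this excerpt; a minimal counterpart, matching the CPU output shown next, would change only the device string:

    printbar()
    with tf.device("/cpu:0"):
        tf.random.set_seed(0)
        a = tf.random.uniform((10000, 100), minval=0, maxval=3.0)
        b = tf.random.uniform((100, 100000), minval=0, maxval=3.0)
        c = a @ b
        tf.print(tf.reduce_sum(tf.reduce_sum(c, axis=0), axis=0))
    printbar()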
    ================================================================================17:37:34
    2.24953795e+11
    from tensorflow.keras import datasets, preprocessing

    MAX_LEN = 300
    BATCH_SIZE = 32  # batch size; the exact value used is not given in this excerpt

    # Load the Reuters newswire classification dataset and pad sequences to a fixed length
    (x_train, y_train), (x_test, y_test) = datasets.reuters.load_data()
    x_train = preprocessing.sequence.pad_sequences(x_train, maxlen=MAX_LEN)
    x_test = preprocessing.sequence.pad_sequences(x_test, maxlen=MAX_LEN)

    MAX_WORDS = x_train.max() + 1
    CAT_NUM = y_train.max() + 1

    ds_train = tf.data.Dataset.from_tensor_slices((x_train, y_train)) \
        .shuffle(buffer_size=1000).batch(BATCH_SIZE) \
        .prefetch(tf.data.experimental.AUTOTUNE).cache()

    ds_test = tf.data.Dataset.from_tensor_slices((x_test, y_test)) \
        .shuffle(buffer_size=1000).batch(BATCH_SIZE) \
        .prefetch(tf.data.experimental.AUTOTUNE).cache()
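
    The model definition itself does not appear in this excerpt. A minimal sketch that is consistent with the summary printed below (the layer types, output shapes, and parameter counts all match; the activation functions are assumed) would be:

    from tensorflow.keras import models, layers

    model = models.Sequential()
    model.add(layers.Embedding(MAX_WORDS, 7, input_length=MAX_LEN))
    model.add(layers.Conv1D(filters=64, kernel_size=5, activation="relu"))
    model.add(layers.MaxPooling1D(2))
    model.add(layers.Conv1D(filters=32, kernel_size=3, activation="relu"))
    model.add(layers.MaxPooling1D(2))
    model.add(layers.Flatten())
    model.add(layers.Dense(CAT_NUM, activation="softmax"))

    model.summary()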
    Model: "sequential"
    _________________________________________________________________
    Layer (type)                 Output Shape              Param #
    =================================================================
    embedding (Embedding)        (None, 300, 7)            216874
    _________________________________________________________________
    conv1d (Conv1D)              (None, 296, 64)           2304
    _________________________________________________________________
    max_pooling1d (MaxPooling1D) (None, 148, 64)           0
    _________________________________________________________________
    conv1d_1 (Conv1D)            (None, 146, 32)           6176
    _________________________________________________________________
    max_pooling1d_1 (MaxPooling1 (None, 73, 32)            0
    _________________________________________________________________
    flatten (Flatten)            (None, 2336)               0
    _________________________________________________________________
    dense (Dense)                (None, 46)                 107502
    =================================================================
    Total params: 332,856
    Trainable params: 332,856
    Non-trainable params: 0
    _________________________________________________________________
    from tensorflow.keras import optimizers, losses, metrics

    optimizer = optimizers.Nadam()
    loss_func = losses.SparseCategoricalCrossentropy()

    train_loss = metrics.Mean(name='train_loss')
    # The accuracy metrics are used below but missing from the excerpt;
    # SparseCategoricalAccuracy is assumed, matching the logged "Accuracy" values.
    train_metric = metrics.SparseCategoricalAccuracy(name='train_accuracy')

    valid_loss = metrics.Mean(name='valid_loss')
    valid_metric = metrics.SparseCategoricalAccuracy(name='valid_accuracy')

    @tf.function
    def train_step(model, features, labels):
        with tf.GradientTape() as tape:
            predictions = model(features, training=True)
            loss = loss_func(labels, predictions)
        gradients = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(gradients, model.trainable_variables))

        train_loss.update_state(loss)
        train_metric.update_state(labels, predictions)

    @tf.function
    def valid_step(model, features, labels):
        predictions = model(features)
        batch_loss = loss_func(labels, predictions)
        valid_loss.update_state(batch_loss)
        valid_metric.update_state(labels, predictions)

    def train_model(model, ds_train, ds_valid, epochs):
        for epoch in tf.range(1, epochs + 1):

            for features, labels in ds_train:
                train_step(model, features, labels)

            for features, labels in ds_valid:
                valid_step(model, features, labels)

            logs = 'Epoch={},Loss:{},Accuracy:{},Valid Loss:{},Valid Accuracy:{}'

            if epoch % 1 == 0:
                printbar()
                tf.print(tf.strings.format(logs,
                    (epoch, train_loss.result(), train_metric.result(),
                     valid_loss.result(), valid_metric.result())))
                tf.print("")

            train_loss.reset_states()
            valid_loss.reset_states()
            train_metric.reset_states()
            valid_metric.reset_states()

    train_model(model, ds_train, ds_test, 10)
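
    As noted at the start of this section, the built-in fit method needs no device-related changes either. A minimal sketch of the equivalent single-GPU training with fit (on a freshly built model, mirroring the optimizer, loss, and accuracy metric used above) would be:

    model.compile(optimizer=optimizers.Nadam(),
                  loss=losses.SparseCategoricalCrossentropy(),
                  metrics=[metrics.SparseCategoricalAccuracy()])
    history = model.fit(ds_train, validation_data=ds_test, epochs=10)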

    If you would like to discuss the contents of this book further with the author, feel free to leave a message under the WeChat official account "Python与算法之美" (Python and the Beauty of Algorithms). The author's time and energy are limited, so replies will be made as circumstances allow.

    You can also reply with the keyword 加群 ("join group") in the backend of the official account to join the reader group and discuss with other readers.