You recently developed a deep learning model using Keras, and now you are experimenting with different training strategies. First, you trained the model using a single GPU, but the training process was too slow. Next, you distributed the training across 4 GPUs using tf.distribute.MirroredStrategy (with no other changes), but you did not observe a decrease in training time.

What should you do?
A. Distribute the dataset with tf.distribute.Strategy.experimental_distribute_dataset.
B. Create a custom training loop.
C. Use a TPU with tf.distribute.TPUStrategy.
D. Increase the batch size.

Answer: B

Explanation:

A custom training loop gives you explicit control over how the work is spread across the 4 GPUs: you decide how the dataset is distributed (for example, via strategy.experimental_distribute_dataset), what the per-replica and global batch sizes are, and how per-replica losses are scaled and reduced.

With Keras's built-in fit() and MirroredStrategy applied with no other changes, the global batch size and input pipeline stay the same, so each GPU simply receives a smaller slice of the same per-step workload and synchronization overhead can cancel out any gain. A custom training loop lets you tune these factors and experiment with different distribution settings, which can yield the training-time reduction that the unmodified setup did not.
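For reference, below is a minimal sketch of what such a custom training loop can look like under tf.distribute.MirroredStrategy. The model, dataset, batch size, and optimizer are placeholder assumptions for illustration only; the pattern that matters is creating the model inside strategy.scope(), distributing the dataset, scaling the loss by the global batch size, and launching per-replica steps with strategy.run.

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
# Scale the global batch size with the number of replicas (assumed per-replica batch of 64).
GLOBAL_BATCH_SIZE = 64 * strategy.num_replicas_in_sync

with strategy.scope():
    # Placeholder model; replace with the real Keras model.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    optimizer = tf.keras.optimizers.Adam()
    # Use no reduction so the loss can be averaged over the global batch manually.
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
        from_logits=True, reduction="none")

# Placeholder in-memory dataset; replace with the real input pipeline.
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([1024, 32]),
     tf.random.uniform([1024], maxval=10, dtype=tf.int64))
).batch(GLOBAL_BATCH_SIZE)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

def compute_loss(labels, logits):
    per_example_loss = loss_fn(labels, logits)
    # Average over the global batch so gradients combine correctly across replicas.
    return tf.nn.compute_average_loss(
        per_example_loss, global_batch_size=GLOBAL_BATCH_SIZE)

def train_step(inputs):
    features, labels = inputs
    with tf.GradientTape() as tape:
        logits = model(features, training=True)
        loss = compute_loss(labels, logits)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

@tf.function
def distributed_train_step(inputs):
    # Run one step on every replica and sum the per-replica losses.
    per_replica_losses = strategy.run(train_step, args=(inputs,))
    return strategy.reduce(
        tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)

for epoch in range(2):
    for batch in dist_dataset:
        distributed_train_step(batch)
```

In this sketch the knobs that the unmodified fit() call hides, such as the global batch size and the loss reduction, are explicit, which is what makes it possible to experiment with them and actually use the extra GPUs.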
