You have trained a DNN regressor with TensorFlow to predict housing prices using a set of predictive features.

Your default precision is tf.float64, and you use a standard TensorFlow estimator:

estimator = tf.estimator.DNNRegressor(
    feature_columns=[YOUR_LIST_OF_FEATURES],
    hidden_units=[1024, 512, 256],
    dropout=None)

Your model performs well, but just before deploying it to production, you discover that your current serving latency is 10 ms at the 90th percentile, and you currently serve on CPUs. Your production requirements expect a model latency of 8 ms at the 90th percentile. You are willing to accept a small decrease in performance in order to reach the latency requirement. Therefore, your plan is to improve latency while evaluating how much the model's prediction quality decreases.

What should you first try to quickly lower the serving latency?
A. Increase the dropout rate to 0.8 in PREDICT mode by adjusting the TensorFlow Serving parameters.
B. Increase the dropout rate to 0.8 and retrain your model.
C. Switch from CPU to GPU serving
D. Apply quantization to your SavedModel by reducing the floating point precision to tf.float16.

Answer: D

Explanation:

Applying quantization to your SavedModel by reducing the floating-point precision lowers serving latency by decreasing the amount of memory traffic and computation required per prediction. TensorFlow provides tools such as the tf.quantization module that can be used to quantize models and reduce their precision, which can significantly reduce serving latency with only a small decrease in model performance. The other options do not help here: dropout is disabled at inference time, so changing it in PREDICT mode has no effect, and increasing it to 0.8 and retraining would hurt accuracy without guaranteeing lower latency; switching to GPU serving is an infrastructure change, not a quick first step.
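To see why halving (or quartering) precision helps, the sketch below uses plain NumPy rather than the TensorFlow quantization tooling itself: it casts a layer-sized float64 weight matrix to float16 and measures the memory saving and the rounding error introduced. The array shape mirrors the first hidden layer of the estimator above; everything else is illustrative.

```python
import numpy as np

# Illustrative only: cast float64 weights to float16, as quantizing a
# SavedModel would do for each variable, and inspect the trade-off.
rng = np.random.default_rng(0)
weights64 = rng.standard_normal((1024, 512))  # float64 by default

weights16 = weights64.astype(np.float16)

# float64 uses 8 bytes per value, float16 uses 2: a 4x memory reduction,
# which also cuts memory bandwidth and arithmetic cost at serving time.
memory_ratio = weights64.nbytes / weights16.nbytes

# The cost is rounding error; for unit-scale weights it stays small.
max_abs_error = np.max(np.abs(weights64 - weights16.astype(np.float64)))

print(f"memory reduced by {memory_ratio:.0f}x")
print(f"max rounding error: {max_abs_error:.4f}")
```

In practice this is the trade-off the question describes: a large, predictable latency and memory win against a small, measurable loss in prediction quality that should be evaluated before deployment.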

Reference: https://www.tensorflow.org/guide/quantization
