Your team has recently deployed an NGINX-based application into Google Kubernetes Engine (GKE) and has exposed it to the public via an HTTP Google Cloud Load Balancer (GCLB) ingress. You want to scale the deployment of the application’s frontend using an appropriate Service Level Indicator (SLI).

What should you do?
A. Configure the horizontal pod autoscaler to use the average response time from the Liveness and Readiness probes.
B. Configure the vertical pod autoscaler in GKE and enable the cluster autoscaler to scale the cluster as pods expand.
C. Install the Stackdriver custom metrics adapter and configure a horizontal pod autoscaler to use the number of requests provided by the GCLB.
D. Expose the NGINX stats endpoint and configure the horizontal pod autoscaler to use the request metrics exposed by the NGINX deployment.

Answer: C

Explanation:

https://cloud.google.com/kubernetes-engine/docs/tutorials/autoscaling-metrics

The Google Cloud HTTP Load Balancer (GCLB) reports the number of requests and the response latency for each backend service. By installing the Stackdriver custom metrics adapter, these load balancer metrics can be consumed by the horizontal pod autoscaler (HPA), so the frontend scales on the request rate actually seen by users. Because that request rate is a user-facing measure of demand, it is an appropriate SLI for scaling, which makes option C correct.
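For reference, a minimal sketch of what option C could look like once the adapter is installed: an `autoscaling/v2` HorizontalPodAutoscaler that consumes the GCLB request-count metric as an external metric. The Deployment name (`nginx-frontend`), the pipe-separated metric name, and the target of 100 requests per pod are illustrative assumptions, not values given in the question.

```yaml
# Sketch of an HPA scaling the NGINX frontend on the GCLB request rate.
# Assumes the Custom Metrics Stackdriver Adapter is installed in the cluster
# and the frontend Deployment is named "nginx-frontend" (illustrative name).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-frontend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-frontend
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        # GCLB request count, as surfaced by the Stackdriver adapter
        # (metric name and target value are illustrative).
        name: loadbalancing.googleapis.com|https|request_count
      target:
        type: AverageValue
        averageValue: "100"
```

Applied with `kubectl apply -f nginx-frontend-hpa.yaml`, this would scale the frontend between 2 and 10 replicas based on the request rate observed at the load balancer; in practice you would also add a metric selector to scope the metric to the correct forwarding rule or backend service.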
