A website runs a web application that receives a burst of traffic each day at noon. Users upload new pictures and content daily but have been complaining of timeouts. The architecture uses Amazon EC2 Auto Scaling groups, and the custom application consistently takes 1 minute to initialize on boot before it can respond to user requests.

How should a solutions architect redesign the architecture to better respond to changing traffic?
A. Configure a Network Load Balancer with a slow start configuration.
B. Configure Amazon ElastiCache for Redis to offload direct requests to the servers.
C. Configure an Auto Scaling step scaling policy with an instance warmup condition.
D. Configure Amazon CloudFront to use an Application Load Balancer as the origin.

Answer: C

Explanation:

https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-simple-step.html#as-step-scaling-warmup

"If you are creating a step policy, you can specify the number of seconds that it takes for a newly launched instance to warm up. Until its specified warm-up time has expired, an instance is not counted toward the aggregated metrics of the Auto Scaling group. Using the example in the Step Adjustments section, suppose that the metric gets to 60, and then it gets to 62 while the new instance is still warming up. The current capacity is still 10 instances, so 1 instance is added (10 percent of 10 instances). However, the desired capacity of the group is already 11 instances, so the scaling policy does not increase the desired capacity further. If the metric gets to 70 while the new instance is still warming up, we should add 3 instances (30 percent of 10 instances). However, the desired capacity of the group is already 11, so we add only 2 instances, for a new desired capacity of 13 instances"
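The capacity arithmetic in the quoted example can be sketched in a few lines. This is a simplified model for illustration only (the function name and signature are hypothetical, not part of any AWS SDK): the adjustment is computed from the warmed-up (counted) capacity, while instances still warming up already count toward the desired capacity, so the group only adds the difference.

```python
import math

def new_desired_capacity(current_desired, warmed_capacity, step_percent):
    """Hypothetical model of step scaling with instance warmup.

    The percentage adjustment is applied to the warmed (counted)
    capacity, but instances that are still warming up already count
    toward the desired capacity, so only the shortfall is added.
    """
    adjustment = math.ceil(warmed_capacity * step_percent / 100)
    target = warmed_capacity + adjustment
    return max(current_desired, target)

# Metric reaches 62 while one new instance warms up: 10% of 10 = 1,
# but desired capacity is already 11, so nothing more is added.
print(new_desired_capacity(11, 10, 10))  # 11

# Metric reaches 70: 30% of 10 = 3, target is 13; desired is 11,
# so only 2 more instances are added.
print(new_desired_capacity(11, 10, 30))  # 13
```

This matches the quoted walkthrough: warmup prevents double-counting instances that have launched but are not yet serving traffic, which is exactly the scenario with an application that needs 1 minute to initialize.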
