Amazon DOP-C01 AWS DevOps Engineer – Professional Online Training

Question #1

A web application with multiple services runs on Amazon EC2 instances behind an Application Load Balancer. The application stores data in an Amazon RDS Multi-AZ DB instance. The instance health check used by the load balancer returns PASS if at least one service is running on the instance.

The company uses AWS CodePipeline with AWS CodeBuild and AWS CodeDeploy steps to deploy code to test and production environments. Recently, a new version was unable to connect to the database server in the test environment. One process was running, so the health checks reported healthy and the application was promoted to production, causing a production outage. The company wants to ensure that test builds are fully functional before a promotion to production.

Which changes should a DevOps Engineer make to the test and deployment process? (Choose two.)

  • A . Add an automated functional test to the pipeline that ensures solid test cases are performed.
  • B . Add a manual approval action to the CodeDeploy deployment pipeline that requires a Testing Engineer to validate the testing environment.
  • C . Refactor the health check endpoint the Elastic Load Balancer is checking to better validate actual application functionality.
  • D . Refactor the health check endpoint the Elastic Load Balancer is checking to return a text-based status result and configure the load balancer to check for a valid response.
  • E . Add a dependency checking step to the existing testing framework to ensure compatibility.


Correct Answer: D,E
Question #2

A company uses Amazon S3 to store proprietary information. The development team creates buckets for new projects on a daily basis. The security team wants to ensure that all existing and future buckets have encryption, logging, and versioning enabled. Additionally, no buckets should ever be publicly read or write accessible.

What should a DevOps engineer do to meet these requirements?

  • A . Enable AWS CloudTrail and configure automatic remediation using AWS Lambda.
  • B . Enable AWS Config rules and configure automatic remediation using AWS Systems
    Manager documents.
  • C . Enable AWS Trusted Advisor and configure automatic remediation using Amazon CloudWatch Events.
  • D . Enable AWS Systems Manager and configure automatic remediation using Systems Manager documents.


Correct Answer: B
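
For reference, option B pairs an AWS Config managed rule with an automatic remediation backed by a Systems Manager document. A minimal CloudFormation sketch of one such rule/remediation pair (the rule identifier and document name are real AWS-managed ones; the role and naming are assumed):

    Resources:
      PublicReadProhibitedRule:
        Type: AWS::Config::ConfigRule
        Properties:
          ConfigRuleName: s3-bucket-public-read-prohibited
          Source:
            Owner: AWS
            SourceIdentifier: S3_BUCKET_PUBLIC_READ_PROHIBITED  # AWS managed rule
      PublicReadRemediation:
        Type: AWS::Config::RemediationConfiguration
        Properties:
          ConfigRuleName: !Ref PublicReadProhibitedRule
          TargetType: SSM_DOCUMENT
          TargetId: AWS-DisableS3BucketPublicReadWrite  # AWS-owned automation document
          Automatic: true
          MaximumAutomaticAttempts: 3
          RetryAttemptSeconds: 60
          Parameters:
            AutomationAssumeRole:
              StaticValue:
                Values:
                  - !GetAtt RemediationRole.Arn  # assumed IAM role defined elsewhere
            S3BucketName:
              ResourceValue:
                Value: RESOURCE_ID  # the noncompliant bucket flagged by the rule

Equivalent managed rules exist for encryption, logging, and versioning, which is why this approach covers all of the stated requirements.
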
Question #3

A DevOps Engineer is leading the implementation of automated patching for Windows-based workstations in a hybrid cloud environment by using AWS Systems Manager (SSM).

What steps should the Engineer follow to set up Systems Manager to automate patching in this environment? (Select TWO.)

  • A . Create multiple IAM service roles for Systems Manager so that the ssm.amazonaws.com service can execute the AssumeRole operation on every instance. Register the role on a per-resource level to enable the creation of a service token. Perform managed-instance activation with the newly created service role attached to each managed instance.
  • B . Create an IAM service role for Systems Manager so that the ssm.amazonaws.com service can execute the AssumeRole operation. Register the role to enable the creation of a service token. Perform managed-instance activation with the newly created service role.
  • C . Using previously obtained activation codes and activation IDs, download and install the SSM Agent on the hybrid servers, and register the servers or virtual machines on the Systems Manager service. Hybrid instances will show with an "mi-" prefix in the SSM console.
  • D . Using previously obtained activation codes and activation IDs, download and install the SSM Agent on the hybrid servers, and register the servers or virtual machines on the Systems Manager service. Hybrid instances will show with an "i-" prefix in the SSM console as if they were provisioned as a regular Amazon EC2 instance.
  • E . Run AWS Config to create a list of instances that are unpatched and not compliant. Create an instance scheduler job, and through an AWS Lambda function, perform the instance patching to bring them up to compliance.

Correct Answer: B,C

Explanation:

https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-managed-instance-activation.html

https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-win.html

Question #4

A company recently launched an application that is more popular than expected. The company wants to ensure the application can scale to meet increasing demands and provide reliability using multiple Availability Zones (AZs). The application runs on a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB). A DevOps engineer has created an Auto Scaling group across multiple AZs for the application. Instances launched in the newly added AZs are not receiving any traffic for the application.

What is likely causing this issue?

  • A . Auto Scaling groups can create new instances in a single AZ only.
  • B . The EC2 instances have not been manually associated to the ALB
  • C . The ALB should be replaced with a Network Load Balancer (NLB).
  • D . The new AZ has not been added to the ALB


Correct Answer: D
Question #5

You have an application running a specific process that is critical to the application’s functionality, and have added the health check process to your Auto Scaling Group. The instances are showing healthy but the application itself is not working as it should.

What could be the issue with the health check, given that it still shows the instances as healthy?

  • A . You do not have the time range in the health check properly configured
  • B . It is not possible for a health check to monitor a process that involves the application
  • C . The health check is not configured properly
  • D . The health check is not checking the application process

Correct Answer: D

Explanation:

If you have custom health checks, you can send the information from your health checks to Auto Scaling so that Auto Scaling can use this information. For example, if you determine that an instance is not functioning as expected, you can set the health status of the instance to Unhealthy.

The next time that Auto Scaling performs a health check on the instance, it will determine that the instance is unhealthy and then launch a replacement instance.

For more information on Auto Scaling health checks, please refer to the following AWS documentation:

http://docs.aws.amazon.com/autoscaling/latest/userguide/healthcheck.html

Question #6

A company is using an AWS CodeBuild project to build and package an application. The packages are copied to a shared Amazon S3 bucket before being deployed across multiple AWS accounts.

The buildspec.yml file contains the following:
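
(The buildspec.yml exhibit is not reproduced in this copy. Judging from the answer options, it uploads the artifact with a canned ACL; a representative sketch, with the build command, artifact path, and bucket name assumed, might look like this:)

    version: 0.2
    phases:
      build:
        commands:
          - mvn package  # assumed build command
      post_build:
        commands:
          # authenticated-read grants read access to ANY authenticated AWS
          # account, not just the company's own accounts (assumed artifact path).
          - aws s3 cp target/app.jar s3://shared-artifact-bucket/app.jar --acl authenticated-read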

The DevOps Engineer has noticed that anybody with an AWS account is able to download the artifacts.

What steps should the DevOps Engineer take to stop this?

  • A . Modify the post_build command to use --acl public-read, and configure a bucket policy that grants read access to the relevant AWS accounts only.
  • B . Configure a default ACL for the S3 bucket that defines the set of authenticated users as the relevant AWS accounts only and grants read-only access.
  • C . Create an S3 bucket policy that grants read access to the relevant AWS accounts and denies read access to the principal "*"
  • D . Modify the post_build command to remove --acl authenticated-read, and configure a bucket policy that allows read access to the relevant AWS accounts only.


Correct Answer: D
Question #7

A DevOps engineer is architecting a continuous development strategy for a company’s software as a service (SaaS) web application running on AWS. For application and security reasons, users subscribing to this application are distributed across multiple Application Load Balancers (ALBs), each of which has a dedicated Auto Scaling group and fleet of Amazon EC2 instances. The application does not require a build stage, and when it is committed to AWS CodeCommit, the application must trigger a simultaneous deployment to all ALBs, Auto Scaling groups, and EC2 fleets.

Which architecture will meet these requirements with the LEAST amount of configuration?

  • A . Create a single AWS CodePipeline pipeline that deploys the application in parallel using unique AWS CodeDeploy applications and deployment groups created for each ALB-Auto Scaling group pair.
  • B . Create a single AWS CodePipeline pipeline that deploys the application using a single AWS CodeDeploy application and single deployment group.
  • C . Create a single AWS CodePipeline pipeline that deploys the application in parallel using a single AWS CodeDeploy application and unique deployment group for each ALB-Auto Scaling group pair.
  • D . Create an AWS CodePipeline pipeline for each ALB-Auto Scaling group pair that deploys the application using an AWS CodeDeploy application and deployment group created for the same ALB-Auto Scaling group pair.


Correct Answer: C
Question #8

A company is using AWS Organizations to create separate AWS accounts for each of its departments.

It needs to automate the following tasks:

• Updating the Linux AMIs with new patches periodically and generating a golden image

• Installing a new version of Chef agents in the golden image, if available

• Enforcing the use of the newly generated golden AMIs in the departments’ accounts

Which option requires the LEAST management overhead?

  • A . Write a script to launch an Amazon EC2 instance from the previous golden AMI, apply the patch updates, install the new version of the Chef agent, generate a new golden AMI, and then modify the AMI permissions to share only the new image with the departments’ accounts.
  • B . Use an AWS Systems Manager Run Command to update the Chef agent first, use Amazon EC2 Systems Manager Automation to generate an updated AMI, and then
    assume an IAM role to copy the new golden AMI into the departments’ accounts.
  • C . Use AWS Systems Manager Automation to update the Linux AMI using the previous image, provide the URL for the script that will update the Chef agent, and then use AWS Organizations to replace the previous golden AMI into the departments’ accounts.
  • D . Use AWS Systems Manager Automation to update the Linux AMI from the previous golden image, provide the URL for the script that will update the Chef agent, and then share only the newly generated AMI with the departments’ accounts.


Correct Answer: C
Question #9

A DevOps Engineer has been asked by the Security team to ensure that AWS CloudTrail files are not tampered with after being created. Currently, there is a process with multiple trails, using AWS IAM to restrict access to specific trails. The Security team wants to ensure they can trace the integrity of each file and make sure there has been no tampering.

Which option will require the LEAST effort to implement and ensure the legitimacy of the file while allowing the Security team to prove the authenticity of the logs?

  • A . Create an Amazon CloudWatch Events rule that triggers an AWS Lambda function when a new file is delivered. Configure the Lambda function to perform an MD5 hash check on the file, store the name and location of the file, and post the returned hash to an Amazon DynamoDB table. The Security team can use the values stored in DynamoDB to verify the file authenticity.
  • B . Enable the CloudTrail file integrity feature on an Amazon S3 bucket. Create an IAM policy that grants the Security team access to the file integrity logs stored in the S3 bucket.
  • C . Enable the CloudTrail file integrity feature on the trail. Use the digest file created by CloudTrail to verify the integrity of the delivered CloudTrail files.
  • D . Create an AWS Lambda function that is triggered each time a new file is delivered to the CloudTrail bucket. Configure the Lambda function to execute an MD5 hash check on the file, and store the result on a tag in an Amazon S3 object. The Security team can use the information on the tag to verify the integrity of the file.

Correct Answer: C

Explanation:

https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-file-validation-intro.html

Question #10

A company wants to adopt a methodology for handling security threats from leaked and compromised IAM access keys. The DevOps Engineer has been asked to automate the process of acting upon compromised access keys, which includes identifying users, revoking their permissions, and sending a notification to the Security team.

Which of the following would achieve this goal?

  • A . Use the AWS Trusted Advisor generated security report for access keys. Use Amazon EMR to run analytics on the report. Identify compromised IAM access keys and delete them. Use Amazon CloudWatch with an EMR Cluster State Change event to notify the Security team.
  • B . Use AWS Trusted Advisor to identify compromised access keys. Create an Amazon CloudWatch Events rule with Trusted Advisor as the event source, and AWS Lambda and Amazon SNS as targets. Use AWS Lambda to delete compromised IAM access keys and Amazon SNS to notify the Security team.
  • C . Use the AWS Trusted Advisor generated security report for access keys. Use AWS Lambda to scan through the report. Use scan result inside AWS Lambda and delete compromised IAM access keys. Use Amazon SNS to notify the Security team.
  • D . Use AWS Lambda with a third-party library to scan for compromised access keys. Use scan result inside AWS Lambda and delete compromised IAM access keys. Create Amazon CloudWatch custom metrics for compromised keys. Create a CloudWatch alarm on the metrics to notify the Security team.


Correct Answer: B

Question #11

An ecommerce company is looking for ways to deploy an application on AWS that satisfies the following requirements:

• Has a simple and automated application deployment process.

• Has minimal deployment costs while ensuring that at least half of the instances are available to receive end-user requests.

• If the application fails, an automated healing mechanism will replace the affected instances.

Which deployment strategy will meet these requirements?

  • A . Create an AWS Elastic Beanstalk environment and configure it to use Auto Scaling and an Elastic Load Balancer. Use rolling deployments with a batch size of 50%.
  • B . Create an AWS OpsWorks stack. Configure the application layer to use rolling deployments as a deployment strategy. Add an Elastic Load Balancing layer. Enable auto healing on the application layer.
  • C . Use AWS CodeDeploy with Auto Scaling and an Elastic Load Balancer. Use the CodeDeployDefault.HalfAtATime deployment strategy. Enable an Elastic Load Balancing health check to report the status of the application, and set the Auto Scaling health check to ELB.
  • D . Use AWS CodeDeploy with Auto Scaling and an Elastic Load Balancer. Use a blue/green deployment strategy. Enable an Elastic Load Balancing health check to report the status of the application, and set the Auto Scaling health check to ELB.


Correct Answer: C
Question #12

An ecommerce company is receiving reports that its order history page is experiencing delays in reflecting the processing status of orders. The order processing system consists of an AWS Lambda function using reserved concurrency. The Lambda function processes order messages from an Amazon SQS queue and inserts processed orders into an Amazon DynamoDB table. The DynamoDB table has Auto Scaling enabled for read and write capacity.

Which actions will diagnose and resolve the delay? (Select TWO.)

  • A . Check the ApproximateAgeOfOldestMessage metric for the SQS queue and increase the Lambda function concurrency limit.
  • B . Check the ApproximateAgeOfOldestMessage metric for the SQS queue and configure a redrive policy on the SQS queue.
  • C . Check the NumberOfMessagesSent metric for the SQS queue and increase the SQS queue visibility timeout.
  • D . Check the ThrottledWriteRequests metric for the DynamoDB table and increase the maximum write capacity units for the table’s Auto Scaling policy.
  • E . Check the Throttles metric for the Lambda function and increase the Lambda function timeout.


Correct Answer: A,B
Question #13

To run an application, a DevOps Engineer launches Amazon EC2 instances with public IP addresses in a public subnet. A user data script obtains the application artifacts and installs them on the instances upon launch. A change to the security classification of the application now requires the instances to run with no access to the Internet. While the instances launch successfully and show as healthy, the application does not seem to be installed.

Which of the following should successfully install the application while complying with the new rule?

  • A . Launch the instances in a public subnet with Elastic IP addresses attached. Once the application is installed and running, run a script to disassociate the Elastic IP addresses afterwards.
  • B . Set up a NAT gateway. Deploy the EC2 instances to a private subnet. Update the private subnet’s route table to use the NAT gateway as the default route.
  • C . Publish the application artifacts to an Amazon S3 bucket and create a VPC endpoint for S3. Assign an IAM instance profile to the EC2 instances so they can read the application artifacts from the S3 bucket.
  • D . Create a security group for the application instances and whitelist only outbound traffic to the artifact repository. Remove the security group rule once the install is complete.

Correct Answer: C

Explanation:

EC2 instances running in private subnets of a VPC can now have controlled access to S3 buckets, objects, and API functions that are in the same region as the VPC. You can use an S3 bucket policy to indicate which VPCs and which VPC Endpoints have access to your S3 buckets.

https://aws.amazon.com/pt/blogs/aws/new-vpc-endpoint-for-amazon-s3/
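
As an illustration of the chosen option, a CloudFormation sketch of a bucket policy that allows reads only through a specific gateway endpoint (the bucket and endpoint logical names are assumed):

    Resources:
      ArtifactBucketPolicy:
        Type: AWS::S3::BucketPolicy
        Properties:
          Bucket: !Ref ArtifactBucket  # assumed S3 bucket resource
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Sid: AllowReadThroughVpcEndpointOnly
                Effect: Allow
                Principal: "*"
                Action: s3:GetObject
                Resource: !Sub "${ArtifactBucket.Arn}/*"
                Condition:
                  StringEquals:
                    aws:SourceVpce: !Ref S3GatewayEndpoint  # assumed AWS::EC2::VPCEndpoint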

Question #14

A new zero-day vulnerability was found in OpenSSL requiring the immediate patching of a production web fleet running on Amazon Linux. Currently, OS updates are performed manually on a monthly basis and deployed using updates to the production Auto Scaling Group’s launch configuration.

Which method should a DevOps Engineer use to update packages in-place without downtime?

  • A . Use AWS CodePipeline and AWS CodeBuild to generate new copies of these packages, and update the Auto Scaling group’s launch configuration.
  • B . Use AWS Inspector to run "yum upgrade" on all running production instances, and manually update the AMI for the next maintenance window.
  • C . Use Amazon EC2 Run Command to issue a package update command to all running production instances, and update the AMI for future deployments.
  • D . Define a new AWS OpsWorks layer to match the running production instances, and use a recipe to issue a package update command to all running production instances.

Correct Answer: C

Explanation:

https://aws.amazon.com/blogs/aws/ec2-run-command-is-now-a-cloudwatch-events-target/

" EC2 Run Command is part of EC2 Systems Manager. It allows you to operate on collections of EC2 instances and on-premises servers reliably and at scale, in a controlled and selective fashion. You can run scripts, install software, collect metrics and log files, manage patches, and much more, on both Windows and Linux."

Question #15

A company needs to introduce automatic DNS failover for a distributed web application to a disaster recovery or standby installation. The DevOps Engineer plans to configure Amazon Route 53 to provide DNS routing to an alternate endpoint in the event of an application failure.

What steps should the Engineer take to accomplish this? (Select TWO.)

  • A . Create Amazon Route 53 health checks for each endpoint that cannot be entered as alias records. Ensure firewall and routing rules allow Amazon Route 53 to send requests to the endpoints that are specified in the health checks.
  • B . Create alias records that route traffic to AWS resources and set the value of the Evaluate Target Health option to Yes, then create all the non-alias records.
  • C . Create a governing Amazon Route 53 record set, set it to failover, and associate it with the primary and secondary Amazon Route 53 record sets to distribute traffic to healthy DNS entries.
  • D . Create an Amazon CloudWatch alarm to monitor the primary Amazon Route 53 DNS entry. Then create an associated AWS Lambda function to execute the failover API call to Route 53 to the secondary DNS entry.
  • E . Map the primary and secondary Amazon Route 53 record sets to an Amazon CloudFront distribution using primary and secondary origins.


Correct Answer: A,C
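
For option A, a sketch of a Route 53 health check against a non-alias endpoint (the domain and health path are assumed); note that firewalls must allow the Route 53 health checkers to reach this endpoint:

    Resources:
      PrimaryEndpointHealthCheck:
        Type: AWS::Route53::HealthCheck
        Properties:
          HealthCheckConfig:
            Type: HTTPS
            FullyQualifiedDomainName: primary.example.com  # assumed endpoint
            ResourcePath: /health                          # assumed health path
            Port: 443
            RequestInterval: 30
            FailureThreshold: 3
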
Question #16

A DevOps Engineer must track the health of a stateless RESTful service sitting behind a Classic Load Balancer. The deployment of new application revisions is through a CI/CD pipeline. If the service’s latency increases beyond a defined threshold, deployment should be stopped until the service has recovered.

Which of the following methods allow for the QUICKEST detection time?

  • A . Use Amazon CloudWatch metrics provided by Elastic Load Balancing to calculate average latency. Alarm and stop deployment when latency increases beyond the defined threshold.
  • B . Use AWS Lambda and Elastic Load Balancing access logs to detect average latency.
    Alarm and stop deployment when latency increases beyond the defined threshold.
  • C . Use AWS CodeDeploy’s Minimum Healthy Hosts setting to define thresholds for rolling back deployments. If these thresholds are breached, roll back the deployment.
  • D . Use Metric Filters to parse application logs in Amazon CloudWatch Logs. Create a filter for latency. Alarm and stop deployment when latency increases beyond the defined threshold.

Correct Answer: A

Explanation:

https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-cloudwatch-metrics.html

https://docs.aws.amazon.com/codedeploy/latest/userguide/deployments-stop.html

Question #17

A DevOps engineer is setting up a container-based architecture. The engineer has decided to use AWS CloudFormation to automatically provision an Amazon ECS cluster and an Amazon EC2 Auto Scaling group to launch the EC2 container instances. After successfully creating the CloudFormation stack, the engineer noticed that, even though the ECS cluster and the EC2 instances were created successfully and the stack finished the creation, the EC2 instances were associating with a different cluster.

How should the DevOps engineer update the CloudFormation template to resolve this issue?

  • A . Reference the EC2 instances in the AWS::ECS::Cluster resource and reference the ECS cluster in the AWS::ECS::Service resource.
  • B . Reference the ECS cluster in the AWS::AutoScaling::LaunchConfiguration resource of the UserData property.
  • C . Reference the ECS cluster in the AWS::EC2::Instance resource of the UserData property.
  • D . Reference the ECS cluster in the AWS::CloudFormation::CustomResource resource to trigger an AWS Lambda function that registers the EC2 instances with the appropriate ECS
    cluster.


Correct Answer: B
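
A sketch of what option B describes: the launch configuration's UserData writes the cluster name into the ECS agent configuration, so the container instances join the stack's cluster instead of the default one (the AMI parameter and instance profile are assumed):

    Resources:
      EcsCluster:
        Type: AWS::ECS::Cluster
      ContainerInstances:
        Type: AWS::AutoScaling::LaunchConfiguration
        Properties:
          ImageId: !Ref EcsOptimizedAmiId              # assumed parameter
          InstanceType: t3.medium
          IamInstanceProfile: !Ref EcsInstanceProfile  # assumed instance profile
          UserData:
            Fn::Base64: !Sub |
              #!/bin/bash
              # Without this line the ECS agent registers with the "default" cluster.
              echo ECS_CLUSTER=${EcsCluster} >> /etc/ecs/ecs.config
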
Question #18

A company uses AWS CodePipeline to manage and deploy infrastructure as code. The infrastructure is defined in AWS CloudFormation templates and is primarily comprised of multiple Amazon EC2 instances and Amazon RDS databases. The Security team has observed many operators creating inbound security group rules with a source CIDR of 0.0.0.0/0 and would like to proactively stop the deployment of rules with open CIDRs.

The DevOps Engineer will implement a predeployment step that runs some security checks over the CloudFormation template before the pipeline processes it. This check should allow inbound security group rules with a source CIDR of 0.0.0.0/0 only if the rule has the description "Security Approval Ref XXXXX" (where XXXXX is a preallocated reference). The pipeline step should fail if this condition is not met, and the deployment should be blocked.

How should this be accomplished?

  • A . Enable an SCP in AWS Organizations. The policy should deny access to the API call CreateSecurityGroupRule if the rule specifies 0.0.0.0/0 without a description referencing a security approval.
  • B . Add an initial stage to CodePipeline called Security Check. This stage should call an AWS Lambda function that scans the CloudFormation template and fails the pipeline if it finds 0.0.0.0/0 in a security group without a description referencing a security approval
  • C . Create an AWS Config rule that is triggered on creation or edit of resource type EC2 SecurityGroup. This rule should call an AWS Lambda function to send a failure notification if the security group has any rules with a source CIDR of 0.0.0.0/0 without a description referencing a security approval.
  • D . Modify the IAM role used by CodePipeline. The IAM policy should deny access.


Correct Answer: B
Question #19

A company is implementing a well-architected design for its globally accessible API stack. The design needs to ensure both high reliability and fast response times for users located in North America and Europe.

The API stack contains the following three tiers:

• Amazon API Gateway

• AWS Lambda

• Amazon DynamoDB

Which solution will meet the requirements?

  • A . Configure Amazon Route 53 to point to API Gateway APIs in North America and Europe using health checks. Configure the APIs to forward requests to a Lambda function in that Region. Configure the Lambda functions to retrieve and update the data in a DynamoDB table in the same Region as the Lambda function.
  • B . Configure Amazon Route 53 to point to API Gateway APIs in North America and Europe using latency-based routing and health checks. Configure the APIs to forward requests to a Lambda function in that Region. Configure the Lambda functions to retrieve and update the data in a DynamoDB global table.
  • C . Configure Amazon Route 53 to point to API Gateway in North America, create a disaster recovery API in Europe, and configure both APIs to forward requests to the Lambda functions in that Region. Retrieve the data from a DynamoDB global table. Deploy a Lambda function to check the North America API health every 5 minutes. In the event of a failure, update Route 53 to point to the disaster recovery API.
  • D . Configure Amazon Route 53 to point to API Gateway API in North America using latency-based routing. Configure the API to forward requests to the Lambda function in the Region nearest to the user. Configure the Lambda function to retrieve and update the data in a DynamoDB table.


Correct Answer: B
Question #20

A DevOps Engineer has several legacy applications that all generate different log formats. The Engineer must standardize the formats before writing them to Amazon S3 for querying and analysis.

How can this requirement be met at the LOWEST cost?

  • A . Have the application send its logs to an Amazon EMR cluster and normalize the logs before sending them to Amazon S3.
  • B . Have the application send its logs to Amazon QuickSight, then use the Amazon QuickSight SPICE engine to normalize the logs. Do the analysis directly from Amazon QuickSight.
  • C . Keep the logs in Amazon S3 and use Amazon Redshift Spectrum to normalize the logs in place.
  • D . Use Amazon Kinesis Agent on each server to upload the logs and have Amazon Kinesis Data Firehose use an AWS Lambda function to normalize the logs before writing them to Amazon S3.


Correct Answer: D
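
A sketch of the delivery stream from option D, with a Lambda processor normalizing records before Kinesis Data Firehose writes them to S3 (the bucket, role, and function resources are assumed):

    Resources:
      LogDeliveryStream:
        Type: AWS::KinesisFirehose::DeliveryStream
        Properties:
          DeliveryStreamType: DirectPut  # the Kinesis Agent writes directly to Firehose
          ExtendedS3DestinationConfiguration:
            BucketARN: !GetAtt LogBucket.Arn   # assumed bucket resource
            RoleARN: !GetAtt FirehoseRole.Arn  # assumed IAM role
            ProcessingConfiguration:
              Enabled: true
              Processors:
                - Type: Lambda
                  Parameters:
                    - ParameterName: LambdaArn
                      ParameterValue: !GetAtt NormalizeLogs.Arn  # assumed function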

Question #21

A company uses a complex system that consists of networking, IAM policies, and multiple three-tier applications. Requirements are still being defined for a new system, so the number of AWS components present in the final design is not known. The DevOps Engineer needs to begin defining AWS resources using AWS CloudFormation to automate and version-control the new infrastructure.

What is the best practice for using CloudFormation to create new environments?

  • A . Manually construct the networking layer using Amazon VPC and then define all other resources using CloudFormation.
  • B . Create a single template to encompass all resources that are required for the system so there is only one template to version-control.
  • C . Create multiple separate templates for each logical part of the system, use cross-stack references in CloudFormation, and maintain several templates in version control.
  • D . Create many separate templates for each logical part of the system, and provide the outputs from one to the next using an Amazon EC2 instance running SDK for granular control.


Correct Answer: C
Question #22

A company has deployed several applications globally. Recently, Security Auditors found that a few Amazon EC2 instances were launched without Amazon EBS disk encryption. The Auditors have requested a report detailing all EBS volumes that were not encrypted in multiple AWS accounts and regions. They also want to be notified whenever this occurs in the future.

How can this be automated with the LEAST amount of operational overhead?

  • A . Create an AWS Lambda function to set up an AWS Config rule on all the target accounts. Use AWS Config aggregators to collect data from multiple accounts and regions. Export the aggregated report to an Amazon S3 bucket and use Amazon SNS to deliver the notifications.
  • B . Set up AWS CloudTrail to deliver all events to an Amazon S3 bucket in a centralized account. Use the S3 event notification feature to invoke an AWS Lambda function to parse AWS CloudTrail logs whenever logs are delivered to the S3 bucket. Publish the output to
    an Amazon SNS topic using the same Lambda function.
  • C . Create an AWS CloudFormation template that adds an AWS Config managed rule for EBS encryption. Use a CloudFormation stack set to deploy the template across all accounts and regions. Store consolidated evaluation results from config rules in Amazon S3. Send a notification using Amazon SNS when non- compliant resources are detected.
  • D . Using AWS CLI, run a script periodically that invokes the aws ec2 describe-volumes query with a JMESPATH query filter. Then, write the output to an Amazon S3 bucket. Set up an S3 event notification to send events using Amazon SNS when new data is written to the S3 bucket.

Correct Answer: C

Explanation:

https://aws.amazon.com/blogs/aws/aws-config-update-aggregate-compliance-data-across-accounts-regions/

https://docs.aws.amazon.com/config/latest/developerguide/aws-config-managed-rules-cloudformation-templates.html
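
The managed rule at the heart of the chosen option takes only a few lines of CloudFormation, which a stack set can then push to every account and region (the rule name is assumed; the identifier is the real AWS managed rule):

    Resources:
      EbsEncryptionRule:
        Type: AWS::Config::ConfigRule
        Properties:
          ConfigRuleName: ebs-volumes-encrypted  # assumed name
          Source:
            Owner: AWS
            SourceIdentifier: ENCRYPTED_VOLUMES  # AWS managed rule for EBS encryption
          Scope:
            ComplianceResourceTypes:
              - AWS::EC2::Volume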

Question #23

An application runs on Amazon EC2 instances behind an Application Load Balancer. Amazon RDS MySQL is used on the backend. The instances run in an Auto Scaling group across multiple Availability Zones. The Application Load Balancer health check ensures the web servers are operating and able to make read/write SQL connections. Amazon Route 53 provides DNS functionality with a record pointing to the Application Load Balancer. A new policy requires a geographically isolated disaster recovery site with an RTO of 4 hours and an RPO of 15 minutes.

Which disaster recovery strategy will require the LEAST amount of changes to the application stack?

  • A . Launch a replica stack of everything except RDS in a different Availability Zone. Create an RDS read-only replica in a new Availability Zone and configure the new stack to point to the local RDS instance. Add the new stack to the Route 53 record set with a failover routing policy.
  • B . Launch a replica stack of everything except RDS in a different region. Create an RDS read-only replica in a new region and configure the new stack to point to the local RDS instance. Add the new stack to the Route 53 record set with a latency routing policy.
  • C . Launch a replica stack of everything except RDS in a different region. Upon failure, copy the snapshot over from the primary region to the disaster recovery region. Adjust the Amazon Route 53 record set to point to the disaster recovery region’s Application Load Balancer.
  • D . Launch a replica stack of everything except RDS in a different region. Create an RDS
    read-only replica in a new region and configure the new stack to point to the local RDS instance. Add the new stack to the Amazon Route 53 record set with a failover routing policy


Correct Answer: D
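
A sketch of the failover routing from option D (the hosted zone and record names are assumed; the DR ALB lives in another region, so its values would be passed in rather than resolved in the same template):

    Resources:
      PrimaryRecord:
        Type: AWS::Route53::RecordSet
        Properties:
          HostedZoneName: example.com.  # assumed hosted zone
          Name: app.example.com.
          Type: A
          SetIdentifier: primary
          Failover: PRIMARY
          AliasTarget:
            DNSName: !GetAtt PrimaryAlb.DNSName  # assumed primary-region ALB
            HostedZoneId: !GetAtt PrimaryAlb.CanonicalHostedZoneID
            EvaluateTargetHealth: true
      SecondaryRecord:
        Type: AWS::Route53::RecordSet
        Properties:
          HostedZoneName: example.com.
          Name: app.example.com.
          Type: A
          SetIdentifier: secondary
          Failover: SECONDARY
          AliasTarget:
            DNSName: dr-alb-123456.us-west-2.elb.amazonaws.com  # assumed DR ALB DNS name
            HostedZoneId: Z1H1FL5HABSF5  # ALB hosted zone ID for us-west-2 (assumed region)
            EvaluateTargetHealth: true
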
Question #24

A company is testing a web application that runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. The company uses a blue/green deployment process with immutable instances when deploying new software.

During testing, users are being automatically logged out of the application at random times. Testers also report that, when a new version of the application is deployed, all users are logged out. The Development team needs a solution to ensure users remain logged in across scaling events and application deployments.

What is the MOST efficient way to ensure users remain logged in?

  • A . Enable smart sessions on the load balancer and modify the application to check for an existing session.
  • B . Enable session sharing on the load balancer and modify the application to read from the session store.
  • C . Store user session information in an Amazon S3 bucket and modify the application to read session information from the bucket.
  • D . Modify the application to store user session information in an Amazon ElastiCache cluster.


Correct Answer: D
Question #25

A DevOps Engineer has several legacy applications that all generate different log formats. The Engineer must standardize the formats before writing them to Amazon S3 for querying and analysis.

How can this requirement be met at the LOWEST cost?

  • A . Have the application send its logs to an Amazon EMR cluster and normalize the logs before sending them to Amazon S3.
  • B . Have the application send its logs to Amazon QuickSight, then use the Amazon QuickSight SPICE engine to normalize the logs. Do the analysis directly from Amazon QuickSight.
  • C . Keep the logs in Amazon S3 and use Amazon Redshift Spectrum to normalize the logs in place.
  • D . Use Amazon Kinesis Agent on each server to upload the logs and have Amazon Kinesis Data Firehose use an AWS Lambda function to normalize the logs before writing them to Amazon S3.


Correct Answer: D
Question #26

A company is using AWS CodeDeploy to automate software deployment.

The deployment must meet these requirements:

* A number of instances must be available to serve traffic during the deployment. Traffic must be balanced across those instances, and the instances must automatically heal in the event of failure.

* A new fleet of instances must be launched for deploying a new revision automatically, with no manual provisioning.

* Traffic must be rerouted to the new environment, shifting to half of the new instances at a time. The deployment should succeed if traffic is rerouted to at least half of the instances; otherwise, it should fail.

* Before routing traffic to the new fleet of instances, the temporary files generated during the deployment process must be deleted.

* At the end of a successful deployment, the original instances in the deployment group must be deleted immediately to reduce costs.

How can a DevOps Engineer meet these requirements?

  • A . Use an Application Load Balancer and an in-place deployment. Associate the Auto Scaling group with the deployment group. Use the Automatically copy option, and use CodeDeployDefault.OneAtATime as the deployment configuration. Instruct AWS CodeDeploy to terminate the original Auto Scaling group instances in the deployment group, and use the AllowTraffic hook within appspec.yml to delete the temporary files.
  • B . Use an Application Load Balancer and a blue/green deployment. Associate the Auto Scaling group and the Application Load Balancer target group with the deployment group. Use the Automatically copy Auto Scaling group option, create a custom deployment configuration with minimum healthy hosts defined as 50%, and assign the configuration to the deployment group. Instruct AWS CodeDeploy to terminate the original instances in the deployment group, and use the BeforeBlockTraffic hook within appspec.yml to delete the temporary files.
  • C . Use an Application Load Balancer and a blue/green deployment. Associate the Auto Scaling group and the Application Load Balancer target group with the deployment group. Use the Automatically copy Auto Scaling group option, and use CodeDeployDefault.HalfAtATime as the deployment configuration. Instruct AWS CodeDeploy to terminate the original instances in the deployment group, and use the BeforeAllowTraffic hook within appspec.yml to delete the temporary files.
  • D . Use an Application Load Balancer and an in-place deployment. Associate the Auto Scaling group and Application Load Balancer target group with the deployment group. Use the Automatically copy Auto Scaling group option, and use CodeDeployDefault.AllAtOnce as the deployment configuration. Instruct AWS CodeDeploy to terminate the original instances in the deployment group, and use the BlockTraffic hook within appspec.yml to delete the temporary files.

Correct Answer: C

Explanation:

https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-configurations.html

https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_BlueGreenDeploymentConfiguration.html
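
For the BeforeAllowTraffic step in the chosen option, a minimal appspec.yml sketch (the destination and script path are assumed):

    version: 0.0
    os: linux
    files:
      - source: /
        destination: /opt/app  # assumed install location
    hooks:
      BeforeAllowTraffic:
        # Runs on the replacement instances before they are registered with the
        # load balancer, which is the right moment to delete temporary files.
        - location: scripts/delete_temp_files.sh  # assumed script
          timeout: 300
          runas: root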

Question #27

A DevOps Engineer is responsible for the deployment of a PHP application. The Engineer is working in a hybrid deployment, with the application running on both on-premises servers and Amazon EC2 instances. The application needs access to a database containing highly confidential information. Application instances need access to database credentials, which must be encrypted at rest and in transit before reaching the instances.

How should the Engineer automate the deployment process while also meeting the security requirements?

  • A . Use AWS Elastic Beanstalk with a PHP platform configuration to deploy application packages to the instances. Store database credentials on AWS Systems Manager Parameter Store using the Secure String data type. Define an IAM role for Amazon EC2 allowing access, and decrypt only the database credentials. Associate this role to all the instances.
  • B . Use AWS CodeDeploy to deploy application packages to the instances. Store database credentials on AWS Systems Manager Parameter Store using the Secure String data type. Define an IAM policy for allowing access, and decrypt only the database credentials. Attach the IAM policy to the role associated to the instance profile for CodeDeploy-managed instances, and to the role used for on-premises instances registration on CodeDeploy.
  • C . Use AWS CodeDeploy to deploy application packages to the instances. Store database credentials on AWS Systems Manager Parameter Store using the Secure String data type. Define an IAM role with an attached policy that allows decryption of the database credentials. Associate this role to all the instances and on-premises servers.
  • D . Use AWS CodeDeploy to deploy application packages to the instances. Store database credentials in the AppSpec file. Define an IAM policy for allowing access to only the database credentials. Attach the IAM policy to the role associated to the instance profile for CodeDeploy-managed instances and the role used for on-premises instances registration on CodeDeploy


Correct Answer: B
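
A sketch of the policy from option B, attached to both the role behind the instance profile and the role used for on-premises registration (the parameter path, KMS key, and role logical names are assumed):

    Resources:
      ReadDbCredentialsPolicy:
        Type: AWS::IAM::ManagedPolicy
        Properties:
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action: ssm:GetParameter
                Resource: !Sub "arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/app/db-credentials"  # assumed parameter
              - Effect: Allow
                Action: kms:Decrypt
                Resource: !GetAtt CredentialsKey.Arn  # assumed CMK for the SecureString
          Roles:
            - !Ref CodeDeployInstanceRole  # assumed instance-profile role
            - !Ref OnPremisesRole          # assumed on-premises registration role
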
Question #28

Which Auto Scaling process would be helpful when testing new instances before sending traffic to them, while still keeping them in your Auto Scaling Group?

  • A . Suspend the process AZ Rebalance
  • B . Suspend the process Health Check
  • C . Suspend the process Replace Unhealthy
  • D . Suspend the process AddToLoadBalancer

Correct Answer: D

Explanation:

If you suspend AddToLoadBalancer, Auto Scaling launches the instances but does not add them to the load balancer or target group. If you resume the AddToLoadBalancer process, Auto Scaling resumes adding instances to the load balancer or target group when they are launched. However, Auto Scaling does not add the instances that were launched while this process was suspended; you must register those instances manually.

Option A is invalid because this process just balances the number of EC2 instances in the group across the Availability Zones in the region.

Option B is invalid because this just checks the health of the instances. Auto Scaling marks an instance as unhealthy if Amazon EC2 or Elastic Load Balancing tells Auto Scaling that the instance is unhealthy.

Option C is invalid because this process just terminates instances that are marked as unhealthy and later creates new instances to replace them.

For more information on process suspension, please refer to the following AWS documentation:

http://docs.aws.amazon.com/autoscaling/latest/userguide/as-suspend-resume-processes.html

Question #29

A company is using AWS CodePipeline to automate its release pipeline. AWS CodeDeploy is being used in the pipeline to deploy an application to Amazon ECS using the blue/green deployment model. The company wants to implement scripts to test the deployment before traffic is shifted. These scripts will complete in 5 minutes or less. If errors are discovered during these tests, the application must be rolled back.

Which strategy will meet these requirements?

  • A . Add a stage to the CodePipeline pipeline between the source and deploy stages. Use AWS CodeBuild to create an execution environment and build commands in the buildspec file to invoke test scripts. If errors are found, use the aws deploy stop-deployment command to stop the deployment.
  • B . Add a stage to the CodePipeline pipeline between the source and deploy stages. Use this stage to execute an AWS Lambda function that will run the test scripts. If errors are found, use the aws deploy stop-deployment command to stop the deployment.
  • C . Add a hooks section to the CodeDeploy AppSpec file. Use the AfterAllowTestTraffic lifecycle event to invoke an AWS Lambda function to run the test scripts. If errors are found, exit the Lambda function with an error to trigger rollback.
  • D . Add a hooks section to the CodeDeploy AppSpec file. Use the AfterAllowTraffic lifecycle event to invoke the test scripts. If errors are found, use the aws deploy stop-deployment CLI command to stop the deployment.


Correct Answer: A
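
For reference, option C refers to CodeDeploy's lifecycle hooks for Amazon ECS; in an ECS AppSpec file a test hook is declared like this (a sketch; the ARNs, container name, and port are assumed):

    version: 0.0
    Resources:
      - TargetService:
          Type: AWS::ECS::Service
          Properties:
            TaskDefinition: arn:aws:ecs:us-east-1:111122223333:task-definition/app:3  # assumed
            LoadBalancerInfo:
              ContainerName: app   # assumed container name
              ContainerPort: 8080  # assumed port
    Hooks:
      # Invoked after test traffic reaches the replacement task set; if the
      # Lambda function reports failure, CodeDeploy rolls the deployment back.
      - AfterAllowTestTraffic: arn:aws:lambda:us-east-1:111122223333:function:run-tests  # assumed
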
Question #30

Management has reported an increase in the monthly bill from Amazon Web Services, and they are extremely concerned with this increased cost. Management has asked you to determine the exact cause of this increase. After reviewing the billing report, you notice an increase in the data transfer cost.

How can you provide management with a better insight into data transfer use?

  • A . Update your Amazon CloudWatch metrics to use five-second granularity, which will give better detailed metrics that can be combined with your billing data to pinpoint anomalies.
  • B . Use Amazon CloudWatch Logs to run a map-reduce on your logs to determine high usage and data transfer.
  • C . Deliver custom metrics to Amazon CloudWatch per application that breaks down application data transfer into multiple, more specific data points.
  • D . Using Amazon CloudWatch metrics, pull your Elastic Load Balancing outbound data transfer metrics monthly, and include them with your billing report to show which application is causing higher bandwidth usage.

Correct Answer: C

Explanation:

You can publish your own metrics to CloudWatch using the AWS CLI or an API. You can view statistical graphs of your published metrics with the AWS Management Console. CloudWatch stores data about a metric as a series of data points. Each data point has an associated time stamp. You can even publish an aggregated set of data points called a statistic set.

If you have custom metrics specific to your application, you can give a breakdown to the management on the exact issue.

Option A won’t be sufficient to provide better insights.

Option B is unnecessary overhead when you can make the application publish custom metrics. Option D is invalid because the ELB metrics alone will not give the entire picture.

For more information on custom metrics, please refer to the following AWS documentation:

http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html

Question #31

A company requires its internal business teams to launch resources through pre-approved AWS CloudFormation templates only. The security team requires automated monitoring when resources drift from their expected state.

Which strategy should be used to meet these requirements?

  • A . Allow users to deploy CloudFormation stacks using a CloudFormation service role only. Use CloudFormation drift detection to detect when resources have drifted from their expected state.
  • B . Allow users to deploy CloudFormation stacks using a CloudFormation service role only. Use AWS Config rules to detect when resources have drifted from their expected state.
  • C . Allow users to deploy CloudFormation stacks using AWS Service Catalog only. Enforce the use of a launch constraint. Use AWS Config rules to detect when resources have drifted from their expected state.
  • D . Allow users to deploy CloudFormation stacks using AWS Service Catalog only. Enforce the use of a template constraint. Use Amazon EventBridge (Amazon CloudWatch Events) notifications to detect when resources have drifted from their expected state.


Correct Answer: B
Question #32

A legacy web application stores access logs in a proprietary text format. One of the security requirements is to search application access events and correlate them with access data from many different systems. These searches should be near-real time.

Which solution offloads the processing load on the application server and provides a mechanism to search the data in near-real time?

  • A . Install the Amazon CloudWatch Logs agent on the application server and use CloudWatch Events rules to search logs for access events. Use Amazon CloudSearch as an interface to search for events.
  • B . Use the third-party tool Logstash with its file input plugin to monitor the application log file, then use a custom dissect filter on the agent to parse the log entries into the JSON format. Output the events to Amazon ES to be searched. Use the Elasticsearch API for querying the data.
  • C . Upload the log files to Amazon S3 by using the S3 sync command. Use Amazon Athena to define the structure of the data as a table, with Athena SQL queries to search for access events.
  • D . Install the Amazon Kinesis Agent on the application server, configure it to monitor the log files, and send it to a Kinesis stream. Configure Kinesis to transform the data by using an AWS Lambda function, and forward events to Amazon ES for analysis. Use the Elasticsearch API for querying the data.

Correct Answer: D

Explanation:

https://docs.aws.amazon.com/zh_cn/streams/latest/dev/writing-with-agents.html

Question #33

A DevOps engineer is tasked with moving a mission-critical business application running in Go to AWS. The development team running this application is understaffed and requires a solution that allows the team to focus on application development. They also want to enable blue/green deployments and perform A/B testing.

Which solution will meet these requirements?

  • A . Deploy the application on an Amazon EC2 instance and create an AMI of this instance. Use this AMI to create an automatic scaling launch configuration that is used in an Auto Scaling group. Use an Elastic Load Balancer to distribute traffic. When changes are made to the application, a new AMI is created and replaces the launch configuration.
  • B . Use Amazon Lightsail to deploy the application. Store the application in a zipped format in an Amazon S3 bucket Use this zipped version to deploy new versions of the application to Lightsail. Use Lightsail deployment options to manage the deployment.
  • C . Use AWS CodePipeline with AWS CodeDeploy to deploy the application to a fleet of Amazon EC2 instances. Use an Elastic Load Balancer to distribute the traffic to the EC2 instances. When making changes to the application, upload a new version to CodePipeline and let it deploy the new version.
  • D . Use AWS Elastic Beanstalk to host the application. Store a zipped version of the application in Amazon S3, and use that location to deploy new versions of the application using Elastic Beanstalk to manage the deployment options.


Correct Answer: D
Question #34

A DevOps engineer must ensure all IAM entity configurations across multiple AWS accounts in AWS Organizations are compliant with corporate IAM policies.

Which combination of steps will accomplish this? (Select TWO.)

  • A . Enable AWS Trusted Advisor in Organizations for all accounts to report on noncompliant IAM entities.
  • B . Configure an AWS Config aggregator in the Organizations master account for all accounts
  • C . Deploy AWS Config rules to the master account in Organizations that match corporate IAM policies.
  • D . Apply an SCP in Organizations to ensure compliance of IAM entities.
  • E . Deploy AWS Config rules to all accounts in Organizations that match the corporate IAM policies.


Correct Answer: B,E
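
Option B's aggregator, expressed in CloudFormation for the Organizations master (management) account (the aggregator name and role resource are assumed):

    Resources:
      OrgConfigAggregator:
        Type: AWS::Config::ConfigurationAggregator
        Properties:
          ConfigurationAggregatorName: org-iam-compliance  # assumed name
          OrganizationAggregationSource:
            RoleArn: !GetAtt AggregatorRole.Arn  # assumed role trusted by config.amazonaws.com
            AllAwsRegions: true
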
Question #35

A DevOps team uses AWS CloudFormation to build its infrastructure. The security team is concerned about sensitive parameters, such as passwords, being exposed.

Which combination of steps will enhance the security of AWS CloudFormation? (Select THREE.)

  • A . Create a secure string with AWS KMS and choose a KMS encryption key. Reference the ARN of the secure string, and give AWS CloudFormation permission to the KMS key for decryption.
  • B . Create secrets using the AWS Secrets Manager AWS::SecretsManager::Secret resource type. Reference the secret resource return attributes in resources that need a password, such as an Amazon RDS database.
  • C . Store sensitive static data as secure strings in the AWS Systems Manager Parameter Store. Use dynamic references in the resources that need access to the data.
  • D . Store sensitive static data in the AWS Systems Manager Parameter Store as strings. Reference the stored values using Systems Manager parameter types.
  • E . Use AWS KMS to encrypt the CloudFormation template.
  • F . Use the CloudFormation NoEcho parameter property to mask the parameter value.


Correct Answer: A,B,D
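
Several of the options map to concrete template mechanisms. A sketch combining a NoEcho parameter with dynamic references (the parameter path, secret name, and database properties are assumed):

    Parameters:
      AdminPassword:
        Type: String
        NoEcho: true  # masked in console, CLI, and API describe output
    Resources:
      Database:
        Type: AWS::RDS::DBInstance
        Properties:
          Engine: mysql
          DBInstanceClass: db.t3.micro
          AllocatedStorage: "20"
          MasterUsername: admin
          # Dynamic reference to an SSM SecureString parameter (assumed name and version):
          MasterUserPassword: "{{resolve:ssm-secure:/app/db/password:1}}"
          # Alternative: resolve from AWS Secrets Manager (assumed secret name):
          # MasterUserPassword: "{{resolve:secretsmanager:AppDbSecret:SecretString:password}}"
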
Question #36

A company is migrating an application to AWS that runs on a single Amazon EC2 instance. Because of licensing limitations, the application does not support horizontal scaling. The application will be using Amazon Aurora for its database.

How can the DevOps Engineer architect automated healing to automatically recover from EC2 and Aurora failures, in addition to recovering across Availability Zones (AZs), in the MOST cost-effective manner?

  • A . Create an EC2 Auto Scaling group with a minimum and maximum instance count of 1, and have it span across AZs. Use a single-node Aurora instance.
  • B . Create an EC2 instance and enable instance recovery. Create an Aurora database with a read replica in a second AZ, and promote it to a primary database instance if the primary database instance fails.
  • C . Create an Amazon CloudWatch Events rule to trigger an AWS Lambda function to start a new EC2 instance in an available AZ when the instance status reaches a failure state. Create an Aurora database with a read replica in a second AZ, and promote it to a primary database instance when the primary database instance fails.
  • D . Assign an Elastic IP address on the instance. Create a second EC2 instance in a second AZ. Create an Amazon CloudWatch Events rule to trigger an AWS Lambda function to move the Elastic IP address to the second instance when the first instance fails. Use a single-node Aurora instance.


Correct Answer: C
Question #37

A company has migrated its container-based applications to Amazon EKS and wants to establish automated email notifications. The notifications sent to each email address are for specific activities related to EKS components. The solution will include Amazon SNS topics and an AWS Lambda function to evaluate incoming log events and publish messages to the correct SNS topic.

Which logging solution will support these requirements?

  • A . Enable Amazon CloudWatch Logs to log the EKS components. Create a CloudWatch subscription filter for each component with Lambda as the subscription feed destination.
  • B . Enable Amazon CloudWatch Logs to log the EKS components. Create CloudWatch Logs Insights queries linked to Amazon CloudWatch Events events that trigger Lambda.
  • C . Enable Amazon S3 logging for the EKS components. Configure an Amazon CloudWatch subscription filter for each component with Lambda as the subscription feed destination.
  • D . Enable Amazon S3 logging for the EKS components. Configure S3 PUT Object event notifications with AWS Lambda as the destination.


Correct Answer: A
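
A sketch of one subscription filter from option A, feeding an EKS log group into the routing Lambda function (the log group and function names are assumed; a matching AWS::Lambda::Permission granting logs.amazonaws.com invoke rights would also be required):

    Resources:
      ClusterLogsToLambda:
        Type: AWS::Logs::SubscriptionFilter
        Properties:
          LogGroupName: /aws/eks/prod-cluster/cluster  # assumed control-plane log group
          FilterPattern: ""                            # empty pattern forwards every event
          DestinationArn: !GetAtt NotificationRouter.Arn  # assumed Lambda function
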
Question #38

A company is reviewing its IAM policies. One policy written by the DevOps Engineer has been flagged as too permissive. The policy is used by an AWS Lambda function that issues a stop command to Amazon EC2 instances tagged with Environment: Nonproduction over the weekend.

The current policy is shown in an exhibit that is not reproduced in this copy.

What changes should the Engineer make to achieve a policy of least permission? (Select THREE.)

(The answer choices A through F are policy-statement exhibits that are also not reproduced here.)

  • A . Option A
  • B . Option B
  • C . Option C
  • D . Option D
  • E . Option E
  • F . Option F

Correct Answer: B,D,E

Explanation:

https://docs.aws.amazon.com/ja_jp/IAM/latest/UserGuide/reference_policies_variables.html

https://aws.amazon.com/jp/premiumsupport/knowledge-center/restrict-ec2-iam/

Question #39

You have an ELB set up in AWS with EC2 instances running behind it. You have been requested to monitor the incoming connections to the ELB.

Which of the below options can satisfy this requirement?

  • A . Use AWS CloudTrail with your load balancer
  • B . Enable access logs on the load balancer
  • C . Use a CloudWatch Logs Agent
  • D . Create a custom metric CloudWatch filter on your load balancer

Correct Answer: B

Explanation:

Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and to troubleshoot issues.

Option A is invalid because this service will monitor all AWS services.

Options C and D are invalid since the CLB already provides a logging feature.

For more information on ELB access logs, please refer to the following AWS documentation:

http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/access-log-collection.html

Question #40

A DevOps engineer is deploying a new version of a company’s application in an AWS CodeDeploy deployment group associated with its Amazon EC2 instances. After some time, the deployment fails. The engineer realizes that all the events associated with the specific deployment ID are in a Skipped status, and code was not deployed in the instances associated with the deployment group.

What are valid reasons for this failure? (Select TWO.)

  • A . The networking configuration does not allow the EC2 instances to reach the internet via a NAT gateway or internet gateway, and the CodeDeploy endpoint cannot be reached.
  • B . The IAM user who triggered the application deployment does not have permission to interact with the CodeDeploy endpoint.
  • C . The target EC2 instances were not properly registered with the CodeDeploy endpoint.
  • D . An instance profile with proper permissions was not attached to the target EC2 instances.
  • E . The appspec.yml file was not included in the application revision.

Reveal Solution Hide Solution

Correct Answer: B,D

Question #41

An application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). A DevOps engineer is using AWS CodeDeploy to release a new version. The deployment fails during the AllowTraffic lifecycle event, but a cause for the failure is not indicated in the deployment logs.

What would cause this?

  • A . The appspec.yml file contains an invalid script to execute in the AllowTraffic lifecycle hook.
  • B . The user who initiated the deployment does not have the necessary permissions to interact with the ALB
  • C . The health checks specified for the ALB target group are misconfigured.
  • D . The CodeDeploy agent was not installed in the EC2 instances that are part of the ALB target group.

Reveal Solution Hide Solution

Correct Answer: C
Question #42

A company’s web application will be migrated to AWS. The application is designed so that there is no server-side code required. As part of the migration, the company would like to improve the security of the application by adding HTTP response headers, following the Open Web Application Security Project (OWASP) secure headers recommendations.

How can this solution be implemented to meet the security requirements using best practices?

  • A . Use an Amazon S3 bucket configured for website hosting, then set up server access logging on the S3 bucket to track user activity. Then configure the static website hosting and execute a scheduled AWS Lambda function to verify, and if missing, add security headers to the metadata.
  • B . Use an Amazon S3 bucket configured for website hosting, then set up server access logging on the S3 bucket to track user activity. Configure the static website hosting to return the required security headers.
  • C . Use an Amazon S3 bucket configured for website hosting. Create an Amazon CloudFront distribution that refers to this S3 bucket, with the origin response event set to trigger a Lambda@Edge Node.js function to add in the security headers.
  • D . Set up an Amazon S3 bucket configured for website hosting. Create an Amazon CloudFront distribution that refers to this S3 bucket. Set "Cache Based on Selected Request Headers" to "Whitelist," and add the security headers into the whitelist.

Reveal Solution Hide Solution

Correct Answer: C

Explanation:

https://aws.amazon.com/blogs/networking-and-content-delivery/adding-http-security-headers-using-lambdaedge-and-amazon-cloudfront/
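
As a sketch of answer C, a Lambda@Edge function attached to the distribution's origin response event can inject OWASP-recommended headers. This hypothetical handler uses the Python runtime; the header list is illustrative, not exhaustive:

    # Hypothetical Lambda@Edge origin-response handler.
    def handler(event, context):
        response = event["Records"][0]["cf"]["response"]
        headers = response["headers"]

        # A few OWASP secure-header recommendations; extend as required.
        headers["strict-transport-security"] = [{
            "key": "Strict-Transport-Security",
            "value": "max-age=63072000; includeSubDomains; preload",
        }]
        headers["x-content-type-options"] = [
            {"key": "X-Content-Type-Options", "value": "nosniff"}]
        headers["x-frame-options"] = [
            {"key": "X-Frame-Options", "value": "DENY"}]
        headers["content-security-policy"] = [
            {"key": "Content-Security-Policy", "value": "default-src 'self'"}]

        return response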

Question #43

A company used AWS CloudFormation to deploy a three-tier web application that stores data in an Amazon RDS MySQL Multi-AZ DB instance. A DevOps Engineer must upgrade the RDS instance to the latest major version of MySQL while incurring minimal downtime.

How should the Engineer upgrade the instance while minimizing downtime?

  • A . Update the EngineVersion property of the AWS::RDS::DBInstance resource type in the CloudFormation template to the latest desired version. Launch a second stack and make the new RDS instance a read replica.
  • B . Update the DBEngineVersion property of the AWS:: RDS::DBInstance resource type in the CloudFormation template to the latest desired version. Perform an Update Stack operation. Create a new RDS Read Replicas resource with the same properties as the instance to be upgraded. Perform a second Update Stack operation.
  • C . Update the DBEngineVersion property of the AWS::RDS::DBInstance resource type in the CloudFormation template to the latest desired version. Create a new RDS Read Replicas resource with the same properties as the instance to be upgraded. Perform an Update Stack operation.
  • D . Update the EngineVersion property of the AWS::RDS::DBInstance resource type in the CloudFormation template to the latest version, and perform an Update Stack operation.

Reveal Solution Hide Solution

Correct Answer: A
Question #44

A startup company is developing a web application on AWS. It plans to use Amazon RDS for persistence and deploy the application to Amazon EC2 with an Auto Scaling group. The company would also like to separate the environments for development, testing, and production.

What is the MOST secure and flexible approach to manage the application configuration?

  • A . Create a property file to include the configuration and the encrypted passwords. Check in the property file to the source repository, package the property file with the application, and deploy the application. Create an environment tag for the EC2 instances and tag the instances respectively. The application will extract the necessary property values based on the environment tag.
  • B . Create a property file for each environment to include the environment-specific configuration and an encrypted password. Check in the property files to the source repository. During deployment, use only the environment-specific property file with the application. The application will read the needed property values from the deployed property file.
  • C . Create a property file for each environment to include the environment-specific configuration. Create a private Amazon S3 bucket and save the property files in the bucket. Save the passwords in the bucket with AWS KMS encryption. During deployment, the application will read the needed property values from the environment-specific property file in the S3 bucket.
  • D . Create a property file for each environment to include the environment-specific configuration. Create a private Amazon S3 bucket and save the property files in the bucket. Save the encrypted passwords in the AWS Systems Manager Parameter Store. Create an environment tag for the EC2 instances and tag the instances respectively. The application will read the needed property values from the environment-specific property file in the S3 bucket and the parameter store.

Reveal Solution Hide Solution

Correct Answer: D
Question #45

A company is deploying a new mobile game on AWS for its customers around the world. The Development team uses AWS Code services and must meet the following requirements:

– Clients need to send/receive real-time playing data from the backend frequently and with minimal latency

– Game data must meet the data residency requirement

Which strategy can a DevOps Engineer implement to meet their needs?

  • A . Deploy the backend application to multiple regions. Any update to the code repository triggers a two-stage build and deployment pipeline. A successful deployment in one region invokes an AWS Lambda function to copy the build artifacts to an Amazon S3 bucket in another region. After the artifact is copied, it triggers a deployment pipeline in the new region.
  • B . Deploy the backend application to multiple Availability Zones in a single region. Create an Amazon CloudFront distribution to serve the application backend to global customers. Any update to the code repository triggers a two-stage build-and-deployment pipeline. The pipeline deploys the backend application to all Availability Zones.
  • C . Deploy the backend application to multiple regions. Use AWS Direct Connect to serve the application backend to global customers. Any update to the code repository triggers a two-stage build-and-deployment pipeline in the region. After a successful deployment in the region, the pipeline continues to deploy the artifact to another region.
  • D . Deploy the backend application to multiple regions. Any update to the code repository triggers a two-stage build-and-deployment pipeline in the region. After a successful deployment in the region, the pipeline invokes the pipeline in another region and passes the build artifact location. The pipeline uses the artifact location and deploys applications in the new region.

Reveal Solution Hide Solution

Correct Answer: A
Question #46

A DevOps engineer is troubleshooting deployments to a new application that runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an EC2 Auto Scaling group across multiple Availability Zones. Instances sometimes come online before they are ready, which is leading to increased error rates among users. The current health check configuration gives instances a 60-second grace period and considers instances healthy after two 200 response codes from /index.php, a page that may respond intermittently during the deployment process. The development team wants instances to come online as soon as possible.

Which strategy would address this issue?

  • A . Increase the instance grace period from 60 seconds to 180 seconds, and the consecutive health check requirement from 2 to 3.
  • B . Increase the instance grace period from 60 seconds to 120 seconds, and change the response code requirement from 200 to 204.
  • C . Modify the deployment script to create a /health-check.php file when the deployment begins, then modify the health check path to point to that file.
  • D . Modify the deployment script to create a /health-check.php file when all tasks are complete, then modify the health check path to point to that file.

Reveal Solution Hide Solution

Correct Answer: D
Question #47

A DevOps engineer wants to deploy a serverless web application based on AWS Lambda.

The deployment must meet the following requirements:

• Provide staging and production environments.

• Restrict the developers from accessing the production environment.

• Avoid hard coding passwords in the Lambda functions.

• Store source code in AWS CodeCommit.

• Use AWS CodePipeline to automate the deployment.

Which solution will accomplish this?

  • A . Create separate staging and production accounts to segregate deployment targets. Use AWS KMS to store environment-specific values Use CodePipeline to automate deployments with AWS CodeDeploy.
  • B . Create separate staging and production accounts to segregate deployment targets. Use Lambda environment variables to store environment-specific values. Use CodePipeline to automate deployments with AWS CodeDeploy.
  • C . Define tagging conventions for staging and production environments to segregate deployment targets. Use AWS KMS to store environment-specific values Use CodePipeline to automate deployments with AWS CodeDeploy.
  • D . Define naming conventions for staging and production environments to segregate deployment targets. Use Lambda environment variables to store environment-specific values. Use CodePipeline to automate deployments with AWS CodeDeploy

Reveal Solution Hide Solution

Correct Answer: A
Question #48

A company is deploying a container-based application using AWS CodeBuild. The security team mandates that all containers are scanned for vulnerabilities prior to deployment using a password-protected endpoint. All sensitive information must be stored securely.

Which solution should be used to meet these requirements?

  • A . Encrypt the password using AWS KMS. Store the encrypted password in the buildspec.yml file as an environment variable under the variables mapping. Reference the environment variable to initiate scanning.
  • B . Import the password into an AWS CloudHSM key. Reference the CloudHSM key in the buildspec.yml file as an environment variable under the variables mapping. Reference the environment variable to initiate scanning.
  • C . Store the password in the AWS Systems Manager Parameter Store as a secure string. Add the Parameter Store key to the buildspec.yml file as an environment variable under the parameter-store mapping. Reference the environment variable to initiate scanning.
  • D . Use the AWS Encryption SDK to encrypt the password and embed in the buildspec.yml file as a variable under the secrets mapping. Attach a policy to CodeBuild to enable access to the required decryption key.

Reveal Solution Hide Solution

Correct Answer: C
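
To illustrate answer C: the password is stored once as a SecureString, and CodeBuild resolves it at build time through the env/parameter-store mapping in buildspec.yml. A hedged boto3 sketch with a hypothetical parameter name (the get_parameter call shows what CodeBuild does on your behalf):

    import boto3

    ssm = boto3.client("ssm")

    # Store the scanner password once, encrypted with the account's default KMS key.
    ssm.put_parameter(
        Name="/ci/scanner/password",  # hypothetical parameter name
        Value="s3cr3t",
        Type="SecureString",
        Overwrite=True,
    )

    # Decryption happens server-side when the value is read back.
    password = ssm.get_parameter(
        Name="/ci/scanner/password", WithDecryption=True
    )["Parameter"]["Value"]
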
Question #49

A company wants to use Amazon DynamoDB for maintaining metadata on its forums.

See the sample data set in the image below.

A DevOps Engineer is required to define the table schema with the partition key, the sort key, the local secondary index, projected attributes, and fetch operations.

The schema should support the following example searches using the least provisioned read capacity units to minimize cost.

- Search within ForumName for items where the subject starts with 'a'.

- Search forums within the given LastPostDateTime time frame.

- Return the thread value where LastPostDateTime is within the last three months.

Which schema meets the requirements?

  • A . Use Subject as the primary key and ForumName as the sort key. Have LSI with LastPostDateTime as the sort key and fetch operations for thread.
  • B . Use ForumName as the primary key and Subject as the sort key. Have LSI with LastPostDateTime as the sort key and the projected attribute thread.
  • C . Use ForumName as the primary key and Subject as the sort key. Have LSI with Thread as the sort key and the projected attribute LastPostDateTime.
  • D . Use Subject as the primary key and ForumName as the sort key. Have LSI with Thread as the sort key and fetch operations for LastPostDateTime.

Reveal Solution Hide Solution

Correct Answer: B

Explanation:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/LSI.html
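
To make the schema in answer B concrete, here is a hedged boto3 sketch of the two key-condition queries; the table name, LSI name, and values are hypothetical:

    import boto3
    from boto3.dynamodb.conditions import Key

    table = boto3.resource("dynamodb").Table("Forums")  # hypothetical table

    # Search within a forum for subjects starting with "a".
    by_subject = table.query(
        KeyConditionExpression=Key("ForumName").eq("AWS")
        & Key("Subject").begins_with("a")
    )

    # Range query on the LSI sort key for a LastPostDateTime time frame.
    recent = table.query(
        IndexName="LastPostIndex",  # hypothetical LSI name
        KeyConditionExpression=Key("ForumName").eq("AWS")
        & Key("LastPostDateTime").between("2017-01-01", "2017-03-31"),
    )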

Question #50

A web application has been deployed using AWS Elastic Beanstalk. The Application Developers are concerned that they are seeing high latency in two different areas of the application: HTTP client requests to a third-party API, and MySQL client library queries to an Amazon RDS database. A DevOps Engineer must gather trace data to diagnose the issues.

Which steps will gather the trace information with the LEAST amount of changes and performance impacts to the application?

  • A . Add additional logging to the application code. Use the Amazon CloudWatch agent to stream the application logs into Amazon Elasticsearch Service. Query the log data in Amazon ES.
  • B . Instrument the application to use the AWS X-Ray SDK. Post trace data to an Amazon Elasticsearch Service cluster. Query the trace data for calls to the HTTP client and the MySQL client.
  • C . On the AWS Elastic Beanstalk management page for the application, enable the AWS X-Ray daemon. View the trace data in the X-Ray console.
  • D . Instrument the application using the AWS X-Ray SDK. On the AWS Elastic Beanstalk management page for the application, enable the X-Ray daemon. View the trace data in the X-Ray console.

Reveal Solution Hide Solution

Correct Answer: C

Question #51

A retail company is currently hosting a Java-based application in its on-premises data center. Management wants the DevOps Engineer to move this application to AWS. Requirements state that while keeping high availability, infrastructure management should be as simple as possible. Also, during deployments of new application versions, while cost is an important metric, the Engineer needs to ensure that at least half of the fleet is available to handle user traffic.

What option requires the LEAST amount of management overhead to meet these requirements?

  • A . Create an AWS CodeDeploy deployment group and associate it with an Auto Scaling
    group configured to launch instances across subnets in different Availability Zones. Configure an in-place deployment with a CodeDeploy.HalfAtAtime configuration for application deployments.
  • B . Create an AWS Elastic Beanstalk Java-based environment using Auto Scaling and load balancing. Configure the network setting for the environment to launch instances across subnets in different Availability Zones. Use "Rolling with additional batch" as a deployment strategy with a batch size of 50%.
  • C . Create an AWS CodeDeploy deployment group and associate it with an Auto Scaling group configured to launch instances across subnets in different Availability Zones. Configure an in-place deployment with a custom deployment configuration with the MinimumHealthyHosts option set to type FLEET_PERCENT and a value of 50.
  • D . Create an AWS Elastic Beanstalk Java-based environment using Auto Scaling and load balancing. Configure the network options for the environment to launch instances across subnets in different Availability Zones. Use "Rolling" as a deployment strategy with a batch size of 50%.

Reveal Solution Hide Solution

Correct Answer: D

Explanation:

"Rolling with additional batch" keeps 100% of capacity available during the deployment, but the requirement here is only 50%. With rolling deployments, Elastic Beanstalk splits the environment’s EC2 instances into batches and deploys the new version of the application to one batch at a time, leaving the rest of the instances in the environment running the old version of the application. https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rolling-version-deploy.html
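
The deployment policy from answer D can also be set through the API. A minimal boto3 sketch, assuming a hypothetical environment name:

    import boto3

    eb = boto3.client("elasticbeanstalk")
    eb.update_environment(
        EnvironmentName="java-app-prod",  # hypothetical environment
        OptionSettings=[
            {"Namespace": "aws:elasticbeanstalk:command",
             "OptionName": "DeploymentPolicy", "Value": "Rolling"},
            {"Namespace": "aws:elasticbeanstalk:command",
             "OptionName": "BatchSizeType", "Value": "Percentage"},
            {"Namespace": "aws:elasticbeanstalk:command",
             "OptionName": "BatchSize", "Value": "50"},
        ],
    )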

Question #52

A DevOps Engineer discovered a sudden spike in a website’s page load times and found that a recent deployment occurred. A brief diff of the related commit shows that the URL for an external API call was altered and the connecting port changed from 80 to 443. The external API has been verified and works outside the application. The application logs show that the connection is now timing out, resulting in multiple retries and eventual failure of the call.

Which debug steps should the Engineer take to determine the root cause of the issue?

  • A . Check the VPC Flow Logs looking for denies originating from Amazon EC2 instances that are part of the web Auto Scaling group. Check the ingress security group rules and routing rules for the VPC.
  • B . Check the existing egress security group rules and network ACLs for the VPC. Also check the application logs being written to Amazon CloudWatch Logs for debug information.
  • C . Check the egress security group rules and network ACLs for the VPC. Also check the VPC flow logs looking for accepts originating from the web Auto Scaling group.
  • D . Check the application logs being written to Amazon CloudWatch Logs for debug information. Check the ingress security group rules and routing rules for the VPC.

Reveal Solution Hide Solution

Correct Answer: C
Question #53

A company runs an application with an Amazon EC2 and on-premises configuration. A DevOps engineer needs to standardize patching across both environments. Company policy dictates that patching only happens during non-business hours.

Which combination of actions will meet these requirements? (Select THREE.)

  • A . Add the physical machines into AWS Systems Manager using Systems Manager Hybrid Activations.
  • B . Attach an IAM role to the EC2 instances, allowing them to be managed by AWS Systems Manager.
  • C . Create IAM access keys for the on-premises machines to interact with AWS Systems Manager.
  • D . Execute an AWS Systems Manager Automation document to patch the systems every hour.
  • E . Use Amazon CloudWatch Events scheduled events to schedule a patch window.
  • F . Use AWS Systems Manager Maintenance Windows to schedule a patch window.

Reveal Solution Hide Solution

Correct Answer: A,B,F
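
For the hybrid piece (option A), on-premises machines register with Systems Manager using an activation code and ID. A hedged boto3 sketch, assuming a hypothetical service role for managed instances:

    import boto3

    ssm = boto3.client("ssm")

    # One-time activation; the code/ID pair is used when registering
    # the on-premises machines with the SSM Agent.
    activation = ssm.create_activation(
        IamRole="SSMServiceRole",  # hypothetical service role
        RegistrationLimit=50,
        Description="Hybrid workstations",
    )
    print(activation["ActivationId"], activation["ActivationCode"])

Registered hybrid machines then appear in the console with an "mi-" prefix.
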
Question #54

A company is creating a software solution that executes a specific parallel-processing mechanism. The software can scale to tens of servers in some special scenarios. This solution uses a proprietary library that is license-based, requiring that each individual server have a single, dedicated license installed. The company has 200 licenses and is planning to run 200 server nodes concurrently at most.

The company has requested the following features:

"¢ A mechanism to automate the use of the licenses at scale. "¢ Creation of a dashboard to use in the future to verify which licenses are available at any moment.

What is the MOST effective way to accomplish these requirements?

  • A . Upload the licenses to a private Amazon S3 bucket. Create an AWS CloudFormation template with a Mappings section for the licenses. In the template, create an Auto Scaling group to launch the servers. In the user data script, acquire an available license from the Mappings section. Create an Auto Scaling lifecycle hook, then use it to update the mapping after the instance is terminated.
  • B . Upload the licenses to an Amazon DynamoDB table. Create an AWS CloudFormation template that uses an Auto Scaling group to launch the servers. In the user data script, acquire an available license from the DynamoDB table. Create an Auto Scaling lifecycle hook, then use it to update the mapping after the instance is terminated.
  • C . Upload the licenses to a private Amazon S3 bucket. Populate an Amazon SQS queue with the list of licenses stored in S3. Create an AWS CloudFormation template that uses an Auto Scaling group to launch the servers. In the user data script acquire an available license from SQS. Create an Auto Scaling lifecycle hook, then use it to put the license back in SQS after the instance is terminated.
  • D . Upload the licenses to an Amazon DynamoDB table. Create an AWS CLI script to launch the servers by using the parameter –count, with min:max instances to launch. In the user data script, acquire an available license from the DynamoDB table. Monitor each instance and, in case of failure, replace the instance, then manually update the DynamoDB table.

Reveal Solution Hide Solution

Correct Answer: D
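
Whichever option is keyed, the heart of a DynamoDB-backed license pool is an atomic claim. A hedged sketch of the user data acquisition step, with a hypothetical licenses table keyed on license_id:

    import boto3
    from botocore.exceptions import ClientError

    table = boto3.resource("dynamodb").Table("licenses")  # hypothetical table

    def acquire_license(instance_id):
        """Claim the first unassigned license; the condition makes the claim atomic."""
        for item in table.scan(ProjectionExpression="license_id")["Items"]:
            try:
                table.update_item(
                    Key={"license_id": item["license_id"]},
                    UpdateExpression="SET assigned_to = :iid",
                    ConditionExpression="attribute_not_exists(assigned_to)",
                    ExpressionAttributeValues={":iid": instance_id},
                )
                return item["license_id"]
            except ClientError as err:
                # Someone else claimed this license first; try the next one.
                if err.response["Error"]["Code"] != "ConditionalCheckFailedException":
                    raise
        raise RuntimeError("no licenses available")
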
Question #55

A company has 100 GB of log data in an Amazon S3 bucket stored in .csv format. SQL developers want to query this data and generate graphs to visualize it. They also need an efficient, automated way to store metadata from the .csv file.

Which combination of steps should be taken to meet these requirements with the LEAST amount of effort? (Select THREE.)

  • A . Filter the data through AWS X-Ray to visualize the data.
  • B . Filter the data through Amazon QuickSight to visualize the data.
  • C . Query the data with Amazon Athena.
  • D . Query the data with Amazon Redshift.
  • E . Use AWS Glue as the persistent metadata store.
  • F . Use Amazon S3 as the persistent metadata store.

Reveal Solution Hide Solution

Correct Answer: B,C,E
Question #56

A Development team is building more than 40 applications. Each app is a three-tiered web application based on an ELB Application Load Balancer, Amazon EC2, and Amazon RDS. Because the applications will be used internally, the Security team wants to allow access to the 40 applications only from the corporate network and block access from external IP addresses. The corporate network reaches the internet through proxy servers. The proxy servers have 12 proxy IP addresses that are being changed one or two times per month. The Network Infrastructure team manages the proxy servers; they upload the file that contains the latest proxy IP addresses into an Amazon S3 bucket. The DevOps Engineer must build a solution to ensure that the applications are accessible from the corporate network.

Which solution achieves these requirements with MINIMAL impact to application development, MINIMAL operational effort, and the LOWEST infrastructure cost?

  • A . Implement an AWS Lambda function to read the list of proxy IP addresses from the S3 object and to update the ELB security groups to allow HTTPS only from the given IP addresses. Configure the S3 bucket to invoke the Lambda function when the object is updated. Save the IP address list to the S3 bucket when they are changed.
  • B . Ensure that all the applications are hosted in the same Virtual Private Cloud (VPC). Otherwise, consolidate the applications into a single VPC. Establish an AWS Direct Connect connection with an active/standby configuration. Change the ELB security groups to allow only inbound HTTPS connections from the corporate network IP addresses.
  • C . Implement a Python script with the AWS SDK for Python (Boto), which downloads the S3 object that contains the proxy IP addresses, scans the ELB security groups, and updates them to allow only HTTPS inbound from the given IP addresses. Launch an EC2 instance and store the script in the instance. Use a cron job to execute the script daily.
  • D . Enable ELB security groups to allow HTTPS inbound access from the Internet. Use Amazon Cognito to integrate the company’s Active Directory as the identity provider. Change the 40 applications to integrate with Amazon Cognito so that only company employees can log into the application. Save the user access logs to Amazon CloudWatch Logs to record user access activities

Reveal Solution Hide Solution

Correct Answer: A
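
A hedged sketch of the Lambda function from answer A, triggered by the S3 object update; the security group ID is hypothetical and the file is assumed to hold one proxy IP per line:

    import boto3

    s3 = boto3.client("s3")
    ec2 = boto3.client("ec2")

    SECURITY_GROUP_ID = "sg-0123456789abcdef0"  # hypothetical ELB security group

    def handler(event, context):
        record = event["Records"][0]["s3"]
        body = s3.get_object(Bucket=record["bucket"]["name"],
                             Key=record["object"]["key"])["Body"].read()
        wanted = {line.strip() + "/32"
                  for line in body.decode().splitlines() if line.strip()}

        sg = ec2.describe_security_groups(GroupIds=[SECURITY_GROUP_ID])["SecurityGroups"][0]
        current = {r["CidrIp"] for p in sg["IpPermissions"]
                   for r in p.get("IpRanges", [])}

        # Add new proxy IPs and remove retired ones, HTTPS only.
        for cidr in wanted - current:
            ec2.authorize_security_group_ingress(
                GroupId=SECURITY_GROUP_ID, IpProtocol="tcp",
                FromPort=443, ToPort=443, CidrIp=cidr)
        for cidr in current - wanted:
            ec2.revoke_security_group_ingress(
                GroupId=SECURITY_GROUP_ID, IpProtocol="tcp",
                FromPort=443, ToPort=443, CidrIp=cidr)
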
Question #57

A government agency is storing highly confidential files in an encrypted Amazon S3 bucket. The agency has configured federated access and has allowed only a particular on-premises Active Directory user group to access this bucket.

The agency wants to maintain audit records and automatically detect and revert any accidental changes administrators make to the IAM policies used for providing this restricted federated access.

Which of the following options provide the FASTEST way to meet these requirements?

  • A . Configure an Amazon CloudWatch Events Event Bus on an AWS CloudTrail API for triggering the AWS Lambda function that detects and reverts the change.
  • B . Configure an AWS Config rule to detect the configuration change and execute an AWS Lambda function to revert the change.
  • C . Schedule an AWS Lambda function that will scan the IAM policy attached to the federated access role for detecting and reverting any changes.
  • D . Restrict administrators in the on-premises Active Directory from changing the IAM policies

Reveal Solution Hide Solution

Correct Answer: B

Explanation:

https://www.puresec.io/blog/aws-security-best-practices-config-rules-lambda-security

"Cloudwatch Event Bus" are used for -> "Sending and Receiving Events Between AWS Accounts" https://aws.amazon.com/about-aws/whats-new/2017/06/cloudwatch-events-adds-cross-account-event-delivery-support/

https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config-rules.html

Question #58

A company is implementing an Amazon ECS cluster to run its workload. The company architecture will run multiple ECS services on the cluster, with an Application Load Balancer on the front end, using multiple target groups to route traffic. The Application Development team has been struggling to collect the logs, which must be sent to an Amazon S3 bucket for near-real-time analysis.

What must the DevOps Engineer configure in the deployment to meet these requirements? (Select THREE)

  • A . Install the Amazon CloudWatch Logs logging agent on the ECS instances. Change the logging driver in the ECS task definition to ‘awslogs’.
  • B . Download the Amazon CloudWatch Logs container instance from AWS and configure it as a task. Update the application service definitions to include the logging task.
  • C . Use Amazon CloudWatch Events to schedule an AWS Lambda function that will run every 60 seconds running the create-export-task CloudWatch Logs command, then point the output to the logging S3 bucket.
  • D . Enable access logging on the Application Load Balancer, then point it directly to the S3 logging bucket.
  • E . Enable access logging on the target groups that are used by the ECS services, then point it directly to the S3 logging bucket.
  • F . Create an Amazon Kinesis Data Firehose with a destination of the S3 logging bucket, then create an Amazon CloudWatch Logs subscription filter for Kinesis

Reveal Solution Hide Solution

Correct Answer: B,D,F
Question #59

A company has multiple child accounts that are part of an organization in AWS Organizations. The security team needs to review every Amazon EC2 security group and their inbound and outbound rules. The security team wants to programmatically retrieve this information from the child accounts using an AWS Lambda function in the master account of the organization.

Which combination of access changes will meet these requirements? (Select THREE.)

  • A . Create a trust relationship that allows users in the child accounts to assume the master account IAM role.
  • B . Create a trust relationship that allows users in the master account to assume the IAM roles of the child accounts.
  • C . Create an IAM role in each child account that has access to the AmazonEC2ReadOnlyAccess managed policy.
  • D . Create an IAM role in each child account to allow the sts:AssumeRole action against the master account IAM role’s ARN.
  • E . Create an IAM role in the master account that allows the sts:AssumeRole action against the child account IAM role’s ARN.
  • F . Create an IAM role in the master account that has access to the AmazonEC2ReadOnlyAccess managed policy.

Reveal Solution Hide Solution

Correct Answer: B,C,E
Question #60

Your application is currently running on Amazon EC2 instances behind a load balancer. Your management has decided to use a Blue/Green deployment strategy.

How should you implement this for each deployment?

  • A . Set up Amazon Route 53 health checks to fail over from any Amazon EC2 instance that is currently being deployed to.
  • B . Using AWS CloudFormation, create a test stack for validating the code, and then deploy the code to each production Amazon EC2 instance.
  • C . Create a new load balancer with new Amazon EC2 instances, carry out the deployment, and then switch DNS over to the new load balancer using Amazon Route 53 after testing.
  • D . Launch more Amazon EC2 instances to ensure high availability, de-register each Amazon EC2 instance from the load balancer, upgrade it, and test it, and then register it again with the load balancer.

Reveal Solution Hide Solution

Correct Answer: C

Explanation:

This can be implemented as follows:

1) First create a new ELB that will be used to point to the new production changes.

2) Use the Route 53 weighted routing policy to distribute the traffic to the two ELBs based on an 80-20% traffic split. This is the normal case; the percentages can be changed based on the requirement.

3) Finally, when all changes have been tested, Route 53 can be set to send 100% of traffic to the new ELB.

Option A is incorrect because this is a failover scenario and cannot be used for Blue green deployments. In Blue Green deployments, you need to have 2 environments running side by side.

Option B is incorrect because you need to have a production stack with the changes running side by side.

Option D is incorrect because this is not a blue/green deployment scenario. You cannot control which users will go to the new EC2 instances.

For more information on blue/green deployments, please refer to the below AWS whitepaper:

https://d0.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf
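
The weighted cut-over described above can be scripted. A minimal boto3 sketch, assuming hypothetical hosted zone, record name, and ELB DNS names:

    import boto3

    r53 = boto3.client("route53")

    def set_weights(blue_weight, green_weight):
        """Shift traffic between the two ELBs by adjusting weighted record sets."""
        changes = [
            {"Action": "UPSERT", "ResourceRecordSet": {
                "Name": "app.example.com.", "Type": "CNAME",
                "SetIdentifier": name, "Weight": weight, "TTL": 60,
                "ResourceRecords": [{"Value": dns}],
            }}
            for name, weight, dns in [
                ("blue", blue_weight, "blue-elb.us-east-1.elb.amazonaws.com"),    # hypothetical
                ("green", green_weight, "green-elb.us-east-1.elb.amazonaws.com"), # hypothetical
            ]
        ]
        r53.change_resource_record_sets(
            HostedZoneId="Z123EXAMPLE",  # hypothetical zone ID
            ChangeBatch={"Changes": changes},
        )

    set_weights(80, 20)   # initial canary split
    # ...after testing...
    set_weights(0, 100)   # cut over fully to the new fleet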


Question #61

An Information Security policy requires that all publicly accessible systems be patched with critical OS security patches within 24 hours of a patch release. All instances are tagged with the Patch Group key set to 0. Two new AWS Systems Manager patch baselines for Windows and Red Hat Enterprise Linux (RHEL) with zero-day delay for security patches of critical severity were created with an auto-approval rule. Patch Group 0 has been associated with the new patch baselines.

Which two steps will automate patch compliance and reporting? (Select TWO.)

  • A . Create an AWS Systems Manager Maintenance Window and add a target with Patch Group 0. Add a task that runs the AWS-InstallWindowsUpdates document with a daily schedule.
  • B . Create an AWS Systems Manager Maintenance Window with a daily schedule and add a target with Patch Group 0. Add a task that runs the AWS-RunPatchBaseline document with the Install action.
  • C . Create an AWS Systems Manager State Manager configuration. Associate the AWS-RunPatchBaseline task with the configuration and add a target with Patch Group 0.
  • D . Create an AWS Systems Manager Maintenance Window and add a target with Patch Group 0. Add a task that runs the AWS-ApplyPatchBaseline document with a daily schedule.
  • E . Use the AWS Systems Manager Run Command to associate the AWS-ApplyPatchBaseline document with instances tagged with Patch Group 0.

Reveal Solution Hide Solution

Correct Answer: A,C
Question #62

A DevOps Engineer needs to deploy a scalable three-tier Node.js application in AWS. The application must have zero downtime during deployments and be able to roll back to previous versions. Other applications will also connect to the same MySQL backend database.

The CIO has provided the following guidance for logging:

* Centrally view all current web access server logs.

* Search and filter web and application logs in near-real time.

* Retain log data for three months.

How should these requirements be met?

  • A . Deploy the application using AWS Elastic Beanstalk. Configure the environment type for Elastic Load Balancing and Auto Scaling. Create an Amazon RDS MySQL instance inside the Elastic Beanstalk stack. Configure the Elastic Beanstalk log options to stream logs to Amazon CloudWatch Logs. Set retention to 90 days.
  • B . Deploy the application on Amazon EC2. Configure Elastic Load Balancing and Auto Scaling. Use an Amazon RDS MySQL instance for the database tier. Configure the application to store log files in Amazon S3. Use Amazon EMR to search and filter the data. Set an Amazon S3 lifecycle rule to expire objects after 90 days.
  • C . Deploy the application using AWS Elastic Beanstalk. Configure the environment type for Elastic Load Balancing and Auto Scaling. Create the Amazon RDS MySQL instance outside the Elastic Beanstalk stack. Configure the Elastic Beanstalk log options to stream logs to Amazon CloudWatch Logs. Set retention to 90 days.
  • D . Deploy the application on Amazon EC2. Configure Elastic Load Balancing and Auto Scaling. Use an Amazon RDS MySQL instance for the database tier. Configure the application to load streaming log data using Amazon Kinesis Data Firehose into Amazon ES. Delete and create a new Amazon ES domain every 90 days.

Reveal Solution Hide Solution

Correct Answer: B
B

Explanation:

https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-debugging.html

Question #63

An application has microservices spread across different AWS accounts and is integrated with an on-premises legacy system for some of its functionality.

Because of the segmented architecture and missing logs, every time the application experiences issues, it is taking too long to gather the logs to identify the issues. A DevOps Engineer must fix the log aggregation process and provide a way to centrally analyze the logs.

Which is the MOST efficient and cost-effective solution?

  • A . Collect system logs and application logs by using the Amazon CloudWatch Logs agent. Use the Amazon S3 API to export on-premises logs, and store the logs in an S3 bucket in a central account. Build an Amazon EMR cluster to reduce the logs and derive the root cause.
  • B . Collect system logs and application logs by using the Amazon CloudWatch Logs agent. Use the Amazon S3 API to import on-premises logs. Store all logs in S3 buckets in individual accounts. Use Amazon Macie to write a query to search for the required specific event-related data point.
  • C . Collect system logs and application logs using the Amazon CloudWatch Logs agent. Install the CloudWatch Logs agent on the on-premises servers. Transfer all logs from AWS to the on-premises data center. Use an Amazon Elasticsearch Logstash Kibana stack to analyze logs on premises.
  • D . Collect system logs and application logs by using the Amazon CloudWatch Logs agent. Install a CloudWatch Logs agent for on-premises resources. Store all logs in an S3 bucket in a central account. Set up an Amazon S3 trigger and an AWS Lambda function to analyze incoming logs and automatically identify anomalies. Use Amazon Athena to run ad hoc queries on the logs in the central account.

Reveal Solution Hide Solution

Correct Answer: D
Question #64

A DevOps Engineer manages an application that has a cross-region failover requirement. The application stores its data in an Amazon Aurora (Amazon RDS) database in the primary region, with a read replica in the secondary region. The application uses Amazon Route 53 to direct customer traffic to the active region.

Which steps should be taken to MINIMIZE downtime if a primary database fails?

  • A . Use Amazon CloudWatch to monitor the status of the RDS instance. In the event of a failure, use a CloudWatch Events rule to send a short message service (SMS) to the Systems Operator using Amazon SNS. Have the Systems Operator redirect traffic to an Amazon S3 static website that displays a downtime message. Promote the RDS read replica to the master. Confirm that the application is working normally, then redirect traffic from the Amazon S3 website to the secondary region.
  • B . Use RDS Event Notification to publish status updates to an Amazon SNS topic. Use an AWS Lambda function subscribed to the topic to monitor database health. In the event of a failure, the Lambda function promotes the read replica, then updates Route 53 to redirect traffic from the primary region to the secondary region.
  • C . Set up an Amazon CloudWatch Events rule to periodically invoke an AWS Lambda function that checks the health of the primary database. If a failure is detected, the Lambda function promotes the read replica. Then, update Route 53 to redirect traffic from the primary to the secondary region.
  • D . Set up Route 53 to balance traffic between both regions equally. Enable the Aurora multi-master option, then set up a Route 53 health check to analyze the health of the databases. Configure Route 53 to automatically direct all traffic to the secondary region when a primary database fails.

Reveal Solution Hide Solution

Correct Answer: B
B

Explanation:

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.html
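
A hedged sketch of the Lambda promotion step from answer B, assuming an Aurora cross-region replica cluster with a hypothetical identifier (a non-Aurora RDS replica would use promote_read_replica instead):

    import boto3

    def handler(event, context):
        """Hypothetical failover handler subscribed to the RDS event SNS topic."""
        rds = boto3.client("rds", region_name="us-west-2")  # secondary region
        rds.promote_read_replica_db_cluster(
            DBClusterIdentifier="app-db-replica"  # hypothetical cluster identifier
        )
        # After promotion, update the Route 53 record set so customer
        # traffic is directed at the secondary region.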

Question #65

Company policies require that information about IP traffic going between instances in the production Amazon VPC is captured. The capturing mechanism must always be enabled and the Security team must be notified when any changes in configuration occur.

What should be done to ensure that these requirements are met?

  • A . Using the UserData section of an AWS CloudFormation template, install tcpdump on every provisioned Amazon EC2 instance. The output of the tool is sent to Amazon EFS for aggregation and querying. In addition, a scheduled Amazon CloudWatch Events rule calls an AWS Lambda function to check whether tcpdump is up and running, and sends an email to the security organization when there is an exception.
  • B . Create a flow log for the production VPC and assign an Amazon S3 bucket as a destination for delivery. Using Amazon S3 Event Notification, set up an AWS Lambda function that is triggered when a new log file gets delivered. This Lambda function updates an entry in Amazon DynamoDB, which is periodically checked by scheduling an Amazon CloudWatch Events rule to notify security when logs have not arrived.
  • C . Create a flow log for the production VPC. Create a new rule using AWS Config that is triggered by configuration changes of resources of type 'EC2:VPC'. As part of configuring the rule, create an AWS Lambda function that looks up flow logs for a given VPC. If the VPC flow logs are not configured, return a 'NON_COMPLIANT' status and notify the security organization.
  • D . Configure a new trail using AWS CloudTrail service. Using the UserData section of an AWS CloudFormation template, install tcpdump on every provisioned Amazon EC2 instance. Connect Amazon Athena to the CloudTrail and write an AWS Lambda function that monitors for a flow log disable event. Once the CloudTrail entry has been spotted, alert the security organization

Reveal Solution Hide Solution

Correct Answer: C
Question #66

A company requires that its internally facing web application be highly available. The architecture is made up of one Amazon EC2 web server instance and one NAT instance that provides outbound internet access for updates and for accessing public data.

Which combination of architecture adjustments should the company implement to achieve high availability? (Select TWO.)

  • A . Add the NAT instance to an EC2 Auto Scaling group that spans multiple Availability Zones. Update the route tables.
  • B . Create additional EC2 instances spanning multiple Availability Zones. Add an Application Load Balancer to split the load between them.
  • C . Configure an Application Load Balancer in front of the EC2 instance. Configure Amazon CloudWatch alarms to recover the EC2 instance upon host failure.
  • D . Replace the NAT instance with a NAT gateway in each Availability Zone. Update the route tables.
  • E . Replace the NAT instance with a NAT gateway that spans multiple Availability Zones. Update the route tables.

Reveal Solution Hide Solution

Correct Answer: A,D
Question #67

A company is running an application on Amazon EC2 instances behind an ELB Application Load Balancer. The instances run in an EC2 Auto Scaling group across multiple Availability Zones.

After a recent application update, users are getting HTTP 502 Bad Gateway errors from the application URL. The DevOps Engineer cannot analyze the problem because Auto Scaling is terminating all EC2 instances shortly after launch for being unhealthy.

What steps will allow the DevOps Engineer access to one of the unhealthy instances to troubleshoot the deployed application?

  • A . Create an image from the terminated instance and create a new instance from that image. The Application team can then log into the new instance.
  • B . As soon as a new instance is created by Auto Scaling, put the instance into a Standby state, as this will prevent the instance from being terminated.
  • C . Add a lifecycle hook to your Auto Scaling group to move instances in the Terminating state to the Terminating:Wait state.
  • D . Edit the Auto Scaling group to enable termination protection as this will protect unhealthy instances from being terminated.

Reveal Solution Hide Solution

Correct Answer: B

Explanation:

https://aws.amazon.com/blogs/aws/auto-scaling-update-lifecycle-standby-detach/
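
Putting an instance into Standby (answer B) is one API call. A minimal boto3 sketch with hypothetical group and instance IDs:

    import boto3

    asg = boto3.client("autoscaling")
    asg.enter_standby(
        AutoScalingGroupName="web-asg",           # hypothetical group name
        InstanceIds=["i-0123456789abcdef0"],      # the unhealthy instance to inspect
        ShouldDecrementDesiredCapacity=True,      # avoid launching a replacement while debugging
    )
    # Call exit_standby with the same arguments to return the instance to service.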

Question #68

During metric analysis, your team has determined that the company’s website is experiencing response times higher than anticipated during peak hours. You currently rely on Auto Scaling to make sure that you are scaling your environment during peak windows.

How can you improve your Auto Scaling policy to reduce this high response time? Choose 2 answers.

  • A . Push custom metrics to CloudWatch to monitor your CPU and network bandwidth from your servers, which will allow your Auto Scaling policy to have better fine-grained insight.
  • B . Increase your Auto Scaling group's number of max servers.
  • C . Create a script that runs and monitors your servers; when it detects an anomaly in load, it posts to an Amazon SNS topic that triggers Elastic Load Balancing to add more servers to the load balancer.
  • D . Push custom metrics to CloudWatch for your application that include more detailed information about your web application, such as how many requests it is handling and how many are waiting to be processed.

Reveal Solution Hide Solution

Correct Answer: B,D

Explanation:

Option B makes sense because the group's maximum size may be too low, in which case Auto Scaling cannot add enough servers to handle the peak load.

Option D helps ensure Auto Scaling can scale the group on the right metrics.

For more information on Auto Scaling health checks, please refer to the below AWS documentation:

http://docs.aws.amazon.com/autoscaling/latest/userguide/healthcheck.html
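
Publishing an application-level custom metric (option D) looks roughly like this; the namespace, metric, and dimension values are hypothetical:

    import boto3

    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_data(
        Namespace="WebApp",  # hypothetical custom namespace
        MetricData=[{
            "MetricName": "PendingRequests",
            "Dimensions": [{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
            "Value": 42.0,
            "Unit": "Count",
        }],
    )

A CloudWatch alarm on such a metric can then drive the scaling policy.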

Question #69

A DevOps Engineer manages a web application that runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances run in an EC2 Auto Scaling group across multiple Availability Zones.

The Engineer needs to implement a deployment strategy that:

Launches a second fleet of instances with the same capacity as the original fleet.

Maintains the original fleet unchanged while the second fleet is launched.

Transitions traffic to the second fleet when the second fleet is fully deployed.

Terminates the original fleet automatically 1 hour after transition.

Which solution will satisfy these requirements?

  • A . Use an AWS CloudFormation template with a retention policy for the ALB set to 1 hour. Update the Amazon Route 53 record to reflect the new ALB.
  • B . Use two AWS Elastic Beanstalk environments to perform a blue/green deployment from the original environment to the new one. Create an application version lifecycle policy to terminate the original environment in 1 hour.
  • C . Use AWS CodeDeploy with a deployment group configured with a blue/green deployment configuration. Select the option Terminate the original instances in the deployment group with a waiting period of 1 hour.
  • D . Use AWS Elastic Beanstalk with the configuration set to Immutable. Create an .ebextension using the Resources key that sets the deletion policy of the ALB to 1 hour, and deploy the application.

Reveal Solution Hide Solution

Correct Answer: B
Question #70

A company is using several AWS CloudFormation templates for deploying infrastructure as code. In most of the deployments, the company uses Amazon EC2 Auto Scaling groups. A DevOps Engineer needs to update the AMIs for the Auto Scaling group in the template if newer AMIs are available.

How can these requirements be met?

  • A . Manage the AMI mappings in the CloudFormation template. Use Amazon CloudWatch Events for detecting new AMIs and updating the mapping in the template. Reference the map in the launch configuration resource block.
  • B . Use conditions in the AWS CloudFormation template to check if new AMIs are available and return the AMI ID. Reference the returned AMI ID in the launch configuration resource block.
  • C . Use an AWS Lambda-backed custom resource in the template to fetch the AMI IDs.
    Reference the returned AMI ID in the launch configuration resource block.
  • D . Launch an Amazon EC2 m4.small instance and run a script on it to check for new AMIs. If new AMIs are available, the script should update the launch configuration resource block with the new AMI ID.

Reveal Solution Hide Solution

Correct Answer: C

Explanation:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/walkthrough-custom-resources-lambda-lookup-amiids.html
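
A hedged sketch of the Lambda-backed custom resource handler from answer C; the AMI name filter is illustrative, and the cfnresponse helper is available when the function code is defined inline in the template:

    import boto3
    import cfnresponse  # bundled for inline (ZipFile) CloudFormation Lambda code

    def handler(event, context):
        try:
            if event["RequestType"] == "Delete":
                cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
                return
            # Find the newest matching AMI (hypothetical name pattern).
            images = boto3.client("ec2").describe_images(
                Owners=["amazon"],
                Filters=[{"Name": "name", "Values": ["amzn2-ami-hvm-*-x86_64-gp2"]}],
            )["Images"]
            latest = max(images, key=lambda i: i["CreationDate"])
            cfnresponse.send(event, context, cfnresponse.SUCCESS,
                             {"ImageId": latest["ImageId"]})
        except Exception:
            cfnresponse.send(event, context, cfnresponse.FAILED, {})

The template can then reference the result with Fn::GetAtt on the custom resource (e.g., its ImageId attribute) in the launch configuration block.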

Question #71

A company wants to use a grid system for a proprietary enterprise in-memory data store on top of AWS. This system can run in multiple server nodes in any Linux-based distribution. The system must be able to reconfigure the entire cluster every time a node is added or removed. When adding or removing nodes, an /etc/cluster/nodes.config file must be updated, listing the IP addresses of the current node members of that cluster.

The company wants to automate the task of adding new nodes to a cluster.

What can a DevOps Engineer do to meet these requirements?

  • A . Use AWS OpsWorks Stacks to layer the server nodes of that cluster. Create a Chef recipe that populates the content of the /etc/cluster/nodes.config file and restarts the service by using the current members of the layer. Assign that recipe to the Configure lifecycle event.
  • B . Put the file nodes.config in version control. Create an AWS CodeDeploy deployment configuration and deployment group based on an Amazon EC2 tag value for the cluster nodes. When adding a new node to the cluster, update the file with all tagged instances, and make a commit in version control. Deploy the new file and restart the services.
  • C . Create an Amazon S3 bucket and upload a version of the /etc/cluster/nodes.config file. Create a crontab script that will poll for that S3 file and download it frequently. Use a process manager, such as Monit or systemd, to restart the cluster services when it detects that the new file was modified. When adding a node to the cluster, edit the file to reflect the most recent members, and upload the new file to the S3 bucket.
  • D . Create a user data script that lists all members of the current security group of the cluster and automatically updates the /etc/cluster/nodes.config file whenever a new instance is added to the cluster

Reveal Solution Hide Solution

Correct Answer: A

Explanation:

https://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-events.html

Question #72

A business has an application that consists of five independent AWS Lambda functions.

The DevOps Engineer has built a CI/CD pipeline using AWS CodePipeline and AWS CodeBuild that builds, tests, packages, and deploys each Lambda function in sequence. The pipeline uses an Amazon CloudWatch Events rule to ensure the pipeline execution starts as quickly as possible after a change is made to the application source code.

After working with the pipeline for a few months, the DevOps Engineer has noticed the pipeline takes too long to complete.

What should the DevOps Engineer implement to BEST improve the speed of the pipeline?

  • A . Modify the CodeBuild projects within the pipeline to use a compute type with more available network throughput.
  • B . Create a custom CodeBuild execution environment that includes a symmetric multiprocessing configuration to run the builds in parallel.
  • C . Modify the CodePipeline configuration to execute actions for each Lambda function in parallel by specifying the same runOrder.
  • D . Modify each CodeBuild project to run within a VPC and use dedicated instances to increase throughput.

Reveal Solution Hide Solution

Correct Answer: C

Explanation:

https://aws.amazon.com/blogs/devops/using-aws-codepipeline-aws-codebuild-and-aws-lambda-for-serverless-automated-ui-testing/
https://docs.aws.amazon.com/codepipeline/latest/userguide/reference-pipeline-structure.html
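
Answer C works because actions in the same stage that share a runOrder value execute in parallel. A hedged boto3 sketch that flattens a hypothetical Deploy stage:

    import boto3

    cp = boto3.client("codepipeline")
    p = cp.get_pipeline(name="five-lambdas")["pipeline"]  # hypothetical pipeline name

    deploy = next(s for s in p["stages"] if s["name"] == "Deploy")
    for action in deploy["actions"]:
        action["runOrder"] = 1  # same runOrder => actions run in parallel

    cp.update_pipeline(pipeline=p)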

Question #73

A company is developing a web application’s infrastructure using AWS CloudFormation. The database engineering team maintains the database resources in a CloudFormation template, and the software development team maintains the web application resources in a separate CloudFormation template. As the scope of the application grows, the software development team needs to use resources maintained by the database engineering team. However, both teams have their own review and lifecycle management processes that they want to keep. Both teams also require resource-level change-set reviews. The software development team would like to deploy changes to this template using their CI/CD pipeline.

Which solution will meet these requirements?

  • A . Create a stack export from the database CloudFormation template and import those references into the web application CloudFormation template
  • B . Create a CloudFormation nested stack to make cross-stack resource references and parameters available in both stacks.
  • C . Create a CloudFormation stack set to make cross-stack resource references and parameters available in both stacks
  • D . Create input parameters in the web application CloudFormation template and pass resource names and IDs from the database stack.

Reveal Solution Hide Solution

Correct Answer: A
Question #74

A company is implementing AWS CodePipeline to automate its testing process.

The company wants to be notified when the execution state fails, and uses the following custom event pattern in Amazon CloudWatch:

Which type of events will match this event pattern?

  • A . Failed deploy and build actions across all the pipelines.
  • B . All rejected or failed approval actions across all the pipelines.
  • C . All the events across all pipelines.
  • D . Approval actions across all the pipelines.

Reveal Solution Hide Solution

Correct Answer: B

Explanation:

https://docs.aws.amazon.com/codepipeline/latest/userguide/detect-state-changes-cloudwatch-events.html

Question #75

A company develops and maintains a web application using Amazon EC2 instances and an Amazon RDS for SQL Server DB instance in a single Availability Zone. The resources need to run only when new deployments are being tested using AWS CodePipeline.

Testing occurs one or more times a week, and each test takes 2-3 hours to run. A DevOps engineer wants a solution that does not change the architecture components.

Which solution will meet these requirements in the MOST cost-effective manner?

  • A . Convert the RDS database to an Amazon Aurora Serverless database. Use an AWS Lambda function to start and stop the EC2 instances before and after tests.
  • B . Put the EC2 instances into an Auto Scaling group. Schedule scaling to run at the start of the deployment tests.
  • C . Replace the EC2 instances with EC2 Spot Instances and the RDS database with an RDS Reserved Instance.
  • D . Subscribe Amazon CloudWatch Events to CodePipeline to trigger AWS Systems Manager Automation documents that start and stop all EC2 and RDS instances before and after deployment tests.

Reveal Solution Hide Solution

Correct Answer: D
Question #76

A DevOps engineer notices that all Amazon EC2 instances running behind an Application Load Balancer in an Auto Scaling group are failing to respond to user requests. The EC2 instances are also failing target group HTTP health checks.

Upon inspection, the engineer notices the application process was not running in any EC2 instances. There are a significant number of out of memory messages in the system logs. The engineer needs to improve the resilience of the application to cope with a potential application memory leak. Monitoring and notifications should be enabled to alert when there is an issue.

Which combination of actions will meet these requirements? (Select TWO.)

  • A . Change the Auto Scaling configuration to replace the instances when they fail the load balancer's health checks.
  • B . Change the target group health check HealthCheckIntervalSeconds parameter to reduce the interval between health checks.
  • C . Change the target group health checks from HTTP to TCP to check if the port where the application is listening is reachable.
  • D . Enable the available memory consumption metric within the Amazon CloudWatch dashboard for the entire Auto Scaling group. Create an alarm when the memory utilization is high. Associate an Amazon SNS topic to the alarm to receive notifications when the alarm goes off.
  • E . Use the Amazon CloudWatch agent to collect the memory utilization of the EC2 instances in the Auto Scaling group. Create an alarm when the memory utilization is high and associate an Amazon SNS topic to receive a notification.

Reveal Solution Hide Solution

Correct Answer: B,E
Question #77

A company mandates capturing logs for everything running in its AWS account. The account has multiple VPCs with Amazon EC2 instances, Application Load Balancers, Amazon RDS MySQL databases, and AWS WAF rules configured. The logs must be protected from deletion. A daily visual analysis of log anomalies from the previous day is required.

Which combination of actions should a DevOps Engineer take to accomplish this? (Choose three.)

  • A . Configure an AWS Lambda function to send all CloudWatch logs to an Amazon S3 bucket. Create a dashboard report in Amazon QuickSight.
  • B . Configure AWS CloudTrail to send all logs to Amazon Inspector. Create a dashboard report in Amazon QuickSight.
  • C . Configure Amazon S3 MFA Delete on the logging Amazon S3 bucket.
  • D . Configure an Amazon S3 object lock legal hold on the logging Amazon S3 bucket.
  • E . Configure AWS Artifact to send all logs to the logging Amazon S3 bucket. Create a dashboard report in Amazon QuickSight.
  • F . Deploy an Amazon CloudWatch agent to all Amazon EC2 instances.

Reveal Solution Hide Solution

Correct Answer: A,D,F
Question #78

Two teams are working together on different portions of an architecture and are using AWS CloudFormation to manage their resources. One team administers operating system-level updates and patches, while the other team manages application-level dependencies and updates. The Application team must use the most recent AMI when creating new instances and deploying the application.

What is the MOST scalable method for linking these two teams and processes?

  • A . The Operating System team uses CloudFormation to create new versions of their AMIs and lists the Amazon Resource Names (ARNs) of the AMIs in an encrypted Amazon S3 object as part of the stack output section. The Application team uses a cross-stack reference to load the encrypted S3 object and obtain the most recent AMI ARNs.
  • B . The Operating System team uses a CloudFormation stack to create an AWS CodePipeline pipeline that builds new AMIs, then places the latest AMI ARNs in an encrypted Amazon S3 object as part of the pipeline output. The Application team uses a cross-stack reference within their own CloudFormation template to get that S3 object location and obtain the most recent AMI ARNs to use when deploying their application.
  • C . The Operating System team uses a CloudFormation stack to create an AWS CodePipeline pipeline that builds new AMIs. The team then places the AMI ARNs as parameters in AWS Systems Manager Parameter Store as part of the pipeline output. The Application team specifies a parameter of type ssm in their CloudFormation stack to obtain the most recent AMI ARN from the Parameter Store.
  • D . The Operating System team maintains a nested stack that includes both the operating system and Application team templates. The Operating System team uses a stack update to deploy updates to the application stack whenever the Application team changes the application code.

Reveal Solution Hide Solution

Correct Answer: C
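
Cross-stack references import stack output values rather than reading S3 objects, which is why the Parameter Store pattern in answer C is the scalable link between the teams. A hypothetical fragment showing it in practice; the parameter path /golden-ami/latest and stack name are placeholders, and in practice Parameter Store holds the AMI ID rather than an ARN:

```python
import boto3

# The LatestAmiId parameter resolves the value stored under /golden-ami/latest
# in Parameter Store at stack create/update time.
template = """
Parameters:
  LatestAmiId:
    Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>
    Default: /golden-ami/latest
Resources:
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref LatestAmiId
      InstanceType: t3.micro
"""

boto3.client("cloudformation").create_stack(
    StackName="app-stack", TemplateBody=template
)
```
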
Question #79

You recently deployed an application on EC2 instances behind an ELB. After a couple of weeks, customers began complaining about receiving errors from the application. You want to diagnose the errors and try to retrieve them from the ELB access logs, but the access logs are empty.

What is the reason for this?

  • A . You do not have the appropriate permissions to access the logs
  • B . You do not have your CloudWatch metrics correctly configured
  • C . ELB Access logs are only available for a maximum of one week.
  • D . Access logging is an optional feature of Elastic Load Balancing that is disabled by default

Reveal Solution Hide Solution

Correct Answer: D

Explanation:

Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and to troubleshoot issues.

Access logging is an optional feature of Elastic Load Balancing that is disabled by default. After you enable access logging for your load balancer, Elastic Load Balancing captures the logs and stores them in the Amazon S3 bucket that you specify. You can disable access logging at any time.

For more information on CLB access logs, please refer to the AWS documentation:

✑ http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/access-log-collection.html
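
For illustration, access logging on a Classic Load Balancer can be enabled with a call along these lines; the load balancer and bucket names are placeholders, and the bucket policy must already grant the regional ELB account write access:

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

elb.modify_load_balancer_attributes(
    LoadBalancerName="my-classic-elb",
    LoadBalancerAttributes={
        "AccessLog": {
            "Enabled": True,
            "S3BucketName": "my-elb-access-logs",
            "S3BucketPrefix": "prod",
            "EmitInterval": 60,   # publish logs every 60 minutes (5 also valid)
        }
    },
)
```
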

Question #80

A consulting company was hired to assess security vulnerabilities within a client company’s application and propose a plan to remediate all identified issues. The architecture is identified as follows: Amazon S3 storage for content, an Auto Scaling group of Amazon EC2 instances behind an Elastic Load Balancer with attached Amazon EBS storage, and an Amazon RDS MySQL database. There are also several AWS Lambda functions that communicate directly with the RDS database using connection string statements in the code.

The consultants identified the top security threat as follows: the application is not meeting its requirement to have encryption at rest.

What solution will address this issue with the LEAST operational overhead and will provide monitoring for potential future violations?

  • A . Enable SSE encryption on the S3 buckets and RDS database. Enable OS-based encryption of data on EBS volumes. Configure Amazon Inspector agents on EC2 instances to report on insecure encryption ciphers. Set up AWS Config rules to periodically check for non-encrypted S3 objects.
  • B . Configure the application to encrypt each file prior to storing on Amazon S3. Enable OS-based encryption of data on EBS volumes. Encrypt data on write to RDS. Run cron jobs on each instance to check for encrypted data and notify via Amazon SNS. Use S3 Events to call an AWS Lambda function and verify if the file is encrypted.
  • C . Enable Secure Sockets Layer (SSL) on the load balancer, ensure that AWS Lambda is using SSL to communicate to the RDS database, and enable S3 encryption. Configure the application to force SSL for incoming connections and configure RDS to only grant access if the session is encrypted. Configure Amazon Inspector agents on EC2 instances to report on insecure encryption ciphers.
  • D . Enable SSE encryption on the S3 buckets, EBS volumes, and the RDS database. Store RDS credentials in EC2 Parameter Store. Enable a policy on the S3 bucket to deny unencrypted puts. Set up AWS Config rules to periodically check for non-encrypted S3 objects and EBS volumes, and to ensure that RDS storage is encrypted.

Reveal Solution Hide Solution

Correct Answer: D
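
A minimal sketch of the “deny unencrypted puts” piece of answer D, assuming SSE-S3 (AES256); the bucket name is a placeholder, and for SSE-KMS the condition value would be aws:kms:

```python
import json
import boto3

# Deny any PutObject request that does not ask for server-side encryption.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedPuts",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::my-content-bucket/*",
        "Condition": {
            "StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}
        },
    }],
}

boto3.client("s3").put_bucket_policy(
    Bucket="my-content-bucket", Policy=json.dumps(policy)
)
```
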

Question #81

A DevOps team manages an API running on-premises that serves as a backend for an Amazon API Gateway endpoint. Customers have been complaining about high response latencies, which the development team has verified using the API Gateway latency metrics in Amazon CloudWatch. To identify the cause, the team needs to collect relevant data without introducing additional latency.

Which actions should be taken to accomplish this? (Select TWO.)

  • A . Install the CloudWatch agent server side and configure the agent to upload relevant logs to CloudWatch.
  • B . Enable AWS X-Ray tracing in API Gateway, modify the application to capture request segments, and upload those segments to X-Ray during each request.
  • C . Enable AWS X-Ray tracing in API Gateway, modify the application to capture request segments, and use the X-Ray daemon to upload segments to X-Ray.
  • D . Modify the on-premises application to send log information back to API Gateway with each request.
  • E . Modify the on-premises application to calculate and upload statistical data relevant to the API service requests to CloudWatch metrics.

Reveal Solution Hide Solution

Correct Answer: A,C
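
A sketch of the instrumentation in answer C using the aws-xray-sdk package; the service name and handler are placeholders. Segments are sent over UDP to the locally running X-Ray daemon, which batches and uploads them asynchronously, so no upload latency is added to the request path:

```python
from aws_xray_sdk.core import xray_recorder  # pip install aws-xray-sdk

def handle_request():
    """Placeholder for the real on-premises request handler."""

# Point the SDK at the local daemon's default UDP endpoint.
xray_recorder.configure(
    daemon_address="127.0.0.1:2000",
    service="onprem-backend",          # hypothetical service name
)

xray_recorder.begin_segment("handle_request")
try:
    handle_request()
finally:
    xray_recorder.end_segment()
```
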
Question #82

A company’s application is running on Amazon EC2 instances in an Auto Scaling group. A DevOps engineer needs to ensure there are at least four application servers running at all times. Whenever an update has to be made to the application, the engineer creates a new AMI with the updated configuration and updates the AWS CloudFormation template with the new AMI ID. After the stack update finishes, the engineer manually terminates the old instances one by one, verifying that each new instance is operational before proceeding. The engineer needs to automate this process.

Which action will allow for the LEAST number of manual steps moving forward?

  • A . Update the CloudFormation template to include the UpdatePolicy attribute with the AutoScalingRollingUpdate policy.
  • B . Update the CloudFormation template to include the UpdatePolicy attribute with the AutoScalingReplacingUpdate policy.
  • C . Use an Auto Scaling lifecycle hook to verify that the previous instance is operational before allowing the DevOps engineer’s selected instance to terminate.
  • D . Use an Auto Scaling lifecycle hook to confirm there are at least four running instances before allowing the DevOps engineer’s selected instance to terminate.

Reveal Solution Hide Solution

Correct Answer: A
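
A hedged fragment of what answer A adds to the template; the batch size, pause time, and minimum-in-service values are assumptions to be tuned against the four-server requirement:

```python
# Fragment only; merge into the existing Auto Scaling group definition.
update_policy_fragment = """
  AppGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    UpdatePolicy:
      AutoScalingRollingUpdate:
        MinInstancesInService: 4      # never drop below four application servers
        MaxBatchSize: 1               # replace one instance at a time
        PauseTime: PT15M
        WaitOnResourceSignals: true   # wait for cfn-signal from each new instance
"""
```
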
Question #83

A Development team is currently using AWS CodeDeploy to deploy an application revision to an Auto Scaling group. If the deployment process fails, it must be rolled back automatically and a notification must be sent.

What is the MOST effective configuration that can satisfy all of the requirements?

  • A . Create Amazon CloudWatch Events rules for CodeDeploy operations. Configure a CloudWatch Events rule to send out an Amazon SNS message when the deployment fails. Configure CodeDeploy to automatically roll back when the deployment fails.
  • B . Use available Amazon CloudWatch metrics for CodeDeploy to create CloudWatch alarms. Configure CloudWatch alarms to send out an Amazon SNS message when the deployment fails. Use AWS CLI to redeploy a previously deployed revision.
  • C . Configure a CodeDeploy agent to create a trigger that will send notification to Amazon SNS topics when the deployment fails. Configure CodeDeploy to automatically roll back when the deployment fails.
  • D . Use AWS CloudTrail to monitor API calls made by or on behalf of CodeDeploy in the AWS account. Send an Amazon SNS message when deployment fails. Use AWS CLI to redeploy a previously deployed revision.

Reveal Solution Hide Solution

Correct Answer: C

Explanation:

https://docs.aws.amazon.com/codedeploy/latest/userguide/monitoring-sns-event-notifications-create-trigger.html#monitoring-sns-event-notifications-create-trigger-console
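
In practice, the trigger and the rollback in answer C are configured on the deployment group; a minimal boto3 sketch with placeholder names and ARNs:

```python
import boto3

codedeploy = boto3.client("codedeploy")

# Deployment group with a failure trigger and automatic rollback.
codedeploy.create_deployment_group(
    applicationName="web-app",
    deploymentGroupName="web-app-group",
    serviceRoleArn="arn:aws:iam::111122223333:role/CodeDeployServiceRole",
    autoScalingGroups=["web-app-asg"],
    triggerConfigurations=[{
        "triggerName": "notify-on-failure",
        "triggerTargetArn": "arn:aws:sns:us-east-1:111122223333:deploy-alerts",
        "triggerEvents": ["DeploymentFailure"],
    }],
    autoRollbackConfiguration={"enabled": True, "events": ["DEPLOYMENT_FAILURE"]},
)
```
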

Question #84

An ecommerce company uses a large number of Amazon EBS-backed Amazon EC2 instances. To decrease manual work across all the instances, a DevOps engineer is tasked with automating restart actions when EC2 instance retirement events are scheduled.

How can this be accomplished?

  • A . Create a scheduled Amazon CloudWatch Events rule to execute an AWS Systems Manager automation document that checks if any EC2 instances are scheduled for retirement once a week. If the instance is scheduled for retirement, the automation document will hibernate the instance.
  • B . Enable EC2 Auto Recovery on all of the instances. Create an AWS Config rule to limit the recovery to occur during a maintenance window only.
  • C . Reboot all EC2 instances during an approved maintenance window that is outside of standard business hours. Set up Amazon CloudWatch alarms to send a notification in case any instance is failing EC2 instance status checks.
  • D . Set up an AWS Health Amazon CloudWatch Events rule to execute AWS Systems Manager automation documents that stop and start the EC2 instance when a retirement scheduled event occurs.

Reveal Solution Hide Solution

Correct Answer: D
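
A sketch of answer D: a CloudWatch Events rule matching AWS Health retirement notices for EC2, targeting the AWS-RestartEC2Instance automation document (a stop followed by a start, which moves an EBS-backed instance to new hardware). The role ARN, account ID, and region are placeholders:

```python
import json
import boto3

events = boto3.client("events")

# Match AWS Health retirement notices for EBS-backed EC2 instances.
events.put_rule(
    Name="ec2-retirement-restart",
    EventPattern=json.dumps({
        "source": ["aws.health"],
        "detail-type": ["AWS Health Event"],
        "detail": {
            "service": ["EC2"],
            "eventTypeCode": ["AWS_EC2_PERSISTENT_INSTANCE_RETIREMENT_SCHEDULED"],
        },
    }),
)

# Hand the affected instance ID to the automation document; the role must
# allow events.amazonaws.com to start the automation execution.
events.put_targets(
    Rule="ec2-retirement-restart",
    Targets=[{
        "Id": "restart-instance",
        "Arn": "arn:aws:ssm:us-east-1:111122223333:automation-definition/AWS-RestartEC2Instance:$DEFAULT",
        "RoleArn": "arn:aws:iam::111122223333:role/EventsAutomationRole",
        "InputTransformer": {
            "InputPathsMap": {"instance": "$.resources[0]"},
            "InputTemplate": '{"InstanceId": ["<instance>"]}',
        },
    }],
)
```
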
Question #85

A DevOps Engineer is building a continuous deployment pipeline for a serverless application using AWS CodePipeline and AWS CodeBuild. The source, build, and test stages have been created with the deploy stage remaining. The company wants to reduce the risk of an unsuccessful deployment by deploying to a specified subset of customers and monitoring prior to a full release to all customers.

How should the deploy stage be configured to meet these requirements?

  • A . Use AWS CloudFormation to publish a new version on every stack update. Then set up a CodePipeline approval action for a Developer to test and approve the new version. Finally, use a CodePipeline invoke action to update an AWS Lambda function to use the production alias
  • B . Use CodeBuild to use the AWS CLI to update the AWS Lambda function code, then publish a new version of the function and update the production alias to point to the new version of the function.
  • C . Use AWS CloudFormation to define the serverless application and AWS CodeDeploy to deploy the AWS Lambda functions using DeploymentPreference: Canary10Percent15Minutes.
  • D . Use AWS CloudFormation to publish a new version on every stack update. Use the RoutingConfig property of the AWS::Lambda::Alias resource to update the traffic routing during the stack update.

Reveal Solution Hide Solution

Correct Answer: C

Explanation:

https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/automating-updates-to-serverless-apps.html
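
A fragment of an AWS SAM template illustrating answer C; the function properties and the ErrorsAlarm reference are placeholders. CodeDeploy shifts 10% of traffic to the new version, monitors for 15 minutes, then shifts the remainder, rolling back if an alarm fires:

```python
# Fragment only; drop into a template that declares
# Transform: AWS::Serverless-2016-10-31.
sam_fragment = """
Resources:
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler            # placeholder handler
      Runtime: python3.9
      AutoPublishAlias: live          # publish a new version and shift the alias
      DeploymentPreference:
        Type: Canary10Percent15Minutes
        Alarms:
          - !Ref ErrorsAlarm          # hypothetical alarm that triggers rollback
"""
```
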

Question #86

A company is using an AWS CloudFormation template to deploy web applications. The template requires that manual changes be made for each of the three major environments: production, staging, and development. The current sprint includes the new implementation and configuration of AWS CodePipeline for automated deployments.

What changes should the DevOps Engineer make to ensure that the CloudFormation template is reusable across multiple pipelines?

  • A . Use a CloudFormation custom resource to query the status of the CodePipeline to determine which environment is launched. Dynamically alter the launch configuration of the Amazon EC2 instances.
  • B . Set up a CodePipeline pipeline for each environment to use input parameters. Use CloudFormation mappings to switch associated UserData for the Amazon EC2 instances to match the environment being launched.
  • C . Set up a CodePipeline pipeline that has multiple stages, one for each development environment. Use AWS Lambda functions to trigger CloudFormation deployments to dynamically alter the UserData of the Amazon EC2 instances launched in each environment.
  • D . Use CloudFormation input parameters to dynamically alter the LaunchConfiguration and UserData sections of each Amazon EC2 instance every time the CloudFormation stack is updated.

Reveal Solution Hide Solution

Correct Answer: B

Explanation:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline-parameter-override-functions.html
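
A hypothetical fragment of answer B: each pipeline passes an Environment parameter override, and Mappings select the per-environment values (here, instance type) without any manual template edits. The AMI ID and instance types are placeholders:

```python
# Fragment only; one template reused across all three pipelines.
template_fragment = """
Parameters:
  Environment:
    Type: String
    AllowedValues: [development, staging, production]
Mappings:
  EnvConfig:
    development: { InstanceType: t3.small }
    staging:     { InstanceType: t3.medium }
    production:  { InstanceType: m5.large }
Resources:
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !FindInMap [EnvConfig, !Ref Environment, InstanceType]
      ImageId: ami-0abcdef1234567890
"""
```
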

Question #87

A company uses AWS KMS with CMKs and manual key rotation to meet regulatory compliance requirements. The security team wants to be notified when any keys have not been rotated after 90 days.

Which solution will accomplish this?

  • A . Configure AWS KMS to publish to an Amazon SNS topic when keys are more than 90 days old.
  • B . Configure an Amazon CloudWatch Events event to launch an AWS Lambda function to call the AWS Trusted Advisor API and publish to an Amazon SNS topic
  • C . Develop an AWS Config custom rule that publishes to an Amazon SNS topic when keys are more than 90 days old
  • D . Configure AWS Security Hub to publish to an Amazon SNS topic when keys are more than 90 days old.

Reveal Solution Hide Solution

Correct Answer: C
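
AWS KMS has no native feature that publishes to Amazon SNS based on key age, which is why a Config custom rule is the workable approach; the SNS notification can then come from the rule’s compliance-change events. A minimal sketch of the rule’s Lambda handler: because rotation here is manual, the sketch approximates “last rotated” with the key’s creation date, and a real implementation would track rotation dates via tags or an inventory store:

```python
import datetime
import json
import boto3

MAX_AGE_DAYS = 90

def lambda_handler(event, context):
    kms = boto3.client("kms")
    config = boto3.client("config")

    # Assumes the rule is triggered by configuration changes on
    # AWS::KMS::Key resources.
    item = json.loads(event["invokingEvent"])["configurationItem"]
    key_id = item["resourceId"]

    created = kms.describe_key(KeyId=key_id)["KeyMetadata"]["CreationDate"]
    age = datetime.datetime.now(datetime.timezone.utc) - created
    compliance = "NON_COMPLIANT" if age.days > MAX_AGE_DAYS else "COMPLIANT"

    config.put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": "AWS::KMS::Key",
            "ComplianceResourceId": key_id,
            "ComplianceType": compliance,
            "OrderingTimestamp": datetime.datetime.now(datetime.timezone.utc),
        }],
        ResultToken=event["resultToken"],
    )
```
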
Question #88

An application is deployed on Amazon EC2 instances running in an Auto Scaling group. During the bootstrapping process, the instances register their private IP addresses with a monitoring system. The monitoring system performs health checks frequently by sending ping requests to those IP addresses and sending alerts if an instance becomes non-responsive.

The existing deployment strategy replaces the current EC2 instances with new ones. A DevOps engineer has noticed that the monitoring system is sending false alarms during a deployment, and is tasked with stopping these false alarms.

Which solution will meet these requirements without affecting the current deployment method?

  • A . Define an Amazon CloudWatch Events target, an AWS Lambda function, and a lifecycle hook attached to the Auto Scaling group. Configure CloudWatch Events to invoke Amazon SNS to send a message to the systems administrator group for remediation.
  • B . Define an AWS Lambda function and a lifecycle hook attached to the Auto Scaling group. Configure the lifecycle hook to invoke the Lambda function, which removes the entry of the private IP from the monitoring system upon instance termination.
  • C . Define an Amazon CloudWatch Events target, an AWS Lambda function, and a lifecycle hook attached to the Auto Scaling group. Configure CloudWatch Events to invoke the Lambda function, which removes the entry of the private IP from the monitoring system upon instance termination.
  • D . Define an AWS Lambda function that will run a script when instance termination occurs in an Auto Scaling group. The script will remove the entry of the private IP from the monitoring system.

Reveal Solution Hide Solution

Correct Answer: C
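
A sketch of answer C: a termination lifecycle hook pauses the instance, a CloudWatch Events rule matching “EC2 Instance-terminate Lifecycle Action” invokes Lambda, and the function removes the private IP from the monitoring system before letting termination proceed. deregister_from_monitoring is a hypothetical stand-in for the monitoring system’s API:

```python
import boto3

def deregister_from_monitoring(private_ip):
    """Placeholder: call the monitoring system's API to remove this IP."""

def lambda_handler(event, context):
    # The lifecycle action event carries the instance and hook identifiers.
    detail = event["detail"]
    instance_id = detail["EC2InstanceId"]

    reservations = boto3.client("ec2").describe_instances(
        InstanceIds=[instance_id]
    )["Reservations"]
    private_ip = reservations[0]["Instances"][0]["PrivateIpAddress"]

    deregister_from_monitoring(private_ip)

    # Signal the Auto Scaling group that termination can now proceed.
    boto3.client("autoscaling").complete_lifecycle_action(
        LifecycleHookName=detail["LifecycleHookName"],
        AutoScalingGroupName=detail["AutoScalingGroupName"],
        LifecycleActionToken=detail["LifecycleActionToken"],
        LifecycleActionResult="CONTINUE",
    )
```
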
Question #89

A development team is using AWS CodeCommit to version control application code and AWS CodePipeline to orchestrate software deployments. The team has decided to use a remote master branch as the trigger for the pipeline to integrate code changes. A developer has pushed code changes to the CodeCommit repository, but noticed that the pipeline had no reaction, even after 10 minutes.

Which of the following actions should be taken to troubleshoot this issue?

  • A . Check that an Amazon CloudWatch Events rule has been created for the master branch to trigger the pipeline.
  • B . Check that the CodePipeline service role has permission to access the CodeCommit repository.
  • C . Check that the developer’s IAM role has permission to push to the CodeCommit repository.
  • D . Check to see if the pipeline failed to start because of CodeCommit errors in Amazon CloudWatch Logs.

Reveal Solution Hide Solution

Correct Answer: A
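
Because the push itself succeeded, the developer’s repository permissions are not in question; a pipeline triggered by the master branch relies on a CloudWatch Events rule, which is the first thing to check. A sketch of such a rule and target, with placeholder ARNs:

```python
import json
import boto3

events = boto3.client("events")

# Rule that fires on pushes to the master branch of the repository.
events.put_rule(
    Name="master-branch-trigger",
    EventPattern=json.dumps({
        "source": ["aws.codecommit"],
        "detail-type": ["CodeCommit Repository State Change"],
        "resources": ["arn:aws:codecommit:us-east-1:111122223333:app-repo"],
        "detail": {"referenceType": ["branch"], "referenceName": ["master"]},
    }),
)

# Start the pipeline; the role must allow codepipeline:StartPipelineExecution.
events.put_targets(
    Rule="master-branch-trigger",
    Targets=[{
        "Id": "start-pipeline",
        "Arn": "arn:aws:codepipeline:us-east-1:111122223333:app-pipeline",
        "RoleArn": "arn:aws:iam::111122223333:role/EventsStartPipelineRole",
    }],
)
```
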
Question #90

A company is beginning to move to the AWS Cloud. Internal customers are classified into two groups according to their AWS skills: beginners and experts.

The DevOps Engineer needs to build a solution to allow beginners to deploy a restricted set of AWS architecture blueprints expressed as AWS CloudFormation templates. Deployment should only be possible in predetermined Virtual Private Clouds (VPCs). However, expert users should be able to deploy blueprints without constraints. Experts should also be able to access other AWS services, as needed.

How can the Engineer implement a solution to meet these requirements with the LEAST amount of overhead?

  • A . Apply constraints to the parameters in the templates, limiting the VPCs available for deployments. Store the templates on Amazon S3. Create an IAM group for beginners and give them access to the templates and CloudFormation. Create a separate group for experts, giving them access to the templates, CloudFormation, and other AWS services.
  • B . Store the templates on Amazon S3. Use AWS Service Catalog to create a portfolio of products based on those templates. Apply template constraints to the products with rules limiting VPCs available for deployments. Create an IAM group for beginners giving them access to the portfolio. Create a separate group for experts giving them access to the templates, CloudFormation, and other AWS services.
  • C . Store the templates on Amazon S3. Use AWS Service Catalog to create a portfolio of products based on those templates. Create an IAM role restricting VPCs available for creation of AWS resources. Apply a launch constraint to the products using this role. Create an IAM group for beginners giving them access to the portfolio. Create a separate group for experts giving them access to the portfolio and other AWS services.
  • D . Create two templates for each architecture blueprint where only one of them limits the VPC available for deployments. Store the templates in Amazon DynamoDB. Create an IAM group for beginners giving them access to the constrained templates and CloudFormation. Create a separate group for experts giving them access to the unconstrained templates, CloudFormation, and other AWS services.

Reveal Solution Hide Solution

Correct Answer: B
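
A sketch of the Service Catalog side of answer B: a TEMPLATE constraint applies a CloudFormation rule that restricts the product’s VpcId parameter to approved VPCs. All IDs, ARNs, and the VpcId parameter name are placeholders:

```python
import json
import boto3

sc = boto3.client("servicecatalog")

# CloudFormation rule limiting the VpcId parameter to approved VPCs.
rule = {
    "Rules": {
        "AllowedVpcsRule": {
            "Assertions": [{
                "Assert": {"Fn::Contains": [
                    ["vpc-0aaa1111", "vpc-0bbb2222"], {"Ref": "VpcId"}
                ]},
                "AssertDescription": "Deployments are limited to approved VPCs",
            }]
        }
    }
}

sc.create_constraint(
    PortfolioId="port-abc123",       # placeholder portfolio ID
    ProductId="prod-abc123",         # placeholder product ID
    Type="TEMPLATE",
    Parameters=json.dumps(rule),
)
```
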