A SysOps administrator creates an Amazon Elastic Kubernetes Service (Amazon EKS) cluster that uses AWS Fargate. The cluster is deployed successfully. The SysOps administrator needs to manage the cluster by using the kubectl command line tool.
Which of the following must be configured on the SysOps administrator's machine so that kubectl can communicate with the cluster API server?
- A . The kubeconfig file
- B . The kube-proxy Amazon EKS add-on
- C . The Fargate profile
- D . The eks-connector.yaml file
A
Explanation:
The kubeconfig file is a configuration file used to store cluster authentication information, which is required to make requests to the Amazon EKS cluster API server. The kubeconfig file will need to be configured on the SysOps administrator’s machine in order for kubectl to be able to communicate with the cluster API server.
https://aws.amazon.com/blogs/developer/running-a-kubernetes-job-in-amazon-eks-on-aws-fargate-using-aws-stepfunctions/
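As a sketch of what the kubeconfig entry contains, the cluster endpoint and certificate authority data can be pulled from the EKS API and written to a file (the cluster name, Region, and output path are assumptions; PyYAML is assumed to be installed). In practice, the AWS CLI command `aws eks update-kubeconfig --name <cluster>` performs the same step.

```python
import boto3
import yaml  # PyYAML, assumed installed

cluster_name = "fargate-cluster"  # assumption: the cluster created by the administrator
eks = boto3.client("eks", region_name="us-east-1")
cluster = eks.describe_cluster(name=cluster_name)["cluster"]

kubeconfig = {
    "apiVersion": "v1",
    "kind": "Config",
    "clusters": [{
        "name": cluster["arn"],
        "cluster": {
            "server": cluster["endpoint"],
            "certificate-authority-data": cluster["certificateAuthority"]["data"],
        },
    }],
    "users": [{
        "name": cluster["arn"],
        "user": {
            "exec": {  # kubectl calls the AWS CLI to obtain a short-lived authentication token
                "apiVersion": "client.authentication.k8s.io/v1beta1",
                "command": "aws",
                "args": ["eks", "get-token", "--cluster-name", cluster_name],
            },
        },
    }],
    "contexts": [{"name": cluster["arn"],
                  "context": {"cluster": cluster["arn"], "user": cluster["arn"]}}],
    "current-context": cluster["arn"],
}

with open("kubeconfig.yaml", "w") as f:
    yaml.safe_dump(kubeconfig, f)
```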
A SysOps administrator needs to configure automatic rotation for Amazon RDS database credentials.
The credentials must rotate every 30 days. The solution must integrate with Amazon RDS.
Which solution will meet these requirements with the LEAST operational overhead?
- A . Store the credentials in AWS Systems Manager Parameter Store as a secure string. Configure automatic rotation with a rotation interval of 30 days.
- B . Store the credentials in AWS Secrets Manager. Configure automatic rotation with a rotation interval of 30 days.
- C . Store the credentials in a file in an Amazon S3 bucket. Deploy an AWS Lambda function to automatically rotate the credentials every 30 days.
- D . Store the credentials in AWS Secrets Manager. Deploy an AWS Lambda function to automatically rotate the credentials every 30 days.
B
Explanation:
Storing the credentials in AWS Secrets Manager and configuring automatic rotation with a rotation interval of 30 days is the most efficient way to meet the requirements with the least operational overhead. AWS Secrets Manager automatically rotates the credentials at the specified interval, so there is no need for an additional AWS Lambda function or manual rotation. Additionally, Secrets Manager is integrated with Amazon RDS, so the credentials can be easily used with the RDS database.
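A minimal sketch of enabling the 30-day rotation with boto3 (the secret name and rotation Lambda ARN are assumptions; Secrets Manager provides rotation function templates for RDS for MySQL):

```python
import boto3

secretsmanager = boto3.client("secretsmanager")
secretsmanager.rotate_secret(
    SecretId="prod/app/rds-mysql-credentials",                    # assumed secret name
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:"
                      "function:SecretsManagerRDSMySQLRotation",  # assumed rotation function
    RotationRules={"AutomaticallyAfterDays": 30},                 # rotate every 30 days
)
```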
A company has an application that runs only on Amazon EC2 Spot Instances. The instances run in an Amazon EC2 Auto Scaling group with scheduled scaling actions.
However, the capacity does not always increase at the scheduled times, and instances terminate many times a day. A SysOps administrator must ensure that the instances launch on time and have fewer interruptions.
Which action will meet these requirements?
- A . Specify the capacity-optimized allocation strategy for Spot Instances. Add more instance types to the Auto Scaling group.
- B . Specify the capacity-optimized allocation strategy for Spot Instances. Increase the size of the instances in the Auto Scaling group.
- C . Specify the lowest-price allocation strategy for Spot Instances. Add more instance types to the Auto Scaling group.
- D . Specify the lowest-price allocation strategy for Spot Instances. Increase the size of the instances in the Auto Scaling group.
A
Explanation:
Specifying the capacity-optimized allocation strategy for Spot Instances and adding more instance types to the Auto Scaling group is the best action to meet the requirements. Increasing the size of the instances in the Auto Scaling group will not necessarily help with the launch time or reduce interruptions, as the Spot Instances could still be interrupted even with larger instance sizes.
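A sketch of applying both parts of this answer to an existing Auto Scaling group with boto3 (the group name, launch template name, and instance types are assumptions):

```python
import boto3

autoscaling = boto3.client("autoscaling")
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="spot-app-asg",
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "spot-app-template",
                "Version": "$Latest",
            },
            # More instance types give the allocation strategy more Spot pools to choose from
            "Overrides": [
                {"InstanceType": t}
                for t in ["m5.large", "m5a.large", "m4.large", "c5.large", "r5.large"]
            ],
        },
        "InstancesDistribution": {
            "OnDemandPercentageAboveBaseCapacity": 0,
            # Launch Spot capacity from the pools with the most spare capacity,
            # which reduces the chance of interruption
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)
```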
A company stores its data in an Amazon S3 bucket. The company is required to classify the data and find any sensitive personal information in its S3 files.
Which solution will meet these requirements?
- A . Create an AWS Config rule to discover sensitive personal information in the S3 files and mark them as noncompliant.
- B . Create an S3 event-driven artificial intelligence/machine learning (AI/ML) pipeline to classify sensitive personal information by using Amazon Rekognition.
- C . Enable Amazon GuardDuty. Configure S3 protection to monitor all data inside Amazon S3.
- D . Enable Amazon Macie. Create a discovery job that uses the managed data identifier.
D
Explanation:
Amazon Macie is a security service designed to help organizations find, classify, and protect sensitive data stored in Amazon S3. Amazon Macie uses machine learning to automatically discover, classify, and protect sensitive data in Amazon S3. Creating a discovery job with the managed data identifier will allow Macie to identify sensitive personal information in the S3 files and classify it accordingly. Enabling AWS Config and Amazon GuardDuty will not help with this requirement as they are not designed to automatically classify and protect data.
A company has an application that customers use to search for records on a website. The application’s data is stored in an Amazon Aurora DB cluster. The application’s usage varies by season and by day of the week.
The website’s popularity is increasing, and the website is experiencing slower performance because of increased load on the DB cluster during periods of peak activity. The application logs show that the performance issues occur when users are searching for information. The same search is rarely performed multiple times.
A SysOps administrator must improve the performance of the platform by using a solution that maximizes resource efficiency.
Which solution will meet these requirements?
- A . Deploy an Amazon ElastiCache for Redis cluster in front of the DB cluster. Modify the application to check the cache before the application issues new queries to the database. Add the results of any queries to the cache.
- B . Deploy an Aurora Replica for the DB cluster. Modify the application to use the reader endpoint for search operations. Use Aurora Auto Scaling to scale the number of replicas based on load.
- C . Use Provisioned IOPS on the storage volumes that support the DB cluster to improve performance sufficiently to support the peak load on the application.
- D . Increase the instance size in the DB cluster to a size that is sufficient to support the peak load on the application. Use Aurora Auto Scaling to scale the instance size based on load.
A
Explanation:
Step-by-Step
Understand the Problem:
The application experiences slower performance during peak activity due to increased load on the Amazon Aurora DB cluster.
Performance issues occur primarily during search operations.
The goal is to improve performance and maximize resource efficiency.
Analyze the Requirements:
The solution should improve the performance of the platform.
It should maximize resource efficiency, which implies cost-effective and scalable options.
Evaluate the Options:
Option A: Deploy an Amazon ElastiCache for Redis cluster.
ElastiCache for Redis is a managed in-memory caching service that can significantly reduce the load on the database by caching frequently accessed data.
By modifying the application to check the cache before querying the database, repeated searches for the same information will be served from the cache, reducing the number of database reads.
This is efficient and cost-effective as it reduces database load and improves response times.
Option B: Deploy an Aurora Replica and use Auto Scaling.
Adding Aurora Replicas can help distribute read traffic and improve performance.
Aurora Auto Scaling can adjust the number of replicas based on the load.
However, this option may not be as efficient in terms of resource usage compared to caching because it still involves querying the database.
Option C: Use Provisioned IOPS.
Provisioned IOPS can improve performance by providing fast and consistent I/O.
This option focuses on improving the underlying storage performance but doesn’t address the inefficiency of handling repeated searches directly.
Option D: Increase the instance size and use Auto Scaling.
Increasing the instance size can provide more resources to handle peak loads.
Aurora Auto Scaling can adjust instance sizes based on the load.
This option can be costly and may not be as efficient as caching in handling repeated searches.
Select the Best Solution:
Option A is the best solution because it leverages caching to reduce the load on the database, which directly addresses the issue of repeated searches causing performance problems. Caching is generally more resource-efficient and cost-effective compared to scaling database instances or storage.
Reference: Amazon ElastiCache for Redis Documentation
Amazon Aurora Documentation
AWS Auto Scaling
Using ElastiCache for Redis aligns with best practices for improving application performance by offloading repetitive read queries from the database, leading to faster response times and more efficient resource usage.
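A minimal sketch of the cache-aside pattern described in option A (the ElastiCache endpoint and the `run_aurora_query` helper are assumptions standing in for the application's existing database code):

```python
import json
import redis  # redis-py client, assumed installed

cache = redis.Redis(host="search-cache.xxxxxx.ng.0001.use1.cache.amazonaws.com", port=6379)

def run_aurora_query(term: str) -> list:
    """Placeholder for the application's existing Aurora search query."""
    raise NotImplementedError

def search(term: str) -> list:
    key = f"search:{term}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)                 # cache hit: the database is never touched
    results = run_aurora_query(term)              # cache miss: query Aurora once
    cache.setex(key, 3600, json.dumps(results))   # keep the result for 1 hour
    return results
```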
The security team is concerned because the number of AWS Identity and Access Management (IAM) policies being used in the environment is increasing. The team tasked a SysOps administrator to report on the current number of IAM policies in use and the total available IAM policies.
Which AWS service should the administrator use to check how current IAM policy usage compares to current service limits?
- A . AWS Trusted Advisor
- B . Amazon Inspector
- C . AWS Config
- D . AWS Organizations
A
Explanation:
Step-by-Step
Understand the Problem:
The security team is concerned about the increasing number of IAM policies.
The task is to report on the current number of IAM policies and compare them to the service limits.
Analyze the Requirements:
The solution should help in checking the usage of IAM policies against the service limits.
Evaluate the Options:
Option A: AWS Trusted Advisor
AWS Trusted Advisor provides real-time guidance to help you provision your resources following AWS best practices.
It includes a service limits check that alerts you when you are approaching the limits of your AWS service usage, including IAM policies.
Option B: Amazon Inspector
Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. It does not report on IAM policy usage.
Option C: AWS Config
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. While useful for compliance, it does not provide a comparison against service limits.
Option D: AWS Organizations
AWS Organizations helps you centrally manage and govern your environment as you grow and scale your AWS resources. It does not provide insights into IAM policy limits.
Select the Best Solution:
Option A: AWS Trusted Advisor is the correct answer because it includes a service limits check that can report on the current number of IAM policies in use and compare them to the service limits.
Reference: AWS Trusted Advisor Documentation
IAM Service Limits
AWS Trusted Advisor is the appropriate tool for monitoring IAM policy usage and comparing it against service limits, providing the necessary insights to manage and optimize IAM policies effectively.
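As a complementary programmatic check (Trusted Advisor remains the service the question asks for), the IAM account summary exposes both the current customer managed policy count and the account quota:

```python
import boto3

summary = boto3.client("iam").get_account_summary()["SummaryMap"]
print(f"Customer managed policies in use: {summary['Policies']} of {summary['PoliciesQuota']}")
```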
A company has a stateless application that is hosted on a fleet of 10 Amazon EC2 On-Demand Instances in an Auto Scaling group. A minimum of 6 instances are needed to meet service requirements.
Which action will maintain uptime for the application MOST cost-effectively?
- A . Use a Spot Fleet with an On-Demand capacity of 6 instances.
- B . Update the Auto Scaling group with a minimum of 6 On-Demand Instances and a maximum of 10 On-Demand Instances.
- C . Update the Auto Scaling group with a minimum of 1 On-Demand Instance and a maximum of 6 On-Demand Instances.
- D . Use a Spot Fleet with a target capacity of 6 instances.
A
Explanation:
Step-by-Step
Understand the Problem:
The company has a stateless application on 10 EC2 On-Demand Instances in an Auto Scaling group.
At least 6 instances are needed to meet service requirements.
The goal is to maintain uptime cost-effectively.
Analyze the Requirements:
Maintain a minimum of 6 instances to meet service requirements.
Optimize costs by using a mix of instance types.
Evaluate the Options:
Option A: Use a Spot Fleet with an On-Demand capacity of 6 instances.
Spot Fleets allow you to request a combination of On-Demand and Spot Instances.
Ensuring a minimum of 6 On-Demand Instances guarantees the required capacity while leveraging lower-cost Spot Instances to meet additional demand.
Option B: Update the Auto Scaling group with a minimum of 6 On-Demand Instances and a maximum of 10 On-Demand Instances.
This option ensures the minimum required capacity but does not optimize costs since it only uses On-Demand Instances.
Option C: Update the Auto Scaling group with a minimum of 1 On-Demand Instance and a maximum of 6 On-Demand Instances.
This does not meet the requirement of maintaining at least 6 instances at all times.
Option D: Use a Spot Fleet with a target capacity of 6 instances.
This option relies entirely on Spot Instances, which may not always be available, risking insufficient capacity.
Select the Best Solution:
Option A: Using a Spot Fleet with an On-Demand capacity of 6 instances ensures the necessary uptime with a cost-effective mix of On-Demand and Spot Instances.
Reference: Amazon EC2 Auto Scaling
Amazon EC2 Spot Instances
Spot Fleet Documentation
Using a Spot Fleet with a combination of On-Demand and Spot Instances offers a cost-effective solution while ensuring the required minimum capacity for the application.
A SysOps administrator has launched a large general purpose Amazon EC2 instance to regularly process large data files. The instance has an attached 1 TB General Purpose SSD (gp2) Amazon Elastic Block Store (Amazon EBS) volume. The instance also is EBS-optimized. To save costs, the SysOps administrator stops the instance each evening and restarts the instance each morning.
When data processing is active, Amazon CloudWatch metrics on the instance show a consistent 3,000 VolumeReadOps. The SysOps administrator must improve the I/O performance while ensuring data integrity.
Which action will meet these requirements?
- A . Change the instance type to a large, burstable, general purpose instance.
- B . Change the instance type to an extra large general purpose instance.
- C . Increase the EBS volume to a 2 TB General Purpose SSD (gp2) volume.
- D . Move the data that resides on the EBS volume to the instance store.
C
Explanation:
Step-by-Step
Understand the Problem:
The EC2 instance processes large data files and uses a 1 TB General Purpose SSD (gp2) EBS volume.
CloudWatch metrics show consistent high VolumeReadOps.
The requirement is to improve I/O performance while ensuring data integrity.
Analyze the Requirements:
Improve I/O performance.
Maintain data integrity.
Evaluate the Options:
Option A: Change the instance type to a large, burstable, general-purpose instance.
Burstable instances provide a baseline level of CPU performance with the ability to burst to a higher level when needed. However, this does not address the I/O performance directly.
Option B: Change the instance type to an extra-large general-purpose instance.
A larger instance type might improve performance, but it does not directly address the I/O performance of the EBS volume.
Option C: Increase the EBS volume to a 2 TB General Purpose SSD (gp2) volume.
Increasing the size of a General Purpose SSD (gp2) volume can increase its IOPS. The larger the volume, the higher the baseline performance in terms of IOPS.
Option D: Move the data that resides on the EBS volume to the instance store.
Instance store volumes provide high I/O performance but are ephemeral, meaning data will be lost if the instance is stopped or terminated. This does not ensure data integrity.
Select the Best Solution:
Option C: Increasing the EBS volume size to 2 TB will provide higher IOPS, improving I/O performance while maintaining data integrity.
Reference: Amazon EBS Volume Types
General Purpose SSD (gp2) Volumes
Increasing the size of the General Purpose SSD (gp2) volume is an effective way to improve I/O performance while ensuring data integrity remains intact.
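A quick sketch of the gp2 sizing math behind this answer: baseline IOPS scale at 3 IOPS per GiB, with a 100 IOPS floor and a 16,000 IOPS cap.

```python
def gp2_baseline_iops(size_gib: int) -> int:
    # gp2 baseline performance: 3 IOPS per GiB, minimum 100 IOPS, maximum 16,000 IOPS
    return min(max(3 * size_gib, 100), 16_000)

print(gp2_baseline_iops(1000))  # ~1 TB volume -> 3,000 IOPS baseline
print(gp2_baseline_iops(2000))  # ~2 TB volume -> 6,000 IOPS baseline
```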
With the threat of ransomware viruses encrypting and holding company data hostage, which action should be taken to protect an Amazon S3 bucket?
- A . Deny Post, Put, and Delete on the bucket.
- B . Enable server-side encryption on the bucket.
- C . Enable Amazon S3 versioning on the bucket.
- D . Enable snapshots on the bucket.
C
Explanation:
Step-by-Step
Understand the Problem:
The threat of ransomware encrypting and holding company data hostage.
Need to protect an Amazon S3 bucket.
Analyze the Requirements:
Ensure that data in the S3 bucket is protected against unauthorized encryption or deletion.
Evaluate the Options:
Option A: Deny Post, Put, and Delete on the bucket.
Denying these actions would prevent any uploads or modifications to the bucket, making it unusable.
Option B: Enable server-side encryption on the bucket.
Server-side encryption protects data at rest but does not prevent the encryption of data by ransomware.
Option C: Enable Amazon S3 versioning on the bucket.
S3 versioning keeps multiple versions of an object in the bucket. If a file is overwritten or encrypted by ransomware, previous versions of the file can still be accessed.
Option D: Enable snapshots on the bucket.
Amazon S3 does not have a snapshot feature; this option is not applicable.
Select the Best Solution:
Option C: Enabling Amazon S3 versioning is the best solution as it allows access to previous versions
of objects, providing protection against ransomware encryption by retaining prior, unencrypted versions.
Reference: Amazon S3 Versioning
Best Practices for Protecting Data with Amazon S3
Enabling S3 versioning ensures that previous versions of objects are preserved, providing a safeguard against ransomware by allowing recovery of unencrypted versions of data.
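A minimal sketch of enabling versioning on the bucket with boto3 (the bucket name is an assumption):

```python
import boto3

boto3.client("s3").put_bucket_versioning(
    Bucket="company-data-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)
```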
A SysOps administrator is evaluating Amazon Route 53 DNS options to address concerns about high availability for an on-premises website. The website consists of two servers: a primary active server and a secondary passive server. Route 53 should route traffic to the primary server if the associated health check returns 2xx or 3xx HTTP codes. All other traffic should be directed to the secondary passive server. The failover record type, set ID, and routing policy have been set appropriately for both primary and secondary servers.
Which next step should be taken to configure Route 53?
- A . Create an A record for each server. Associate the records with the Route 53 HTTP health check.
- B . Create an A record for each server. Associate the records with the Route 53 TCP health check.
- C . Create an alias record for each server with evaluate target health set to yes. Associate the records with the Route 53 HTTP health check.
- D . Create an alias record for each server with evaluate target health set to yes. Associate the records with the Route 53 TCP health check.
C
Explanation:
To configure Route 53 for high availability with failover between a primary and a secondary server, the following steps should be taken:
Create Health Checks:
Create HTTP health checks for both the primary and secondary servers. Ensure these health checks are configured to look for HTTP 2xx or 3xx status codes.
Reference: Creating and Updating Health Checks
Create Alias Records:
Create an alias record for the primary server. Set "Evaluate Target Health" to Yes. Associate this record with the primary server’s HTTP health check.
Create an alias record for the secondary server. Set "Evaluate Target Health" to Yes. Associate this record with the secondary server’s HTTP health check.
Reference: Creating Records by Using the Amazon Route 53 Console
Set Routing Policy:
Ensure the routing policy for both records is set to "Failover."
Assign appropriate "Set IDs" and configure the primary record as the primary failover record and the secondary record as the secondary failover record.
Reference: Route 53 Routing Policies
Test Configuration:
Test the failover configuration to ensure that when the primary server health check fails, traffic is routed to the secondary server.
Reference: Testing Failover
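A sketch of creating the HTTP health check that Route 53 associates with the primary record (an HTTP health check treats 2xx and 3xx responses as healthy; the IP address and caller reference are assumptions):

```python
import boto3

route53 = boto3.client("route53")
response = route53.create_health_check(
    CallerReference="primary-onprem-web-01",
    HealthCheckConfig={
        "Type": "HTTP",                 # passes on 2xx/3xx responses
        "IPAddress": "198.51.100.10",   # primary on-premises server
        "Port": 80,
        "ResourcePath": "/",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)
print(response["HealthCheck"]["Id"])   # use this ID on the primary failover record
```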
A SysOps administrator noticed that a large number of Elastic IP addresses are being created on the company’s AWS account, but they are not being associated with Amazon EC2 instances, and are incurring Elastic IP address charges in the monthly bill.
How can the administrator identify who is creating the Elastic IP addresses?
- A . Attach a cost-allocation tag to each requested Elastic IP address with the IAM user name of the developer who creates it.
- B . Query AWS CloudTrail logs by using Amazon Athena to search for Elastic IP address events.
- C . Create a CloudWatch alarm on the EIPCreated metric and send an Amazon SNS notification when the alarm triggers.
- D . Use Amazon Inspector to get a report of all Elastic IP addresses created in the last 30 days.
B
Explanation:
To identify who is creating the Elastic IP addresses, the following steps should be taken:
Enable CloudTrail Logging:
Ensure AWS CloudTrail is enabled to log all API activities in your AWS account.
Reference: Setting Up AWS CloudTrail
Create an Athena Table for CloudTrail Logs:
Set up an Athena table that points to the S3 bucket where CloudTrail logs are stored.
Reference: Creating Tables in Athena
Query CloudTrail Logs:
Use Athena to run SQL queries to search for AllocateAddress events, which represent the creation of Elastic IP addresses.
Example Query:
SELECT userIdentity.userName, eventTime, eventSource, eventName, requestParameters
FROM cloudtrail_logs
WHERE eventName = 'AllocateAddress';
Reference: Analyzing AWS CloudTrail Logs
Review Results:
Review the results to identify which IAM user or role is creating the Elastic IP addresses.
Reference: AWS CloudTrail Log Analysis
A company has an Amazon CloudFront distribution that uses an Amazon S3 bucket as its origin. During a review of the access logs, the company determines that some requests are going directly to the S3 bucket by using the website hosting endpoint. A SysOps administrator must secure the S3 bucket to allow requests only from CloudFront.
What should the SysOps administrator do to meet this requirement?
- A . Create an origin access identity (OAI) in CloudFront. Associate the OAI with the distribution. Remove access to and from other principals in the S3 bucket policy. Update the S3 bucket policy to allow access only from the OAI.
- B . Create an origin access identity (OAI) in CloudFront. Associate the OAI with the distribution. Update the S3 bucket policy to allow access only from the OAI. Create a new origin, and specify the S3 bucket as the new origin. Update the distribution behavior to use the new origin. Remove the existing origin.
- C . Create an origin access identity (OAI) in CloudFront. Associate the OAI with the distribution. Update the S3 bucket policy to allow access only from the OAI. Disable website hosting. Create a new origin, and specify the S3 bucket as the new origin. Update the distribution behavior to use the new origin. Remove the existing origin.
- D . Update the S3 bucket policy to allow access only from the CloudFront distribution. Remove access to and from other principals in the S3 bucket policy. Disable website hosting. Create a new origin, and specify the S3 bucket as the new origin. Update the distribution behavior to use the new origin. Remove the existing origin.
A
Explanation:
To secure the S3 bucket and allow access only from CloudFront, the following steps should be taken:
Create an OAI in CloudFront:
In the CloudFront console, create an origin access identity (OAI) and associate it with your CloudFront distribution.
Reference: Restricting Access to S3 Buckets
Update S3 Bucket Policy:
Modify the S3 bucket policy to allow access only from the OAI. This involves adding a policy statement that grants the OAI permission to get objects from the bucket and removing any other public access permissions.
Example Policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E3EXAMPLE"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::example-bucket/*"
}
]
}
Reference: Bucket Policy Examples
Test Configuration:
Ensure that the S3 bucket is not publicly accessible and that requests to the bucket through the CloudFront distribution are successful.
Reference: Testing CloudFront OAI
A SysOps administrator must create an IAM policy for a developer who needs access to specific AWS services.
Based on the requirements, the SysOps administrator creates the following policy:
Which actions does this policy allow? (Select TWO.)
- A . Create an AWS Storage Gateway.
- B . Create an IAM role for an AWS Lambda function.
- C . Delete an Amazon Simple Queue Service (Amazon SQS) queue.
- D . Describe AWS load balancers.
- E . Invoke an AWS Lambda function.
DE
Explanation:
The provided IAM policy grants the following permissions:
Describe AWS Load Balancers:
The policy allows actions with the prefix elasticloadbalancing:. This includes actions like DescribeLoadBalancers and other Describe* actions related to Elastic Load Balancing.
Reference: Elastic Load Balancing API Actions
Invoke AWS Lambda Function:
The policy allows actions with the prefix lambda:, which includes InvokeFunction and other actions that allow listing and describing Lambda functions.
Reference: AWS Lambda API Actions
The actions related to AWS Storage Gateway (create), IAM role (create), and Amazon SQS (delete) are not allowed by this policy: it grants only describe and list permissions for Storage Gateway and Amazon SQS, while allowing the Elastic Load Balancing Describe* actions and Lambda invocation.
A company is trying to connect two applications. One application runs in an on-premises data center that has a hostname of host1.onprem.private. The other application runs on an Amazon EC2 instance that has a hostname of host1.awscloud.private. An AWS Site-to-Site VPN connection is in place between the on-premises network and AWS.
The application that runs in the data center tries to connect to the application that runs on the EC2 instance, but DNS resolution fails. A SysOps administrator must implement DNS resolution between on-premises and AWS resources.
Which solution allows the on-premises application to resolve the EC2 instance hostname?
- A . Set up an Amazon Route 53 inbound resolver endpoint with a forwarding rule for the onprem.private hosted zone. Associate the resolver with the VPC of the EC2 instance. Configure the on-premises DNS resolver to forward onprem.private DNS queries to the inbound resolver endpoint.
- B . Set up an Amazon Route 53 inbound resolver endpoint. Associate the resolver with the VPC of the EC2 instance. Configure the on-premises DNS resolver to forward awscloud.private DNS queries to the inbound resolver endpoint.
- C . Set up an Amazon Route 53 outbound resolver endpoint with a forwarding rule for the onprem.private hosted zone. Associate the resolver with the AWS Region of the EC2 instance. Configure the on-premises DNS resolver to forward onprem.private DNS queries to the outbound resolver endpoint.
- D . Set up an Amazon Route 53 outbound resolver endpoint. Associate the resolver with the AWS Region of the EC2 instance. Configure the on-premises DNS resolver to forward awscloud.private DNS queries to the outbound resolver endpoint.
A
Explanation:
Step-by-Step
Understand the Problem:
There are two applications, one in an on-premises data center and the other on an Amazon EC2 instance.
DNS resolution fails when the on-premises application tries to connect to the EC2 instance. The goal is to implement DNS resolution between on-premises and AWS resources.
Analyze the Requirements:
Need to resolve the hostname of the EC2 instance from the on-premises network. Utilize the existing AWS Site-to-Site VPN connection for DNS queries.
Evaluate the Options:
Option A: Set up an Amazon Route 53 inbound resolver endpoint with a forwarding rule for the onprem.private hosted zone.
This allows DNS queries from on-premises to be forwarded to Route 53 for resolution.
The resolver endpoint is associated with the VPC, enabling resolution of AWS resources.
Option B: Set up an Amazon Route 53 inbound resolver endpoint without specifying the forwarding rule.
This option does not address the specific need to resolve onprem.private DNS queries.
Option C: Set up an Amazon Route 53 outbound resolver endpoint.
Outbound resolver endpoints are used for forwarding DNS queries from AWS to on-premises, not vice versa.
Option D: Set up an Amazon Route 53 outbound resolver endpoint without specifying the forwarding rule.
Similar to Option C, this does not meet the requirement of resolving on-premises queries in AWS.
Select the Best Solution:
Option A: Setting up an inbound resolver endpoint with a forwarding rule for onprem.private and associating it with the VPC ensures that DNS queries from on-premises can resolve AWS resources effectively.
Reference: Amazon Route 53 Resolver
Integrating AWS and On-Premises Networks with Route 53
Using an Amazon Route 53 inbound resolver endpoint with a forwarding rule ensures that on-premises applications can resolve EC2 instance hostnames effectively.
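A sketch of creating the inbound resolver endpoint in the VPC of the EC2 instance (the security group and subnet IDs are assumptions); the on-premises DNS server is then configured to forward the relevant queries to the endpoint's IP addresses:

```python
import boto3

resolver = boto3.client("route53resolver")
resolver.create_resolver_endpoint(
    CreatorRequestId="inbound-endpoint-2024-06-01",
    Name="onprem-to-aws-inbound",
    Direction="INBOUND",
    SecurityGroupIds=["sg-0123456789abcdef0"],     # must allow DNS (TCP/UDP 53) from on premises
    IpAddresses=[
        {"SubnetId": "subnet-0a1b2c3d4e5f67890"},
        {"SubnetId": "subnet-0f9e8d7c6b5a43210"},  # two subnets for availability
    ],
)
```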
While setting up an AWS managed VPN connection, a SysOps administrator creates a customer gateway resource in AWS. The customer gateway device resides in a data center with a NAT gateway in front of it.
What address should be used to create the customer gateway resource?
- A . The private IP address of the customer gateway device
- B . The MAC address of the NAT device in front of the customer gateway device
- C . The public IP address of the customer gateway device
- D . The public IP address of the NAT device in front of the customer gateway device
D
Explanation:
Step-by-Step
Understand the Problem:
Setting up an AWS managed VPN connection requires creating a customer gateway resource.
The customer gateway device is behind a NAT gateway in the data center.
Analyze the Requirements:
The customer gateway resource needs to be created using an IP address that can be reached by AWS.
Evaluate the Options:
Option A: The private IP address of the customer gateway device.
A private IP address is not reachable by AWS over the internet.
Option B: The MAC address of the NAT device.
MAC addresses are not used for identifying gateways in AWS.
Option C: The public IP address of the customer gateway device.
This would be correct if the device were directly connected to the internet, but it is behind a NAT.
Option D: The public IP address of the NAT device in front of the customer gateway device.
The NAT device’s public IP address is reachable by AWS and will route traffic to the customer gateway device.
Select the Best Solution:
Option D: Using the public IP address of the NAT device ensures that AWS can establish a VPN connection with the customer gateway device behind the NAT.
Reference: AWS Site-to-Site VPN Documentation
Customer Gateway Devices Behind a NAT
Specifying the public IP address of the NAT device ensures proper routing of VPN traffic to the customer gateway device.
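A sketch of creating the customer gateway resource with the NAT device's public IP address (the IP address and ASN are assumptions):

```python
import boto3

boto3.client("ec2").create_customer_gateway(
    Type="ipsec.1",
    PublicIp="203.0.113.12",  # public IP of the NAT device in front of the on-premises gateway
    BgpAsn=65000,             # ASN of the on-premises gateway
)
```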
A large company is using AWS Organizations to manage its multi-account AWS environment. According to company policy, all users should have read-level access to a particular Amazon S3 bucket in a central account. The S3 bucket data should not be available outside the organization. A SysOps administrator must set up the permissions and add a bucket policy to the S3 bucket.
Which parameters should be specified to accomplish this in the MOST efficient manner?
- A . Specify "’ as the principal and PrincipalOrgld as a condition.
- B . Specify all account numbers as the principal.
- C . Specify PrincipalOrgId as the principal.
- D . Specify the organization’s management account as the principal.
A
Explanation:
Step-by-Step
Understand the Problem:
Ensure all users in the organization have read-level access to a specific S3 bucket.
The data should not be accessible outside the organization.
Analyze the Requirements:
Grant read access to users within the organization.
Prevent access from outside the organization.
Evaluate the Options:
Option A: Specify "*" as the principal and PrincipalOrgId as a condition.
This grants access to all AWS principals but restricts it to those within the specified organization using the PrincipalOrgId condition.
Option B: Specify all account numbers as the principal.
This is impractical for a large organization and requires constant updates if accounts are added or removed.
Option C: Specify PrincipalOrgId as the principal.
The PrincipalOrgId condition must be used within a policy, not as a principal.
Option D: Specify the organization’s management account as the principal.
This grants access only to the management account, not to all users within the organization.
Select the Best Solution:
Option A: Using "*" as the principal with the PrincipalOrgId condition ensures all users within the organization have the required access while preventing external access.
Reference: Amazon S3 Bucket Policies
AWS Organizations Policy Examples
Using "*" as the principal with the PrincipalOrgId condition efficiently grants read access to the S3 bucket for all users within the organization.
A SysOps administrator is attempting to download patches from the internet into an instance in a private subnet. An internet gateway exists for the VPC, and a NAT gateway has been deployed on the public subnet; however, the instance has no internet connectivity. The resources deployed into the private subnet must be inaccessible directly from the public internet.
What should be added to the private subnet’s route table in order to address this issue, given the information provided?
- A . 0.0.0.0/0 IGW
- B . 0.0.0.0/0 NAT
- C . 10.0.1.0/24 IGW
- D . 10.0.1.0/24 NAT
B
Explanation:
Understand the Problem:
An instance in a private subnet needs internet access for downloading patches.
There is an existing NAT gateway in the public subnet.
Analyze the Requirements:
Provide internet access to the private subnet instance through the NAT gateway.
Ensure resources in the private subnet remain inaccessible from the public internet.
Evaluate the Options:
Option A: 0.0.0.0/0 IGW.
This would route traffic directly to the internet gateway, exposing the instance to the public internet.
Option B: 0.0.0.0/0 NAT.
This routes traffic destined for the internet through the NAT gateway, allowing outbound connections while keeping the instance protected from inbound internet traffic.
Option C: 10.0.1.0/24 IGW.
This does not provide the necessary route for internet access and incorrectly uses the internet gateway for local traffic.
Option D: 10.0.1.0/24 NAT.
This also incorrectly uses the NAT gateway for local traffic, which is unnecessary.
Select the Best Solution:
Option B: Adding a route for 0.0.0.0/0 with the target set to the NAT gateway ensures that the private subnet instance can access the internet while remaining protected from inbound internet traffic.
Reference: Amazon VPC NAT Gateways
Private Subnet Route Table
Configuring the private subnet route table to use the NAT gateway for 0.0.0.0/0 ensures secure and efficient internet access for instances in the private subnet.
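A sketch of adding the route with boto3 (the route table and NAT gateway IDs are assumptions):

```python
import boto3

boto3.client("ec2").create_route(
    RouteTableId="rtb-0abc1234def567890",   # the private subnet's route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId="nat-0123456789abcdef0",   # NAT gateway in the public subnet
)
```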
A SysOps administrator applies the following policy to an AWS CloudFormation stack:
What is the result of this policy?
- A . Users that assume an IAM role with a logical ID that begins with "Production" are prevented from running the update-stack command.
- B . Users can update all resources in the stack except for resources that have a logical ID that begins with "Production".
- C . Users can update all resources in the stack except for resources that have an attribute that begins with "Production".
- D . Users in an IAM group with a logical ID that begins with "Production" are prevented from running the update-stack command.
B
Explanation:
The policy provided includes two statements:
The first statement explicitly denies the Update:* action on resources with a LogicalResourceId that begins with "Production".
The second statement allows the Update:* action on all resources.
In AWS IAM policy evaluation logic, explicit denies always take precedence over allows. Therefore, the effect of this policy is that users can update all resources in the stack except for those with a logical ID that begins with "Production".
Reference: IAM JSON Policy Elements: Effect
Policy Evaluation Logic
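An illustrative stack policy with the behavior the explanation describes (this is a reconstruction for illustration, not the exact policy shown in the question; the stack name is an assumption):

```python
import json
import boto3

stack_policy = {
    "Statement": [
        {   # Explicit deny wins: resources whose logical ID starts with "Production" cannot be updated
            "Effect": "Deny",
            "Action": "Update:*",
            "Principal": "*",
            "Resource": "LogicalResourceId/Production*",
        },
        {   # Everything else in the stack can still be updated
            "Effect": "Allow",
            "Action": "Update:*",
            "Principal": "*",
            "Resource": "*",
        },
    ]
}

boto3.client("cloudformation").set_stack_policy(
    StackName="app-stack",
    StackPolicyBody=json.dumps(stack_policy),
)
```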
A company’s IT department noticed an increase in the spend of their developer AWS account. There are over 50 developers using the account, and the finance team wants to determine the service costs incurred by each developer.
What should a SysOps administrator do to collect this information? (Select TWO.)
- A . Activate the createdBy tag in the account.
- B . Analyze the usage with Amazon CloudWatch dashboards.
- C . Analyze the usage with Cost Explorer.
- D . Configure AWS Trusted Advisor to track resource usage.
- E . Create a billing alarm in AWS Budgets.
AC
Explanation:
To determine the service costs incurred by each developer, follow these steps:
Activate the createdBy Tag:
Tagging resources with a createdBy tag helps identify which user created the resource. This tag should be applied consistently across all resources created by the developers.
Reference: Tagging Your Resources
Analyze Usage with Cost Explorer:
Use Cost Explorer to filter and group cost and usage data by the createdBy tag. This provides a breakdown of costs incurred by each developer.
Reference: Analyzing Your Costs with Cost Explorer
These two steps together will provide a detailed analysis of the costs incurred by each developer in the AWS account.
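A sketch of pulling the per-developer breakdown once the createdBy tag is activated as a cost allocation tag (the date range is an assumption; tag data appears in Cost Explorer only for usage after activation):

```python
import boto3

ce = boto3.client("ce")
result = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-05-01", "End": "2024-06-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "createdBy"}],   # one group per developer
)
for group in result["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])
```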
A company website contains a web tier and a database tier on AWS. The web tier consists of Amazon EC2 instances that run in an Auto Scaling group across two Availability Zones. The database tier runs on an Amazon RDS for MySQL Multi-AZ DB instance. The database subnet network ACLs are restricted to only the web subnets that need access to the database. The web subnets use the default network ACL with the default rules.
The company’s operations team has added a third subnet to the Auto Scaling group configuration. After an Auto Scaling event occurs, some users report that they intermittently receive an error
message. The error message states that the server cannot connect to the database. The operations team has confirmed that the route tables are correct and that the required ports are open on all security groups.
Which combination of actions should a SysOps administrator take so that the web servers can communicate with the DB instance? (Select TWO.)
- A . On the default ACL, create inbound Allow rules of type TCP with the ephemeral port range and the source as the database subnets.
- B . On the default ACL, create outbound Allow rules of type MySQL/Aurora (3306). Specify the destinations as the database subnets.
- C . On the network ACLs for the database subnets, create an inbound Allow rule of type MySQL/Aurora (3306). Specify the source as the third web subnet.
- D . On the network ACLs for the database subnets, create an outbound Allow rule of type TCP with the ephemeral port range and the destination as the third web subnet.
- E . On the network ACLs for the database subnets, create an outbound Allow rule of type MySQL/Aurora (3306). Specify the destination as the third web subnet.
CD
Explanation:
To ensure that the new web subnet can communicate with the database instance, follow these steps:
Create an Inbound Allow Rule for MySQL/Aurora (3306):
On the network ACL for the database subnets, add an inbound allow rule to permit traffic from the third web subnet on port 3306 (MySQL/Aurora).
Reference: Network ACLs
Create an Outbound Allow Rule for Ephemeral Ports:
On the network ACL for the database subnets, add an outbound allow rule to permit traffic to the third web subnet on the ephemeral port range (1024-65535).
Reference: Ephemeral Ports
These changes will ensure that the new subnet can communicate with the database, resolving the connectivity issues.
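A sketch of the two network ACL entries with boto3 (the ACL ID, rule numbers, and the third web subnet's CIDR are assumptions):

```python
import boto3

ec2 = boto3.client("ec2")
db_nacl = "acl-0123456789abcdef0"
third_web_subnet_cidr = "10.0.3.0/24"

# Inbound: allow MySQL/Aurora (3306) from the third web subnet
ec2.create_network_acl_entry(
    NetworkAclId=db_nacl, RuleNumber=120, Egress=False,
    Protocol="6", RuleAction="allow",          # protocol 6 = TCP
    CidrBlock=third_web_subnet_cidr,
    PortRange={"From": 3306, "To": 3306},
)

# Outbound: allow ephemeral return traffic to the third web subnet
ec2.create_network_acl_entry(
    NetworkAclId=db_nacl, RuleNumber=120, Egress=True,
    Protocol="6", RuleAction="allow",
    CidrBlock=third_web_subnet_cidr,
    PortRange={"From": 1024, "To": 65535},
)
```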
A company is running an application on a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB). The EC2 instances are launched by an Auto Scaling group and are automatically registered in a target group. A SysOps administrator must set up a notification to alert application owners when targets fail health checks.
What should the SysOps administrator do to meet these requirements?
- A . Create an Amazon CloudWatch alarm on the UnHealthyHostCount metric. Configure an action to send an Amazon Simple Notification Service (Amazon SNS) notification when the metric is greater than 0.
- B . Configure an Amazon EC2 Auto Scaling custom lifecycle action to send an Amazon Simple Notification Service (Amazon SNS) notification when an instance is in the Pending:Wait state.
- C . Update the Auto Scaling group. Configure an activity notification to send an Amazon Simple Notification Service (Amazon SNS) notification for the Unhealthy event type.
- D . Update the ALB health check to send an Amazon Simple Notification Service (Amazon SNS) notification when an instance is unhealthy.
A
Explanation:
To set up a notification for failed health checks of targets in the ALB, follow these steps:
Create a CloudWatch Alarm:
Navigate to CloudWatch and create a new alarm based on the UnHealthyHostCount metric of the target group.
Reference: Creating Alarms
Configure the Alarm Action:
Configure the alarm to send an Amazon SNS notification when the UnHealthyHostCount metric is greater than 0.
Reference: Using Amazon SNS for CloudWatch Alarms
This setup will notify application owners whenever a target fails health checks.
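A sketch of the alarm with boto3 (the target group and load balancer dimension values and the SNS topic ARN are assumptions):

```python
import boto3

boto3.client("cloudwatch").put_metric_alarm(
    AlarmName="alb-unhealthy-targets",
    Namespace="AWS/ApplicationELB",
    MetricName="UnHealthyHostCount",
    Dimensions=[
        {"Name": "TargetGroup", "Value": "targetgroup/web-tg/0123456789abcdef"},
        {"Name": "LoadBalancer", "Value": "app/web-alb/0123456789abcdef"},
    ],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:app-owner-alerts"],
)
```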
A company wants to build a solution for its business-critical Amazon RDS for MySQL database. The database requires high availability across different geographic locations. A SysOps administrator must build a solution to handle a disaster recovery (DR) scenario with the lowest recovery time objective (RTO) and recovery point objective (RPO).
Which solution meets these requirements?
- A . Create automated snapshots of the database on a schedule. Copy the snapshots to the DR Region.
- B . Create a cross-Region read replica for the database.
- C . Create a Multi-AZ read replica for the database.
- D . Schedule AWS Lambda functions to create snapshots of the source database and to copy the snapshots to a DR Region.
B
Explanation:
To ensure high availability and disaster recovery for the RDS for MySQL database, follow these steps:
Create a Cross-Region Read Replica:
In the RDS console, create a cross-Region read replica of the primary database instance. This read replica will provide high availability and the lowest RTO and RPO in case of a disaster.
Reference: Creating a Read Replica in a Different AWS Region
This solution ensures that your database is replicated across regions, providing robust disaster recovery capabilities with minimal recovery time and data loss.
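A sketch of creating the cross-Region read replica with boto3 (the instance identifiers, Regions, and instance class are assumptions):

```python
import boto3

rds = boto3.client("rds", region_name="us-west-2")   # DR Region
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="appdb-dr-replica",
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:111122223333:db:appdb",  # source ARN for cross-Region
    DBInstanceClass="db.r5.large",
    SourceRegion="us-east-1",   # lets boto3 create the pre-signed URL needed for cross-Region replication
)
```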
A SysOps administrator is using Amazon EC2 instances to host an application. The SysOps administrator needs to grant permissions for the application to access an Amazon DynamoDB table.
Which solution will meet this requirement?
- A . Create access keys to access the DynamoDB table. Assign the access keys to the EC2 instance profile.
- B . Create an EC2 key pair to access the DynamoDB table. Assign the key pair to the EC2 instance profile.
- C . Create an IAM user to access the DynamoDB table. Assign the IAM user to the EC2 instance profile.
- D . Create an IAM role to access the DynamoDB table. Assign the IAM role to the EC2 instance profile.
D
Explanation:
Access to Amazon DynamoDB requires credentials. Those credentials must have permissions to access AWS resources, such as an Amazon DynamoDB table or an Amazon Elastic Compute Cloud (Amazon EC2) instance. The following sections provide details on how you can use AWS Identity and Access Management (IAM) and DynamoDB to help secure access to your resources. https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/authentication-and-access-control.html
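A sketch of the key calls: attach a DynamoDB policy to the role and associate the role's instance profile with the running instance (the role, profile, and instance IDs are assumptions, and the role and profile are assumed to exist already):

```python
import boto3

iam = boto3.client("iam")
iam.attach_role_policy(
    RoleName="app-dynamodb-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonDynamoDBReadOnlyAccess",
)

ec2 = boto3.client("ec2")
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "app-dynamodb-role"},   # instance profile containing the role
    InstanceId="i-0123456789abcdef0",
)
```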
A company has a web application with a database tier that consists of an Amazon EC2 instance that runs MySQL. A SysOps administrator needs to minimize potential data loss and the time that is required to recover in the event of a database failure.
What is the MOST operationally efficient solution that meets these requirements?
- A . Create an Amazon CloudWatch alarm for the StatusCheckFailed_System metric to invoke an AWS Lambda function that stops and starts the EC2 instance.
- B . Create an Amazon RDS for MySQL Multi-AZ DB instance. Use a MySQL native backup that is stored in Amazon S3 to restore the data to the new database. Update the connection string in the web application.
- C . Create an Amazon RDS for MySQL Single-AZ DB instance with a read replica. Use a MySQL native backup that is stored in Amazon S3 to restore the data to the new database. Update the connection string in the web application.
- D . Use Amazon Data Lifecycle Manager (Amazon DLM) to take a snapshot of the Amazon Elastic Block Store (Amazon EBS) volume every hour. In the event of an EC2 instance failure, restore the EBS volume from a snapshot.
B
Explanation:
Step-by-Step
Understand the Problem:
Minimize potential data loss and recovery time for a MySQL database running on an Amazon EC2 instance.
Analyze the Requirements:
Reduce the risk of data loss.
Ensure quick recovery in the event of a database failure.
Aim for operational efficiency.
Evaluate the Options:
Option A: Create a CloudWatch alarm to invoke a Lambda function that stops and starts the EC2 instance.
This addresses system failures but does not help with minimizing data loss or ensuring database redundancy.
Option B: Create an Amazon RDS for MySQL Multi-AZ DB instance and use a MySQL native backup stored in Amazon S3.
RDS Multi-AZ deployments provide high availability and durability by automatically replicating data to a standby instance in a different Availability Zone.
Using a MySQL native backup stored in Amazon S3 ensures data can be restored efficiently.
Option C: Create an RDS for MySQL Single-AZ DB instance with a read replica.
This provides read scalability but does not ensure high availability or failover capabilities.
Option D: Use Amazon DLM to take hourly snapshots of the EBS volume.
This helps with backups but does not provide immediate failover capabilities or minimize downtime effectively.
Select the Best Solution:
Option B: Using Amazon RDS for MySQL Multi-AZ ensures high availability and automated backups, significantly minimizing data loss and recovery time.
Reference: Amazon RDS Multi-AZ Deployments
Backing Up and Restoring an Amazon RDS DB Instance
Using Amazon RDS for MySQL Multi-AZ provides a highly available and durable solution with automated backups, ensuring minimal data loss and quick recovery.
A company migrated an I/O intensive application to an Amazon EC2 general purpose instance. The EC2 instance has a single General Purpose SSD Amazon Elastic Block Store (Amazon EBS) volume attached.
Application users report that certain actions that require intensive reading and writing to the disk are taking much longer than normal or are failing completely. After reviewing the performance metrics of the EBS volume, a SysOps administrator notices that the VolumeQueueLength metric is consistently high during the same times in which the users are reporting issues. The SysOps administrator needs to resolve this problem to restore full performance to the application.
Which action will meet these requirements?
- A . Modify the instance type to be storage optimized.
- B . Modify the volume properties by deselecting Auto-Enable Volume I/O.
- C . Modify the volume properties to increase the IOPS.
- D . Modify the instance to enable enhanced networking.
C
Explanation:
Step-by-Step
Understand the Problem:
An I/O intensive application on an Amazon EC2 general purpose instance is experiencing performance issues.
Users report delays or failures during intensive read/write operations.
The VolumeQueueLength metric is consistently high during these periods.
Analyze the Requirements:
Address the high VolumeQueueLength, which indicates that the EBS volume is unable to handle the I/O requests efficiently.
Improve the disk I/O performance to restore full application performance.
Evaluate the Options:
Option A: Modify the instance type to be storage optimized.
Storage optimized instances are designed for workloads that require high, sequential read and write access to large data sets on local storage.
This could help, but if the issue is primarily with EBS volume performance, increasing IOPS would be a more direct solution.
Option B: Modify the volume properties by deselecting Auto-Enable Volume I/O.
Auto-Enable Volume I/O is a setting that automatically enables I/O for the EBS volume after an event such as a snapshot restore.
Deselecting this option will not address the issue of high I/O demand.
Option C: Modify the volume properties to increase the IOPS.
Increasing the IOPS (Input/Output Operations Per Second) will directly address the high VolumeQueueLength by allowing the volume to handle more I/O operations concurrently.
This is the most effective and direct solution to improve the performance of I/O intensive tasks.
Option D: Modify the instance to enable enhanced networking.
Enhanced networking provides higher bandwidth, higher packet per second (PPS) performance, and consistently lower inter-instance latencies.
While beneficial for network performance, this does not directly impact the EBS volume’s I/O performance.
Select the Best Solution:
Option C: Modifying the volume properties to increase the IOPS directly addresses the high VolumeQueueLength and improves the EBS volume’s ability to handle intensive read/write operations.
Reference: Amazon EBS Volume Types
Amazon EBS Volume Performance
Optimizing EBS Performance
Increasing the IOPS of the EBS volume ensures that the application can handle the required intensive read/write operations more efficiently, directly addressing the high VolumeQueueLength and restoring full performance.
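One way to raise the volume's IOPS in place is the Elastic Volumes API; this sketch moves the volume to gp3 and sets provisioned IOPS explicitly (the volume ID and target values are assumptions):

```python
import boto3

boto3.client("ec2").modify_volume(
    VolumeId="vol-0123456789abcdef0",
    VolumeType="gp3",    # gp3 (or io1/io2) lets IOPS be set independently of volume size
    Iops=6000,
    Throughput=500,      # MiB/s, gp3 only
)
```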
A SysOps administrator is trying to set up an Amazon Route 53 domain name to route traffic to a website hosted on Amazon S3. The domain name of the website is www.anycompany.com and the S3 bucket name is anycompany-static. After the record set is set up in Route 53, the domain name www.anycompany.com does not seem to work, and the static website is not displayed in the browser.
Which of the following is a cause of this?
- A . The S3 bucket must be configured with Amazon CloudFront first.
- B . The Route 53 record set must have an IAM role that allows access to the S3 bucket.
- C . The Route 53 record set must be in the same region as the S3 bucket.
- D . The S3 bucket name must match the record set name in Route 53.
D
Explanation:
Step-by-Step
Understand the Problem:
A Route 53 record set for www.anycompany.com should route traffic to a static website hosted in the S3 bucket anycompany-static, but the domain name does not display the website.
Analyze the Requirements:
Route 53 must be able to route the custom domain name to the S3 static website endpoint.
Evaluate the Options:
Option A: The S3 bucket must be configured with Amazon CloudFront first.
CloudFront is optional for S3 static websites and is not required for Route 53 to route traffic to the bucket.
Option B: The Route 53 record set must have an IAM role that allows access to the S3 bucket.
Route 53 record sets do not use IAM roles.
Option C: The Route 53 record set must be in the same region as the S3 bucket.
Route 53 is a global service, so record sets are not tied to a region.
Option D: The S3 bucket name must match the record set name in Route 53.
To route a domain name to an S3 static website endpoint, the bucket name must exactly match the record set name (www.anycompany.com). Because the bucket is named anycompany-static, the record set cannot resolve to the website.
Select the Best Solution:
Option D: The mismatch between the bucket name and the record set name is the cause of the problem. Creating a bucket named www.anycompany.com for the website content and pointing the record set at its website endpoint resolves the issue.
Reference: Routing Traffic to a Website That Is Hosted in an Amazon S3 Bucket
The S3 bucket name must match the Route 53 record set name for the record to route traffic to the static website.
An Amazon EC2 instance needs to be reachable from the internet. The EC2 instance is in a subnet with the following route table:
Which entry must a SysOps administrator add to the route table to meet this requirement?
- A . A route for 0.0.0.0/0 that points to a NAT gateway
- B . A route for 0.0.0.0/0 that points to an egress-only internet gateway
- C . A route for 0.0.0.0/0 that points to an internet gateway
- D . A route for 0.0.0.0/0 that points to an elastic network interface
C
Explanation:
Step-by-Step
Understand the Problem:
An EC2 instance needs to be reachable from the internet.
Analyze the Requirements:
Ensure proper routing for internet connectivity.
Evaluate the Options:
Option A: A route for 0.0.0.0/0 that points to a NAT gateway.
NAT gateways allow instances in private subnets to access the internet, not the other way around.
Option B: A route for 0.0.0.0/0 that points to an egress-only internet gateway.
Egress-only gateways are for IPv6 traffic and do not provide inbound internet access.
Option C: A route for 0.0.0.0/0 that points to an internet gateway.
Internet gateways provide both inbound and outbound internet access.
Option D: A route for 0.0.0.0/0 that points to an elastic network interface.
Elastic network interfaces do not provide internet routing.
Select the Best Solution:
Option C: Adding a route to the internet gateway ensures the EC2 instance is reachable from the internet.
Reference: Amazon VPC Internet Gateway
Configuring a route to the internet gateway ensures proper internet connectivity for the EC2 instance.
A SysOps administrator has enabled AWS CloudTrail in an AWS account. If CloudTrail is disabled, it must be re-enabled immediately.
What should the SysOps administrator do to meet these requirements WITHOUT writing custom code?
- A . Add the AWS account to AWS Organizations. Enable CloudTrail in the management account.
- B . Create an AWS Config rule that is invoked when CloudTrail configuration changes. Apply the AWS-ConfigureCloudTrailLogging automatic remediation action.
- C . Create an AWS Config rule that is invoked when CloudTrail configuration changes. Configure the rule to invoke an AWS Lambda function to enable CloudTrail.
- D . Create an Amazon EventBridge (Amazon CloudWatch Events) hourly rule with a schedule pattern to run an AWS Systems Manager Automation document to enable CloudTrail.
B
Explanation:
Step-by-Step
Understand the Problem:
CloudTrail must be re-enabled immediately if it is disabled.
Analyze the Requirements:
Implement an automatic solution to monitor and re-enable CloudTrail.
Evaluate the Options:
Option A: Add the AWS account to AWS Organizations and enable CloudTrail in the management account.
This provides centralized management but does not ensure automatic re-enabling of CloudTrail.
Option B: Create an AWS Config rule with automatic remediation.
AWS Config can monitor changes and automatically remediate by re-enabling CloudTrail.
Option C: Create an AWS Config rule that invokes a Lambda function.
This requires custom code, which is not preferred.
Option D: Create an EventBridge rule with a Systems Manager Automation document.
This can re-enable CloudTrail but is more complex compared to AWS Config’s built-in remediation.
Select the Best Solution:
Option B: Using AWS Config with automatic remediation ensures CloudTrail is re-enabled without writing custom code.
Reference: AWS Config Rules
Automatic Remediation with AWS Config
Creating an AWS Config rule with automatic remediation ensures that CloudTrail is immediately re-enabled if it is disabled.
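A sketch of the rule and its automatic remediation with boto3 (the rule name, trail name, and remediation role are assumptions; AWS-ConfigureCloudTrailLogging is the Systems Manager document named in the answer, and its exact parameter names should be verified before use):

```python
import boto3

config = boto3.client("config")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "cloudtrail-enabled",
        "Source": {"Owner": "AWS", "SourceIdentifier": "CLOUD_TRAIL_ENABLED"},  # managed rule
        "MaximumExecutionFrequency": "One_Hour",
    }
)

config.put_remediation_configurations(
    RemediationConfigurations=[{
        "ConfigRuleName": "cloudtrail-enabled",
        "TargetType": "SSM_DOCUMENT",
        "TargetId": "AWS-ConfigureCloudTrailLogging",
        "Automatic": True,
        "MaximumAutomaticAttempts": 3,
        "RetryAttemptSeconds": 60,
        "Parameters": {   # parameter names assumed; check the document's schema before use
            "TrailName": {"StaticValue": {"Values": ["management-trail"]}},
            "AutomationAssumeRole": {"StaticValue": {
                "Values": ["arn:aws:iam::111122223333:role/ConfigRemediationRole"]}},
        },
    }]
)
```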
A company has a stateless application that runs on four Amazon EC2 instances. The application requires four instances at all times to support all traffic. A SysOps administrator must design a highly available, fault-tolerant architecture that continually supports all traffic if one Availability Zone becomes unavailable.
Which configuration meets these requirements?
- A . Deploy two Auto Scaling groups in two Availability Zones with a minimum capacity of two instances in each group.
- B . Deploy an Auto Scaling group across two Availability Zones with a minimum capacity of four instances.
- C . Deploy an Auto Scaling group across three Availability Zones with a minimum capacity of four instances.
- D . Deploy an Auto Scaling group across three Availability Zones with a minimum capacity of six instances.
D
Explanation:
To remain fault tolerant, the application must still have four running instances after an entire Availability Zone is lost. Spreading an Auto Scaling group across three Availability Zones with a minimum capacity of six instances places two instances in each zone, so four instances keep running if any single zone becomes unavailable. Options A and B leave only two instances running after an Availability Zone failure, and option C can leave as few as two instances immediately after a zone failure until the Auto Scaling group replaces them.
A company’s backend infrastructure contains an Amazon EC2 instance in a private subnet. The private subnet has a route to the internet through a NAT gateway in a public subnet. The instance must allow connectivity to a secure web server on the internet to retrieve data at regular intervals.
The client software times out with an error message that indicates that the client software could not establish the TCP connection.
What should a SysOps administrator do to resolve this error?
- A . Add an inbound rule to the security group for the EC2 instance with the following parameters: Type – HTTP, Source – 0.0.0.0/0.
- B . Add an inbound rule to the security group for the EC2 instance with the following parameters: Type – HTTPS, Source – 0.0.0.0/0.
- C . Add an outbound rule to the security group for the EC2 instance with the following parameters: Type – HTTP, Destination – 0.0.0.0/0.
- D . Add an outbound rule to the security group for the EC2 instance with the following parameters: Type – HTTPS, Destination – 0.0.0.0/0.
D
Explanation:
To allow the EC2 instance in the private subnet to establish a secure connection to an external web server, follow these steps:
Modify Security Group:
Add an outbound rule to the security group of the EC2 instance with the following parameters:
Type: HTTPS
Destination: 0.0.0.0/0
This allows outbound HTTPS traffic to the internet.
Reference: Security Group Rules
Ensure NAT Gateway Configuration:
Ensure that the NAT gateway is properly configured in the public subnet to allow internet access for the instances in the private subnet.
Reference: NAT Gateways
This configuration will resolve the connectivity issue.
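For reference, the outbound rule described above could be added with a short boto3 call; the security group ID is a placeholder.
import boto3

ec2 = boto3.client("ec2")

# Allow outbound TCP 443 from the instance's security group to any destination.
ec2.authorize_security_group_egress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "Outbound HTTPS"}],
        }
    ],
)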
A software development company has multiple developers who work on the same product. Each developer must have their own development environment, and these development environments must be identical. Each development environment consists of Amazon EC2 instances and an Amazon RDS DB instance. The development environments should be created only when necessary, and they must be terminated each night to minimize costs.
What is the MOST operationally efficient solution that meets these requirements?
- A . Provide developers with access to the same AWS CloudFormation template so that they can provision their development environment when necessary. Schedule a nightly cron job on each development instance to stop all running processes to reduce CPU utilization to nearly zero.
- B . Provide developers with access to the same AWS CloudFormation template so that they can provision their development environment when necessary. Schedule a nightly Amazon EventBridge (Amazon CloudWatch Events) rule to invoke an AWS Lambda function to delete the AWS CloudFormation stacks.
- C . Provide developers with CLI commands so that they can provision their own development environment when necessary. Schedule a nightly Amazon EventBridge (Amazon CloudWatch Events) rule to invoke an AWS Lambda function to terminate all EC2 instances and the DB instance.
- D . Provide developers with CLI commands so that they can provision their own development environment when necessary. Schedule a nightly Amazon EventBridge (Amazon CloudWatch Events) rule to cause AWS CloudFormation to delete all of the development environment resources.
B
Explanation:
To efficiently manage and automate the creation and termination of development environments:
AWS CloudFormation Templates:
Provide a standardized CloudFormation template for developers to create identical development environments.
Reference: AWS CloudFormation User Guide
Automate Termination:
Use Amazon EventBridge (CloudWatch Events) to schedule a nightly rule that invokes an AWS Lambda function.
The Lambda function should be designed to delete the CloudFormation stacks created for development environments.
Reference: Amazon EventBridge
This solution ensures operational efficiency and cost management.
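A minimal sketch of the nightly Lambda handler follows, assuming a hypothetical "dev-env-" stack name prefix that identifies the developer stacks.
import boto3

cloudformation = boto3.client("cloudformation")

def handler(event, context):
    """Delete every development-environment CloudFormation stack each night."""
    paginator = cloudformation.get_paginator("describe_stacks")
    for page in paginator.paginate():
        for stack in page["Stacks"]:
            # Assumed naming convention: developer stacks start with "dev-env-".
            if stack["StackName"].startswith("dev-env-"):
                cloudformation.delete_stack(StackName=stack["StackName"])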
A company runs a stateless application that is hosted on an Amazon EC2 instance. Users are reporting performance issues. A SysOps administrator reviews the Amazon CloudWatch metrics for the application and notices that the instance’s CPU utilization frequently reaches 90% during business hours.
What is the MOST operationally efficient solution that will improve the application’s responsiveness?
- A . Configure CloudWatch logging on the EC2 instance. Configure a CloudWatch alarm for CPU utilization to alert the SysOps administrator when CPU utilization goes above 90%.
- B . Configure an AWS Client VPN connection to allow the application users to connect directly to the EC2 instance private IP address to reduce latency.
- C . Create an Auto Scaling group, and assign it to an Application Load Balancer. Configure a target tracking scaling policy that is based on the average CPU utilization of the Auto Scaling group.
- D . Create a CloudWatch alarm that activates when the EC2 instance’s CPU utilization goes above 80%. Configure the alarm to invoke an AWS Lambda function that vertically scales the instance.
C
Explanation:
To improve application responsiveness and handle high CPU utilization:
Create Auto Scaling Group:
Create an Auto Scaling group (ASG) for the EC2 instances running the application.
Reference: Auto Scaling Groups
Assign to Application Load Balancer:
Use an Application Load Balancer (ALB) to distribute traffic across the instances in the ASG.
Reference: Application Load Balancers
Configure Target Tracking Scaling Policy:
Set up a target tracking scaling policy based on average CPU utilization, for example, keeping CPU utilization around 50-60%.
Reference: Scaling Policies for Auto Scaling
This configuration ensures that the application scales out to handle increased load and improves performance during peak times.
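As an example, the target tracking policy could be attached with boto3 as follows; the group name and the 50% target are placeholder values.
import boto3

autoscaling = boto3.client("autoscaling")

# Keep the group's average CPU utilization around 50%; the group name is a placeholder.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-app-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)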
A company is testing Amazon Elasticsearch Service (Amazon ES) as a solution for analyzing system logs from a fleet of Amazon EC2 instances. During the test phase, the domain operates on a single-node cluster. A SysOps administrator needs to transition the test domain into a highly available production-grade deployment.
Which Amazon ES configuration should the SysOps administrator use to meet this requirement?
- A . Use a cluster of four data nodes across two AWS Regions. Deploy four dedicated master nodes in each Region.
- B . Use a cluster of six data nodes across three Availability Zones. Use three dedicated master nodes.
- C . Use a cluster of six data nodes across three Availability Zones. Use six dedicated master nodes.
- D . Use a cluster of eight data nodes across two Availability Zones. Deploy four master nodes in a failover AWS Region.
B
Explanation:
To transition the Amazon Elasticsearch Service (Amazon ES) domain to a highly available, production-grade deployment:
Cluster Configuration:
Use a cluster of six data nodes to handle data ingestion and querying.
Distribute these nodes across three Availability Zones (AZs) for high availability and fault tolerance.
Reference: Amazon Elasticsearch Service Best Practices
Dedicated Master Nodes:
Use three dedicated master nodes to manage the cluster state and perform cluster management tasks.
This separation of master and data nodes helps in maintaining cluster stability and performance.
Reference: Dedicated Master Nodes
This configuration ensures that the Elasticsearch cluster is highly available and can handle production workloads effectively.
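A possible boto3 sketch of this cluster layout is shown below; the domain name and instance types are placeholder assumptions.
import boto3

es = boto3.client("es")

# Six data nodes spread across three Availability Zones, plus three dedicated master nodes.
es.update_elasticsearch_domain_config(
    DomainName="system-logs",  # placeholder domain name
    ElasticsearchClusterConfig={
        "InstanceType": "r5.large.elasticsearch",
        "InstanceCount": 6,
        "ZoneAwarenessEnabled": True,
        "ZoneAwarenessConfig": {"AvailabilityZoneCount": 3},
        "DedicatedMasterEnabled": True,
        "DedicatedMasterType": "c5.large.elasticsearch",
        "DedicatedMasterCount": 3,
    },
)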
A company recently acquired another corporation and all of that corporation’s AWS accounts. A financial analyst needs the cost data from these accounts. A SysOps administrator uses Cost Explorer to generate cost and usage reports. The SysOps administrator notices that "No Tagkey" represents 20% of the monthly cost.
What should the SysOps administrator do to tag the "No Tagkey" resources?
- A . Add the accounts to AWS Organizations. Use a service control policy (SCP) to tag all the untagged resources.
- B . Use an AWS Config rule to find the untagged resources. Set the remediation action to terminate the resources.
- C . Use Cost Explorer to find and tag all the untagged resources.
- D . Use Tag Editor to find and tag all the untagged resources.
D
Explanation:
"You can add tags to resources when you create the resource. You can use the resource’s service console or API to add, change, or remove those tags one resource at a time. To add tags to―or edit or delete tags of―multiple resources at once, use Tag Editor. With Tag Editor, you search for the resources that you want to tag, and then manage tags for the resources in your search results." https://docs.aws.amazon.com/ARG/latest/userguide/tag-editor.html
A company is using Amazon Elastic File System (Amazon EFS) to share a file system among several Amazon EC2 instances. As usage increases, users report that file retrieval from the EFS file system is slower than normal.
Which action should a SysOps administrator take to improve the performance of the file system?
- A . Configure the file system for Provisioned Throughput.
- B . Enable encryption in transit on the file system.
- C . Identify any unused files in the file system, and remove the unused files.
- D . Resize the Amazon Elastic Block Store (Amazon EBS) volume of each of the EC2 instances.
A
Explanation:
Step-by-Step
Understand the Problem:
Users report that file retrieval from the Amazon EFS file system is slower than normal.
Analyze the Requirements:
Improve the performance of the EFS file system to handle increased usage and file retrieval speed.
Evaluate the Options:
Option A: Configure the file system for Provisioned Throughput.
Provisioned Throughput mode allows you to specify the throughput of your file system independent of the amount of data stored.
This option is suitable for applications with high throughput-to-storage ratio requirements and ensures consistent performance.
Option B: Enable encryption in transit on the file system.
While this enhances security, it does not directly improve performance.
Option C: Identify any unused files in the file system, and remove the unused files.
This may free up some resources but does not address the root cause of slow performance.
Option D: Resize the Amazon EBS volume of each of the EC2 instances.
EFS performance issues are not directly related to the size of EBS volumes attached to EC2 instances.
Select the Best Solution:
Option A: Configuring the file system for Provisioned Throughput ensures consistent performance by allowing you to set the required throughput regardless of the amount of data stored.
Reference: Amazon EFS Performance
Provisioned Throughput Mode
Configuring Provisioned Throughput for Amazon EFS provides a consistent and higher performance level, which is suitable for high throughput requirements.
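For illustration, switching an existing file system to Provisioned Throughput might look like the following boto3 call; the file system ID and the 128 MiB/s value are placeholders.
import boto3

efs = boto3.client("efs")

# Move the file system to Provisioned Throughput mode so performance no longer
# depends on the amount of data stored.
efs.update_file_system(
    FileSystemId="fs-0123456789abcdef0",   # placeholder file system ID
    ThroughputMode="provisioned",
    ProvisionedThroughputInMibps=128,       # placeholder throughput target
)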
A SysOps administrator is helping a development team deploy an application to AWS. The AWS CloudFormation template includes an Amazon Linux EC2 instance, an Amazon Aurora DB cluster, and a hard-coded database password that must be rotated every 90 days.
What is the MOST secure way to manage the database password?
- A . Use the AWS::SecretsManager::Secret resource with the GenerateSecretString property to automatically generate a password. Use the AWS::SecretsManager::RotationSchedule resource to define a rotation schedule for the password. Configure the application to retrieve the secret from AWS Secrets Manager to access the database.
- B . Use the AWS::SecretsManager::Secret resource with the SecretString property. Accept a password as a CloudFormation parameter. Use the AllowedPattern property of the CloudFormation parameter to require a minimum length, uppercase and lowercase letters, and special characters. Configure the application to retrieve the secret from AWS Secrets Manager to access the database.
- C . Use the AWS::SSM::Parameter resource. Accept input as a CloudFormation parameter to store the parameter as a secure string. Configure the application to retrieve the parameter from AWS Systems Manager Parameter Store to access the database.
- D . Use the AWS::SSM::Parameter resource. Accept input as a CloudFormation parameter to store the parameter as a string. Configure the application to retrieve the parameter from AWS Systems Manager Parameter Store to access the database.
A
Explanation:
Step-by-Step
Understand the Problem:
Manage a database password securely and rotate it every 90 days.
Analyze the Requirements:
Ensure secure management and automatic rotation of the database password. Minimize manual intervention and risk of exposing the password.
Evaluate the Options:
Option A: Use the AWS SecretsManager Secret resource with the GenerateSecretString property.
Secrets Manager can automatically generate a strong password.
The RotationSchedule resource defines a rotation schedule to rotate the password every 90 days.
Option B: Use the AWS SecretsManager Secret resource with the SecretString property.
Requires accepting a password as a CloudFormation parameter and does not automate password generation.
Option C: Use the AWS SSM Parameter resource.
AWS Systems Manager Parameter Store can store secure strings, but it does not support automatic rotation.
Option D: Use the AWS SSM Parameter resource without secure string.
This option does not offer the security of Secrets Manager and does not support automatic rotation.
Select the Best Solution:
Option A: Using AWS Secrets Manager with the GenerateSecretString property and RotationSchedule resource ensures secure management and automatic rotation of the database password.
Reference: AWS Secrets Manager
Rotate AWS Secrets Manager Secrets
AWS Secrets Manager provides secure storage, automatic rotation, and seamless integration with applications for accessing secrets.
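The answer describes the declarative CloudFormation resources; as a rough API-level equivalent, the sketch below generates a password, stores it, and schedules rotation with boto3. The secret name and rotation Lambda ARN are placeholder assumptions.
import json
import boto3

secretsmanager = boto3.client("secretsmanager")

# Generate a strong password instead of hard-coding one.
password = secretsmanager.get_random_password(
    PasswordLength=32, ExcludeCharacters='"@/\\'
)["RandomPassword"]

secret = secretsmanager.create_secret(
    Name="prod/aurora/app",  # placeholder secret name
    SecretString=json.dumps({"username": "appuser", "password": password}),
)

# Rotate every 90 days using a rotation Lambda function (placeholder ARN).
secretsmanager.rotate_secret(
    SecretId=secret["ARN"],
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:aurora-rotation",
    RotationRules={"AutomaticallyAfterDays": 90},
)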
An application team uses an Amazon Aurora MySQL DB cluster with one Aurora Replica. The application team notices that the application read performance degrades when user connections exceed 200. The number of user connections is typically consistent around 180, with occasional sudden increases above 200 connections. The application team wants the application to automatically scale as user demand increases or decreases.
Which solution will meet these requirements?
- A . Migrate to a new Aurora multi-master DB cluster. Modify the application database connection string.
- B . Modify the DB cluster by changing to serverless mode whenever user connections exceed 200.
- C . Create an auto scaling policy with a target metric of 195 DatabaseConnections
- D . Modify the DB cluster by increasing the Aurora Replica instance size.
C
Explanation:
Step-by-Step
Understand the Problem:
The application read performance degrades when user connections exceed 200.
Need to automatically scale the database based on user demand.
Analyze the Requirements:
Implement auto scaling based on user connections to ensure optimal performance.
Evaluate the Options:
Option A: Migrate to a new Aurora multi-master DB cluster.
Multi-master clusters provide read and write scalability but may require significant changes to the application.
Option B: Modify the DB cluster to serverless mode.
Aurora Serverless provides automatic scaling, but it is more suitable for variable workloads with unpredictable demand.
Option C: Create an auto scaling policy with a target metric of 195 DatabaseConnections.
This directly addresses the need to scale based on user connections.
Auto scaling can add or remove replicas to maintain optimal performance.
Option D: Increase the Aurora Replica instance size.
This may improve performance but does not address scalability for sudden increases in user connections.
Select the Best Solution:
Option C: Creating an auto scaling policy with a target metric of 195 DatabaseConnections ensures the DB cluster scales automatically to handle increased load.
Reference: Amazon Aurora Auto Scaling
Scaling Aurora DB Instances
Auto scaling based on DatabaseConnections ensures the Aurora DB cluster maintains optimal performance as user demand fluctuates.
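As a sketch of option C, Aurora Replica auto scaling is configured through Application Auto Scaling; the cluster name and capacity limits below are placeholders.
import boto3

appautoscaling = boto3.client("application-autoscaling")

# Register the Aurora cluster's replica count as a scalable target.
appautoscaling.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:my-aurora-cluster",   # placeholder cluster name
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=4,
)

# Track average database connections per reader at 195.
appautoscaling.put_scaling_policy(
    PolicyName="aurora-connections-tracking",
    ServiceNamespace="rds",
    ResourceId="cluster:my-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageDatabaseConnections"
        },
        "TargetValue": 195.0,
    },
)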
A company’s SysOps administrator has created an Amazon EC2 instance with custom software that will be used as a template for all new EC2 instances across multiple AWS accounts. The Amazon Elastic Block Store (Amazon EBS) volumes that are attached to the EC2 instance are encrypted with
AWS managed keys.
The SysOps administrator creates an Amazon Machine Image (AMI) of the custom EC2 instance and plans to share the AMI with the company’s other AWS accounts. The company requires that all AMIs are encrypted with AWS Key Management Service (AWS KMS) keys and that only authorized AWS accounts can access the shared AMIs.
Which solution will securely share the AMI with the other AWS accounts?
- A . In the account where the AMI was created, create a customer master key (CMK). Modify the key policy to provide kms:DescribeKey, kms:ReEncrypt*, kms:CreateGrant, and kms:Decrypt permissions to the AWS accounts that the AMI will be shared with. Modify the AMI permissions to specify the AWS account numbers that the AMI will be shared with.
- B . In the account where the AMI was created, create a customer master key (CMK). Modify the key policy to provide kms:DescribeKey, kms:ReEncrypt*, kms:CreateGrant, and kms:Decrypt permissions to the AWS accounts that the AMI will be shared with. Create a copy of the AMI, and specify the CMK. Modify the permissions on the copied AMI to specify the AWS account numbers that the AMI will be shared with.
- C . In the account where the AMI was created, create a customer master key (CMK). Modify the key policy to provide kms:DescribeKey, kms:ReEncrypt*, kms:CreateGrant, and kms:Decrypt permissions to the AWS accounts that the AMI will be shared with. Create a copy of the AMI, and specify the CMK. Modify the permissions on the copied AMI to make it public.
- D . In the account where the AMI was created, modify the key policy of the AWS managed key to provide kms:DescribeKey, kms:ReEncrypt*, kms:CreateGrant, and kms:Decrypt permissions to the AWS accounts that the AMI will be shared with. Modify the AMI permissions to specify the AWS account numbers that the AMI will be shared with.
B
Explanation:
Step-by-Step
Understand the Problem:
Share an AMI with other AWS accounts while ensuring it is encrypted with AWS KMS keys and accessible only to authorized accounts.
Analyze the Requirements:
Encrypt the AMI with a customer-managed key (CMK).
Share the AMI securely with specific AWS accounts.
Evaluate the Options:
Option A: Create a CMK and modify the key policy but do not create a copy of the AMI.
This does not re-encrypt the AMI with the CMK.
Option B: Create a CMK, modify the key policy, create a copy of the AMI, and specify the CMK.
This ensures the AMI is encrypted with the CMK and shared securely.
Option C: Create a CMK, modify the key policy, create a copy of the AMI, and make it public.
Making the AMI public does not meet the requirement of sharing with specific accounts only.
Option D: Modify the key policy of the AWS managed key and share the AMI.
AWS managed keys cannot have their key policies modified to add permissions for other accounts.
Select the Best Solution:
Option B: Creating a CMK, modifying the key policy, creating a copy of the AMI with the CMK, and specifying the permissions ensures the AMI is securely shared with specific AWS accounts.
Reference: Copying an AMI
Creating and Managing Keys
Sharing AMIs
This solution ensures the AMI is encrypted with a customer-managed key and shared securely with the specified AWS accounts.
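A condensed boto3 sketch of the copy-and-share flow is shown below; the AMI ID, Region, KMS key ARN, and target account ID are placeholders, and the key policy must separately grant that account the permissions listed in the option.
import boto3

ec2 = boto3.client("ec2")

# Copy the AMI and re-encrypt its snapshots with the customer managed KMS key.
copy = ec2.copy_image(
    Name="custom-software-shared",
    SourceImageId="ami-0123456789abcdef0",
    SourceRegion="us-east-1",
    Encrypted=True,
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
)

# Wait until the copy is available, then share it with the authorized account only.
ec2.get_waiter("image_available").wait(ImageIds=[copy["ImageId"]])
ec2.modify_image_attribute(
    ImageId=copy["ImageId"],
    LaunchPermission={"Add": [{"UserId": "444455556666"}]},
)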
A SysOps administrator is provisioning an Amazon Elastic File System (Amazon EFS) file system to provide shared storage across multiple Amazon EC2 instances. The instances all exist in the same VPC across multiple Availability Zones. There are two instances in each Availability Zone. The SysOps administrator must make the file system accessible to each instance with the lowest possible latency.
Which solution will meet these requirements?
- A . Create a mount target for the EFS file system in the VPC. Use the mount target to mount the file system on each of the instances
- B . Create a mount target for the EFS file system in one Availability Zone of the VPC. Use the mount target to mount the file system on the instances in that Availability Zone. Share the directory with the other instances.
- C . Create a mount target for each instance. Use each mount target to mount the EFS file system on each respective instance.
- D . Create a mount target in each Availability Zone of the VPC. Use the mount target to mount the EFS file system on the instances in the respective Availability Zone.
D
Explanation:
A mount target provides an IP address for an NFSv4 endpoint at which you can mount an Amazon EFS file system. You mount your file system using its Domain Name Service (DNS) name, which resolves to the IP address of the EFS mount target in the same Availability Zone as your EC2 instance. You can create one mount target in each Availability Zone in an AWS Region. If there are multiple subnets in an Availability Zone in your VPC, you create a mount target in one of the subnets. Then all EC2 instances in that Availability Zone share that mount target. https://docs.aws.amazon.com/efs/latest/ug/how-it-works.html
A SysOps administrator has used AWS CloudFormation to deploy a serverless application into a production VPC. The application consists of an AWS Lambda function, an Amazon DynamoDB table, and an Amazon API Gateway API. The SysOps administrator must delete the AWS CloudFormation stack without deleting the DynamoDB table.
Which action should the SysOps administrator take before deleting the AWS CloudFormation stack?
- A . Add a Retain deletion policy to the DynamoDB resource in the AWS CloudFormation stack
- B . Add a Snapshot deletion policy to the DynamoDB resource in the AWS CloudFormation stack.
- C . Enable termination protection on the AWS Cloud Formation stack.
- D . Update the application’s IAM policy with a Deny statement for the dynamodb:DeleteTable action.
A
Explanation:
To delete the AWS CloudFormation stack without deleting the DynamoDB table, you need to apply a deletion policy to the DynamoDB resource. The Retain deletion policy ensures that the specified resource is not deleted when the stack is deleted. Instead, it is retained.
Steps:
Modify the CloudFormation Template:
Add a deletion policy to the DynamoDB table resource.
{
  "Resources": {
    "MyDynamoDBTable": {
      "Type": "AWS::DynamoDB::Table",
      "DeletionPolicy": "Retain",
      …
    }
  }
}
Reference: AWS CloudFormation DeletionPolicy Attribute
Update the Stack:
Update the CloudFormation stack with the modified template.
Reference: Updating a Stack
Delete the Stack:
Proceed to delete the stack. The DynamoDB table will be retained.
Reference: Deleting a Stack
A SysOps administrator is troubleshooting an AWS CloudFormation template that creates multiple Amazon EC2 instances.
The template is working in us-east-1, but it is failing in us-west-2 with an error code.
How should the administrator ensure that the AWS CloudFormation template works in every region?
- A . Copy the source region’s Amazon Machine Image (AMI) to the destination region and assign it the same ID.
- B . Edit the AWS CloudFormation template to specify the region code as part of the fully qualified AMI ID.
- C . Edit the AWS CloudFormation template to offer a drop-down list of all AMIs to the user by using the AWS::EC2::AMI::ImageID control.
- D . Modify the AWS CloudFormation template by including the AMI IDs in the "Mappings" section. Refer to the proper mapping within the template for the proper AMI ID.
D
Explanation:
To ensure that the AWS CloudFormation template works in every region, you should use the Mappings section to specify region-specific AMI IDs. This allows the template to dynamically reference the correct AMI ID based on the region where the stack is being deployed.
Steps:
Add Mappings to the Template:
Define the AMI IDs for each region in the Mappings section of the CloudFormation template.
{
  "Mappings": {
    "RegionMap": {
      "us-east-1": { "AMI": "ami-0123456789abcdef0" },
      "us-west-2": { "AMI": "ami-abcdef0123456789" }
    }
  }
}
Reference the Mapping in the Template:
Use the Fn::FindInMap function to reference the correct AMI ID based on the region.
{
  "Resources": {
    "MyEC2Instance": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": { "Fn::FindInMap": [ "RegionMap", { "Ref": "AWS::Region" }, "AMI" ] },
        …
      }
    }
  }
}
Deploy the Template:
Deploy the CloudFormation stack in any region, and it will use the correct AMI ID.
Reference: AWS CloudFormation Mappings
A company runs its infrastructure on Amazon EC2 instances that run in an Auto Scaling group. Recently, the company promoted faulty code to the entire EC2 fleet. The faulty code caused the Auto Scaling group to scale in and terminate the instances before any of the application logs could be retrieved.
What should a SysOps administrator do to retain the application logs after instances are terminated?
- A . Configure an Auto Scaling lifecycle hook to create a snapshot of the ephemeral storage upon termination of the instances.
- B . Create a new Amazon Machine Image (AMI) that has the Amazon CloudWatch agent installed and configured to send logs to Amazon CloudWatch Logs. Update the launch template to use the new AMI.
- C . Create a new Amazon Machine Image (AMI) that has a custom script configured to send logs to AWS CloudTrail. Update the launch template to use the new AMI.
- D . Install the Amazon CloudWatch agent on the Amazon Machine Image (AMI) that is defined in the launch template. Configure the CloudWatch agent to back up the logs to ephemeral storage.
B
Explanation:
To retain application logs after instances are terminated, configure the EC2 instances to send logs to Amazon CloudWatch Logs.
Steps:
Create a New AMI:
Create a new AMI that includes the Amazon CloudWatch agent.
Install and configure the CloudWatch agent to send logs to CloudWatch Logs.
Reference: Installing the CloudWatch Agent
Update the Launch Template:
Update the Auto Scaling group’s launch template or launch configuration to use the new AMI.
Reference: Launch Templates
Configure Log Group and Retention:
Ensure that the CloudWatch Logs group is properly configured with the desired retention policies.
Reference: CloudWatch Logs Retention
Test the Configuration:
Test the setup to ensure logs are properly being sent to CloudWatch Logs upon instance termination.
Reference: Using CloudWatch Logs
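Once the new AMI exists, pointing the launch template at it might look like the following boto3 sketch; the template name and AMI ID are placeholders.
import boto3

ec2 = boto3.client("ec2")

# Create a new launch template version that uses the AMI with the CloudWatch agent baked in.
new_version = ec2.create_launch_template_version(
    LaunchTemplateName="web-fleet-template",   # placeholder template name
    SourceVersion="$Latest",
    VersionDescription="AMI with CloudWatch agent preinstalled",
    LaunchTemplateData={"ImageId": "ami-0abcdef1234567890"},  # placeholder AMI ID
)["LaunchTemplateVersion"]["VersionNumber"]

# Make the new version the default so the Auto Scaling group launches from it.
ec2.modify_launch_template(
    LaunchTemplateName="web-fleet-template",
    DefaultVersion=str(new_version),
)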
A company has a critical serverless application that uses multiple AWS Lambda functions. Each Lambda function generates 1 GB of log data daily in its own Amazon CloudWatch Logs log group. The company’s security team asks for a count of application errors, grouped by type, across all of the log groups.
What should a SysOps administrator do to meet this requirement?
- A . Perform a CloudWatch Logs Insights query that uses the stats command and count function.
- B . Perform a CloudWatch Logs search that uses the groupby keyword and count function.
- C . Perform an Amazon Athena query that uses the SELECT and GROUP BY keywords.
- D . Perform an Amazon RDS query that uses the SELECT and GROUP BY keywords.
A
Explanation:
To count application errors grouped by type across all CloudWatch Logs log groups, use CloudWatch Logs Insights.
Steps:
Access CloudWatch Logs Insights:
Open the CloudWatch console and navigate to Logs Insights.
Reference: Analyzing Log Data with CloudWatch Logs Insights
Perform the Query:
Use the following query to count errors grouped by type:
fields @timestamp, @message
| filter @message like /error/
| stats count(*) by @message
This query filters for log messages containing the word "error" and counts occurrences grouped by the message content.
Run the Query Across Log Groups:
Select the log groups for all Lambda functions and run the query.
Reference: CloudWatch Logs Insights Query Syntax
Analyze Results:
Analyze the query results to get a count of application errors grouped by type.
Reference: Using CloudWatch Logs Insights
This method provides an efficient way to analyze and count application errors across multiple log
groups.
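The same query can be run programmatically across several log groups with boto3, as sketched below; the log group names are placeholders.
import time
import boto3

logs = boto3.client("logs")

# Run the Logs Insights query across several Lambda log groups at once.
query = logs.start_query(
    logGroupNames=[
        "/aws/lambda/orders-function",    # placeholder log group names
        "/aws/lambda/payments-function",
    ],
    startTime=int(time.time()) - 86400,   # last 24 hours
    endTime=int(time.time()),
    queryString="fields @message | filter @message like /error/ | stats count(*) by @message",
)

# Poll until the query finishes, then read the grouped error counts.
while True:
    results = logs.get_query_results(queryId=query["queryId"])
    if results["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)
print(results["results"])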
A company monitors its account activity using AWS CloudTrail, and is concerned that some log files are being tampered with after the logs have been delivered to the account’s Amazon S3 bucket.
Moving forward, how can the SysOps administrator confirm that the log files have not been modified after being delivered to the S3 bucket?
- A . Stream the CloudTrail logs to Amazon CloudWatch Logs to store logs at a secondary location.
- B . Enable log file integrity validation and use digest files to verify the hash value of the log file.
- C . Replicate the S3 log bucket across regions, and encrypt log files with S3 managed keys.
- D . Enable S3 server access logging to track requests made to the log bucket for security audits.
B
Explanation:
When you enable log file integrity validation, CloudTrail creates a hash for every log file that it delivers. Every hour, CloudTrail also creates and delivers a file that references the log files for the last hour and contains a hash of each. This file is called a digest file. CloudTrail signs each digest file using the private key of a public and private key pair. After delivery, you can use the public key to validate the digest file. CloudTrail uses different key pairs for each AWS region
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-file-validation-intro.html
A team of on-call engineers frequently needs to connect to Amazon EC2 instances in a private subnet to troubleshoot and run commands. The instances use either the latest AWS-provided Windows Amazon Machine Images (AMIs) or Amazon Linux AMIs.
The team has an existing IAM role for authorization. A SysOps administrator must provide the team with access to the instances by granting IAM permissions to this role.
Which solution will meet this requirement?
- A . Add a statement to the IAM role policy to allow the ssm:StartSession action on the instances. Instruct the team to use AWS Systems Manager Session Manager to connect to the instances by using the assumed IAM role.
- B . Associate an Elastic IP address and a security group with each instance. Add the engineers’ IP addresses to the security group inbound rules. Add a statement to the IAM role policy to allow the ec2:AuthorizeSecurityGroupIngress action so that the team can connect to the instances.
- C . Create a bastion host with an EC2 instance, and associate the bastion host with the VPC. Add a statement to the IAM role policy to allow the ec2:CreateVpnConnection action on the bastion host. Instruct the team to use the bastion host endpoint to connect to the instances.
- D . Create an internet-facing Network Load Balancer. Use two listeners. Forward port 22 to a target group of Linux instances. Forward port 3389 to a target group of Windows instances. Add a statement to the IAM role policy to allow the ec2:CreateRoute action so that the team can connect to the instances.
A
Explanation:
Step-by-Step
Understand the Problem:
Engineers need to connect to EC2 instances in a private subnet for troubleshooting. The instances are using Windows or Amazon Linux AMIs. The team already has an IAM role for authorization.
Analyze the Requirements:
Provide secure and efficient access to the instances without exposing them directly to the internet.
Utilize existing IAM role for access control.
Evaluate the Options:
Option A: Use AWS Systems Manager Session Manager.
Allows secure and auditable SSH or RDP access to EC2 instances without the need for bastion hosts or opening inbound ports.
Add a policy to allow the ssm:StartSession action.
Option B: Use Elastic IP and security group.
Exposes instances to direct access, increasing security risks.
Option C: Use a bastion host.
Requires additional infrastructure and maintenance.
Option D: Use an internet-facing Network Load Balancer.
Exposes instances to direct access via load balancer, not ideal for private subnets.
Select the Best Solution:
Option A: Using AWS Systems Manager Session Manager is the most secure and efficient solution. It eliminates the need for additional infrastructure and avoids exposing instances to the internet.
Reference: AWS Systems Manager Session Manager
Controlling Access to Session Manager
AWS Systems Manager Session Manager provides secure and auditable access to EC2 instances in a private subnet using IAM roles.
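A minimal sketch of the IAM permissions, added as an inline policy on the existing role with boto3, is shown below; the role name, account ID, and the session-management statement follow the common Session Manager policy pattern and are assumptions.
import json
import boto3

iam = boto3.client("iam")

# Inline policy allowing Session Manager sessions to the instances.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ssm:StartSession",
            "Resource": "arn:aws:ec2:*:111122223333:instance/*",  # placeholder account ID
        },
        {
            "Effect": "Allow",
            "Action": ["ssm:TerminateSession", "ssm:ResumeSession"],
            "Resource": "arn:aws:ssm:*:111122223333:session/${aws:username}-*",
        },
    ],
}

iam.put_role_policy(
    RoleName="oncall-engineers-role",          # placeholder role name
    PolicyName="AllowSessionManagerAccess",
    PolicyDocument=json.dumps(policy),
)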
A company has an AWS CloudFormation template that creates an Amazon S3 bucket. A user authenticates to the corporate AWS account with their Active Directory credentials and attempts to deploy the CloudFormation template. However, the stack creation fails.
Which factors could cause this failure? (Select TWO.)
- A . The user’s IAM policy does not allow the cloudformation:CreateStack action.
- B . The user’s IAM policy does not allow the cloudformation:CreateStackSet action.
- C . The user’s IAM policy does not allow the s3:CreateBucket action.
- D . The user’s IAM policy explicitly denies the s3:ListBucket action.
- E . The user’s IAM policy explicitly denies the s3:PutObject action
AC
Explanation:
Understand the Problem:
A user attempts to deploy a CloudFormation template to create an S3 bucket but the stack creation fails.
The user authenticates using Active Directory credentials.
Analyze the Requirements:
Identify permissions required for successful CloudFormation stack creation.
Evaluate the Options:
Option A: The user’s IAM policy does not allow the cloudformation:CreateStack action.
Without this permission, the user cannot create CloudFormation stacks.
Option B: The user’s IAM policy does not allow the cloudformation:CreateStackSet action.
StackSet is used for managing stacks across multiple accounts and regions, not relevant for a single stack creation.
Option C: The user’s IAM policy does not allow the s3:CreateBucket action.
This permission is required to create an S3 bucket as part of the stack.
Option D: The user’s IAM policy explicitly denies the s3:ListBucket action.
This permission is not required for bucket creation but for listing existing buckets.
Option E: The user’s IAM policy explicitly denies the s3:PutObject action.
This permission is required to put objects in a bucket, not to create the bucket.
Select the Best Solution:
Option A and C: The user must have permissions for cloudformation:CreateStack and s3:CreateBucket to successfully create the stack and the S3 bucket.
Reference: AWS CloudFormation Permissions
IAM Policies and Permissions
Ensuring the user has the required permissions for cloudformation:CreateStack and s3:CreateBucket is crucial for successful stack creation.
A company runs a web application on three Amazon EC2 instances behind an Application Load Balancer (ALB). The company notices that random periods of increased traffic cause a degradation in the application’s performance. A SysOps administrator must scale the application to meet the increased traffic.
Which solution meets these requirements?
- A . Create an Amazon CloudWatch alarm to monitor application latency and increase the size of each EC2 instance if the desired threshold is reached.
- B . Create an Amazon EventBridge (Amazon CloudWatch Events) rule to monitor application latency and add an EC2 instance to the ALB if the desired threshold is reached.
- C . Deploy the application to an Auto Scaling group of EC2 instances with a target tracking scaling policy. Attach the ALB to the Auto Scaling group.
- D . Deploy the application to an Auto Scaling group of EC2 instances with a scheduled scaling policy. Attach the ALB to the Auto Scaling group.
C
Explanation:
Step-by-Step
Understand the Problem:
The web application experiences performance degradation during random periods of increased traffic.
Analyze the Requirements:
Implement a scalable solution to handle varying traffic loads. Maintain application performance during traffic spikes.
Evaluate the Options:
Option A: Monitor application latency with CloudWatch alarm and increase instance size.
Manually resizing instances is not efficient for handling random traffic spikes.
Option B: Use EventBridge rule to add EC2 instance to ALB.
This approach is not as efficient as Auto Scaling for dynamic traffic management.
Option C: Deploy to an Auto Scaling group with target tracking scaling policy.
Automatically adjusts the number of instances based on traffic demand.
Ensures consistent application performance by scaling in response to traffic changes.
Option D: Deploy to an Auto Scaling group with scheduled scaling policy.
Suitable for predictable traffic patterns but not for random traffic spikes.
Select the Best Solution:
Option C: Using an Auto Scaling group with a target tracking scaling policy ensures the application scales dynamically based on traffic, maintaining performance.
Reference: Amazon EC2 Auto Scaling
Target Tracking Scaling Policies
Auto Scaling with a target tracking policy provides a robust solution for handling random traffic increases by dynamically adjusting the number of instances.
A company has a new requirement stating that all resources in AWS must be tagged according to a set policy.
Which AWS service should be used to enforce and continually identify all resources that are not in compliance with the policy?
- A . AWS CloudTrail
- B . Amazon Inspector
- C . AWS Config
- D . AWS Systems Manager
C
Explanation:
Step-by-Step
Understand the Problem:
Enforce a policy that requires all AWS resources to be tagged according to company policy.
Continuously identify non-compliant resources.
Analyze the Requirements:
Implement a solution to monitor and enforce resource tagging compliance.
Evaluate the Options:
Option A: AWS CloudTrail.
Provides logging and monitoring of API calls but does not enforce tagging policies.
Option B: Amazon Inspector.
Primarily used for security assessments, not resource tagging compliance.
Option C: AWS Config.
Monitors and evaluates the configurations of AWS resources.
Can enforce compliance by using AWS Config rules to check resource tags.
Option D: AWS Systems Manager.
Provides operational insights and management but is not specifically designed for compliance enforcement.
Select the Best Solution:
Option C: AWS Config is designed for compliance and configuration monitoring, making it the ideal service for enforcing and identifying non-compliant resource tags.
Reference: AWS Config
Managing Resource Tag Compliance with AWS Config
AWS Config provides the necessary tools to enforce and monitor resource tagging compliance, ensuring all resources adhere to the set policy.
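For example, the managed required-tags rule could be registered with boto3 as follows; the tag keys are placeholder examples of the company policy.
import json
import boto3

config = boto3.client("config")

# Managed "required-tags" rule that flags resources missing the mandated tag keys.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "required-tags",
        "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
        "InputParameters": json.dumps({"tag1Key": "CostCenter", "tag2Key": "Owner"}),  # placeholder tag keys
    }
)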
A SysOps administrator is setting up an automated process to recover an Amazon EC2 instance in the event of an underlying hardware failure. The recovered instance must have the same private IP address and the same Elastic IP address that the original instance had. The SysOps team must receive an email notification when the recovery process is initiated.
Which solution will meet these requirements?
- A . Create an Amazon CloudWatch alarm for the EC2 instance, and specify the StatusCheckFailed_Instance metric. Add an EC2 action to the alarm to recover the instance. Add an alarm notification to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the SysOps team email address to the SNS topic.
- B . Create an Amazon CloudWatch alarm for the EC2 instance, and specify the StatusCheckFailed_System metric. Add an EC2 action to the alarm to recover the instance. Add an alarm notification to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the SysOps team email address to the SNS topic.
- C . Create an Auto Scaling group across three different subnets in the same Availability Zone with a minimum, maximum, and desired size of 1. Configure the Auto Scaling group to use a launch template that specifies the private IP address and the Elastic IP address. Add an activity notification for the Auto Scaling group to send an email message to the SysOps team through Amazon Simple Email Service (Amazon SES).
- D . Create an Auto Scaling group across three Availability Zones with a minimum, maximum, and desired size of 1. Configure the Auto Scaling group to use a launch template that specifies the private IP address and the Elastic IP address. Add an activity notification for the Auto Scaling group to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the SysOps team email address to the SNS topic.
B
Explanation:
You can create an Amazon CloudWatch alarm that monitors an Amazon EC2 instance and automatically recovers the instance if it becomes impaired due to an underlying hardware failure or a problem that requires AWS involvement to repair. Terminated instances cannot be recovered. A recovered instance is identical to the original instance, including the instance ID, private IP addresses, Elastic IP addresses, and all instance metadata. If the impaired instance has a public IPv4 address, the instance retains the public IPv4 address after recovery. If the impaired instance is in a placement group, the recovered instance runs in the placement group. When the StatusCheckFailed_System alarm is triggered, and the recover action is initiated, you will be notified by the Amazon SNS topic that you selected when you created the alarm and associated the recover action. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-recover.html
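A boto3 sketch of the alarm described above follows; the instance ID, the Region in the recover-action ARN, and the SNS topic ARN are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

# Recover the instance on a failed system status check and notify the team.
cloudwatch.put_metric_alarm(
    AlarmName="recover-i-0123456789abcdef0",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder instance ID
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[
        "arn:aws:automate:us-east-1:ec2:recover",
        "arn:aws:sns:us-east-1:111122223333:sysops-notifications",  # placeholder topic ARN
    ],
)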
A SysOps administrator must create a solution that immediately notifies software developers if an AWS Lambda function experiences an error.
Which solution will meet this requirement?
- A . Create an Amazon Simple Notification Service (Amazon SNS) topic with an email subscription for each developer. Create an Amazon CloudWatch alarm by using the Errors metric and the Lambda function name as a dimension. Configure the alarm to send a notification to the SNS topic when the alarm state reaches ALARM.
- B . Create an Amazon Simple Notification Service (Amazon SNS) topic with a mobile subscription for each developer. Create an Amazon EventBridge (Amazon CloudWatch Events) alarm by using LambdaError as the event pattern and the SNS topic name as a resource. Configure the alarm to send a notification to the SNS topic when the alarm state reaches ALARM.
- C . Verify each developer email address in Amazon Simple Email Service (Amazon SES). Create an Amazon CloudWatch rule by using the LambdaError metric and developer email addresses as dimensions. Configure the rule to send an email through Amazon SES when the rule state reaches ALARM.
- D . Verify each developer mobile phone in Amazon Simple Email Service (Amazon SES). Create an Amazon EventBridge (Amazon CloudWatch Events) rule by using Errors as the event pattern and the Lambda function name as a resource. Configure the rule to send a push notification through Amazon SES when the rule state reaches ALARM.
A
Explanation:
To immediately notify software developers if an AWS Lambda function experiences an error, follow these steps:
Create an SNS Topic:
Navigate to the Amazon SNS console and create a new topic. Add email subscriptions for each developer to the SNS topic.
Reference: Creating an Amazon SNS Topic
Create a CloudWatch Alarm:
Go to the Amazon CloudWatch console and create an alarm based on the Errors metric for the specific Lambda function.
Use the Lambda function name as a dimension.
Configure the alarm to trigger when the metric exceeds a threshold indicating an error.
Reference: Creating Alarms in Amazon CloudWatch
Configure Notification:
Set the CloudWatch alarm action to send a notification to the SNS topic created in step 1 when the alarm state reaches ALARM.
Reference: Using Amazon SNS for CloudWatch Alarms
This configuration ensures that developers are notified immediately via email if the Lambda function experiences an error.
A SysOps administrator developed a Python script that uses the AWS SDK to conduct several maintenance tasks. The script needs to run automatically every night.
What is the MOST operationally efficient solution that meets this requirement?
- A . Convert the Python script to an AWS Lambda function. Use an Amazon EventBridge (Amazon CloudWatch Events) rule to invoke the function every night.
- B . Convert the Python script to an AWS Lambda function. Use AWS CloudTrail to invoke the function every night.
- C . Deploy the Python script to an Amazon EC2 Instance. Use Amazon EventBridge (Amazon CloudWatch Events) to schedule the instance to start and stop every night.
- D . Deploy the Python script to an Amazon EC2 instance. Use AWS Systems Manager to schedule the instance to start and stop every night.
A
Explanation:
To automate the execution of a Python script every night efficiently:
Convert Python Script to Lambda:
Create an AWS Lambda function and upload the Python script.
Ensure the function has the necessary IAM permissions to perform the required maintenance tasks.
Reference: AWS Lambda Developer Guide
Create EventBridge Rule:
Navigate to the Amazon EventBridge console and create a new rule. Set the rule to trigger the Lambda function on a nightly schedule.
Reference: Creating Scheduled Events with Amazon EventBridge
This solution is operationally efficient because Lambda functions are managed, scale automatically, and you only pay for the compute time you consume.
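A compact sketch of the schedule wiring with boto3 is shown below; the function name, ARN, and the 03:00 UTC schedule are placeholder assumptions.
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

# Nightly schedule (03:00 UTC is a placeholder time) that invokes the maintenance function.
rule = events.put_rule(
    Name="nightly-maintenance",
    ScheduleExpression="cron(0 3 * * ? *)",
    State="ENABLED",
)

events.put_targets(
    Rule="nightly-maintenance",
    Targets=[{
        "Id": "maintenance-lambda",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:nightly-maintenance",  # placeholder ARN
    }],
)

# Allow EventBridge to invoke the function.
lambda_client.add_permission(
    FunctionName="nightly-maintenance",
    StatementId="AllowEventBridgeInvoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)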
A SysOps administrator must create a solution that automatically shuts down any Amazon EC2 instances that have less than 10% average CPU utilization for 60 minutes or more.
Which solution will meet this requirement in the MOST operationally efficient manner?
- A . Implement a cron job on each EC2 instance to run once every 60 minutes and calculate the current CPU utilization. Initiate an instance shutdown if CPU utilization is less than 10%.
- B . Implement an Amazon CloudWatch alarm for each EC2 instance to monitor average CPU utilization. Set the period at 1 hour, and set the threshold at 10%. Configure an EC2 action on the alarm to stop the instance.
- C . Install the unified Amazon CloudWatch agent on each EC2 instance, and enable the Basic level predefined metric set. Log CPU utilization every 60 minutes, and initiate an instance shutdown if CPU utilization is less than 10%.
- D . Use AWS Systems Manager Run Command to get CPU utilization from each EC2 instance every 60 minutes. Initiate an instance shutdown if CPU utilization is less than 10%.
B
Explanation:
To automatically shut down EC2 instances with low CPU utilization:
Create CloudWatch Alarms:
Go to the CloudWatch console and create an alarm for each EC2 instance.
Set the alarm to monitor the CPUUtilization metric with a period of 1 hour and a threshold of 10%.
Reference: Creating Amazon CloudWatch Alarms
Configure EC2 Action:
Configure the alarm to trigger an EC2 action that stops the instance when the alarm state is ALARM.
Reference: Stop EC2 Instances Using CloudWatch Alarms
This method is operationally efficient as it automates the monitoring and action without requiring manual intervention or additional infrastructure.
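As an illustration, one such alarm could be created per instance with boto3; the instance ID and the Region in the stop-action ARN are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

# Stop the instance when average CPU stays below 10% for a full hour.
cloudwatch.put_metric_alarm(
    AlarmName="stop-idle-i-0123456789abcdef0",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder instance ID
    Statistic="Average",
    Period=3600,
    EvaluationPeriods=1,
    Threshold=10,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:stop"],
)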
A company uses AWS CloudFormation templates to deploy cloud infrastructure. An analysis of all the company’s templates shows that the company has declared the same components in multiple templates. A SysOps administrator needs to create dedicated templates that have their own parameters and conditions for these common components.
Which solution will meet this requirement?
- A . Develop a CloudFormation change set.
- B . Develop CloudFormation macros.
- C . Develop CloudFormation nested stacks.
- D . Develop CloudFormation stack sets.
C
Explanation:
To manage common components across multiple CloudFormation templates efficiently:
Create Nested Stacks:
Develop separate CloudFormation templates for the common components.
Use these templates as nested stacks within the main templates.
Reference: AWS CloudFormation Nested Stacks
Define Parameters and Conditions:
Each nested stack can have its own parameters and conditions to customize the deployment.
Reference: Working with Nested Stacks
This solution promotes reuse and modularization, reducing duplication and simplifying template maintenance.
A company has deployed AWS Security Hub and AWS Config in a newly implemented organization in AWS Organizations. A SysOps administrator must implement a solution to restrict all member accounts in the organization from deploying Amazon EC2 resources in the ap-southeast-2 Region. The solution must be implemented from a single point and must govern all current and future accounts. The use of root credentials also must be restricted in member accounts.
Which AWS feature should the SysOps administrator use to meet these requirements?
- A . AWS Config aggregator
- B . IAM user permissions boundaries
- C . AWS Organizations service control policies (SCPs)
- D . AWS Security Hub conformance packs
C
Explanation:
To restrict EC2 resource deployment in a specific region and restrict root credentials usage:
Create Service Control Policies (SCPs):
Use AWS Organizations to create SCPs that restrict actions for all member accounts.
Create an SCP to deny the creation of EC2 instances in the ap-southeast-2 region.
Create an SCP to deny the use of root credentials in member accounts.
Reference: Service Control Policies
Attach SCPs:
Attach the SCPs to the organizational units (OUs) or directly to the accounts as needed.
Reference: Attaching SCPs
This approach provides centralized control over account policies, ensuring compliance across current and future accounts.
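A sketch of the SCP and its attachment with boto3 follows; the policy statements, root ID, and names are illustrative assumptions rather than a complete policy.
import json
import boto3

organizations = boto3.client("organizations")

# Deny EC2 actions in ap-southeast-2 and deny root-credential use in member accounts.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyEc2InApSoutheast2",
            "Effect": "Deny",
            "Action": "ec2:*",
            "Resource": "*",
            "Condition": {"StringEquals": {"aws:RequestedRegion": "ap-southeast-2"}},
        },
        {
            "Sid": "DenyRootUser",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {"StringLike": {"aws:PrincipalArn": "arn:aws:iam::*:root"}},
        },
    ],
}

policy = organizations.create_policy(
    Name="restrict-ec2-and-root",
    Description="Block EC2 in ap-southeast-2 and root credential use",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach to the organization root (placeholder ID) so all current and future accounts inherit it.
organizations.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid",
)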
A company has a new requirement stating that all resources in AWS must be tagged according to a set policy.
Which AWS service should be used to enforce and continually identify all resources that are not in compliance with the policy?
- A . AWS CloudTrail
- B . Amazon Inspector
- C . AWS Config
- D . AWS Systems Manager
C
Explanation:
Understand the Problem:
All AWS resources must be tagged according to a company policy.
Continuous identification and enforcement of non-compliant resources are required.
Analyze the Requirements:
Monitor resources for compliance with tagging policies.
Enforce tagging policies and provide ongoing compliance reporting.
Evaluate the Options:
Option A: AWS CloudTrail.
Tracks user activity and API usage but does not enforce tagging policies.
Option B: Amazon Inspector.
Provides security assessments but does not monitor or enforce resource tagging.
Option C: AWS Config.
Monitors and evaluates the configurations of AWS resources.
AWS Config rules can be used to enforce tagging policies and continuously monitor compliance.
Option D: AWS Systems Manager.
Provides operational insights and management but does not specifically enforce tagging compliance.
Select the Best Solution:
Option C: AWS Config is designed for compliance and configuration monitoring, making it the ideal service for enforcing and identifying non-compliant resource tags.
Reference: AWS Config
Managing Resource Tag Compliance with AWS Config
AWS Config provides the necessary tools to enforce and monitor resource tagging compliance, ensuring all resources adhere to the set policy.
A SysOps administrator has used AWS CloudFormation to deploy a serverless application into a production VPC. The application consists of an AWS Lambda function, an Amazon DynamoDB table, and an Amazon API Gateway API. The SysOps administrator must delete the AWS CloudFormation stack without deleting the DynamoDB table.
Which action should the SysOps administrator take before deleting the AWS CloudFormation stack?
- A . Add a Retain deletion policy to the DynamoDB resource in the AWS CloudFormation stack.
- B . Add a Snapshot deletion policy to the DynamoDB resource in the AWS CloudFormation stack.
- C . Enable termination protection on the AWS CloudFormation stack.
- D . Update the application’s IAM policy with a Deny statement for the dynamodb:DeleteTable action.
A
Explanation:
Understand the Problem:
The requirement is to delete the CloudFormation stack without deleting the DynamoDB table.
Analyze the Requirements:
Ensure the DynamoDB table is preserved when the CloudFormation stack is deleted.
Evaluate the Options:
Option A: Add a Retain deletion policy to the DynamoDB resource.
The Retain policy ensures that the DynamoDB table is not deleted when the stack is deleted.
Option B: Add a Snapshot deletion policy to the DynamoDB resource.
Snapshot policy is not applicable to DynamoDB tables and would not retain the table itself.
Option C: Enable termination protection on the CloudFormation stack.
Prevents stack deletion entirely but does not specifically protect the DynamoDB table.
Option D: Update the IAM policy with a Deny statement for dynamodb:DeleteTable.
Prevents deletion of the table but is not a CloudFormation stack-specific solution.
Select the Best Solution:
Option A: Adding a Retain deletion policy to the DynamoDB resource in the CloudFormation stack ensures the table is preserved when the stack is deleted.
Reference: AWS CloudFormation Deletion Policy
Using the Retain deletion policy ensures that the DynamoDB table is not deleted when the CloudFormation stack is deleted, preserving critical data.
A company has a critical serverless application that uses multiple AWS Lambda functions. Each Lambda function generates 1 GB of log data daily in its own Amazon CloudWatch Logs log group. The company’s security team asks for a count of application errors, grouped by type, across all of the log groups.
What should a SysOps administrator do to meet this requirement?
- A . Perform a CloudWatch Logs Insights query that uses the stats command and count function.
- B . Perform a CloudWatch Logs search that uses the groupby keyword and count function.
- C . Perform an Amazon Athena query that uses the SELECT and GROUP BY keywords.
- D . Perform an Amazon RDS query that uses the SELECT and GROUP BY keywords.
A
Explanation:
Step-by-Step
Understand the Problem:
Each Lambda function generates 1 GB of log data daily in its own CloudWatch Logs log group.
The security team needs a count of application errors, grouped by type, across all log groups.
Analyze the Requirements:
Aggregate and analyze log data across multiple log groups. Count and group errors by type.
Evaluate the Options:
Option A: Perform a CloudWatch Logs Insights query.
CloudWatch Logs Insights allows querying and analyzing log data.
The stats command and count function can aggregate and count errors across log groups.
Option B: Perform a CloudWatch Logs search with groupby and count.
CloudWatch Logs search does not support these functions; Logs Insights is needed for advanced queries.
Option C: Perform an Amazon Athena query.
Athena can query data in S3 but is not directly applicable to CloudWatch Logs.
Option D: Perform an Amazon RDS query.
RDS queries are for database data, not applicable to log data in CloudWatch.
Select the Best Solution:
Option A: CloudWatch Logs Insights is designed for querying and analyzing log data, making it the appropriate choice for counting and grouping errors.
Reference: Amazon CloudWatch Logs Insights
CloudWatch Logs Insights provides powerful querying capabilities to aggregate and analyze log data, including counting and grouping errors.
A SysOps administrator needs to give users the ability to upload objects to an Amazon S3 bucket. The SysOps administrator creates a presigned URL and provides the URL to a user, but the user cannot upload an object to the S3 bucket. The presigned URL has not expired, and no bucket policy is applied to the S3 bucket.
Which of the following could be the cause of this problem?
- A . The user has not properly configured the AWS CLI with their access key and secret access key.
- B . The SysOps administrator does not have the necessary permissions to upload the object to the S3 bucket.
- C . The SysOps administrator must apply a bucket policy to the S3 bucket to allow the user to upload the object.
- D . The object already has been uploaded through the use of the presigned URL, so the presigned URL is no longer valid.
B
Explanation:
Step-by-Step
Understand the Problem:
A user cannot upload an object to an S3 bucket using a presigned URL, even though the URL is valid and the bucket has no policy applied.
Analyze the Requirements:
Determine the cause of the issue preventing the upload via the presigned URL.
Evaluate the Options:
Option A: The user has not properly configured the AWS CLI.
CLI configuration is not relevant to using a presigned URL.
Option B: The SysOps administrator does not have the necessary permissions.
The administrator’s permissions are required to generate a valid presigned URL with sufficient permissions.
Option C: A bucket policy is required.
A bucket policy is not necessary if the presigned URL has the correct permissions.
Option D: The object has already been uploaded.
A presigned URL remains valid until it expires or the specified permissions are revoked.
Select the Best Solution:
Option B: Ensuring that the SysOps administrator has the necessary permissions to upload objects to the S3 bucket is crucial for generating valid presigned URLs.
Reference: Amazon S3 Presigned URLs
IAM Policies for Amazon S3
The SysOps administrator must have the necessary permissions to upload objects to the S3 bucket, ensuring that the presigned URL generated allows the user to upload successfully.
A SysOps administrator is responsible for a legacy, CPU-heavy application. The application can only be scaled vertically. Currently, the application is deployed on a single t2.large Amazon EC2 instance. The system is showing 90% CPU usage and significant performance latency after a few minutes.
What change should be made to alleviate the performance problem?
- A . Change the Amazon EBS volume to Provisioned IOPS
- B . Upgrade to a compute-optimized instance
- C . Add additional t3.large instances to the application
- D . Purchase Reserved Instances
B
Explanation:
To address the performance issues of a CPU-heavy application running on a t2.large EC2 instance, the best course of action is to upgrade to a compute-optimized instance. Compute-optimized instances provide a higher ratio of CPU resources compared to memory, making them ideal for applications that require high CPU performance.
Upgrade to a Compute-Optimized Instance:
Identify a suitable compute-optimized instance type, such as the c5.large, which offers better CPU performance compared to the t2.large.
Stop the current EC2 instance and change the instance type to the chosen compute-optimized instance.
Reference: Amazon EC2 Instance Types
Considerations:
Ensure that the new instance type is compatible with the existing AMI and EBS volume configuration.
Monitor the application performance after the upgrade to ensure that the new instance type meets the application’s requirements.
This approach directly addresses the high CPU utilization and performance latency issues.
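A hedged sketch of the resize from the AWS CLI, assuming an EBS-backed instance and a placeholder instance ID (the instance must be stopped before its type can be changed):
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
# Change the instance type to a compute-optimized size
aws ec2 modify-instance-attribute \
  --instance-id i-0123456789abcdef0 \
  --instance-type "{\"Value\": \"c5.large\"}"
aws ec2 start-instances --instance-ids i-0123456789abcdef0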
A company is running a website on Amazon EC2 instances that are in an Auto Scaling group. When the website traffic increases, additional instances take several minutes to become available because of a long-running user data script that installs software. A SysOps administrator must decrease the time that is required for new instances to become available.
Which action should the SysOps administrator take to meet this requirement?
- A . Reduce the scaling thresholds so that instances are added before traffic increases
- B . Purchase Reserved Instances to cover 100% of the maximum capacity of the Auto Scaling group
- C . Update the Auto Scaling group to launch instances that have a storage optimized instance type
- D . Use EC2 Image Builder to prepare an Amazon Machine Image (AMI) that has pre-installed software
D
Explanation:
To reduce the time required for new instances to become available in an Auto Scaling group, pre-installing the necessary software in the AMI used by the Auto Scaling group is the most effective solution.
Use EC2 Image Builder:
Utilize EC2 Image Builder to create a custom AMI that includes all the required software and configurations.
This reduces the setup time during instance launch, as the user data script will no longer need to
install the software.
Reference: EC2 Image Builder
Update Auto Scaling Group:
Update the Auto Scaling group to use the new AMI with pre-installed software.
Reference: Auto Scaling Groups
This solution ensures that new instances can handle traffic more quickly, reducing latency during scaling events.
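A minimal sketch of pointing the Auto Scaling group at the Image Builder output; the launch template name, group name, and AMI ID are hypothetical:
# Create a new launch template version that uses the pre-baked AMI
aws ec2 create-launch-template-version \
  --launch-template-name web-app-template \
  --source-version '$Latest' \
  --launch-template-data '{"ImageId": "ami-0123456789abcdef0"}'
# Point the Auto Scaling group at the latest template version
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name web-app-asg \
  --launch-template LaunchTemplateName=web-app-template,Version='$Latest'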
A SysOps administrator is notified that an Amazon EC2 instance has stopped responding. The AWS Management Console indicates that the system status checks are failing.
What should the administrator do first to resolve this issue?
- A . Reboot the EC2 instance so it can be launched on a new host
- B . Stop and then start the EC2 instance so that it can be launched on a new host
- C . Terminate the EC2 instance and relaunch it
- D . View the AWS CloudTrail log to investigate what changed on the EC2 instance
B
Explanation:
When an EC2 instance stops responding and the system status checks are failing, the recommended first step is to stop and then start the instance. This action will cause the instance to be moved to a new host, potentially resolving the underlying hardware issue.
Stop and Start the Instance:
In the AWS Management Console, navigate to the EC2 dashboard and stop the affected instance.
After the instance has stopped, start it again to launch it on a new host.
Reference: Instance Status Checks
Monitor Instance:
After restarting, monitor the instance to ensure that it is functioning correctly and that the system status checks pass.
This approach is often effective in resolving hardware-related issues.
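A minimal sketch of the stop/start sequence from the AWS CLI (the instance ID is a placeholder); unlike a reboot, a stop followed by a start moves an EBS-backed instance to new underlying hardware:
# Confirm the failing system status check
aws ec2 describe-instance-status --instance-ids i-0123456789abcdef0 --include-all-instances
# Stop, wait for the stopped state, then start again on a new host
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 start-instances --instance-ids i-0123456789abcdef0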
A SysOps administrator has enabled AWS CloudTrail in an AWS account. If CloudTrail is disabled, it must be re-enabled immediately. What should the SysOps administrator do to meet these requirements WITHOUT writing custom code?
- A . Add the AWS account to AWS Organizations. Enable CloudTrail in the management account.
- B . Create an AWS Config rule that is invoked when CloudTrail configuration changes. Apply the AWS-ConfigureCloudTrailLogging automatic remediation action.
- C . Create an AWS Config rule that is invoked when CloudTrail configuration changes. Configure the rule to invoke an AWS Lambda function to enable CloudTrail.
- D . Create an Amazon EventBridge (Amazon CloudWatch Events) hourly rule with a schedule pattern to run an AWS Systems Manager Automation document to enable CloudTrail.
B
Explanation:
To ensure CloudTrail is re-enabled immediately if it is disabled, you can use AWS Config with an automatic remediation action.
Create AWS Config Rule:
Configure an AWS Config rule that triggers when there are changes to the CloudTrail configuration.
Reference: AWS Config Rules
Apply Automatic Remediation:
Use the AWS-ConfigureCloudTrailLogging automatic remediation action to re-enable CloudTrail if it is disabled.
Reference: AWS Config Remediation
This solution ensures compliance without the need for custom code.
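A sketch of the rule and remediation setup, assuming the CLOUD_TRAIL_ENABLED managed rule and omitting parameters (such as the automation assume-role) that the remediation document typically needs:
# Managed rule that evaluates whether CloudTrail is enabled
aws configservice put-config-rule --config-rule '{
  "ConfigRuleName": "cloudtrail-enabled",
  "Source": {"Owner": "AWS", "SourceIdentifier": "CLOUD_TRAIL_ENABLED"}
}'
# Automatic remediation using the AWS-ConfigureCloudTrailLogging document
aws configservice put-remediation-configurations --remediation-configurations '[{
  "ConfigRuleName": "cloudtrail-enabled",
  "TargetType": "SSM_DOCUMENT",
  "TargetId": "AWS-ConfigureCloudTrailLogging",
  "Automatic": true,
  "MaximumAutomaticAttempts": 3,
  "RetryAttemptSeconds": 60
}]'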
A recent audit found that most resources belonging to the development team were in violation of patch compliance standards. The resources were properly tagged.
Which service should be used to quickly remediate the issue and bring the resources back into compliance?
- A . AWS Config
- B . Amazon Inspector
- C . AWS Trusted Advisor
- D . AWS Systems Manager
D
Explanation:
To quickly remediate resources and bring them back into compliance with patch standards, AWS Systems Manager is the appropriate service to use.
Use AWS Systems Manager:
Leverage Systems Manager Patch Manager to automate the process of patching managed instances.
Use Systems Manager State Manager to ensure that resources remain in compliance with the desired configurations.
Reference: AWS Systems Manager Patch Manager
Reference: AWS Systems Manager State Manager
Tag-Based Management:
Utilize the tagging information to target the development team’s resources specifically for patching and compliance management.
Reference: Tagging Systems Manager Resources
This solution allows for efficient and automated remediation of compliance issues.
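A minimal sketch of an on-demand patching run that targets the development team's instances by tag (the tag key and value are hypothetical):
aws ssm send-command \
  --document-name "AWS-RunPatchBaseline" \
  --targets "Key=tag:Team,Values=development" \
  --parameters 'Operation=Install' \
  --comment "Remediate patch compliance for development resources"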
An Amazon EC2 instance is running an application that uses Amazon Simple Queue Service (Amazon SQS) queues. A SysOps administrator must ensure that the application can read, write, and delete messages from the SQS queues.
Which solution will meet these requirements in the MOST secure manner?
- A . Create an IAM user with an IAM policy that allows the sqs:SendMessage permission, the sqs:ReceiveMessage permission, and the sqs:DeleteMessage permission to the appropriate queues. Embed the IAM user’s credentials in the application’s configuration.
- B . Create an IAM user with an IAM policy that allows the sqs:SendMessage permission, the sqs:ReceiveMessage permission, and the sqs:DeleteMessage permission to the appropriate queues. Export the IAM user’s access key and secret access key as environment variables on the EC2 instance.
- C . Create and associate an IAM role that allows EC2 instances to call AWS services. Attach an IAM policy to the role that allows sqs:* permissions to the appropriate queues.
- D . Create and associate an IAM role that allows EC2 instances to call AWS services. Attach an IAM policy to the role that allows the sqs:SendMessage permission, the sqs:ReceiveMessage permission, and the sqs:DeleteMessage permission to the appropriate queues.
D
Explanation:
To ensure that the application running on an Amazon EC2 instance can read, write, and delete messages from the SQS queues in the most secure manner, the recommended approach is to use IAM roles for EC2 instances. This approach avoids the need to embed or export long-term AWS credentials, which can be a security risk.
Create an IAM Role for EC2:
Navigate to the IAM console in the AWS Management Console.
Choose "Roles" in the navigation pane, then click "Create role".
Select "AWS service" as the type of trusted entity and choose "EC2" as the use case. Click "Next:
Permissions".
Attach the Required Permissions:
On the "Attach permissions policies" page, you can either select an existing policy or create a custom policy.
For a custom policy, click "Create policy" and use the following JSON policy to allow the required SQS actions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"sqs:SendMessage",
"sqs:ReceiveMessage",
"sqs:DeleteMessage"
],
"Resource": "arn:aws:sqs:region:account-id:queue-name"
}
]
}
Replace region, account-id, and queue-name with appropriate values for your SQS queues.
Assign the Role to the EC2 Instance:
After creating the role with the necessary permissions, navigate to the EC2 console.
Select the instance that needs access to the SQS queues.
In the "Actions" menu, choose "Security", then "Modify IAM role".
Attach the newly created IAM role to the instance.
Verify the Permissions:
Ensure that the IAM role is properly attached to the EC2 instance.
Test the application to confirm that it can successfully perform the required actions (read, write, delete) on the SQS queues.
Reference: IAM Roles for Amazon EC2
Amazon SQS Policy Examples
A development team recently deployed a new version of a web application to production. After the release, penetration testing revealed a cross-site scripting vulnerability that could expose user data.
Which AWS service will mitigate this issue?
- A . AWS Shield Standard
- B . AWS WAF
- C . Elastic Load Balancing
- D . Amazon Cognito
B
Explanation:
AWS WAF (Web Application Firewall) is designed to protect web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF can help mitigate cross-site scripting (XSS) vulnerabilities by allowing users to create rules to filter specific types of HTTP requests.
Create a Web ACL:
Go to the AWS WAF & Shield console.
Click "Create web ACL" and specify a name and the AWS resource to protect (e.g., an Application Load Balancer).
Add Rules to Mitigate XSS:
Within the Web ACL, add a new rule.
Select "Rule builder" and choose a rule type. For mitigating XSS, use "AWS Managed Rules" or create a custom rule.
AWS Managed Rules include a predefined set for XSS that you can enable.
Configure the XSS Rule:
If using a custom rule, configure it to inspect requests and block any that contain XSS patterns. Use regular expressions or specific patterns to identify malicious scripts.
Deploy the Web ACL:
Once configured, save the Web ACL.
Associate it with your Application Load Balancer or CloudFront distribution to start filtering requests.
Monitor and Adjust:
Monitor the requests being blocked by AWS WAF.
Adjust the rules as necessary to ensure legitimate traffic is not affected and the application remains protected.
Reference: AWS WAF Developer Guide
AWS WAF Managed Rules
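A hedged sketch of creating a regional web ACL that enables the AWS managed Common Rule Set (which includes cross-site scripting protections) and associating it with an Application Load Balancer; the names and ARNs are placeholders:
aws wafv2 create-web-acl \
  --name web-portal-acl \
  --scope REGIONAL \
  --default-action Allow={} \
  --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=webPortalAcl \
  --rules '[{
    "Name": "AWS-AWSManagedRulesCommonRuleSet",
    "Priority": 0,
    "Statement": {
      "ManagedRuleGroupStatement": {"VendorName": "AWS", "Name": "AWSManagedRulesCommonRuleSet"}
    },
    "OverrideAction": {"None": {}},
    "VisibilityConfig": {
      "SampledRequestsEnabled": true,
      "CloudWatchMetricsEnabled": true,
      "MetricName": "commonRuleSet"
    }
  }]'
# Associate the web ACL with the load balancer that fronts the application
aws wafv2 associate-web-acl \
  --web-acl-arn arn:aws:wafv2:us-east-1:123456789012:regional/webacl/web-portal-acl/0123abcd \
  --resource-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/web-alb/0123456789abcdef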
A company uses an AWS CloudFormation template to provision an Amazon EC2 instance and an Amazon RDS DB instance. A SysOps administrator must update the template to ensure that the DB instance is created before the EC2 instance is launched.
What should the SysOps administrator do to meet this requirement?
- A . Add a wait condition to the template. Update the EC2 instance user data script to send a signal after the EC2 instance is started.
- B . Add the DependsOn attribute to the EC2 instance resource, and provide the logical name of the RDS resource.
- C . Change the order of the resources in the template so that the RDS resource is listed before the EC2 instance resource.
- D . Create multiple templates. Use AWS CloudFormation StackSets to wait for one stack to complete before the second stack is created.
B
Explanation:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-dependson.html
Syntax: The DependsOn attribute can take a single string or a list of strings: "DependsOn" : [ String, … ]
Example: The following template contains an AWS::EC2::Instance resource with a DependsOn attribute that specifies myDB, an AWS::RDS::DBInstance. When CloudFormation creates this stack, it first creates myDB, then creates Ec2Instance.
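A skeleton sketch of such a template (resource names, AMI ID, and property values are placeholders, not a complete production template); validate-template checks syntax only:
cat > depends-on-example.json <<'EOF'
{
  "Resources": {
    "MyDB": {
      "Type": "AWS::RDS::DBInstance",
      "Properties": {
        "DBInstanceClass": "db.t3.micro",
        "Engine": "mysql",
        "MasterUsername": "admin",
        "MasterUserPassword": "ChangeMe12345",
        "AllocatedStorage": "20"
      }
    },
    "Ec2Instance": {
      "Type": "AWS::EC2::Instance",
      "DependsOn": "MyDB",
      "Properties": {
        "ImageId": "ami-0123456789abcdef0",
        "InstanceType": "t3.micro"
      }
    }
  }
}
EOF
# Syntax check only; property values are not validated here
aws cloudformation validate-template --template-body file://depends-on-example.json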
A company has an existing web application that runs on two Amazon EC2 instances behind an Application Load Balancer (ALB) across two Availability Zones. The application uses an Amazon RDS Multi-AZ DB instance. Amazon Route 53 record sets route requests for dynamic content to the load balancer and requests for static content to an Amazon S3 bucket. Site visitors are reporting extremely long loading times.
Which actions should be taken to improve the performance of the website? (Select TWO)
- A . Add Amazon CloudFront caching for static content
- B . Change the load balancer listener from HTTPS to TCP
- C . Enable Amazon Route 53 latency-based routing
- D . Implement Amazon EC2 Auto Scaling for the web servers
- E . Move the static content from Amazon S3 to the web servers
AD
Explanation:
To improve the performance of the website with long loading times, leveraging Amazon CloudFront for caching static content and implementing EC2 Auto Scaling are effective strategies.
Add Amazon CloudFront Caching:
Navigate to the CloudFront console and create a new distribution.
For the origin, specify the S3 bucket where the static content is stored.
Configure caching settings to ensure that static content is cached at edge locations for faster delivery to end users.
Update the DNS settings in Amazon Route 53 to point to the CloudFront distribution for static content.
Implement EC2 Auto Scaling:
Go to the EC2 console and create a new launch configuration or launch template for the web servers.
Set up an Auto Scaling group using the launch configuration/template.
Configure the Auto Scaling group to use health checks, and specify the desired, minimum, and maximum number of instances.
Define scaling policies to add or remove instances based on CPU utilization or other metrics. Distribute instances across multiple Availability Zones for high availability.
Reference: Amazon CloudFront Developer Guide
Amazon EC2 Auto Scaling
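A hedged sketch of the Auto Scaling group portion, assuming a launch template, two subnets in different Availability Zones, and an existing ALB target group (all identifiers are placeholders):
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-template LaunchTemplateName=web-template,Version='$Latest' \
  --min-size 2 --max-size 6 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-0aaa1111,subnet-0bbb2222" \
  --target-group-arns arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/0123456789abcdef \
  --health-check-type ELB --health-check-grace-period 120
# Scale on average CPU utilization across the group
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name web-asg \
  --policy-name cpu-target-tracking \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{"PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"}, "TargetValue": 60.0}'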
A company is running an application on premises and wants to use AWS for data backup. All of the data must be available locally. The backup application can write only to block-based storage that is compatible with the Portable Operating System Interface (POSIX).
Which backup solution will meet these requirements?
- A . Configure the backup software to use Amazon S3 as the target for the data backups
- B . Configure the backup software to use Amazon S3 Glacier as the target for the data backups
- C . Use AWS Storage Gateway, and configure it to use gateway-cached volumes
- D . Use AWS Storage Gateway, and configure it to use gateway-stored volumes
D
Explanation:
AWS Storage Gateway provides a hybrid cloud storage service that enables on-premises applications to seamlessly use AWS cloud storage. The gateway-stored volumes configuration is suitable for scenarios where all data must be available locally and the backup application can write only to block-based storage that is POSIX-compliant.
Deploy AWS Storage Gateway:
Launch the AWS Storage Gateway service from the AWS Management Console. Download and deploy the Storage Gateway VM on your on-premises infrastructure.
Activate the Gateway:
Activate the Storage Gateway by connecting it to your AWS account. Follow the setup wizard to complete the activation process.
Configure Gateway-Stored Volumes:
Create a gateway-stored volume where the primary data is stored locally, and an asynchronous copy is stored in AWS.
Specify the size of the volume and configure it to match your backup application’s requirements.
Connect the Backup Application:
Present the created volume to your on-premises backup application as an iSCSI target.
Configure the backup application to write data to the iSCSI target provided by the Storage Gateway.
Monitor and Manage:
Use the AWS Management Console to monitor the gateway and the volumes.
Ensure that the data is being backed up to AWS correctly and that local copies are maintained as required.
Reference: AWS Storage Gateway User Guide
Gateway-Stored Volumes
An organization created an Amazon Elastic File System (Amazon EFS) volume with a file system ID of fs-85ba4Kc, and it is actively used by 10 Amazon EC2 hosts. The organization has become concerned that the file system is not encrypted.
How can this be resolved?
- A . Enable encryption on each host’s connection to the Amazon EFS volume. Each connection must be recreated for encryption to take effect.
- B . Enable encryption on the existing EFS volume by using the AWS Command Line Interface.
- C . Enable encryption on each host’s local drive. Restart each host to encrypt the drive.
- D . Enable encryption on a newly created volume and copy all data from the original volume. Reconnect each host to the new volume.
D
Explanation:
https://docs.aws.amazon.com/efs/latest/ug/encryption.html
Amazon EFS supports two forms of encryption for file systems, encryption of data in transit and encryption at rest. You can enable encryption of data at rest when creating an Amazon EFS file system. You can enable encryption of data in transit when you mount the file system.
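Because encryption at rest cannot be switched on for an existing file system, a minimal sketch of creating the encrypted replacement; the data then has to be copied over, for example with AWS DataSync or rsync from a host that mounts both file systems:
aws efs create-file-system \
  --encrypted \
  --performance-mode generalPurpose \
  --tags Key=Name,Value=encrypted-replacement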
While setting up an AWS managed VPN connection, a SysOps administrator creates a customer gateway resource in AWS. The customer gateway device resides in a data center with a NAT gateway in front of it.
What address should be used to create the customer gateway resource?
- A . The private IP address of the customer gateway device
- B . The MAC address of the NAT device in front of the customer gateway device
- C . The public IP address of the customer gateway device
- D . The public IP address of the NAT device in front of the customer gateway device
D
Explanation:
When setting up an AWS managed VPN connection and creating a customer gateway resource, if the customer gateway device resides behind a NAT device, you should use the public IP address of the NAT device. This is because the VPN connection from AWS will be established to the public IP address that AWS can reach.
Identify the Public IP Address of the NAT Device:
Determine the public IP address assigned to the NAT device in front of the customer gateway.
Create Customer Gateway Resource:
Navigate to the VPC console in the AWS Management Console.
In the navigation pane, choose "Customer Gateways" and then click "Create Customer Gateway".
Enter a name for the customer gateway.
For the "IP Address", enter the public IP address of the NAT device.
Configure VPN Connection:
Create a VPN connection by navigating to the "VPN Connections" section and clicking "Create VPN Connection".
Select the created customer gateway and complete the VPN setup wizard.
Update Routing and Configuration:
Ensure that the routing configurations on both the AWS side and the on-premises side are updated to route traffic through the VPN connection.
Configure the customer gateway device (behind the NAT) to accept traffic from the NAT device and route it appropriately.
Reference: AWS Managed VPN Connections
Customer Gateway Resource
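A minimal sketch of creating the customer gateway resource from the CLI, using a placeholder public IP for the NAT device and a placeholder BGP ASN:
aws ec2 create-customer-gateway \
  --type ipsec.1 \
  --public-ip 203.0.113.10 \
  --bgp-asn 65000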
An errant process is known to use an entire processor and run at 100%. A SysOps administrator wants to automate restarting the instance once the problem occurs for more than 2 minutes.
How can this be accomplished?
- A . Create an Amazon CloudWatch alarm for the Amazon EC2 instance with basic monitoring. Enable an action to restart the instance.
- B . Create a CloudWatch alarm for the EC2 instance with detailed monitoring. Enable an action to restart the instance.
- C . Create an AWS Lambda function to restart the EC2 instance, triggered on a scheduled basis every 2 minutes.
- D . Create a Lambda function to restart the EC2 instance, triggered by EC2 health checks.
B
Explanation:
To address the issue of an errant process consuming an entire processor and running at 100%, the SysOps administrator can automate restarting the instance by using an Amazon CloudWatch alarm with an EC2 reboot action.
Enable Detailed Monitoring:
Ensure that detailed monitoring is enabled on the EC2 instance. Detailed monitoring provides data in 1-minute periods, which is crucial for detecting the 2-minute threshold accurately.
Reference: Amazon EC2 Detailed Monitoring
Create a CloudWatch Alarm:
Navigate to the CloudWatch console.
Select "Alarms" and click on "Create Alarm".
Choose the EC2 instance metric for CPU utilization.
Set the condition to trigger the alarm when the CPU utilization is greater than or equal to 100% for 2 consecutive periods of 1 minute each.
Reference: Creating Amazon CloudWatch Alarms
Enable an Action to Restart the Instance:
In the actions section of the alarm creation, select "EC2 Action" and then "Reboot this instance". This action will automatically restart the instance when the alarm is triggered.
Reference: Adding Alarms to Amazon EC2 Actions
This setup ensures that if the CPU utilization reaches 100% for more than 2 minutes, the instance will automatically restart, mitigating the impact of the errant process.
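A hedged sketch of the alarm, assuming detailed monitoring is already enabled; the instance ID, Region, and threshold are placeholders, and the alarm action uses the EC2 reboot action ARN pattern:
aws cloudwatch put-metric-alarm \
  --alarm-name high-cpu-reboot \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average \
  --period 60 \
  --evaluation-periods 2 \
  --threshold 99 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:automate:us-east-1:ec2:reboot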
A SysOps administrator notices a scale-up event for an Amazon EC2 Auto Scaling group. Amazon CloudWatch shows a spike in the RequestCount metric for the associated Application Load Balancer. The administrator would like to know the IP addresses for the source of the requests.
Where can the administrator find this information?
- A . Auto Scaling logs
- B . AWS CloudTrail logs
- C . EC2 instance logs
- D . Elastic Load Balancer access logs
D
Explanation:
Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues.
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html
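If access logging is not already turned on, a minimal sketch of enabling it on the ALB (the load balancer ARN and bucket name are placeholders; the bucket needs a policy that allows the ELB log-delivery account to write):
aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/web-alb/0123456789abcdef \
  --attributes Key=access_logs.s3.enabled,Value=true \
               Key=access_logs.s3.bucket,Value=my-alb-access-logs \
               Key=access_logs.s3.prefix,Value=web-alb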
An organization with a large IT department has decided to migrate to AWS. With different job functions in the IT department, it is not desirable to give all users access to all AWS resources. Currently, the organization handles access via LDAP group membership.
What is the BEST method to allow access using current LDAP credentials?
- A . Create an AWS Directory Service Simple AD. Replicate the on-premises LDAP directory to Simple AD.
- B . Create a Lambda function to read LDAP groups and automate the creation of IAM users.
- C . Use AWS CloudFormation to create IAM roles. Deploy Direct Connect to allow access to the on-premises LDAP server.
- D . Federate the LDAP directory with IAM using SAML. Create different IAM roles to correspond to different LDAP groups to limit permissions.
D
Explanation:
To allow access using current LDAP credentials while migrating to AWS, the best approach is to federate the LDAP directory with IAM using SAML.
Set Up SAML-Based Federation:
AWS supports identity federation using SAML (Security Assertion Markup Language) 2.0. You need to configure your LDAP directory to federate with AWS IAM via SAML.
Reference: About SAML 2.0-based Federation
Create and Configure IAM Roles:
Create IAM roles in AWS that correspond to different LDAP groups. Each role should have the appropriate permissions for its specific job function.
Reference: Creating IAM Roles
Set Up Identity Provider in AWS:
Create a SAML identity provider entity in IAM that represents your LDAP-backed identity provider. This establishes the trust relationship between AWS and your LDAP directory.
Reference: Creating and Managing a SAML Identity Provider
Assign IAM Roles to SAML Provider:
Map the LDAP group membership to IAM roles. This allows users to assume the roles based on their LDAP group membership.
Reference: Configuring SAML Assertions for Role-Based Access Control
By federating the LDAP directory with IAM using SAML, the organization can leverage existing LDAP credentials and group memberships to manage access to AWS resources effectively.
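A hedged sketch of registering the identity provider and one role mapped to an LDAP group; the metadata file, account ID, and names are placeholders:
# Register the LDAP-backed IdP's SAML metadata with IAM
aws iam create-saml-provider \
  --name corp-ldap-idp \
  --saml-metadata-document file://idp-metadata.xml
# Role that members of a given LDAP group assume through SAML federation
aws iam create-role \
  --role-name ldap-developers \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Federated": "arn:aws:iam::123456789012:saml-provider/corp-ldap-idp"},
      "Action": "sts:AssumeRoleWithSAML",
      "Condition": {"StringEquals": {"SAML:aud": "https://signin.aws.amazon.com/saml"}}
    }]
  }'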
An Amazon S3 Inventory report reveals that more than 1 million objects in an S3 bucket are not encrypted. These objects must be encrypted, and all future objects must be encrypted at the time they are written.
Which combination of actions should a SysOps administrator take to meet these requirements? (Select TWO)
- A . Create an AWS Config rule that runs evaluations against configuration changes to the S3 bucket. When an unencrypted object is found, run an AWS Systems Manager Automation document to encrypt the object in place.
- B . Edit the properties of the S3 bucket to enable default server-side encryption.
- C . Filter the S3 Inventory report by using S3 Select to find all objects that are not encrypted. Create an S3 Batch Operations job to copy each object in place with encryption enabled.
- D . Filter the S3 Inventory report by using S3 Select to find all objects that are not encrypted. Send each object name as a message to an Amazon Simple Queue Service (Amazon SQS) queue. Use the SQS queue to invoke an AWS Lambda function to tag each object with a key of "Encryption" and a value of "SSE-KMS".
- E . Use S3 Event Notifications to invoke an AWS Lambda function on all new object-created events for the S3 bucket. Configure the Lambda function to check whether the object is encrypted and to run an AWS Systems Manager Automation document to encrypt the object in place when an unencrypted object is found.
BC
Explanation:
To ensure all objects in the S3 bucket are encrypted, including future objects, the following steps should be taken:
Enable Default Server-Side Encryption:
Edit the properties of the S3 bucket to enable default server-side encryption. This ensures that all new objects written to the bucket are encrypted by default.
Navigate to the S3 console, select the bucket, go to the "Properties" tab, and under "Default encryption", select the encryption method (SSE-S3, SSE-KMS, etc.).
Reference: Amazon S3 Default Encryption
Use S3 Inventory and S3 Select:
Use S3 Inventory to generate a report of all objects in the bucket. This report helps identify which objects are not encrypted.
Use S3 Select to filter the inventory report and find all unencrypted objects.
Reference: Amazon S3 Inventory
Create S3 Batch Operations Job:
Create an S3 Batch Operations job to copy each unencrypted object in place with encryption enabled. This can be done through the S3 console or using AWS CLI/SDK.
This job can efficiently encrypt a large number of objects without the need to move data out of S3.
Reference: Amazon S3 Batch Operations
By following these steps, the SysOps administrator ensures that all existing and future objects in the S3 bucket are encrypted, thereby meeting the security requirements.
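A minimal sketch of enabling default encryption from the CLI (the bucket name is a placeholder; SSE-KMS could be used instead of SSE-S3):
aws s3api put-bucket-encryption \
  --bucket my-image-bucket \
  --server-side-encryption-configuration '{
    "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
  }'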
A company is using an AWS KMS customer master key (CMK) with imported key material. The company references the CMK by its alias in the Java application to encrypt data. The CMK must be rotated every 6 months.
What is the process to rotate the key?
- A . Enable automatic key rotation for the CMK and specify a period of 6 months
- B . Create a new CMK with new imported material, and update the key alias to point to the new CMK.
- C . Delete the current key material, and import new material into the existing CMK
- D . Import a copy of the existing key material into a new CMK as a backup, and set the rotation schedule for 6 months
B
Explanation:
To rotate an AWS KMS customer master key (CMK) with imported key material every 6 months, follow these steps:
Create a New CMK with New Imported Material:
Generate new key material according to your security policies.
Create a new CMK in AWS KMS and import the new key material into this CMK.
Reference: Importing Key Material
Update the Key Alias:
Update the alias that your Java application references to point to the new CMK. This can be done via the AWS Management Console, AWS CLI, or AWS SDKs.
Aliases in KMS are used to refer to a key without having to use the key ID, making it easier to manage key rotation.
Reference: Working with Aliases
Test and Validate:
Ensure that the application can successfully encrypt and decrypt data using the new CMK.
Validate that the rotation process does not impact the application’s functionality.
By creating a new CMK and updating the alias, the administrator ensures the key is rotated without service disruption, maintaining compliance with security requirements.
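A minimal sketch of the alias switch once the new CMK with freshly imported material exists (the alias name and key ID are placeholders); because the application references only the alias, no code change is needed:
aws kms update-alias \
  --alias-name alias/app-data-key \
  --target-key-id 1234abcd-12ab-34cd-56ef-1234567890ab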
A company is running a serverless application on AWS Lambda. The application stores data in an Amazon RDS for MySQL DB instance. Usage has steadily increased, and recently there have been numerous "too many connections" errors when the Lambda function attempts to connect to the database. The company already has configured the database to use the maximum max_connections value that is possible.
What should a SysOps administrator do to resolve these errors?
- A . Create a read replica of the database. Use Amazon Route 53 to create a weighted DNS record that contains both databases.
- B . Use Amazon RDS Proxy to create a proxy. Update the connection string in the Lambda function.
- C . Increase the value in the max_connect_errors parameter in the parameter group that the database uses
- D . Update the Lambda function’s reserved concurrency to a higher value
B
Explanation:
https://aws.amazon.com/blogs/compute/using-amazon-rds-proxy-with-aws-lambda/
RDS Proxy acts as an intermediary between your application and an RDS database. RDS Proxy establishes and manages the necessary connection pools to your database so that your application creates fewer database connections. Your Lambda functions interact with RDS Proxy instead of your database instance. It handles the connection pooling necessary for scaling many simultaneous connections created by concurrent Lambda functions. This allows your Lambda applications to reuse existing connections, rather than creating new connections for every function invocation.
Check "Database proxy for Amazon RDS" section in the link to see how RDS proxy help Lambda handle huge connections to RDS MySQL https://aws.amazon.com/blogs/compute/using-amazon-rds-proxy-with-aws-lambda/
A company stores files in 50 Amazon S3 buckets in the same AWS Region. The company wants to connect to the S3 buckets securely over a private connection from its Amazon EC2 instances. The company needs a solution that produces no additional cost.
Which solution will meet these requirements?
- A . Create a gateway VPC endpoint for each S3 bucket. Attach the gateway VPC endpoints to each subnet inside the VPC.
- B . Create an interface VPC endpoint for each S3 bucket. Attach the interface VPC endpoints to each subnet inside the VPC.
- C . Create one gateway VPC endpoint for all the S3 buckets. Add the gateway VPC endpoint to the VPC route table.
- D . Create one interface VPC endpoint for all the S3 buckets. Add the interface VPC endpoint to the VPC route table.
C
Explanation:
To securely connect to Amazon S3 buckets from Amazon EC2 instances over a private connection without incurring additional costs, you can use a gateway VPC endpoint for S3. This method allows you to create a single gateway VPC endpoint for all S3 buckets in the same region, ensuring secure, private communication.
Create a Gateway VPC Endpoint:
Navigate to the VPC console.
In the navigation pane, choose "Endpoints" and then click "Create Endpoint".
Select "AWS services" and then choose "com.amazonaws.<region>.s3" from the service name dropdown.
Select the VPC in which to create the endpoint and the appropriate route tables.
Click "Create endpoint".
Update the Route Tables:
The gateway VPC endpoint will automatically update the selected route tables to include routes that direct S3 traffic through the endpoint.
Ensure that the route tables associated with your subnets include routes for the S3 service pointing to the gateway VPC endpoint.
Verify the Configuration:
Ensure that instances in the VPC can access the S3 buckets using private IP addresses.
Check the routing configuration to confirm that traffic to S3 is routed through the gateway VPC endpoint.
Reference: Gateway VPC Endpoints for Amazon S3
Creating a Gateway Endpoint
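A minimal sketch of creating the single gateway endpoint for S3 and attaching it to the VPC's route tables (the IDs and Region are placeholders):
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0aaa1111 rtb-0bbb2222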
A company uses AWS CloudFormation to deploy its application infrastructure. Recently, a user accidentally changed a property of a database in a CloudFormation template and performed a stack update that caused an interruption to the application. A SysOps administrator must determine how to modify the deployment process to allow the DevOps team to continue to deploy the infrastructure, but protect against accidental modifications to specific resources.
Which solution will meet these requirements?
- A . Set up an AWS Config rule to alert based on changes to any CloudFormation stack. An AWS Lambda function can then describe the stack to determine if any protected resources were modified and cancel the operation.
- B . Set up an Amazon CloudWatch Events event with a rule to trigger based on any CloudFormation API call. An AWS Lambda function can then describe the stack to determine if any protected resources were modified and cancel the operation.
- C . Launch the CloudFormation templates using a stack policy with an explicit allow for all resources and an explicit deny of the protected resources with an action of Update
- D . Attach an IAM policy to the DevOps team role that prevents a CloudFormation stack from updating, with a condition based on the specific Amazon Resource Names (ARNs) of the protected resources
C
Explanation:
A stack policy is used to protect specific resources within a CloudFormation stack from being unintentionally updated or deleted. By using a stack policy, you can explicitly deny updates to critical resources while allowing updates to other parts of the stack.
Create a Stack Policy:
Define a JSON stack policy that includes an explicit allow for all resources and an explicit deny for the protected resources. For example:
{
"Statement": [
{
"Effect": "Allow",
"Action": "Update:*",
"Principal": "*",
"Resource": "*"
},
{
"Effect": "Deny",
"Action": "Update:*",
"Principal": "*",
"Resource": "arn:aws:cloudformation:region:account-id:stack/stack-name/protected-resource"
}
]
}
Replace ProtectedResource with the logical ID of the resource in the template (for example, the database) that must be protected from updates.
Apply the Stack Policy:
Navigate to the CloudFormation console.
Select the stack you want to protect.
Choose "Stack actions" and then "Edit stack policy".
Paste the stack policy JSON and save the policy.
Perform Stack Updates:
When performing stack updates, the stack policy will enforce the rules specified, preventing accidental updates to the protected resources.
Review and Adjust:
Periodically review the stack policy to ensure it still meets the needs of the organization and update it as necessary.
Reference: AWS CloudFormation Stack Policies
Creating and Applying a Stack Policy
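A minimal sketch of applying the policy from the CLI, assuming it has been saved locally as stack-policy.json and that the stack name is a placeholder:
aws cloudformation set-stack-policy \
  --stack-name app-infra-stack \
  --stack-policy-body file://stack-policy.json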
A company’s financial department needs to view the cost details of each project in an AWS account. A SysOps administrator must perform the initial configuration that is required to view the cost for each project in Cost Explorer.
Which solution will meet this requirement?
- A . Activate cost allocation tags. Add a project tag to the appropriate resources.
- B . Configure consolidated billing. Create AWS Cost and Usage Reports.
- C . Use AWS Budgets. Create AWS Budgets reports.
- D . Use cost categories to define custom groups that are based on AWS cost and usage dimensions
A
Explanation:
To view the cost details of each project in an AWS account using Cost Explorer, you need to activate cost allocation tags and tag the resources with a project identifier.
Activate Cost Allocation Tags:
Navigate to the AWS Billing and Cost Management console.
In the navigation pane, choose "Cost allocation tags".
Select the tags you want to activate (e.g., "Project") and click "Activate".
Tag Resources with Project Identifier:
Go to the AWS Management Console for each service where resources are deployed (e.g., EC2, S3, RDS).
For each resource, add a tag with the key "Project" and a value that identifies the project (e.g., "ProjectA").
Ensure that all relevant resources are tagged appropriately.
View Cost Allocation in Cost Explorer:
Navigate to the Cost Explorer console.
Use the "Group by" option to group costs by the "Project" tag.
Review the cost details for each project based on the tags applied to the resources.
Reference: Using Cost Allocation Tags
Analyzing Your Costs with Cost Explorer
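A hedged sketch of tagging a resource and then grouping costs by the tag with the Cost Explorer API; the tag must first be activated as a cost allocation tag in the Billing console, it can take about a day for tagged usage to appear, and the IDs, dates, and values are placeholders:
# Tag an EC2 instance with its project
aws ec2 create-tags \
  --resources i-0123456789abcdef0 \
  --tags Key=Project,Value=ProjectA
# Group month-to-date costs by the Project cost allocation tag
aws ce get-cost-and-usage \
  --time-period Start=2024-06-01,End=2024-06-30 \
  --granularity MONTHLY \
  --metrics UnblendedCost \
  --group-by Type=TAG,Key=Project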
A company is managing multiple AWS accounts in AWS Organizations. The company is reviewing internal security of its AWS environment. The company’s security administrator has their own AWS account and wants to review the VPC configuration of developer AWS accounts.
Which solution will meet these requirements in the MOST secure manner?
- A . Create an IAM policy in each developer account that has read-only access related to VPC resources. Assign the policy to an IAM user. Share the user credentials with the security administrator.
- B . Create an IAM policy in each developer account that has administrator access to all Amazon EC2 actions, including VPC actions. Assign the policy to an IAM user. Share the user credentials with the security administrator.
- C . Create an IAM policy in each developer account that has administrator access related to VPC resources. Assign the policy to a cross-account IAM role. Ask the security administrator to assume the role from their account.
- D . Create an IAM policy in each developer account that has read-only access related to VPC resources. Assign the policy to a cross-account IAM role. Ask the security administrator to assume the role from their account.
D
Explanation:
To review the VPC configuration of developer AWS accounts securely, the best practice is to use cross-account IAM roles with read-only access.
Create an IAM Policy with Read-Only Access:
Navigate to the IAM console in each developer account.
Create a new policy with read-only access to VPC resources. For example:
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DescribeVpcs",
"ec2:DescribeSubnets",
"ec2:DescribeRouteTables",
"ec2:DescribeSecurityGroups",
"ec2:DescribeNetworkAcls"
],
"Resource": "*"
}
]
}
Save the policy.
Create a Cross-Account IAM Role:
In the IAM console, choose "Roles" and then "Create role".
Select "Another AWS account" and enter the AWS account ID of the security administrator’s account.
Attach the read-only policy created in step 1 to the role.
Save the role and note the role ARN.
Assume the Role from the Security Administrator’s Account:
In the security administrator’s account, navigate to the IAM console.
Use the "Switch Role" option to assume the cross-account role created in the developer account using the role ARN.
The security administrator can now access the VPC configuration of the developer accounts with read-only permissions.
Reference: Cross-Account Access
Creating and Managing IAM Policies
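A minimal sketch of the security administrator assuming the cross-account role from the CLI (the account ID and role name are placeholders):
aws sts assume-role \
  --role-arn arn:aws:iam::111122223333:role/vpc-readonly-audit \
  --role-session-name vpc-audit-session
# The returned temporary credentials can then be exported and used, for example:
# aws ec2 describe-vpcs --region us-east-1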
An application runs on multiple Amazon EC2 instances in an Auto Scaling group. The Auto Scaling group is configured to use the latest version of a launch template. A SysOps administrator must devise a solution that centrally manages the application logs and retains the logs for no more than 90 days.
Which solution will meet these requirements?
- A . Launch an Amazon Machine Image (AMI) that is preconfigured with the Amazon CloudWatch Logs agent to send logs to an Amazon S3 bucket. Apply a 90-day S3 Lifecycle policy on the S3 bucket to expire the application logs.
- B . Launch an Amazon Machine Image (AMI) that is preconfigured with the Amazon CloudWatch Logs agent to send logs to a log group. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled rule to perform an instance refresh every 90 days.
- C . Update the launch template user data to install and configure the Amazon CloudWatch Logs agent to send logs to a log group. Configure the retention period on the log group to be 90 days.
- D . Update the launch template user data to install and configure the Amazon CloudWatch Logs agent to send logs to a log group. Set the log rotation configuration of the EC2 instances to 90 days.
C
Explanation:
To centrally manage application logs and retain them for no more than 90 days, you can use the Amazon CloudWatch Logs agent to send logs to a CloudWatch log group and configure the log group’s retention period.
Update the Launch Template User Data:
Navigate to the EC2 console.
Select the launch template used by the Auto Scaling group.
Edit the launch template to include the following user data script:
#!/bin/bash
yum update -y
yum install -y awslogs
cat <<EOF > /etc/awslogs/awslogs.conf
[general]
state_file = /var/lib/awslogs/agent-state
[/var/log/messages]
file = /var/log/messages
log_group_name = /my-log-group
log_stream_name = {instance_id}/messages
datetime_format = %b %d %H:%M:%S
[/var/log/secure]
file = /var/log/secure
log_group_name = /my-log-group
log_stream_name = {instance_id}/secure
datetime_format = %b %d %H:%M:%S
EOF
systemctl start awslogsd
systemctl enable awslogsd
Replace /my-log-group with the name of your CloudWatch log group.
Configure the Log Group Retention Period:
Navigate to the CloudWatch console.
In the navigation pane, choose "Logs".
Select the log group created by the CloudWatch Logs agent.
Click on "Actions" and then "Edit retention settings".
Set the retention period to 90 days.
Verify the Configuration:
Ensure that logs from the EC2 instances are being sent to the CloudWatch log group.
Verify that the log group’s retention period is correctly set to 90 days.
Reference: Amazon CloudWatch Logs Agent
Setting Log Retention in CloudWatch
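The retention setting can also be applied from the CLI; a minimal sketch, assuming the log group name used in the agent configuration above:
aws logs put-retention-policy \
  --log-group-name /my-log-group \
  --retention-in-days 90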
A company has a mobile app that uses Amazon S3 to store images. The images are popular for a week, and then the number of access requests decreases over time. The images must be highly available and must be immediately accessible upon request. A SysOps administrator must reduce S3 storage costs for the company.
Which solution will meet these requirements MOST cost-effectively?
- A . Create an S3 Lifecycle policy to transition the images to S3 Glacier after 7 days
- B . Create an S3 Lifecycle policy to transition the images to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 7 days
- C . Create an S3 Lifecycle policy to transition the images to S3 Standard after 7 days
- D . Create an S3 Lifecycle policy to transition the images to S3 Standard-Infrequent Access (S3 Standard-IA) after 7 days
D
Explanation:
To reduce S3 storage costs while ensuring the images remain highly available and immediately accessible, transitioning the images to S3 Standard-IA after 7 days is the most cost-effective solution.
Understand Storage Classes:
S3 Standard-IA (Infrequent Access) is designed for data that is accessed less frequently but requires rapid access when needed. It offers lower storage costs compared to S3 Standard, but higher retrieval costs.
Reference: Amazon S3 Storage Classes
Create a Lifecycle Policy:
Navigate to the S3 console and select the bucket storing the images.
Go to the "Management" tab and select "Lifecycle".
Create a new lifecycle rule. Set the rule to transition objects to S3 Standard-IA 7 days after creation.
Reference: Managing your storage lifecycle
Configure the Transition:
In the Lifecycle rule configuration, add a "Transition" action.
Specify the transition to S3 Standard-IA and set the number of days to 7.
Save and apply the rule.
This setup ensures that images are stored in the cost-effective S3 Standard-IA storage class after their initial period of high access, meeting the requirement of high availability and immediate access.
A SysOps administrator receives notification that an application that is running on Amazon EC2 instances has failed to authenticate to an Amazon RDS database. To troubleshoot, the SysOps administrator needs to investigate AWS Secrets Manager password rotation
Which Amazon CloudWatch log will provide insight into the password rotation?
- A . AWS CloudTrail logs
- B . EC2 instance application logs
- C . AWS Lambda function logs
- D . RDS database logs
C
Explanation:
To investigate AWS Secrets Manager password rotation and troubleshoot the authentication failure of an application running on Amazon EC2 instances, you should check the AWS Lambda function logs responsible for the password rotation.
Understand Secrets Manager Password Rotation:
AWS Secrets Manager can automatically rotate secrets according to a specified rotation schedule using an AWS Lambda function.
Reference: Rotate AWS Secrets Manager secrets
Identify the Lambda Function:
Locate the Lambda function configured for password rotation in the AWS Secrets Manager console.
Reference: Managing Lambda Rotation Function
Access CloudWatch Logs:
Navigate to the CloudWatch console.
Select "Logs" and find the log group associated with the Lambda function responsible for password rotation.
Review the logs for any errors or issues related to the password rotation process.
Reference: Logging AWS Lambda function activity with Amazon CloudWatch Logs
By checking the AWS Lambda function logs, you can gain insights into any issues or errors that occurred during the password rotation process, helping to troubleshoot the authentication failure.
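A hedged sketch of pulling recent errors from the rotation function's log group; the log group name is a placeholder and depends on the name Secrets Manager gave the rotation function:
# Find the rotation function's log group
aws logs describe-log-groups --log-group-name-prefix /aws/lambda/SecretsManager
# Pull the last 24 hours of ERROR entries (timestamps are epoch milliseconds)
aws logs filter-log-events \
  --log-group-name /aws/lambda/my-rotation-function \
  --start-time $(( ($(date +%s) - 86400) * 1000 )) \
  --filter-pattern "ERROR"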
An AWS Lambda function is intermittently failing several times a day. A SysOps administrator must find out how often this error has occurred in the last 7 days.
Which action will meet this requirement in the MOST operationally efficient manner?
- A . Use Amazon Athena to query the Amazon CloudWatch logs that are associated with the Lambda function
- B . Use Amazon Athena to query the AWS CloudTrail logs that are associated with the Lambda function
- C . Use Amazon CloudWatch Logs Insights to query the associated Lambda function logs
- D . Use Amazon Elasticsearch Service (Amazon ES) to stream the Amazon CloudWatch logs for the Lambda function
C
Explanation:
To efficiently find out how often the AWS Lambda function has failed in the last 7 days, use Amazon CloudWatch Logs Insights.
Access CloudWatch Logs Insights:
Navigate to the CloudWatch console.
Select "Logs Insights" from the navigation pane.
Reference: Analyzing Log Data with CloudWatch Logs Insights
Select the Log Group:
Select the log group associated with the Lambda function. Specify the time range for the last 7 days.
Run the Query:
Use a query to filter and count the occurrences of errors in the logs. For example:
fields @timestamp, @message
| filter @message like /ERROR/
| stats count() by bin(1d)
This query will count the number of errors per day in the last 7 days.
Using CloudWatch Logs Insights provides an efficient way to query and analyze log data, helping you quickly identify the frequency of Lambda function failures.
A company has an internal web application that runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group in a single Availability Zone. A SysOps administrator must make the application highly available.
Which action should the SysOps administrator take to meet this requirement?
- A . Increase the maximum number of instances in the Auto Scaling group to meet the capacity that is required at peak usage.
- B . Increase the minimum number of instances in the Auto Scaling group to meet the capacity that is required at peak usage.
- C . Update the Auto Scaling group to launch new instances in a second Availability Zone in the same AWS Region.
- D . Update the Auto Scaling group to launch new instances in an Availability Zone in a second AWS Region.
C
Explanation:
To make the application highly available, update the Auto Scaling group to launch new instances in a second Availability Zone within the same AWS Region.
Understand High Availability Requirements:
High availability involves distributing instances across multiple Availability Zones to ensure the application remains accessible even if one Availability Zone experiences issues.
Reference: Regions, Availability Zones, and Local Zones
Update Auto Scaling Group Configuration:
Navigate to the EC2 console.
Select "Auto Scaling Groups" and choose the Auto Scaling group for your application.
Update the "Network" section to include additional subnets from different Availability Zones within the same region.
Reference: Configuring your Auto Scaling group to launch instances in multiple Availability Zones
Adjust Capacity Settings:
Ensure the desired and minimum capacity settings are configured to distribute instances across multiple Availability Zones.
Save the changes.
By launching instances in multiple Availability Zones, the application can handle failures in one zone, achieving high availability.
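A minimal sketch of the change from the CLI, assuming a second subnet already exists in another Availability Zone (the group name and subnet IDs are placeholders):
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name internal-web-asg \
  --vpc-zone-identifier "subnet-0aaa1111,subnet-0bbb2222" \
  --min-size 2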
A SysOps administrator is building a process for sharing Amazon RDS database snapshots between different accounts associated with different business units within the same company. All data must be encrypted at rest.
How should the administrator implement this process?
- A . Write a script to download the encrypted snapshot, decrypt it using the AWS KMS encryption key used to encrypt the snapshot, then create a new volume in each account.
- B . Update the key policy to grant permission to the AWS KMS encryption key used to encrypt the snapshot with all relevant accounts, then share the snapshot with those accounts.
- C . Create an Amazon EC2 instance based on the snapshot, then save the instance’s Amazon EBS volume as a snapshot and share it with the other accounts. Require each account owner to create a new volume from that snapshot and encrypt it.
- D . Create a new unencrypted RDS instance from the encrypted snapshot, connect to the instance using SSH/RDP, export the database contents into a file, then share this file with the other accounts.
B
Explanation:
To share Amazon RDS database snapshots between different accounts while ensuring all data is encrypted at rest, follow these steps:
Update the KMS Key Policy:
Navigate to the AWS KMS console.
Select the KMS key used to encrypt the RDS snapshots.
Update the key policy to grant the relevant AWS accounts permission to use the key.
Reference: Key policies in AWS KMS
Share the RDS Snapshot:
Navigate to the RDS console.
Select the snapshot you want to share and choose the "Share Snapshot" option.
Specify the AWS account IDs with which you want to share the snapshot.
Reference: Sharing a DB Snapshot
Access the Shared Snapshot in Target Accounts:
In the target AWS accounts, navigate to the RDS console.
Locate the shared snapshot under the "Snapshots" section.
Use the snapshot to create a new encrypted RDS instance in the target accounts.
By updating the key policy and sharing the snapshot, you ensure the encrypted data is securely accessible across different accounts within the organization.
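A hedged sketch of the two steps from the CLI: giving the target account use of the customer managed key (shown here as a KMS grant, which works alongside or instead of a key policy statement) and sharing the manual snapshot. The key ARN, snapshot name, and account IDs are placeholders, and only manual snapshots encrypted with a customer managed key can be shared:
# Allow the target account to use the CMK that encrypted the snapshot
aws kms create-grant \
  --key-id arn:aws:kms:us-east-1:555555555555:key/1234abcd-12ab-34cd-56ef-1234567890ab \
  --grantee-principal arn:aws:iam::111122223333:root \
  --operations Decrypt DescribeKey CreateGrant
# Share the manual snapshot with the target account
aws rds modify-db-snapshot-attribute \
  --db-snapshot-identifier prod-db-manual-snap \
  --attribute-name restore \
  --values-to-add 111122223333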
A SysOps administrator has an AWS CloudFormation template of the company’s existing infrastructure in us-west-2. The administrator attempts to use the template to launch a new stack in eu-west-1, but the stack only partially deploys, receives an error message, and then rolls back.
Why would this template fail to deploy? (Select TWO.)
- A . The template referenced an IAM user that is not available in eu-west-1.
- B . The template referenced an Amazon Machine Image (AMI) that is not available in eu-west-1.
- C . The template did not have the proper level of permissions to deploy the resources.
- D . The template requested services that do not exist in eu-west-1.
- E . CloudFormation templates can be used only to update existing services.
BD
Explanation:
When attempting to deploy a CloudFormation template from one region (us-west-2) to another (eu-west-1), the deployment might fail due to region-specific resources and services.
Amazon Machine Image (AMI) Availability:
AMIs are region-specific. If the template references an AMI that is only available in us-west-2 and not in eu-west-1, the stack will fail to deploy in eu-west-1.
To resolve this, identify the AMI used in us-west-2 and find or create a similar AMI in eu-west-1.
Service Availability:
Some AWS services are not available in all regions. If the template includes resources or services that are not available in eu-west-1, the stack will fail.
Check the AWS Regional Services List to ensure all services and resource types in the template are available in eu-west-1.
Reference: Copy an AMI
AWS Regional Services List
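A minimal sketch of copying the referenced AMI into eu-west-1 so the template can be parameterized with a Region-local image ID (the source AMI ID is a placeholder):
aws ec2 copy-image \
  --region eu-west-1 \
  --source-region us-west-2 \
  --source-image-id ami-0123456789abcdef0 \
  --name "app-base-image-eu-west-1"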
A development team recently deployed a new version of a web application to production. After the release, penetration testing revealed a cross-site scripting vulnerability that could expose user data.
Which AWS service will mitigate this issue?
- A . AWS Shield Standard
- B . AWS WAF
- C . Elastic Load Balancing
- D . Amazon Cognito
B
Explanation:
AWS WAF (Web Application Firewall) helps protect web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF can mitigate cross-site scripting (XSS) vulnerabilities by filtering and monitoring HTTP requests based
on custom rules.
Create a Web ACL:
Navigate to the AWS WAF console.
Create a new Web ACL and associate it with your application.
Add Rules to Mitigate XSS:
Use AWS Managed Rules for common threats, including XSS. Create custom rules to inspect requests and block malicious scripts.
Associate Web ACL with Resources:
Attach the Web ACL to your CloudFront distribution, API Gateway, or Application Load Balancer to filter incoming requests.
Reference: AWS WAF Developer Guide
AWS WAF Managed Rules
A company is using an Amazon DynamoDB table for data. A SysOps administrator must configure replication of the table to another AWS Region for disaster recovery.
What should the SysOps administrator do to meet this requirement?
- A . Enable DynamoDB Accelerator (DAX).
- B . Enable DynamoDB Streams, and add a global secondary index (GSI).
- C . Enable DynamoDB Streams, and add a global table Region.
- D . Enable point-in-time recovery.
C
Explanation:
To configure replication of a DynamoDB table to another AWS Region for disaster recovery, you should use DynamoDB Global Tables. Global Tables use DynamoDB Streams to replicate changes across multiple regions.
Enable DynamoDB Streams:
Navigate to the DynamoDB console.
Select your table and enable DynamoDB Streams.
Add a Global Table Region:
With Streams enabled, go to the Global Tables tab.
Add a new region to the table, specifying the region where you want to replicate the data.
Automatic Replication:
DynamoDB will handle the replication of data across regions automatically, ensuring data consistency and high availability.
Reference: DynamoDB Global Tables
Enabling DynamoDB Streams
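A hedged sketch using the current (2019.11.21) global tables version; streams must carry both new and old images, the replica is added in a separate update call once the stream change completes, and the table and Region names are placeholders:
# Ensure the stream captures new and old images
aws dynamodb update-table \
  --table-name AppTable \
  --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES
# Add a replica in the disaster recovery Region
aws dynamodb update-table \
  --table-name AppTable \
  --replica-updates '[{"Create": {"RegionName": "us-west-2"}}]'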
A company hosts a web portal on Amazon EC2 instances. The web portal uses an Elastic Load Balancer (ELB) and Amazon Route 53 for its public DNS service. The ELB and the EC2 instances are deployed by way of a single AWS CloudFormation stack in the us-east-1 Region. The web portal must be highly available across multiple Regions.
Which configuration will meet these requirements?
- A . Deploy a copy of the stack in the us-west-2 Region. Create a single start of authority (SOA) record in Route 53 that includes the IP address from each ELB. Configure the SOA record with health checks. Use the ELB in us-east-1 as the primary record and the ELB in us-west-2 as the secondary record.
- B . Deploy a copy of the stack in the us-west-2 Region. Create an additional A record in Route 53 that includes the ELB in us-west-2 as an alias target. Configure the A records with a failover routing policy and health checks. Use the ELB in us-east-1 as the primary record and the ELB in us-west-2 as the secondary record.
- C . Deploy a new group of EC2 instances in the us-west-2 Region. Associate the new EC2 instances with the existing ELB, and configure load balancer health checks on all EC2 instances. Configure the ELB to update Route 53 when EC2 instances in us-west-2 fail health checks.
- D . Deploy a new group of EC2 instances in the us-west-2 Region. Configure EC2 health checks on all EC2 instances in each Region. Configure a peering connection between the VPCs. Use the VPC in us-east-1 as the primary record and the VPC in us-west-2 as the secondary record.
B
Explanation:
A failover routing policy uses two alias A records with health checking: the ELB in us-east-1 as the primary record and the ELB in us-west-2 as the secondary record, so Route 53 automatically directs traffic to the secondary Region when the primary becomes unhealthy. Option A is incorrect because the start of authority (SOA) record is created automatically by Route 53 when you create a hosted zone and cannot be used for failover routing.
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/migrate-dns-domain-in-use.html#migrate-dns-create-hosted-zone
https://en.wikipedia.org/wiki/SOA_record
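A rough boto3 sketch of the failover records, assuming a hosted zone for portal.example.com and two existing ELBs (all IDs and DNS names below are placeholders); EvaluateTargetHealth provides health checking against the alias targets:
import boto3

route53 = boto3.client('route53')

route53.change_resource_record_sets(
    HostedZoneId='ZHOSTEDZONEEXAMPLE',
    ChangeBatch={
        'Changes': [
            {
                'Action': 'UPSERT',
                'ResourceRecordSet': {
                    'Name': 'portal.example.com',
                    'Type': 'A',
                    'SetIdentifier': 'primary-us-east-1',
                    'Failover': 'PRIMARY',
                    'AliasTarget': {
                        'HostedZoneId': 'ZELBUSEAST1EXAMPLE',   # hosted zone ID of the us-east-1 ELB
                        'DNSName': 'primary-elb.us-east-1.elb.amazonaws.com',
                        'EvaluateTargetHealth': True,
                    },
                },
            },
            {
                'Action': 'UPSERT',
                'ResourceRecordSet': {
                    'Name': 'portal.example.com',
                    'Type': 'A',
                    'SetIdentifier': 'secondary-us-west-2',
                    'Failover': 'SECONDARY',
                    'AliasTarget': {
                        'HostedZoneId': 'ZELBUSWEST2EXAMPLE',   # hosted zone ID of the us-west-2 ELB
                        'DNSName': 'secondary-elb.us-west-2.elb.amazonaws.com',
                        'EvaluateTargetHealth': True,
                    },
                },
            },
        ]
    },
)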
A SysOps administrator must set up notifications for whenever combined billing exceeds a certain threshold for all AWS accounts within a company. The administrator has set up AWS Organizations and enabled Consolidated Billing.
Which additional steps must the administrator perform to set up the billing alerts?
- A . In the payer account: Enable billing alerts in the Billing and Cost Management console; publish an Amazon SNS message when the billing alert triggers.
- B . In each account: Enable billing alerts in the Billing and Cost Management console; set up a billing alarm in Amazon CloudWatch; publish an SNS message when the alarm triggers.
- C . In the payer account: Enable billing alerts in the Billing and Cost Management console; set up a billing alarm in the Billing and Cost Management console to publish an SNS message when the alarm triggers.
- D . In the payer account: Enable billing alerts in the Billing and Cost Management console; set up a billing alarm in Amazon CloudWatch; publish an SNS message when the alarm triggers.
D
Explanation:
To set up notifications for when combined billing exceeds a certain threshold for all AWS accounts within an organization, follow these steps:
Enable Billing Alerts:
In the payer account, go to the Billing and Cost Management console. Under "Billing preferences", enable the "Receive Billing Alerts" option.
Set Up a Billing Alarm in CloudWatch:
Navigate to the CloudWatch console.
Create a new billing alarm, specifying the threshold for the combined billing amount.
Set the alarm to publish a notification to an SNS topic when the threshold is breached.
Publish SNS Message:
Create an SNS topic to receive the billing alarm notifications. Configure the billing alarm to send notifications to the SNS topic.
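For illustration, a boto3 sketch of the alarm and notification steps (the alarm name, threshold, and SNS topic ARN are placeholders); billing metrics are published only in the us-east-1 Region:
import boto3

# Billing metrics are only available in us-east-1
cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

cloudwatch.put_metric_alarm(
    AlarmName='consolidated-billing-threshold',
    Namespace='AWS/Billing',
    MetricName='EstimatedCharges',
    Dimensions=[{'Name': 'Currency', 'Value': 'USD'}],
    Statistic='Maximum',
    Period=21600,                      # evaluate over 6-hour periods
    EvaluationPeriods=1,
    Threshold=1000.0,                  # example threshold in USD
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:111122223333:billing-alerts'],  # placeholder topic ARN
)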
Reference: Setting Up a CloudWatch Billing Alarm
AWS Billing and Cost Management User Guide
A company has multiple AWS Site-to-Site VPN connections between a VPC and its branch offices. The company manages an Amazon Elasticsearch Service (Amazon ES) domain that is configured with public access. The Amazon ES domain has an open domain access policy. A SysOps administrator needs to ensure that Amazon ES can be accessed only from the branch offices while preserving existing data.
Which solution will meet these requirements?
- A . Configure an identity-based access policy on Amazon ES. Add an allow statement to the policy that includes the Amazon Resource Name (ARN) for each branch office VPN connection.
- B . Configure an IP-based domain access policy on Amazon ES. Add an allow statement to the policy that includes the private IP CIDR blocks from each branch office network.
- C . Deploy a new Amazon ES domain in private subnets in a VPC, and import a snapshot from the old domain. Create a security group that allows inbound traffic from the branch office CIDR blocks.
- D . Reconfigure the Amazon ES domain in private subnets in a VPC. Create a security group that allows inbound traffic from the branch office CIDR blocks.
B
Explanation:
To ensure that Amazon Elasticsearch Service (Amazon ES) can be accessed only from the branch offices while preserving existing data, an IP-based domain access policy is the best approach. This allows you to restrict access to specific IP ranges.
Configure an IP-Based Domain Access Policy:
Navigate to the Amazon ES console.
Select the domain and go to the "Access policies" tab.
Update the Access Policy:
Edit the access policy to include an allow statement for the private IP CIDR blocks of each branch office.
Example policy:
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": "es:*",
"Condition": {
"IpAddress": {
"aws:SourceIp": [
"branch-office-cidr-1",
"branch-office-cidr-2"
]
}
}
}
]
}
Verify the Configuration:
Ensure that the policy correctly restricts access to the specified IP ranges. Test access from the branch offices to confirm connectivity.
Reference: Amazon Elasticsearch Service Access Control
Configuring Access Policies
A large company is using AWS Organizations to manage its multi-account AWS environment. According to company policy, all users should have read-level access to a particular Amazon S3 bucket in a central account. The S3 bucket data should not be available outside the organization. A SysOps administrator must set up the permissions and add a bucket policy to the S3 bucket.
Which parameters should be specified to accomplish this in the MOST efficient manner?
- A . Specify ‘*’ as the principal and PrincipalOrgId as a condition.
- B . Specify all account numbers as the principal.
- C . Specify PrincipalOrgId as the principal.
- D . Specify the organization’s management account as the principal.
A
Explanation:
To ensure that all users within the AWS Organization have read-level access to a specific Amazon S3 bucket, while preventing access outside the organization, you can specify a wildcard principal ("Principal": "*") and use the PrincipalOrgId condition key in the bucket policy.
Specify the Principal:
Use "Principal": "*". This means that any principal can access the bucket, but the actual access will be controlled by the condition.
Add Condition with PrincipalOrgId:
Add a condition to restrict access based on the PrincipalOrgId. This condition ensures that only the principals from the specified AWS Organization can access the bucket.
Example bucket policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::bucket-name/*",
"Condition": {
"StringEquals": {
"aws:PrincipalOrgID": "o-exampleorgid"
}
}
}
]
}
Reference: Bucket Policy Examples
This approach ensures that all users within the organization have the required access while blocking access from outside the organization.
A SysOps administrator is troubleshooting connection timeouts to an Amazon EC2 instance that has a public IP address. The instance has a private IP address of 172.31.16.139. When the SysOps administrator tries to ping the instance’s public IP address from the remote IP address 203.0.113.12, the response is "request timed out." The flow logs contain the following information:
What is one cause of the problem?
- A . Inbound security group deny rule
- B . Outbound security group deny rule
- C . Network ACL inbound rules
- D . Network ACL outbound rules
C
Explanation:
The issue of "request timed out" when pinging the public IP address of the EC2 instance could be due to the Network ACL (NACL) inbound rules.
Check NACL Inbound Rules:
Network ACLs act at the subnet level and can explicitly allow or deny traffic to or from a subnet.
Ensure that the NACL associated with the subnet containing the EC2 instance has inbound rules that allow ICMP traffic (which is used for ping).
Example rule to allow inbound ICMP traffic:
Rule Number: 100
Type: ICMP
Protocol: 1
Port Range: N/A (ICMP doesn’t use ports)
Source: 0.0.0.0/0 (or specific IP range)
Allow/Deny: ALLOW
Reference: Network ACLs
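A boto3 sketch of adding such an inbound entry (the network ACL ID is a placeholder); because network ACLs are stateless, a corresponding outbound rule for the ICMP replies is also required:
import boto3

ec2 = boto3.client('ec2')

# Allow inbound ICMP (ping) on the subnet's network ACL
ec2.create_network_acl_entry(
    NetworkAclId='acl-0123456789abcdef0',
    RuleNumber=100,
    Protocol='1',                          # protocol number 1 = ICMP
    RuleAction='allow',
    Egress=False,                          # inbound rule
    CidrBlock='0.0.0.0/0',                 # or restrict to 203.0.113.12/32
    IcmpTypeCode={'Type': -1, 'Code': -1}, # all ICMP types and codes
)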
Verify Security Groups:
Although the most probable cause is NACLs, also ensure that the security group attached to the instance allows inbound ICMP traffic.
By allowing ICMP traffic in the NACL inbound rules, you can resolve the timeout issue when pinging the EC2 instance.
A company has multiple Amazon EC2 instances that run a resource-intensive application in a
development environment. A SysOps administrator is implementing a solution to stop these EC2 instances when they are not in use.
Which solution will meet this requirement?
- A . Assess AWS CloudTrail logs to verify that there is no EC2 API activity. Invoke an AWS Lambda function to stop the EC2 instances.
- B . Create an Amazon CloudWatch alarm to stop the EC2 instances when the average CPU utilization is lower than 5% for a 30-minute period.
- C . Create an Amazon CloudWatch metric to stop the EC2 instances when the VolumeReadBytes metric is lower than 500 for a 30-minute period.
- D . Use AWS Config to invoke an AWS Lambda function to stop the EC2 instances based on resource configuration changes.
B
Explanation:
To stop EC2 instances in a development environment when they are not in use, you can create a CloudWatch alarm based on CPU utilization.
Create CloudWatch Alarm:
Navigate to the CloudWatch console.
Select "Alarms" and click on "Create Alarm".
Choose the EC2 instance metric for CPU utilization.
Set the condition to trigger the alarm when the average CPU utilization is less than 5% for a continuous 30-minute period.
Reference: Creating Amazon CloudWatch Alarms
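A boto3 sketch of such an alarm (the instance ID and alarm name are placeholders); the built-in arn:aws:automate:<region>:ec2:stop action stops the instance when the alarm fires:
import boto3

cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

cloudwatch.put_metric_alarm(
    AlarmName='dev-instance-idle',
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-0123456789abcdef0'}],
    Statistic='Average',
    Period=300,
    EvaluationPeriods=6,               # 6 x 5 minutes = 30 minutes
    Threshold=5.0,
    ComparisonOperator='LessThanThreshold',
    AlarmActions=['arn:aws:automate:us-east-1:ec2:stop'],  # built-in EC2 stop action
)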
Configure Alarm Actions:
In the actions section of the alarm creation, specify the action to stop the instance.
This can be done directly through the built-in EC2 stop action, or by invoking an AWS Lambda function (for example, through an Amazon SNS notification).
Example using Lambda:
import boto3

def lambda_handler(event, context):
    # Stop the idle development instance; replace 'instance-id' with the real instance ID
    ec2 = boto3.client('ec2')
    response = ec2.stop_instances(
        InstanceIds=[
            'instance-id'
        ]
    )
    return response
Reference: Using Amazon CloudWatch Alarms
By setting up this CloudWatch alarm, the EC2 instances will automatically stop when they are not being utilized, reducing costs in the development environment.
A SysOps administrator needs to configure a solution that will deliver digital content to a set of authorized users through Amazon CloudFront. Unauthorized users must be restricted from access.
Which solution will meet these requirements?
- A . Store the digital content in an Amazon S3 bucket that does not have public access blocked. Use signed URLs to access the S3 bucket through CloudFront.
- B . Store the digital content in an Amazon S3 bucket that has public access blocked. Use an origin access identity (OAI) to deliver the content through CloudFront. Restrict S3 bucket access with signed URLs in CloudFront.
- C . Store the digital content in an Amazon S3 bucket that has public access blocked. Use an origin access identity (OAI) to deliver the content through CloudFront. Enable field-level encryption.
- D . Store the digital content in an Amazon S3 bucket that does not have public access blocked. Use signed cookies for restricted delivery of the content through CloudFront.
B
Explanation:
To deliver digital content to authorized users through CloudFront while restricting unauthorized access, you can use an origin access identity (OAI) with signed URLs.
Store Content in S3 with Public Access Blocked:
Ensure the S3 bucket has public access blocked.
Navigate to the S3 console, select the bucket, and configure the "Block Public Access" settings.
Reference: Blocking public access to your Amazon S3 storage
Create an OAI for CloudFront:
In the CloudFront console, create an OAI to securely access the S3 bucket.
Associate the OAI with the CloudFront distribution.
Reference: Using an OAI
Restrict S3 Bucket Access to the OAI:
Update the S3 bucket policy to grant access to the OAI.
Example bucket policy:
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity <OAI-ID>"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::bucket-name/*"
}
]
}
Use Signed URLs for Restricted Access:
Configure CloudFront to use signed URLs to control access to the content.
Reference: Serving private content with signed URLs and signed cookies
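For illustration, signed URLs can be generated with botocore's CloudFrontSigner. The key pair ID, private key file, and distribution domain below are placeholders, and the private key must correspond to a public key registered with the distribution's trusted key group or key pair:
import datetime

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

KEY_PAIR_ID = 'K2EXAMPLEEXAMPLE'          # placeholder public key ID

def rsa_signer(message):
    # Sign the CloudFront policy with the matching private key
    with open('private_key.pem', 'rb') as key_file:
        private_key = serialization.load_pem_private_key(key_file.read(), password=None)
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)

# Generate a URL that is valid for one hour
signed_url = signer.generate_presigned_url(
    'https://d111111abcdef8.cloudfront.net/private/content.mp4',
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1),
)
print(signed_url)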
This setup ensures that only authorized users can access the content through CloudFront using signed URLs, while the S3 bucket remains private and secure.
A company has attached the following policy to an IAM user:
Which of the following actions are allowed for the IAM user?
- A . Amazon RDS DescribeDBInstances action in the us-east-1 Region
- B . Amazon S3 PutObject operation in a bucket named testbucket
- C . Amazon EC2 DescribeInstances action in the us-east-1 Region
- D . Amazon EC2 AttachNetworkInterface action in the eu-west-1 Region
A
Explanation:
Based on the attached policy, the following actions are allowed for the IAM user:
Allow Amazon RDS DescribeDBInstances Action:
The policy allows rds:Describe* actions on all resources without any condition, so the user can describe RDS instances in any region.
Example action:
rds:DescribeDBInstances
Reference: Amazon RDS IAM Policies
Allow Amazon EC2 Actions in us-east-1 with Condition:
The policy allows ec2:* actions in the us-east-1 region based on the condition StringEquals for ec2:Region.
Example action:
ec2:DescribeInstances (only in us-east-1)
Reference: Amazon EC2 IAM Policies
Deny All Other EC2 Actions Globally:
The policy explicitly denies all actions that are not ec2:*, which means it blocks any other EC2 actions that don’t match the allow rule above.
Reference: IAM JSON Policy Elements: NotAction
Given these details, the only valid action among the options is option A, the Amazon RDS DescribeDBInstances action in the us-east-1 Region.
A company runs a web application on three Amazon EC2 instances behind an Application Load Balancer (ALB). The company notices that random periods of increased traffic cause a degradation in the application’s performance. A SysOps administrator must scale the application to meet the increased traffic.
Which solution meets these requirements?
- A . Create an Amazon CloudWatch alarm to monitor application latency and increase the size of each EC2 instance if the desired threshold is reached.
- B . Create an Amazon EventBridge (Amazon CloudWatch Events) rule to monitor application latency and add an EC2 instance to the ALB if the desired threshold is reached.
- C . Deploy the application to an Auto Scaling group of EC2 instances with a target tracking scaling policy. Attach the ALB to the Auto Scaling group.
- D . Deploy the application to an Auto Scaling group of EC2 instances with a scheduled scaling policy.
Attach the ALB to the Auto Scaling group.
C
Explanation:
Auto Scaling groups automatically adjust the number of EC2 instances to handle the load on your application. A target tracking scaling policy adjusts the capacity of the Auto Scaling group based on a target metric, such as CPU utilization or application latency.
Create an Auto Scaling Group:
Navigate to the EC2 console and select "Auto Scaling Groups".
Create a new Auto Scaling group and specify the launch template or configuration for your EC2 instances.
Configure Target Tracking Scaling Policy:
In the Auto Scaling group settings, add a scaling policy.
Choose "Target Tracking" and select a predefined metric such as "Average CPU Utilization".
Set the target value (e.g., 50% CPU utilization).
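A minimal boto3 sketch of attaching a target tracking policy to an existing Auto Scaling group (the group and policy names are placeholders):
import boto3

autoscaling = boto3.client('autoscaling')

autoscaling.put_scaling_policy(
    AutoScalingGroupName='web-portal-asg',
    PolicyName='cpu-target-tracking',
    PolicyType='TargetTrackingScaling',
    TargetTrackingConfiguration={
        'PredefinedMetricSpecification': {
            'PredefinedMetricType': 'ASGAverageCPUUtilization'
        },
        'TargetValue': 50.0,               # keep average CPU utilization near 50%
    },
)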
Attach ALB to Auto Scaling Group:
In the "Load Balancing" section of the Auto Scaling group settings, attach your existing ALB. This ensures that new instances are automatically registered with the ALB. Review and Create:
Review the Auto Scaling group settings and create the group.
Reference: Amazon EC2 Auto Scaling
Target Tracking Scaling Policies
A company’s public website is hosted in an Amazon S3 bucket in the us-east-1 Region behind an Amazon CloudFront distribution. The company wants to ensure that the website is protected from DDoS attacks. A SysOps administrator needs to deploy a solution that gives the company the ability to maintain control over the rate limit at which DDoS protections are applied.
Which solution will meet these requirements?
- A . Deploy a global-scoped AWS WAF web ACL with an allow default action. Configure an AWS WAF rate-based rule to block matching traffic. Associate the web ACL with the CloudFront distribution.
- B . Deploy an AWS WAF web ACL with an allow default action in us-east-1. Configure an AWS WAF rate-based rule to block matching traffic. Associate the web ACL with the S3 bucket.
- C . Deploy a global-scoped AWS WAF web ACL with a block default action. Configure an AWS WAF rate-based rule to allow matching traffic. Associate the web ACL with the CloudFront distribution.
- D . Deploy an AWS WAF web ACL with a block default action in us-east-1. Configure an AWS WAF rate-based rule to allow matching traffic. Associate the web ACL with the S3 bucket.
A
Explanation:
AWS WAF (Web Application Firewall) helps protect your web applications from common web exploits and bots. A rate-based rule allows you to control the rate of requests to your application.
Create a Global-Scoped AWS WAF Web ACL:
Navigate to the AWS WAF console.
Create a new Web ACL and choose "Global" for the scope.
Set the default action to "Allow".
Configure a Rate-Based Rule:
Within the Web ACL, add a new rule and select "Rate-based rule".
Define the rate limit (e.g., 2000 requests per 5 minutes).
Set the action to "Block".
Associate Web ACL with CloudFront Distribution:
After creating the Web ACL and rule, go to your CloudFront distribution settings. In the "General" tab, associate the Web ACL with your CloudFront distribution.
Review and Confirm:
Review the configuration and ensure the Web ACL is correctly associated with the CloudFront distribution.
Reference: AWS WAF Rate-Based Rules
Associating AWS WAF with CloudFront
A company hosts an internal application on Amazon EC2 instances. All application data and requests route through an AWS Site-to-Site VPN connection between the on-premises network and AWS. The company must monitor the application for changes that allow network access outside of the corporate network. Any change that exposes the application externally must be restricted automatically.
Which solution meets these requirements in the MOST operationally efficient manner?
- A . Create an AWS Lambda function that updates security groups that are associated with the elastic network interface to remove inbound rules with noncorporate CIDR ranges. Turn on VPC Flow Logs, and send the logs to Amazon CloudWatch Logs. Create an Amazon CloudWatch alarm that matches traffic from noncorporate CIDR ranges, and publish a message to an Amazon Simple Notification Service (Amazon SNS) topic with the Lambda function as a target.
- B . Create a scheduled Amazon EventBridge (Amazon CloudWatch Events) rule that targets an AWS Systems Manager Automation document to check for public IP addresses on the EC2 instances. If public IP addresses are found on the EC2 instances, initiate another Systems Manager Automation document to terminate the instances.
- C . Configure AWS Config and a custom rule to monitor whether a security group allows inbound requests from noncorporate CIDR ranges. Create an AWS Systems Manager Automation document to remove any noncorporate CIDR ranges from the application security groups.
- D . Configure AWS Config and the managed rule for monitoring public IP associations with the EC2 instances by tag. Tag the EC2 instances with an identifier. Create an AWS Systems Manager Automation document to remove the public IP association from the EC2 instances.
C
Explanation:
AWS Config continuously evaluates resource configurations, so a custom rule can detect any security group change that allows inbound access from CIDR ranges outside the corporate network. Pairing the rule with an AWS Systems Manager Automation document as a remediation action removes the offending rules automatically, which is the most operationally efficient way to keep the application restricted to the corporate network.
Configure AWS Config:
Enable AWS Config in the Region and record the security groups associated with the application's EC2 instances.
Create a Custom Rule:
Create a custom rule that evaluates whether any associated security group contains inbound rules with noncorporate CIDR ranges and marks such resources as noncompliant.
Set Up Automatic Remediation:
Associate an AWS Systems Manager Automation document with the rule as a remediation action so that noncompliant ingress rules are removed without manual intervention.
Reference: AWS Config Custom Rules
Remediating Noncompliant Resources with AWS Config Rules
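As a rough sketch of what such a remediation step might do (the security group ID, port, and CIDR range below are placeholders), the offending ingress rule can be revoked with a single API call:
import boto3

ec2 = boto3.client('ec2')

# Remove an ingress rule that allows a noncorporate CIDR range
ec2.revoke_security_group_ingress(
    GroupId='sg-0123456789abcdef0',
    IpPermissions=[
        {
            'IpProtocol': 'tcp',
            'FromPort': 443,
            'ToPort': 443,
            'IpRanges': [{'CidrIp': '203.0.113.0/24'}],   # noncorporate range to remove
        }
    ],
)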