How should you run this reverse proxy?

You want to run a single caching HTTP reverse proxy on GCP for a latency-sensitive website. This specific reverse proxy consumes almost no CPU. You want to have a 30-GB in-memory cache, and need an additional 2 GB of memory for the rest of the processes. You want to minimize cost.

How should you run this reverse proxy?
A . Create a Cloud Memorystore for Redis instance with 32-GB capacity.
B . Run it on Compute Engine, and choose a custom instance type with 6 vCPUs and 32 GB of memory.
C . Package it in a container image, and run it on Kubernetes Engine, using n1-standard-32 instances as nodes.
D . Run it on Compute Engine, choose the instance type n1-standard-1, and add an SSD persistent disk of 32 GB.

Answer: A

Explanation:

What is Google Cloud Memorystore?

Overview. Cloud Memorystore for Redis is a fully managed Redis service for Google Cloud Platform. Applications running on Google Cloud Platform can achieve extreme performance by leveraging the highly scalable, highly available, and secure Redis service without the burden of managing complex Redis deployments.
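As a sketch of what option A looks like in practice, a 32-GB instance can be provisioned with the gcloud CLI. The instance name, region, and tier below are illustrative placeholders, not values from the question:

```shell
# Create a 32-GB Cloud Memorystore for Redis instance.
# "proxy-cache", "us-central1", and Basic Tier are example choices;
# Basic Tier (no replica) keeps cost down, matching the question's goal.
gcloud redis instances create proxy-cache \
    --size=32 \
    --region=us-central1 \
    --tier=basic

# Look up the host and port to point the application at the cache.
gcloud redis instances describe proxy-cache --region=us-central1
```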

What should you do?

You are running multiple VPC-native Google Kubernetes Engine clusters in the same subnet. The IPs available for the nodes are exhausted, and you want to ensure that the clusters can grow in nodes when needed.

What should you do?
A . Create a new subnet in the same region as the subnet being used.
B . Add an alias IP range to the subnet used by the GKE clusters.
C . Create a new VPC, and set up VPC peering with the existing VPC.
D . Expand the CIDR range of the relevant subnet for the cluster.

Answer: D

Explanation:

gcloud compute networks subnets expand-ip-range – expand the IP range of a Compute Engine subnetwork

https://cloud.google.com/sdk/gcloud/reference/compute/networks/subnets/expand-ip-range
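As an illustrative example (the subnet name and region below are placeholders), expanding a subnet means specifying a numerically smaller prefix length, i.e. a larger range; note the expansion cannot be reversed:

```shell
# Example only: widen the GKE clusters' subnet to a /20 so more node IPs
# become available. "gke-subnet" and "us-central1" are placeholder values.
gcloud compute networks subnets expand-ip-range gke-subnet \
    --region=us-central1 \
    --prefix-length=20
```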

Which storage option should you use?

Your company uses Cloud Storage to store application backup files for disaster recovery purposes. You want to follow Google’s recommended practices.

Which storage option should you use?
A . Multi-Regional Storage
B . Regional Storage
C . Nearline Storage
D . Coldline Storage

Answer: D

Explanation:

Reference:

https://cloud.google.com/storage/docs/storage-classes#nearline

https://cloud.google.com/blog/products/gcp/introducing-coldline-and-a-unified-platform-for-data-storage

Cloud Storage Coldline: a low-latency storage class for long-term archiving Coldline is a new Cloud Storage class designed for long-term archival and disaster recovery. Coldline is perfect for the archival needs of big data or multimedia content, allowing businesses to archive years of data. Coldline provides fast and instant (millisecond) access to data and changes the way that companies think about storing and accessing their cold data.
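As a minimal sketch, a bucket intended for these backup files could be created with Coldline as its default storage class. The bucket name and location below are placeholders:

```shell
# Example only: create a bucket whose default storage class is Coldline,
# suited to disaster-recovery backups that are rarely read.
# "example-dr-backups" and "us-central1" are placeholder values.
gsutil mb -c coldline -l us-central1 gs://example-dr-backups/
```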

What should you do?

You want to configure a solution for archiving data in a Cloud Storage bucket. The solution must be cost-effective. Data with multiple versions should be archived after 30 days. Previous versions are accessed once a month for reporting. This archive data is also occasionally updated at month-end.

What should you do?
A . Add a bucket lifecycle rule that archives data with newer versions after 30 days to Coldline Storage.
B . Add a bucket lifecycle rule that archives data with newer versions after 30 days to Nearline Storage.
C . Add a bucket lifecycle rule that archives data from regional storage after 30 days to Coldline Storage.
D . Add a bucket lifecycle rule that archives data from regional storage after 30 days to Nearline Storage.

Answer: B

Explanation:

Reference: https://cloud.google.com/storage/docs/managing-lifecycles

Nearline Storage is ideal for data you plan to read or modify on average once per month or less, and this option archives just the noncurrent versions, which is what we want to do.

Ref: https://cloud.google.com/storage/docs/storage-classes#nearline
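The lifecycle rule from option B can be sketched as the following configuration, assuming versioning is already enabled on the bucket; the bucket name is a placeholder:

```shell
# Example only: lifecycle config that moves noncurrent (archived) object
# versions to Nearline Storage 30 days after they become noncurrent.
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
      "condition": {"daysSinceNoncurrentTime": 30, "isLive": false}
    }
  ]
}
EOF

# "example-bucket" is a placeholder bucket name.
gsutil lifecycle set lifecycle.json gs://example-bucket/
```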

What should you do?

You created a Google Cloud Platform project with an App Engine application inside the project. You initially configured the application to be served from the us-central region. Now you want the application to be served from the asia-northeast1 region.

What should you do?
A . Change the default region property setting in the existing GCP project to asia-northeast1.
B . Change the region property setting in the existing App Engine application from us-central to asia-northeast1.
C . Create a second App Engine application in the existing GCP project and specify asia-northeast1 as the region to serve your application.
D . Create a new GCP project and create an App Engine application inside this new project.
Specify asia-northeast1 as the region to serve your application.

Answer: D

Explanation:

https://cloud.google.com/appengine/docs/flexible/managing-projects-apps-billing#:~:text=Each%20Cloud%20project%20can%20contain%20only%20a%20single%20App%20Engine%20application%2C%20and%20once%20created%20you%20cannot%20change%20the%20location%20of%20your%20App%20Engine%20application.

Two App Engine applications can’t run in the same project; you can check this easy diagram for more info: https://cloud.google.com/appengine/docs/standard/an-overview-of-app-engine#components_of_an_application

And you can’t change the location of your App Engine application after setting it.

https://cloud.google.com/appengine/docs/standard/locations

App Engine is regional, and you cannot change an app’s region after you set it. Therefore, the only way to have an app run in another region is to create a new project and target App Engine to run in the required region (asia-northeast1 in our case).

Ref: https://cloud.google.com/appengine/docs/locations
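The steps in option D can be sketched with the gcloud CLI; the project ID below is a placeholder:

```shell
# Example only: create a new project and an App Engine app in the
# desired region. "example-asia-project" is a placeholder project ID.
gcloud projects create example-asia-project

# The region is fixed at creation time and cannot be changed later.
gcloud app create --project=example-asia-project --region=asia-northeast1
```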

What should you do to delete the deployment and avoid pod getting recreated?

You created a Kubernetes deployment by running kubectl run nginx --image=nginx --replicas=1. After a few days, you decided you no longer want this deployment. You identified the pod and deleted it by running kubectl delete pod.

You noticed the pod got recreated.

$ kubectl get pods

NAME READY STATUS RESTARTS AGE

nginx-84748895c4-nqqmt 1/1 Running 0 9m41s

$ kubectl delete pod nginx-84748895c4-nqqmt

pod "nginx-84748895c4-nqqmt" deleted

$ kubectl get pods

NAME READY STATUS RESTARTS AGE

nginx-84748895c4-k6bzl 1/1 Running 0 25s

What should you do to delete the deployment and avoid pod getting recreated?
A . kubectl delete deployment nginx
B . kubectl delete --deployment=nginx
C . kubectl delete pod nginx-84748895c4-k6bzl --no-restart
D . kubectl delete nginx

Answer: A

Explanation:

This command correctly deletes the deployment. Pods are managed by Kubernetes workloads (deployments). When a pod is deleted, the deployment detects that the pod is unavailable and brings up another pod to maintain the replica count. The only way to delete the workload is by deleting the deployment itself using the kubectl delete deployment command.

$ kubectl delete deployment nginx

deployment.apps "nginx" deleted

Ref: https://kubernetes.io/docs/reference/kubectl/cheatsheet/#deleting-resources

What should you do?

Your management has asked an external auditor to review all the resources in a specific project. The security team has enabled the Organization Policy called Domain Restricted Sharing on the organization node by specifying only your Cloud Identity domain. You want the auditor to only be able to view, but not modify, the resources in that project.

What should you do?
A . Ask the auditor for their Google account, and give them the Viewer role on the project.
B . Ask the auditor for their Google account, and give them the Security Reviewer role on the project.
C . Create a temporary account for the auditor in Cloud Identity, and give that account the Viewer role on the project.
D . Create a temporary account for the auditor in Cloud Identity, and give that account the Security Reviewer role on the project.

Answer: C

Explanation:

Using primitive roles: the following table lists the primitive roles that you can grant to access a project, the description of what the role does, and the permissions bundled within that role.

Avoid using primitive roles except when absolutely necessary. These roles are very powerful, and include a large number of permissions across all Google Cloud services. For more details on when you should use primitive roles, see the Identity and Access Management FAQ.

IAM predefined roles are much more granular, and allow you to carefully manage the set of permissions that your users have access to. See Understanding Roles for a list of roles that can be granted at the project level. Creating custom roles can further increase the control you have over user permissions.

https://cloud.google.com/resource-manager/docs/access-control-proj#using_primitive_roles

https://cloud.google.com/iam/docs/understanding-custom-roles
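Option C can be sketched as follows once the temporary account exists in Cloud Identity; the project ID and the auditor's account are placeholders:

```shell
# Example only: grant the temporary Cloud Identity account read-only
# access to the project. "example-audit-project" and the member email
# are placeholder values.
gcloud projects add-iam-policy-binding example-audit-project \
    --member="user:auditor@your-cloud-identity-domain.com" \
    --role="roles/viewer"
```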

What should you do?

You have a development project with appropriate IAM roles defined. You are creating a production project and want to have the same IAM roles on the new project, using the fewest possible steps.

What should you do?
A . Use gcloud iam roles copy and specify the production project as the destination project.
B . Use gcloud iam roles copy and specify your organization as the destination organization.
C . In the Google Cloud Platform Console, use the ‘create role from role’ functionality.
D . In the Google Cloud Platform Console, use the ‘create role’ functionality and select all applicable permissions.

Answer: A

Explanation:

Reference: https://cloud.google.com/sdk/gcloud/reference/iam/roles/copy

To create a copy of the existing role roles/spanner.databaseAdmin into a project with PROJECT_ID, run:

gcloud iam roles copy --source="roles/spanner.databaseAdmin" --destination=CustomSpannerDbAdmin --dest-project=PROJECT_ID

What should you do?

You have one project called proj-sa where you manage all your service accounts. You want to be able to use a service account from this project to take snapshots of VMs running in another project called proj-vm.

What should you do?
A . Download the private key from the service account, and add it to each VMs custom metadata.
B . Download the private key from the service account, and add the private key to each VM’s SSH keys.
C . Grant the service account the IAM Role of Compute Storage Admin in the project called proj-vm.
D . When creating the VMs, set the service account’s API scope for Compute Engine to read/write.

Answer: C

Explanation:

https://gtseres.medium.com/using-service-accounts-across-projects-in-gcp-cf9473fef8f0

You create the service account in proj-sa and take note of the service account email. Then you go to proj-vm, select IAM > ADD, add the service account’s email as a new member, and give it the Compute Storage Admin role.

https://cloud.google.com/compute/docs/access/iam#compute.storageAdmin