Exam4Training

Google Associate Cloud Engineer Google Cloud Certified – Associate Cloud Engineer Online Training

Question #1

Every employee of your company has a Google account. Your operational team needs to manage a large number of instances on Compute Engine. Each member of this team needs only administrative access to the servers. Your security team wants to ensure that the deployment of credentials is operationally efficient and must be able to determine who accessed a given instance.

What should you do?

  • A . Generate a new SSH key pair. Give the private key to each member of your team. Configure the public key in the metadata of each instance.
  • B . Ask each member of the team to generate a new SSH key pair and to send you their public key.
    Use a configuration management tool to deploy those keys on each instance.
  • C . Ask each member of the team to generate a new SSH key pair and to add the public key to their Google account. Grant the “compute.osAdminLogin” role to the Google group corresponding to this team.
  • D . Generate a new SSH key pair. Give the private key to each member of your team. Configure the public key as a project-wide public SSH key in your Cloud Platform project and allow project-wide public SSH keys on each instance.

Correct Answer: C

Explanation:

https://cloud.google.com/compute/docs/instances/managing-instance-access
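
As an illustration of option C, assuming a hypothetical project my-project and a Google group ops-team@example.com for the operations team, enabling OS Login and granting the role to the group could look roughly like this:

$ gcloud compute project-info add-metadata --metadata enable-oslogin=TRUE
$ gcloud projects add-iam-policy-binding my-project --member="group:ops-team@example.com" --role="roles/compute.osAdminLogin"

With OS Login, SSH access is tied to each employee's Google identity, so the security team can audit who accessed a given instance.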

Question #2

You need to create a custom VPC with a single subnet. The subnet’s range must be as large as possible.

Which range should you use?

  • A . 0.0.0.0/0
  • B . 10.0.0.0/8
  • C . 172.16.0.0/12
  • D . 192.168.0.0/16

Correct Answer: B

Explanation:

https://cloud.google.com/vpc/docs/vpc#manually_created_subnet_ip_ranges
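
A minimal sketch of creating such a custom VPC and its largest-possible subnet with gcloud (network, subnet, and region names are hypothetical):

$ gcloud compute networks create my-vpc --subnet-mode=custom
$ gcloud compute networks subnets create my-subnet --network=my-vpc --region=us-central1 --range=10.0.0.0/8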

Question #3

You want to select and configure a cost-effective solution for relational data on Google Cloud Platform. You are working with a small set of operational data in one geographic location. You need to support point-in-time recovery.

What should you do?

  • A . Select Cloud SQL (MySQL). Verify that the enable binary logging option is selected.
  • B . Select Cloud SQL (MySQL). Select the create failover replicas option.
  • C . Select Cloud Spanner. Set up your instance with 2 nodes.
  • D . Select Cloud Spanner. Set up your instance as multi-regional.

Correct Answer: A

Explanation:

Reference:

https://cloud.google.com/sql/docs/mysql/backup-recovery/restore

https://cloud.google.com/sql/docs/mysql/backup-recovery/pitr#disk-usage
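
For illustration, binary logging (required for point-in-time recovery) can be enabled at creation or afterwards; the instance name, tier, and region below are hypothetical:

$ gcloud sql instances create my-sql-instance --database-version=MYSQL_8_0 --tier=db-n1-standard-1 --region=us-central1 --enable-bin-log
$ gcloud sql instances patch my-sql-instance --enable-bin-log   # enable it on an existing instance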

Question #4

You want to configure autohealing for network load balancing for a group of Compute Engine instances that run in multiple zones, using the fewest possible steps. You need to configure re-creation of VMs if they are unresponsive after 3 attempts of 10 seconds each.

What should you do?

  • A . Create an HTTP load balancer with a backend configuration that references an existing instance group. Set the health check to healthy (HTTP).
  • B . Create an HTTP load balancer with a backend configuration that references an existing instance group. Define a balancing mode and set the maximum RPS to 10.
  • C . Create a managed instance group. Set the Autohealing health check to healthy (HTTP).
  • D . Create a managed instance group. Verify that the autoscaling setting is on.

Correct Answer: C

Explanation:

https://cloud.google.com/compute/docs/instance-groups

https://cloud.google.com/load-balancing/docs/network/transition-to-backend-services#console

In order to enable auto-healing, you need to group the instances into a managed instance group. Managed instance groups (MIGs) maintain the high availability of your applications by proactively keeping your virtual machine (VM) instances available. An auto-healing policy on the MIG relies on an application-based health check to verify that an application is responding as expected. If the auto-healer determines that an application isn't responding, the managed instance group automatically recreates that instance.

It is important to use separate health checks for load balancing and for auto-healing. Health checks for load balancing can and should be more aggressive because these health checks determine whether an instance receives user traffic. You want to catch non-responsive instances quickly, so you can redirect traffic if necessary. In contrast, health checking for auto-healing causes Compute Engine to proactively replace failing instances, so this health check should be more conservative than a load balancing health check.
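
A rough gcloud sketch of configuring autohealing on an existing managed instance group (names are hypothetical; a multi-zone group is addressed with --region). The "3 attempts of 10 seconds each" maps to a 10-second check interval and an unhealthy threshold of 3:

$ gcloud compute health-checks create http autoheal-check --check-interval=10s --unhealthy-threshold=3
$ gcloud compute instance-groups managed update my-mig --region=us-central1 --health-check=autoheal-check --initial-delay=300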

Question #5

You are using multiple configurations for gcloud. You want to review the configured Kubernetes Engine cluster of an inactive configuration using the fewest possible steps.

What should you do?

  • A . Use gcloud config configurations describe to review the output.
  • B . Use gcloud config configurations activate and gcloud config list to review the output.
  • C . Use kubectl config get-contexts to review the output.
  • D . Use kubectl config use-context and kubectl config view to review the output.

Correct Answer: D

Explanation:

Reference: https://medium.com/google-cloud/kubernetes-engine-kubectl-config-b6270d2b656c

kubectl config view -o jsonpath='{.users[].name}'   # display the first user
kubectl config view -o jsonpath='{.users[*].name}'  # get a list of users
kubectl config get-contexts                         # display the list of contexts
kubectl config current-context                      # display the current context
kubectl config use-context my-cluster-name          # set the default context to my-cluster-name

https://kubernetes.io/docs/reference/kubectl/cheatsheet/

Question #6

Your company uses Cloud Storage to store application backup files for disaster recovery purposes. You want to follow Google’s recommended practices.

Which storage option should you use?

  • A . Multi-Regional Storage
  • B . Regional Storage
  • C . Nearline Storage
  • D . Coldline Storage

Correct Answer: D

Explanation:

Reference:

https://cloud.google.com/storage/docs/storage-classes#nearline

https://cloud.google.com/blog/products/gcp/introducing-coldline-and-a-unified-platform-for-data-storage

Cloud Storage Coldline: a low-latency storage class for long-term archiving Coldline is a new Cloud Storage class designed for long-term archival and disaster recovery. Coldline is perfect for the archival needs of big data or multimedia content, allowing businesses to archive years of data. Coldline provides fast and instant (millisecond) access to data and changes the way that companies think about storing and accessing their cold data.
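
For example, a Coldline bucket for the backups could be created with gsutil (bucket name and location are hypothetical):

$ gsutil mb -c coldline -l us gs://my-dr-backups/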

Question #7

Several employees at your company have been creating projects with Cloud Platform and paying for it with their personal credit cards, which the company reimburses. The company wants to centralize all these projects under a single, new billing account.

What should you do?

  • A . Contact cloud-billing@google.com with your bank account details and request a corporate billing account for your company.
  • B . Create a ticket with Google Support and wait for their call to share your credit card details over the phone.
  • C . In the Google Cloud Platform Console, go to the Resource Manager and move all projects to the root Organization.
  • D . In the Google Cloud Platform Console, create a new billing account and set up a payment method.

Correct Answer: D

Explanation:

( https://cloud.google.com/resource-manager/docs/project-migration#change_billing_account )

https://cloud.google.com/billing/docs/concepts

https://cloud.google.com/resource-manager/docs/project-migration

Question #8

You have an application that looks for its licensing server on the IP 10.0.3.21. You need to deploy the licensing server on Compute Engine. You do not want to change the configuration of the application and want the application to be able to reach the licensing server.

What should you do?

  • A . Reserve the IP 10.0.3.21 as a static internal IP address using gcloud and assign it to the licensing server.
  • B . Reserve the IP 10.0.3.21 as a static public IP address using gcloud and assign it to the licensing
    server.
  • C . Use the IP 10.0.3.21 as a custom ephemeral IP address and assign it to the licensing server.
  • D . Start the licensing server with an automatic ephemeral IP address, and then promote it to a static internal IP address.

Correct Answer: A

Explanation:

The IP 10.0.3.21 falls in a private (internal) range, so to ensure that it never changes it should be reserved as a static internal IP address and assigned to the licensing server.
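
A sketch of reserving that specific internal address and attaching it to the new VM (region, subnet, and instance name are hypothetical):

$ gcloud compute addresses create licensing-server-ip --region=us-central1 --subnet=default --addresses=10.0.3.21
$ gcloud compute instances create licensing-server --zone=us-central1-a --subnet=default --private-network-ip=10.0.3.21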

Question #9

You are deploying an application to App Engine. You want the number of instances to scale based on request rate. You need at least 3 unoccupied instances at all times.

Which scaling type should you use?

  • A . Manual Scaling with 3 instances.
  • B . Basic Scaling with min_instances set to 3.
  • C . Basic Scaling with max_instances set to 3.
  • D . Automatic Scaling with min_idle_instances set to 3.

Correct Answer: D

Explanation:

Reference:

https://cloud.google.com/appengine/docs/standard/python/how-instances-are-managed

https://cloud.google.com/appengine/docs/standard/go/config/appref

"App Engine calculates the number of instances necessary to serve your current application traffic

based on scaling settings such as target_cpu_utilization and target_throughput_utilization. Setting

min_idle_instances specifies the number of instances to run in addition to this calculated number.

For example, if App Engine calculates that 5 instances are necessary to serve traffic, and

min_idle_instances is set to 2, App Engine will run 7 instances (5, calculated based on traffic, plus 2

additional per min_idle_instances)."

Automatic scaling creates dynamic instances based on request rate, response latencies, and other application metrics. However, if you specify the number of minimum idle instances, that specified number of instances run as resident instances while any additional instances are dynamic.

Ref: https://cloud.google.com/appengine/docs/standard/python/how-instances-are-managed
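
As a minimal sketch, the relevant app.yaml lines for App Engine standard might look like this (the runtime shown is hypothetical):

$ cat > app.yaml <<EOF
runtime: python39
automatic_scaling:
  min_idle_instances: 3
EOF
$ gcloud app deploy app.yaml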

Question #10

You have a development project with appropriate IAM roles defined. You are creating a production project and want to have the same IAM roles on the new project, using the fewest possible steps.

What should you do?

  • A . Use gcloud iam roles copy and specify the production project as the destination project.
  • B . Use gcloud iam roles copy and specify your organization as the destination organization.
  • C . In the Google Cloud Platform Console, use the ‘create role from role’ functionality.
  • D . In the Google Cloud Platform Console, use the ‘create role’ functionality and select all applicable permissions.

Correct Answer: A

Explanation:

Reference: https://cloud.google.com/sdk/gcloud/reference/iam/roles/copy

To create a copy of an existing role spanner.databaseAdmin into a project with PROJECT_ID, run: gcloud iam roles copy --source="roles/spanner.databaseAdmin" --destination=CustomSpannerDbAdmin --dest-project=PROJECT_ID

Question #11

You need a dynamic way of provisioning VMs on Compute Engine. The exact specifications will be in a dedicated configuration file. You want to follow Google’s recommended practices.

Which method should you use?

  • A . Deployment Manager
  • B . Cloud Composer
  • C . Managed Instance Group
  • D . Unmanaged Instance Group

Correct Answer: A

Explanation:

https://cloud.google.com/deployment-manager/docs/configuration/create-basic-configuration
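
For illustration, the VM specifications live in a configuration file that Deployment Manager consumes; the deployment and file names below are hypothetical:

$ gcloud deployment-manager deployments create my-vm-deployment --config vm-config.yaml
$ gcloud deployment-manager deployments update my-vm-deployment --config vm-config.yaml   # after editing the config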

Question #12

You have a Dockerfile that you need to deploy on Kubernetes Engine.

What should you do?

  • A . Use kubectl app deploy <dockerfilename>.
  • B . Use gcloud app deploy <dockerfilename>.
  • C . Create a docker image from the Dockerfile and upload it to Container Registry. Create a Deployment YAML file to point to that image. Use kubectl to create the deployment with that file.
  • D . Create a docker image from the Dockerfile and upload it to Cloud Storage. Create a Deployment YAML file to point to that image. Use kubectl to create the deployment with that file.

Correct Answer: C

Explanation:

Reference: https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
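
A rough sketch of the flow in option C (project, image tag, and file names are hypothetical):

$ gcloud auth configure-docker
$ docker build -t gcr.io/my-project/myapp:v1 .
$ docker push gcr.io/my-project/myapp:v1
$ kubectl apply -f deployment.yaml   # deployment.yaml references gcr.io/my-project/myapp:v1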

Question #13

Your development team needs a new Jenkins server for their project. You need to deploy the server using the fewest steps possible.

What should you do?

  • A . Download and deploy the Jenkins Java WAR to App Engine Standard.
  • B . Create a new Compute Engine instance and install Jenkins through the command line interface.
  • C . Create a Kubernetes cluster on Compute Engine and create a deployment with the Jenkins Docker image.
  • D . Use GCP Marketplace to launch the Jenkins solution.

Correct Answer: D

Explanation:

Reference: https://cloud.google.com/solutions/using-jenkins-for-distributed-builds-on-compute-engine

Question #14

You need to update a deployment in Deployment Manager without any resource downtime in the deployment.

Which command should you use?

  • A . gcloud deployment-manager deployments create --config <deployment-config-path>
  • B . gcloud deployment-manager deployments update --config <deployment-config-path>
  • C . gcloud deployment-manager resources create --config <deployment-config-path>
  • D . gcloud deployment-manager resources update --config <deployment-config-path>

Correct Answer: B

Explanation:

Reference: https://cloud.google.com/sdk/gcloud/reference/deployment-manager/deployments/update

Question #15

You need to run an important query in BigQuery but expect it to return a lot of records. You want to find out how much it will cost to run the query. You are using on-demand pricing.

What should you do?

  • A . Arrange to switch to Flat-Rate pricing for this query, then move back to on-demand.
  • B . Use the command line to run a dry run query to estimate the number of bytes read. Then convert that bytes estimate to dollars using the Pricing Calculator.
  • C . Use the command line to run a dry run query to estimate the number of bytes returned. Then convert that bytes estimate to dollars using the Pricing Calculator.
  • D . Run a select count (*) to get an idea of how many records your query will look through. Then convert that number of rows to dollars using the Pricing Calculator.

Correct Answer: B

Explanation:

Reference: https://cloud.google.com/bigquery/docs/estimate-costs

On-demand pricing Under on-demand pricing, BigQuery charges for queries by using one metric: the number of bytes processed (also referred to as bytes read). You are charged for the number of bytes processed whether the data is stored in BigQuery or in an external data source such as Cloud Storage, Drive, or Cloud Bigtable. On-demand pricing is based solely on usage. https://cloud.google.com/bigquery/pricing#on_demand_pricing
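
For example, a dry run from the command line reports the bytes that would be processed without actually running the query (project, dataset, and table names are hypothetical):

$ bq query --use_legacy_sql=false --dry_run 'SELECT name FROM `my-project.my_dataset.my_table` WHERE year = 2024'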

Question #16

You have a single binary application that you want to run on Google Cloud Platform. You decided to automatically scale the application based on underlying infrastructure CPU usage. Your organizational policies require you to use virtual machines directly. You need to ensure that the application scaling is operationally efficient and completed as quickly as possible.

What should you do?

  • A . Create a Google Kubernetes Engine cluster, and use horizontal pod autoscaling to scale the application.
  • B . Create an instance template, and use the template in a managed instance group with autoscaling configured.
  • C . Create an instance template, and use the template in a managed instance group that scales up and down based on the time of day.
  • D . Use a set of third-party tools to build automation around scaling the application up and down, based on Stackdriver CPU usage monitoring.

Correct Answer: B

Explanation:

Managed instance groups offer autoscaling capabilities that let you automatically add or delete instances from a managed instance group based on increases or decreases in load (CPU Utilization in this case). Autoscaling helps your apps gracefully handle increases in traffic and reduce costs when the need for resources is lower. You define the autoscaling policy and the autoscaler performs automatic scaling based on the measured load (CPU Utilization in this case). Autoscaling works by adding more instances to your instance group when there is more load (upscaling), and deleting instances when the need for instances is lowered (downscaling).

Ref: https://cloud.google.com/compute/docs/autoscaler
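
A minimal sketch of option B with gcloud (names, sizes, and the CPU target are illustrative):

$ gcloud compute instance-templates create my-app-template --machine-type=e2-medium
$ gcloud compute instance-groups managed create my-app-mig --zone=us-central1-a --template=my-app-template --size=2
$ gcloud compute instance-groups managed set-autoscaling my-app-mig --zone=us-central1-a --min-num-replicas=2 --max-num-replicas=10 --target-cpu-utilization=0.80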

Question #17

You are analyzing Google Cloud Platform service costs from three separate projects. You want to use this information to create service cost estimates by service type, daily and monthly, for the next six months using standard query syntax.

What should you do?

  • A . Export your bill to a Cloud Storage bucket, and then import into Cloud Bigtable for analysis.
  • B . Export your bill to a Cloud Storage bucket, and then import into Google Sheets for analysis.
  • C . Export your transactions to a local file, and perform analysis with a desktop tool.
  • D . Export your bill to a BigQuery dataset, and then write time window-based SQL queries for analysis.

Correct Answer: D

Explanation:

"…we recommend that you enable Cloud Billing data export to BigQuery at the same time that you create a Cloud Billing account. " https://cloud.google.com/billing/docs/how-to/export-data-bigquery https://medium.com/google-cloud/analyzing-google-cloud-billing-data-with-big-query-30bae1c2aae4

Question #18

You need to set up a policy so that videos stored in a specific Cloud Storage Regional bucket are moved to Coldline after 90 days, and then deleted after one year from their creation.

How should you set up the policy?

  • A . Use Cloud Storage Object Lifecycle Management using Age conditions with SetStorageClass and Delete actions. Set the SetStorageClass action to 90 days and the Delete action to 275 days (365 - 90)
  • B . Use Cloud Storage Object Lifecycle Management using Age conditions with SetStorageClass and Delete actions. Set the SetStorageClass action to 90 days and the Delete action to 365 days.
  • C . Use gsutil rewrite and set the Delete action to 275 days (365-90).
  • D . Use gsutil rewrite and set the Delete action to 365 days.

Correct Answer: A

Explanation:

https://cloud.google.com/storage/docs/lifecycle#setstorageclass-cost

The object's time spent at the original storage class counts toward any minimum storage duration that applies for the new storage class.

Question #19

You have a Linux VM that must connect to Cloud SQL. You created a service account with the

appropriate access rights. You want to make sure that the VM uses this service account instead of the default Compute Engine service account.

What should you do?

  • A . When creating the VM via the web console, specify the service account under the ‘Identity and API Access’ section.
  • B . Download a JSON Private Key for the service account. On the Project Metadata, add that JSON as the value for the key compute-engine-service-account.
  • C . Download a JSON Private Key for the service account. On the Custom Metadata of the VM, add that JSON as the value for the key compute-engine-service-account.
  • D . Download a JSON Private Key for the service account. After creating the VM, ssh into the VM and save the JSON under ~/.gcloud/compute-engine-service-account.json.

Correct Answer: A

Explanation:

Reference: https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances

Changing the service account and access scopes for an instance: if you want to run the VM as a different identity, or you determine that the instance needs a different set of scopes to call the required APIs, you can change the service account and the access scopes of an existing instance. For example, you can change access scopes to grant access to a new API, or change an instance so that it runs as a service account that you created, instead of the Compute Engine default service account. However, Google recommends that you use fine-grained IAM policies instead of relying on access scopes to control resource access for the service account.

To change an instance's service account and access scopes, the instance must be temporarily stopped. To stop your instance, read the documentation for Stopping an instance. After changing the service account or access scopes, remember to restart the instance. Use one of the following methods to change the service account or access scopes of the stopped instance.
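
For illustration, the same thing can be done from the command line at creation time (service account and instance names are hypothetical):

$ gcloud compute instances create sql-client-vm --zone=us-central1-a --service-account=cloudsql-client@my-project.iam.gserviceaccount.com --scopes=cloud-platform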

Question #20

You created an instance of SQL Server 2017 on Compute Engine to test features in the new version. You want to connect to this instance using the fewest number of steps.

What should you do?

  • A . Install a RDP client on your desktop. Verify that a firewall rule for port 3389 exists.
  • B . Install a RDP client in your desktop. Set a Windows username and password in the GCP Console.
    Use the credentials to log in to the instance.
  • C . Set a Windows password in the GCP Console. Verify that a firewall rule for port 22 exists. Click the RDP button in the GCP Console and supply the credentials to log in.
  • D . Set a Windows username and password in the GCP Console. Verify that a firewall rule for port 3389 exists. Click the RDP button in the GCP Console, and supply the credentials to log in.

Correct Answer: D

Explanation:

https://cloud.google.com/compute/docs/instances/connecting-to-windows#remote-desktop-connection-app

https://cloud.google.com/compute/docs/instances/windows/generating-credentials

https://cloud.google.com/compute/docs/instances/connecting-to-windows#before-you-begin
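
A sketch of the two preparatory steps from the command line (rule, instance, and user names are hypothetical):

$ gcloud compute firewall-rules create allow-rdp --allow=tcp:3389
$ gcloud compute reset-windows-password sql-server-vm --zone=us-central1-a --user=admin_user   # generates the credentials to use with the RDP client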

Question #21

You have one GCP account running in your default region and zone and another account running in a non-default region and zone. You want to start a new Compute Engine instance in these two Google Cloud Platform accounts using the command line interface.

What should you do?

  • A . Create two configurations using gcloud config configurations create [NAME]. Run gcloud config configurations activate [NAME] to switch between accounts when running the commands to start the Compute Engine instances.
  • B . Create two configurations using gcloud config configurations create [NAME]. Run gcloud configurations list to start the Compute Engine instances.
  • C . Activate two configurations using gcloud configurations activate [NAME]. Run gcloud config list to start the Compute Engine instances.
  • D . Activate two configurations using gcloud configurations activate [NAME]. Run gcloud configurations list to start the Compute Engine instances.

Correct Answer: A

Explanation:

"Run gcloud configurations list to start the Compute Engine instances".

How the heck are you expecting to "start" GCE instances doing "configuration list".

Each gcloud configuration has a 1 to 1 relationship with the region (if a region is defined). Since we have two different regions, we would need to create two separate configurations using gcloud config configurations create

Ref: https://cloud.google.com/sdk/gcloud/reference/config/configurations/create

Secondly, you can activate each configuration independently by running gcloud config configurations activate [NAME]

Ref: https://cloud.google.com/sdk/gcloud/reference/config/configurations/activate

Finally, while each configuration is active, you can run the gcloud compute instances start [NAME] command to start the instance in that configuration's region. https://cloud.google.com/sdk/gcloud/reference/compute/instances/start
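
Putting it together, a rough sketch (accounts, projects, zones, and instance names are hypothetical):

$ gcloud config configurations create account-one
$ gcloud config set account user-one@example.com
$ gcloud config set project project-one
$ gcloud compute instances create instance-one --zone=us-central1-a
$ gcloud config configurations activate account-two   # assumes a second configuration was created the same way
$ gcloud compute instances create instance-two --zone=europe-west1-b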

Question #22

You significantly changed a complex Deployment Manager template and want to confirm that the

dependencies of all defined resources are properly met before committing it to the project. You want the most rapid feedback on your changes.

What should you do?

  • A . Use granular logging statements within a Deployment Manager template authored in Python.
  • B . Monitor activity of the Deployment Manager execution on the Stackdriver Logging page of the GCP Console.
  • C . Execute the Deployment Manager template against a separate project with the same configuration, and monitor for failures.
  • D . Execute the Deployment Manager template using the --preview option in the same project, and observe the state of interdependent resources.

Correct Answer: D

Explanation:

Reference: https://cloud.google.com/deployment-manager/docs/deployments/updating-deployments
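
For example, a previewed update can later be committed or discarded (deployment and file names are hypothetical):

$ gcloud deployment-manager deployments update my-deployment --config updated-config.yaml --preview
$ gcloud deployment-manager deployments update my-deployment          # commit the previewed changes
$ gcloud deployment-manager deployments cancel-preview my-deployment  # or discard them instead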

Question #23

You are building a pipeline to process time-series data.

Which Google Cloud Platform services should you put in boxes 1, 2, 3, and 4?

  • A . Cloud Pub/Sub, Cloud Dataflow, Cloud Datastore, BigQuery
  • B . Firebase Messages, Cloud Pub/Sub, Cloud Spanner, BigQuery
  • C . Cloud Pub/Sub, Cloud Storage, BigQuery, Cloud Bigtable
  • D . Cloud Pub/Sub, Cloud Dataflow, Cloud Bigtable, BigQuery

Correct Answer: D

Explanation:

Reference:

https://cloud.google.com/solutions/correlating-time-series-dataflow

https://cloud.google.com/blog/products/data-analytics/handling-duplicate-data-in-streaming-pipeline-using-pubsub-dataflow

https://cloud.google.com/bigtable/docs/schema-design-time-series

Question #24

You have a project for your App Engine application that serves a development environment. The required testing has succeeded and you want to create a new project to serve as your production environment.

What should you do?

  • A . Use gcloud to create the new project, and then deploy your application to the new project.
  • B . Use gcloud to create the new project and to copy the deployed application to the new project.
  • C . Create a Deployment Manager configuration file that copies the current App Engine deployment into a new project.
  • D . Deploy your application again using gcloud and specify the project parameter with the new project name to create the new project.

Correct Answer: A

Explanation:

You can deploy to a different project by using the --project flag.

By default, the service is deployed to the current project configured via:

$ gcloud config set core/project PROJECT

To override this value for a single deployment, use the --project flag:

$ gcloud app deploy ~/my_app/app.yaml --project=PROJECT

Ref: https://cloud.google.com/sdk/gcloud/reference/app/deploy

Question #25

You need to configure IAM access audit logging in BigQuery for external auditors. You want to follow Google-recommended practices.

What should you do?

  • A . Add the auditors group to the ‘logging.viewer’ and ‘bigQuery.dataViewer’ predefined IAM roles.
  • B . Add the auditors group to two new custom IAM roles.
  • C . Add the auditor user accounts to the ‘logging.viewer’ and ‘bigQuery.dataViewer’ predefined IAM
    roles.
  • D . Add the auditor user accounts to two new custom IAM roles.

Correct Answer: A

Explanation:

https://cloud.google.com/iam/docs/job-functions/auditing#scenario_external_auditors

If you add users directly to IAM roles, then when a user leaves the organization you have to remove that user and revoke their access in multiple places. If you instead put the users into a group and grant the roles to the group, these situations are much easier to manage: when a user leaves, you simply remove them from the group and all of their access is revoked.

The organization creates a Google group for these external auditors and adds the current auditor to the group. This group is monitored and is typically granted access to the dashboard application. During normal access, the auditors’ Google group is only granted access to view the historic logs stored in BigQuery. If any anomalies are discovered, the group is granted permission to view the actual Cloud Logging Admin Activity logs via the dashboard’s elevated access mode. At the end of each audit period, the group’s access is then revoked. Data is redacted using Cloud DLP before being made accessible for viewing via the dashboard application. The table below explains IAM logging roles that an Organization Administrator can grant to the service account used by the dashboard, as well as the resource level at which the role is granted.
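
As a sketch, granting the two predefined roles to the auditors' group (project and group names are hypothetical):

$ gcloud projects add-iam-policy-binding my-project --member="group:external-auditors@example.com" --role="roles/logging.viewer"
$ gcloud projects add-iam-policy-binding my-project --member="group:external-auditors@example.com" --role="roles/bigquery.dataViewer"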

Question #26

You need to set up permissions for a set of Compute Engine instances to enable them to write data into a particular Cloud Storage bucket. You want to follow Google-recommended practices.

What should you do?

  • A . Create a service account with an access scope. Use the access scope ‘https://www.googleapis.com/auth/devstorage.write_only’.
  • B . Create a service account with an access scope. Use the access scope ‘https://www.googleapis.com/auth/cloud-platform’.
  • C . Create a service account and add it to the IAM role ‘storage.objectCreator’ for that bucket.
  • D . Create a service account and add it to the IAM role ‘storage.objectAdmin’ for that bucket.

Correct Answer: C

Explanation:

https://cloud.google.com/iam/docs/understanding-service-accounts#using_service_accounts_with_compute_engine

https://cloud.google.com/storage/docs/access-control/iam-roles
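
A minimal sketch of option C (service account, project, and bucket names are hypothetical):

$ gcloud iam service-accounts create bucket-writer --display-name="Bucket writer"
$ gsutil iam ch serviceAccount:bucket-writer@my-project.iam.gserviceaccount.com:roles/storage.objectCreator gs://my-data-bucket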

Question #27

You have sensitive data stored in three Cloud Storage buckets and have enabled data access logging. You want to verify activities for a particular user for these buckets, using the fewest possible steps. You need to verify the addition of metadata labels and which files have been viewed from those buckets.

What should you do?

  • A . Using the GCP Console, filter the Activity log to view the information.
  • B . Using the GCP Console, filter the Stackdriver log to view the information.
  • C . View the bucket in the Storage section of the GCP Console.
  • D . Create a trace in Stackdriver to view the information.

Correct Answer: A

Explanation:

https://cloud.google.com/storage/docs/audit-logs

https://cloud.google.com/compute/docs/logging/audit-logging#audited_operations

Question #28

You are the project owner of a GCP project and want to delegate control to colleagues to manage buckets and files in Cloud Storage. You want to follow Google-recommended practices.

Which IAM roles should you grant your colleagues?

  • A . Project Editor
  • B . Storage Admin
  • C . Storage Object Admin
  • D . Storage Object Creator

Correct Answer: B

Explanation:

Storage Admin (roles/storage.admin) Grants full control of buckets and objects.

When applied to an individual bucket, control applies only to the specified bucket and objects within the bucket.

firebase.projects.get

resourcemanager.projects.get

resourcemanager.projects.list

storage.buckets.*

storage.objects.*

https://cloud.google.com/storage/docs/access-control/iam-roles

Ref: https://cloud.google.com/iam/docs/understanding-roles#storage-roles

Question #29

You have an object in a Cloud Storage bucket that you want to share with an external company. The object contains sensitive data. You want access to the content to be removed after four hours. The external company does not have a Google account to which you can grant specific user-based access privileges. You want to use the most secure method that requires the fewest steps.

What should you do?

  • A . Create a signed URL with a four-hour expiration and share the URL with the company.
  • B . Set object access to ‘public’ and use object lifecycle management to remove the object after four hours.
  • C . Configure the storage bucket as a static website and furnish the object’s URL to the company.
    Delete the object from the storage bucket after four hours.
  • D . Create a new Cloud Storage bucket specifically for the external company to access. Copy the object to that bucket. Delete the bucket after four hours have passed.

Correct Answer: A

Explanation:

Signed URLs are used to give time-limited resource access to anyone in possession of the URL, regardless of whether they have a Google account. https://cloud.google.com/storage/docs/access-control/signed-urls
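
For example, with a service account key available locally, a four-hour signed URL could be generated like this (key file, bucket, and object names are hypothetical):

$ gsutil signurl -d 4h sa-private-key.json gs://my-bucket/sensitive-report.pdf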

Question #30

You are creating a Google Kubernetes Engine (GKE) cluster with a cluster autoscaler feature enabled. You need to make sure that each node of the cluster will run a monitoring pod that sends container metrics to a third-party monitoring solution.

What should you do?

  • A . Deploy the monitoring pod in a StatefulSet object.
  • B . Deploy the monitoring pod in a DaemonSet object.
  • C . Reference the monitoring pod in a Deployment object.
  • D . Reference the monitoring pod in a cluster initializer at the GKE cluster creation time.

Correct Answer: B

Explanation:

https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset

https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset#usage_patterns

DaemonSets attempt to adhere to a one-Pod-per-node model, either across the entire cluster or a subset of nodes. As you add nodes to a node pool, DaemonSets automatically add Pods to the new nodes as needed.

In GKE, DaemonSets manage groups of replicated Pods and adhere to a one-Pod-per-node model, either across the entire cluster or a subset of nodes. As you add nodes to a node pool, DaemonSets automatically add Pods to the new nodes as needed. So, this is a perfect fit for our monitoring pod. Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset

DaemonSets are useful for deploying ongoing background tasks that you need to run on all or certain nodes, and which do not require user intervention. Examples of such tasks include storage daemons like ceph, log collection daemons like fluentd, and node monitoring daemons like collectd. For example, you could have DaemonSets for each type of daemon run on all of your nodes. Alternatively, you could run multiple DaemonSets for a single type of daemon, but have them use different configurations for different hardware types and resource needs.
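
A minimal DaemonSet sketch for such a monitoring agent (names and image are hypothetical), applied with kubectl:

$ cat > monitoring-agent.yaml <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-agent
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
      - name: agent
        image: example.com/monitoring-agent:1.0
EOF
$ kubectl apply -f monitoring-agent.yaml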

Question #31

You want to send and consume Cloud Pub/Sub messages from your App Engine application. The Cloud Pub/Sub API is currently disabled. You will use a service account to authenticate your application to the API. You want to make sure your application can use Cloud Pub/Sub.

What should you do?

  • A . Enable the Cloud Pub/Sub API in the API Library on the GCP Console.
  • B . Rely on the automatic enablement of the Cloud Pub/Sub API when the Service Account accesses it.
  • C . Use Deployment Manager to deploy your application. Rely on the automatic enablement of all APIs used by the application being deployed.
  • D . Grant the App Engine Default service account the role of Cloud Pub/Sub Admin. Have your application enable the API on the first connection to Cloud Pub/Sub.

Correct Answer: A

Explanation:

Quickstart: using the Google Cloud Console

This page shows you how to perform basic tasks in Pub/Sub using the Google Cloud Console.

Note: If you are new to Pub/Sub, we recommend that you start with the interactive tutorial.

Before you begin

Set up a Cloud Console project.

Set up a project

Click to:

Create or select a project.

Enable the Pub/Sub API for that project.

You can view and manage these resources at any time in the Cloud Console.

Install and initialize the Cloud SDK.

Note: You can run the gcloud tool in the Cloud Console without installing the Cloud SDK. To run the gcloud tool in the Cloud Console, use Cloud Shell.

https://cloud.google.com/pubsub/docs/quickstart-console
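
The same enablement can also be done from the command line, for example:

$ gcloud services enable pubsub.googleapis.com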

Question #32

You need to monitor resources that are distributed over different projects in Google Cloud Platform. You want to consolidate reporting under the same Stackdriver Monitoring dashboard.

What should you do?

  • A . Use Shared VPC to connect all projects, and link Stackdriver to one of the projects.
  • B . For each project, create a Stackdriver account. In each project, create a service account for that project and grant it the role of Stackdriver Account Editor in all other projects.
  • C . Configure a single Stackdriver account, and link all projects to the same account.
  • D . Configure a single Stackdriver account for one of the projects. In Stackdriver, create a Group and add the other project names as criteria for that Group.

Correct Answer: C

Explanation:

When you initially click on Monitoring (Stackdriver Monitoring), it creates a workspace (a Stackdriver account) linked to the active (current) project from which it was clicked.

If you then change the project and click on Monitoring again, it creates another workspace (a Stackdriver account) linked to the new active project. We don't want this, as it would not consolidate our results into a single dashboard (workspace/Stackdriver account).

If you have accidentally created two different workspaces, merge them under Monitoring > Settings > Merge Workspaces > MERGE.

If we have only one workspace and two projects, we can simply add the other GCP project under Monitoring > Settings > GCP Projects > Add GCP Projects.

https://cloud.google.com/monitoring/settings/multiple-projects

The settings documentation says nothing about using Groups for this: https://cloud.google.com/monitoring/settings?hl=en

Question #33

You are deploying an application to a Compute Engine VM in a managed instance group. The application must be running at all times, but only a single instance of the VM should run per GCP project.

How should you configure the instance group?

  • A . Set autoscaling to On, set the minimum number of instances to 1, and then set the maximum number of instances to 1.
  • B . Set autoscaling to Off, set the minimum number of instances to 1, and then set the maximum number of instances to 1.
  • C . Set autoscaling to On, set the minimum number of instances to 1, and then set the maximum
    number of instances to 2.
  • D . Set autoscaling to Off, set the minimum number of instances to 1, and then set the maximum number of instances to 2.

Correct Answer: A

Explanation:

https://cloud.google.com/compute/docs/autoscaler#specifications

Autoscaling works independently from autohealing. If you configure autohealing for your group and an instance fails the health check, the autohealer attempts to recreate the instance. Recreating an instance can cause the number of instances in the group to fall below the autoscaling threshold (minNumReplicas) that you specify.

Since we need the application running at all times, we need a minimum 1 instance.

Only a single instance of the VM should run, we need a maximum 1 instance.

We want the application running at all times. If the VM crashes due to any underlying hardware failure, we want another instance to be added to MIG so that application can continue to serve requests. We can achieve this by enabling autoscaling. The only option that satisfies these three is Set autoscaling to On, set the minimum number of instances to 1, and then set the maximum number of instances to 1.

Ref: https://cloud.google.com/compute/docs/autoscaler
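
For illustration, configuring a one-instance group with autoscaling on might look like this (group name, zone, and CPU target are hypothetical):

$ gcloud compute instance-groups managed set-autoscaling single-vm-mig --zone=us-central1-a --min-num-replicas=1 --max-num-replicas=1 --target-cpu-utilization=0.6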

Question #34

You want to verify the IAM users and roles assigned within a GCP project named my-project.

What should you do?

  • A . Run gcloud iam roles list. Review the output section.
  • B . Run gcloud iam service-accounts list. Review the output section.
  • C . Navigate to the project and then to the IAM section in the GCP Console. Review the members and roles.
  • D . Navigate to the project and then to the Roles section in the GCP Console. Review the roles and status.

Correct Answer: C

Explanation:

Navigating to the project and then to the IAM section in the GCP Console shows all the members and the roles assigned to them.

Question #35

You need to create a new billing account and then link it with an existing Google Cloud Platform project.

What should you do?

  • A . Verify that you are Project Billing Manager for the GCP project. Update the existing project to link it to the existing billing account.
  • B . Verify that you are Project Billing Manager for the GCP project. Create a new billing account and link the new billing account to the existing project.
  • C . Verify that you are Billing Administrator for the billing account. Create a new project and link the new project to the existing billing account.
  • D . Verify that you are Billing Administrator for the billing account. Update the existing project to link it to the existing billing account.

Correct Answer: B

Explanation:

Billing Administrators cannot create a new billing account, and the project is presumably already created. The Project Billing Manager role allows you to link the newly created billing account to the project. The question is vague about how the billing account gets created, but by process of elimination B is the best answer.

Question #36

You have one project called proj-sa where you manage all your service accounts. You want to be able to use a service account from this project to take snapshots of VMs running in another project called proj-vm.

What should you do?

  • A . Download the private key from the service account, and add it to each VMs custom metadata.
  • B . Download the private key from the service account, and add the private key to each VM’s SSH keys.
  • C . Grant the service account the IAM Role of Compute Storage Admin in the project called proj-vm.
  • D . When creating the VMs, set the service account’s API scope for Compute Engine to read/write.

Correct Answer: C

Explanation:

https://gtseres.medium.com/using-service-accounts-across-projects-in-gcp-cf9473fef8f0

You create the service account in proj-sa and take note of the service account email, then you go to proj-vm in IAM > ADD and add the service account’s email as new member and give it the Compute Storage Admin role.

https://cloud.google.com/compute/docs/access/iam#compute.storageAdmin
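
As a sketch, assuming the service account is named snapshot-sa (hypothetical), the cross-project grant would look roughly like:

$ gcloud projects add-iam-policy-binding proj-vm --member="serviceAccount:snapshot-sa@proj-sa.iam.gserviceaccount.com" --role="roles/compute.storageAdmin"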

Question #37

You created a Google Cloud Platform project with an App Engine application inside the project. You initially configured the application to be served from the us-central region. Now you want the application to be served from the asia-northeast1 region.

What should you do?

  • A . Change the default region property setting in the existing GCP project to asia-northeast1.
  • B . Change the region property setting in the existing App Engine application from us-central to asia-northeast1.
  • C . Create a second App Engine application in the existing GCP project and specify asia-northeast1 as the region to serve your application.
  • D . Create a new GCP project and create an App Engine application inside this new project. Specify asia-northeast1 as the region to serve your application.

Correct Answer: D

Explanation:

https://cloud.google.com/appengine/docs/flexible/managing-projects-apps-billing ("Each Cloud project can contain only a single App Engine application, and once created you cannot change the location of your App Engine application.")

Two App Engine applications cannot run in the same project; see this diagram for more info: https://cloud.google.com/appengine/docs/standard/an-overview-of-app-engine#components_of_an_application

And you cannot change the location after setting it for your App Engine application.

https://cloud.google.com/appengine/docs/standard/locations

App Engine is regional and you cannot change an app's region after you set it. Therefore, the only way to have an app run in another region is by creating a new project and targeting App Engine to run in the required region (asia-northeast1 in our case).

Ref: https://cloud.google.com/appengine/docs/locations

Question #38

You need to grant access for three users so that they can view and edit table data on a Cloud Spanner instance.

What should you do?

  • A . Run gcloud iam roles describe roles/spanner.databaseUser. Add the users to the role.
  • B . Run gcloud iam roles describe roles/spanner.databaseUser. Add the users to a new group. Add the group to the role.
  • C . Run gcloud iam roles describe roles/spanner.viewer –project my-project. Add the users to the role.
  • D . Run gcloud iam roles describe roles/spanner.viewer –project my-project. Add the users to a new group. Add the group to the role.

Correct Answer: B

Explanation:

https://cloud.google.com/spanner/docs/iam#spanner.databaseUser

Using the gcloud tool, execute the gcloud iam roles describe roles/spanner.databaseUser command in Cloud Shell. Add the users to a newly created Google group and add the group to the role.

Question #39

You create a new Google Kubernetes Engine (GKE) cluster and want to make sure that it always runs a supported and stable version of Kubernetes.

What should you do?

  • A . Enable the Node Auto-Repair feature for your GKE cluster.
  • B . Enable the Node Auto-Upgrades feature for your GKE cluster.
  • C . Select the latest available cluster version for your GKE cluster.
  • D . Select “Container-Optimized OS (cos)” as a node image for your GKE cluster.

Correct Answer: B

Explanation:

Creating or upgrading a cluster by specifying the version as latest does not provide automatic upgrades. Enable node auto-upgrades to ensure that the nodes in your cluster are up-to-date with the latest stable version.

https://cloud.google.com/kubernetes-engine/versioning-and-upgrades

Node auto-upgrades help you keep the nodes in your cluster up to date with the cluster master version when your master is updated on your behalf. When you create a new cluster or node pool with Google Cloud Console or the gcloud command, node auto-upgrade is enabled by default.

Ref: https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-upgrades
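
For example, node auto-upgrade can be requested at cluster creation or enabled later on an existing node pool (names are hypothetical):

$ gcloud container clusters create my-cluster --zone=us-central1-a --enable-autoupgrade
$ gcloud container node-pools update default-pool --cluster=my-cluster --zone=us-central1-a --enable-autoupgrade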

Question #40

You have an instance group that you want to load balance. You want the load balancer to terminate the client SSL session. The instance group is used to serve a public web application over HTTPS. You want to follow Google-recommended practices.

What should you do?

  • A . Configure an HTTP(S) load balancer.
  • B . Configure an internal TCP load balancer.
  • C . Configure an external SSL proxy load balancer.
  • D . Configure an external TCP proxy load balancer.

Correct Answer: A

Explanation:

Reference: https://cloud.google.com/load-balancing/docs/https/

According to this guide for setting up an HTTP(S) load balancer in GCP, the client SSL session terminates at the load balancer. Sessions between the load balancer and the instance can either be HTTPS (recommended) or HTTP.

https://cloud.google.com/load-balancing/docs/ssl

Question #41

You have 32 GB of data in a single file that you need to upload to a Nearline Storage bucket. The WAN connection you are using is rated at 1 Gbps, and you are the only one on the connection. You want to use as much of the rated 1 Gbps as possible to transfer the file rapidly.

How should you upload the file?

  • A . Use the GCP Console to transfer the file instead of gsutil.
  • B . Enable parallel composite uploads using gsutil on the file transfer.
  • C . Decrease the TCP window size on the machine initiating the transfer.
  • D . Change the storage class of the bucket from Nearline to Multi-Regional.

Correct Answer: B

Explanation:

https://cloud.google.com/storage/docs/parallel-composite-uploads

https://cloud.google.com/storage/docs/uploads-downloads#parallel-composite-uploads
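
For example, the threshold can be passed on the command line so gsutil splits the file and uploads the parts in parallel (bucket and file names are hypothetical):

$ gsutil -o "GSUtil:parallel_composite_upload_threshold=150M" cp my-32gb-file.dat gs://my-nearline-bucket/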

Question #42

You’ve deployed a microservice called myapp1 to a Google Kubernetes Engine cluster using the YAML file specified below:

You need to refactor this configuration so that the database password is not stored in plain text. You want to follow Google-recommended practices.

What should you do?

  • A . Store the database password inside the Docker image of the container, not in the YAML file.
  • B . Store the database password inside a Secret object. Modify the YAML file to populate the DB_PASSWORD environment variable from the Secret.
  • C . Store the database password inside a ConfigMap object. Modify the YAML file to populate the DB_PASSWORD environment variable from the ConfigMap.
  • D . Store the database password in a file inside a Kubernetes persistent volume, and use a persistent volume claim to mount the volume to the container.

Correct Answer: B

Explanation:

https://cloud.google.com/config-connector/docs/how-to/secrets#gcloud
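
A sketch of option B: create the Secret, then have the Deployment populate the DB_PASSWORD environment variable from it via an env valueFrom/secretKeyRef entry (the secret name and key below are hypothetical):

$ kubectl create secret generic myapp1-db --from-literal=password='<the-db-password>'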

Question #43

You are running an application on multiple virtual machines within a managed instance group and have autoscaling enabled. The autoscaling policy is configured so that additional instances are added to the group if the CPU utilization of instances goes above 80%. VMs are added until the instance group reaches its maximum limit of five VMs or until CPU utilization of instances lowers to 80%. The initial delay for HTTP health checks against the instances is set to 30 seconds. The virtual machine instances take around three minutes to become available for users. You observe that when the instance group autoscales, it adds more instances than necessary to support the levels of end-user traffic. You want to properly maintain instance group sizes when autoscaling.

What should you do?

  • A . Set the maximum number of instances to 1.
  • B . Decrease the maximum number of instances to 3.
  • C . Use a TCP health check instead of an HTTP health check.
  • D . Increase the initial delay of the HTTP health check to 200 seconds.

Correct Answer: D

Explanation:

The health check should only start counting once the VM has had time to boot. Since initial setup takes about 3 minutes (180 seconds), performing the first check after 200 seconds is reasonable.

The autoscaler adds more instances than needed because the health check runs 30 seconds after an instance launches, at which point the instance is not yet up and not ready to serve traffic. The autoscaling policy therefore starts another instance, checks again after 30 seconds, and the cycle repeats until the group reaches its maximum size or the instances launched earlier become healthy and start processing traffic, which happens after about 180 seconds (3 minutes). This is easily rectified by setting the initial delay higher than the time it takes for an instance to become available for processing traffic.

Setting the initial delay to 200 seconds ensures the health check waits until the instance is up (around the 180-second mark) before traffic is forwarded to it. Even after a cool-down period, if CPU utilization is still high the autoscaler can scale up again, but that scale-up is genuine and based on the actual load.

Initial Delay Seconds: This setting delays autohealing from potentially prematurely recreating the instance if the instance is in the process of starting up. The initial delay timer starts when the currentAction of the instance is VERIFYING.

Ref: https://cloud.google.com/compute/docs/instance-groups/autohealing-instances-in-migs
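
For example, the initial delay on the existing group could be raised like this (group name and zone are hypothetical):

$ gcloud compute instance-groups managed update my-mig --zone=us-central1-a --initial-delay=200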

Question #44

You need to select and configure compute resources for a set of batch processing jobs. These jobs take around 2 hours to complete and are run nightly. You want to minimize service costs.

What should you do?

  • A . Select Google Kubernetes Engine. Use a single-node cluster with a small instance type.
  • B . Select Google Kubernetes Engine. Use a three-node cluster with micro instance types.
  • C . Select Compute Engine. Use preemptible VM instances of the appropriate standard machine type.
  • D . Select Compute Engine. Use VM instance types that support micro bursting.

Correct Answer: C

Explanation:

If your apps are fault-tolerant and can withstand possible instance preemptions, then preemptible instances can reduce your Compute Engine costs significantly. For example, batch processing jobs can run on preemptible instances. If some of those instances stop during processing, the job slows but does not completely stop. Preemptible instances complete your batch processing tasks without placing additional workload on your existing instances and without requiring you to pay full price for additional normal instances.

https://cloud.google.com/compute/docs/instances/preemptible
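
A sketch of creating such a preemptible worker (name, zone, and machine type are hypothetical):

$ gcloud compute instances create batch-worker-1 --zone=us-central1-a --machine-type=n1-standard-4 --preemptible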

Question #45

You recently deployed a new version of an application to App Engine and then discovered a bug in the release. You need to immediately revert to the prior version of the application.

What should you do?

  • A . Run gcloud app restore.
  • B . On the App Engine page of the GCP Console, select the application that needs to be reverted and click Revert.
  • C . On the App Engine Versions page of the GCP Console, route 100% of the traffic to the previous version.
  • D . Deploy the original version as a separate application. Then go to App Engine settings and split traffic between applications so that the original version serves 100% of the requests.

Correct Answer: C

Explanation:

Reference: https://medium.com/google-cloud/app-engine-project-cleanup-9647296e796a
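
The same rollback can be done from the command line: list the versions, then route all traffic to the previous one (the version ID below is a placeholder):

$ gcloud app versions list --service=default
$ gcloud app services set-traffic default --splits=<previous-version-id>=1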

Question #46

You deployed an App Engine application using gcloud app deploy, but it did not deploy to the intended project. You want to find out why this happened and where the application deployed.

What should you do?

  • A . Check the app.yaml file for your application and check project settings.
  • B . Check the web-application.xml file for your application and check project settings.
  • C . Go to Deployment Manager and review settings for deployment of applications.
  • D . Go to Cloud Shell and run gcloud config list to review the Google Cloud configuration used for deployment.

Correct Answer: D

Explanation:

C:\GCP\appeng> gcloud config list
[core]
account = xxx@gmail.com
disable_usage_reporting = False
project = my-first-demo-xxxx

https://cloud.google.com/endpoints/docs/openapi/troubleshoot-gce-deployment

Question #47

You want to configure 10 Compute Engine instances for availability when maintenance occurs. Your requirements state that these instances should attempt to automatically restart if they crash. Also, the instances should be highly available including during system maintenance.

What should you do?

  • A . Create an instance template for the instances. Set the ‘Automatic Restart’ to on. Set the ‘On-host maintenance’ to Migrate VM instance. Add the instance template to an instance group.
  • B . Create an instance template for the instances. Set ‘Automatic Restart’ to off. Set ‘On-host maintenance’ to Terminate VM instances. Add the instance template to an instance group.
  • C . Create an instance group for the instances. Set the ‘Autohealing’ health check to healthy (HTTP).
  • D . Create an instance group for the instance. Verify that the ‘Advanced creation options’ setting for ‘do not retry machine creation’ is set to off.

Correct Answer: A

Explanation:

Create an instance template for the instances so that the VMs have the same specs. Set 'Automatic Restart' to on so that a VM automatically restarts if it crashes. Set 'On-host maintenance' to Migrate VM instance; this takes care of the VM during a maintenance window by live-migrating it, keeping it highly available. Add the instance template to an instance group so the instances can be managed together.

• onHostMaintenance: Determines the behavior when a maintenance event occurs that might cause your instance to reboot.

• [Default] MIGRATE, which causes Compute Engine to live migrate an instance when there is a maintenance event.

• TERMINATE, which stops an instance instead of migrating it.

• automaticRestart: Determines the behavior when an instance crashes or is stopped by the system.

• [Default] true, so Compute Engine restarts an instance if the instance crashes or is stopped.

• false, so Compute Engine does not restart an instance if the instance crashes or is stopped.

Enabling automatic restart ensures that compute engine instances are automatically restarted when they crash. And Enabling Migrate VM Instance enables live migrates i.e. compute instances are migrated during system maintenance and remain running during the migration.

Automatic Restart: If your instance is set to terminate when there is a maintenance event, or if your instance crashes because of an underlying hardware issue, you can set up Compute Engine to automatically restart the instance by setting the automaticRestart field to true. This setting does not apply if the instance is taken offline through a user action, such as calling sudo shutdown, or during a zone outage.

Ref: https://cloud.google.com/compute/docs/instances/setting-instance-scheduling-options#autorestart

Enabling the Migrate VM Instance option migrates your instance away from an infrastructure maintenance event, and your instance remains running during the migration. Your instance might experience a short period of decreased performance, although generally, most instances should not notice any difference. This is ideal for instances that require constant uptime and can tolerate a short period of decreased performance.

Ref: https://cloud.google.com/compute/docs/instances/setting-instance-scheduling-options#live_migrate
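A minimal gcloud sketch of option A; the template name, machine type, group name, size, and zone are placeholders:

# Template with live migration and automatic restart enabled
gcloud compute instance-templates create ha-template \
    --machine-type=e2-medium \
    --maintenance-policy=MIGRATE \
    --restart-on-failure

# Managed instance group built from that template
gcloud compute instance-groups managed create ha-group \
    --template=ha-template \
    --size=10 \
    --zone=us-central1-a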

Question #48

You host a static website on Cloud Storage. Recently, you began to include links to PDF files on this site. Currently, when users click on the links to these PDF files, their browsers prompt them to save the file onto their local system. Instead, you want the clicked PDF files to be displayed within the browser window directly, without prompting the user to save the file locally.

What should you do?

  • A . Enable Cloud CDN on the website frontend.
  • B . Enable ‘Share publicly’ on the PDF file objects.
  • C . Set Content-Type metadata to application/pdf on the PDF file objects.
  • D . Add a label to the storage bucket with a key of Content-Type and value of application/pdf.

Reveal Solution Hide Solution

Correct Answer: C
C

Explanation:

https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_Types#importance_of_setting_the_correct_mime_type
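A minimal sketch of setting the metadata with gsutil; the bucket name and object path are placeholders:

# Serve existing PDFs inline instead of as downloads
gsutil setmeta -h "Content-Type:application/pdf" gs://my-static-site/docs/*.pdf

Browsers that receive Content-Type: application/pdf render the file in the window instead of prompting for a download.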

Question #49

You have a virtual machine that is currently configured with 2 vCPUs and 4 GB of memory. It is running out of memory. You want to upgrade the virtual machine to have 8 GB of memory.

What should you do?

  • A . Rely on live migration to move the workload to a machine with more memory.
  • B . Use gcloud to add metadata to the VM. Set the key to required-memory-size and the value to 8 GB.
  • C . Stop the VM, change the machine type to n1-standard-8, and start the VM.
  • D . Stop the VM, increase the memory to 8 GB, and start the VM.

Reveal Solution Hide Solution

Correct Answer: D
D

Explanation:

In Google compute engine, if predefined machine types don’t meet your needs, you can create an instance with custom virtualized hardware settings. Specifically, you can create an instance with a custom number of vCPUs and custom memory, effectively using a custom machine type.

Custom machine types are ideal for workloads that are not a good fit for the predefined machine types, such as this one, where only the memory needs to be increased.
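A minimal gcloud sketch of option D using a custom machine type; the VM name and zone are placeholders:

gcloud compute instances stop my-vm --zone=us-central1-a
gcloud compute instances set-machine-type my-vm \
    --zone=us-central1-a \
    --custom-cpu=2 \
    --custom-memory=8GB
gcloud compute instances start my-vm --zone=us-central1-a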

Question #52

You have production and test workloads that you want to deploy on Compute Engine. Production VMs need to be in a different subnet than the test VMs. All the VMs must be able to reach each other over internal IP without creating additional routes. You need to set up VPC and the 2 subnets.

Which configuration meets these requirements?

  • A . Create a single custom VPC with 2 subnets. Create each subnet in a different region and with a different CIDR range.
  • B . Create a single custom VPC with 2 subnets. Create each subnet in the same region and with the same CIDR range.
  • C . Create 2 custom VPCs, each with a single subnet. Create each subnet in a different region and with a different CIDR range.
  • D . Create 2 custom VPCs, each with a single subnet. Create each subnet in the same region and with the same CIDR range.

Reveal Solution Hide Solution

Correct Answer: A
A

Explanation:

When we create subnets in the same VPC with different CIDR ranges, they can communicate with each other automatically within the VPC. Resources within a VPC network can communicate with one another by using internal (private) IPv4 addresses, subject to applicable network firewall rules.

Ref: https://cloud.google.com/vpc/docs/vpc
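A minimal gcloud sketch of option A; the VPC name, regions, and CIDR ranges are placeholders:

gcloud compute networks create prod-test-vpc --subnet-mode=custom
gcloud compute networks subnets create prod-subnet \
    --network=prod-test-vpc --region=us-central1 --range=10.0.1.0/24
gcloud compute networks subnets create test-subnet \
    --network=prod-test-vpc --region=us-east1 --range=10.0.2.0/24

Because both subnets belong to the same VPC, the default subnet routes let the production and test VMs reach each other over internal IPs without any additional routes.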

Question #53

You need to create an autoscaling managed instance group for an HTTPS web application. You want to make sure that unhealthy VMs are recreated.

What should you do?

  • A . Create a health check on port 443 and use that when creating the Managed Instance Group.
  • B . Select Multi-Zone instead of Single-Zone when creating the Managed Instance Group.
  • C . In the Instance Template, add the label ‘health-check’.
  • D . In the Instance Template, add a startup script that sends a heartbeat to the metadata server.

Reveal Solution Hide Solution

Correct Answer: A
A

Explanation:

https://cloud.google.com/compute/docs/instance-groups/autohealing-instances-in-migs#setting_up_an_autohealing_policy
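A minimal gcloud sketch of option A; the health check, template, and group names, size, and zone are placeholders:

# HTTPS health check on port 443
gcloud compute health-checks create https https-basic-check --port=443

# MIG that recreates VMs failing the health check (autohealing)
gcloud compute instance-groups managed create web-mig \
    --template=web-template \
    --size=3 \
    --zone=us-central1-a \
    --health-check=https-basic-check \
    --initial-delay=300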

Question #60

Assign the BigQuery dataViewer user role to the group.

Reveal Solution Hide Solution

Correct Answer: C

Explanation:

This role provides permissions to read the dataset's metadata and list tables in the dataset, and to read data and metadata from the dataset's tables. When applied at the project or organization level, this role can also enumerate all datasets in the project. Additional roles, however, are necessary to allow the running of jobs.

BigQuery Data Viewer

(roles/bigquery.dataViewer)

When applied to a table or view, this role provides permissions to:

Read data and metadata from the table or view.

This role cannot be applied to individual models or routines.

When applied to a dataset, this role provides permissions to:

Read the dataset’s metadata and list tables in the dataset.

Read data and metadata from the dataset’s tables.

When applied at the project or organization level, this role can also enumerate all datasets in the project. Additional roles, however, are necessary to allow the running of jobs.

Lowest-level resources where you can grant this role:

Table

View

BigQuery Job User

(roles/bigquery.jobUser)

Provides permissions to run jobs, including queries, within the project.

Lowest-level resources where you can grant this role:

Project

To run jobs, the BigQuery Job User role is also required: https://cloud.google.com/bigquery/docs/access-control#bigquery.jobUser. The Spanner databaseUser role likewise needs additional role permissions to run jobs:

https://cloud.google.com/spanner/docs/iam#spanner.databaseUser
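A hedged sketch of granting both roles to a group; the project ID and group address are placeholders:

gcloud projects add-iam-policy-binding my-project \
    --member=group:data-analysts@example.com \
    --role=roles/bigquery.dataViewer

gcloud projects add-iam-policy-binding my-project \
    --member=group:data-analysts@example.com \
    --role=roles/bigquery.jobUser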

Question #65

Create an egress firewall rule with the following settings:

• Targets: all instances

• Source filter: IP ranges (with the range set to 10.0.1.0/24)

• Protocols: allow TCP: 8080

Reveal Solution Hide Solution

Correct Answer: B

Explanation:

Question #75

Create the new instance in the new subnetwork and use the first instance’s private address as the endpoint.

Reveal Solution Hide Solution

Correct Answer: C

Explanation:

✑ Given that the new instance needs to access the application on the existing Compute Engine instance, these applications are related and should be within the same VPC. It is possible to place them in different VPCs and peer the VPCs, but that is a lot of additional work; creating the new subnetwork in the existing VPC and using the first instance's private address as the endpoint is simpler (which is why it is the answer).
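A minimal gcloud sketch of that approach; the VPC name existing-vpc, subnet and instance names, region, zone, and range are placeholders:

# New subnet in the same VPC as the first instance
gcloud compute networks subnets create new-subnet \
    --network=existing-vpc --region=us-central1 --range=10.0.2.0/24

# New instance in that subnet, internal IP only
gcloud compute instances create new-instance \
    --zone=us-central1-a --subnet=new-subnet --no-address

The new instance then reaches the application on the first instance's private (internal) IP address.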

Question #82

Clear the option to enable legacy Stackdriver Monitoring.

Reveal Solution Hide Solution

Correct Answer: A

Explanation:

https://cloud.google.com/logging/docs/api/v2/resource-list

The GKE Container resource type carries more log labels than GKE Cluster Operations:

GKE Container:

cluster_name: An immutable name for the cluster the container is running in.

namespace_id: Immutable ID of the cluster namespace the container is running in.

instance_id: Immutable ID of the GCE instance the container is running in.

pod_id: Immutable ID of the pod the container is running in.

container_name: Immutable name of the container.

zone: The GCE zone in which the instance is running.

VS

GKE Cluster Operations:

project_id: The identifier of the GCP project associated with this resource, such as "my-project".

cluster_name: The name of the GKE Cluster.

location: The location in which the GKE Cluster is running.

Question #83

You have a website hosted on App Engine standard environment. You want 1% of your users to see a new test version of the website. You want to minimize complexity.

What should you do?

  • A . Deploy the new version in the same application and use the –migrate option.
  • B . Deploy the new version in the same application and use the –splits option to give a weight of 99 to the current version and a weight of 1 to the new version.
  • C . Create a new App Engine application in the same project. Deploy the new version in that application. Use the App Engine library to proxy 1% of the requests to the new version.
  • D . Create a new App Engine application in the same project. Deploy the new version in that application. Configure your network load balancer to send 1% of the traffic to that new application.

Reveal Solution Hide Solution

Correct Answer: B
B

Explanation:

https://cloud.google.com/appengine/docs/standard/python/splitting-traffic#gcloud
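A minimal gcloud sketch of option B; the service name and version IDs are placeholders:

# Deploy the test version without sending it traffic
gcloud app deploy app.yaml --version=v2 --no-promote

# Split traffic 99/1 between the current and test versions
gcloud app services set-traffic default --splits=v1=99,v2=1 --split-by=random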

Question #84

You have a web application deployed as a managed instance group. You have a new version of the application to gradually deploy. Your web application is currently receiving live web traffic. You want to ensure that the available capacity does not decrease during the deployment.

What should you do?

  • A . Perform a rolling-action start-update with maxSurge set to 0 and maxUnavailable set to 1.
  • B . Perform a rolling-action start-update with maxSurge set to 1 and maxUnavailable set to 0.
  • C . Create a new managed instance group with an updated instance template. Add the group to the backend service for the load balancer. When all instances in the new managed instance group are healthy, delete the old managed instance group.
  • D . Create a new instance template with the new application version. Update the existing managed instance group with the new instance template. Delete the instances in the managed instance group to allow the managed instance group to recreate the instance using the new instance template.

Reveal Solution Hide Solution

Correct Answer: B
B

Explanation:

https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups#max_unavailable
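A minimal gcloud sketch of option B; the group and template names and the zone are placeholders:

gcloud compute instance-groups managed rolling-action start-update web-mig \
    --version=template=web-template-v2 \
    --max-surge=1 \
    --max-unavailable=0 \
    --zone=us-central1-a

With maxSurge=1 a new VM is created before an old one is removed, and maxUnavailable=0 means serving capacity never drops below the target size during the rollout.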

Question #85

You are building an application that stores relational data from users. Users across the globe will use this application. Your CTO is concerned about the scaling requirements because the size of the user base is unknown. You need to implement a database solution that can scale with your user growth with minimum configuration changes.

Which storage solution should you use?

  • A . Cloud SQL
  • B . Cloud Spanner
  • C . Cloud Firestore
  • D . Cloud Datastore

Reveal Solution Hide Solution

Correct Answer: B
B

Explanation:

Cloud Spanner is a relational database and is highly scalable. Cloud Spanner is a highly scalable, enterprise-grade, globally-distributed, and strongly consistent database service built for the cloud specifically to combine the benefits of relational database structure with a non-relational horizontal scale. This combination delivers high-performance transactions and strong consistency across rows, regions, and continents with an industry-leading 99.999% availability SLA, no planned downtime, and enterprise-grade security

Ref: https://cloud.google.com/spanner
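A minimal gcloud sketch of creating a Spanner instance; the instance name, config, and node count are placeholders sized for the actual workload:

gcloud spanner instances create user-data \
    --config=regional-us-central1 \
    --nodes=1 \
    --description="Relational user data"

Scaling later is a configuration change (adding nodes or moving to a multi-region config) rather than a re-architecture.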



Question #86

You are the organization and billing administrator for your company. The engineering team has the Project Creator role on the organization. You do not want the engineering team to be able to link projects to the billing account. Only the finance team should be able to link a project to a billing account, but they should not be able to make any other changes to projects.

What should you do?

  • A . Assign the finance team only the Billing Account User role on the billing account.
  • B . Assign the engineering team only the Billing Account User role on the billing account.
  • C . Assign the finance team the Billing Account User role on the billing account and the Project Billing Manager role on the organization.
  • D . Assign the engineering team the Billing Account User role on the billing account and the Project Billing Manager role on the organization.

Reveal Solution Hide Solution

Correct Answer: C
C

Explanation:

From this source: https://cloud.google.com/billing/docs/how-to/custom-roles#permission_association_and_inheritance

"For example, associating a project with a billing account requires the billing.resourceAssociations.create permission on the billing account and also the resourcemanager.projects.createBillingAssignment permission on the project. This is because project permissions are required for actions where project owners control access, while billing account permissions are required for actions where billing account administrators control access. When both should be involved, both permissions are necessary."

Question #96

Configure the Compute Engine instance to use the address of the load balancer that has been created.

Reveal Solution Hide Solution

Correct Answer: A

Explanation:

Peer the two VPCs (the statement makes this option feasible since it clearly specifies that there is no overlap between the IP ranges of the two VPCs), deploy the LoadBalancer as internal with the annotation, and configure the endpoint so that the Compute Engine instance can access the application internally, that is, without needing a public IP at any point and therefore without leaving the Google network. The traffic never crosses the public internet.

https://medium.com/pablo-perez/k8s-externaltrafficpolicy-local-or-cluster-40b259a19404

https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing

clients in a VPC network connected to the LoadBalancer network using VPC Network Peering can also access the Service

https://cloud.google.com/kubernetes-engine/docs/how-to/service-parameters
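A hedged sketch of exposing the GKE application through an internal TCP/UDP load balancer; the Service name, selector, and ports are placeholders, and the annotation form differs by GKE version (older clusters use cloud.google.com/load-balancer-type):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: internal-app
  ports:
  - port: 80
    targetPort: 8080
EOF

The Compute Engine instance is then configured to use the internal IP address assigned to this Service.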

Question #97

Your organization is a financial company that needs to store audit log files for 3 years. Your organization has hundreds of Google Cloud projects. You need to implement a cost-effective approach for log file retention.

What should you do?

  • A . Create an export to the sink that saves logs from Cloud Audit to BigQuery.
  • B . Create an export to the sink that saves logs from Cloud Audit to a Coldline Storage bucket.
  • C . Write a custom script that uses logging API to copy the logs from Stackdriver logs to BigQuery.
  • D . Export these logs to Cloud Pub/Sub and write a Cloud Dataflow pipeline to store logs to Cloud SQL.

Reveal Solution Hide Solution

Correct Answer: B
B

Explanation:

Coldline Storage is the perfect service to store audit logs from all the projects and is very cost-efficient as well. Coldline Storage is a very low-cost, highly durable storage service for storing infrequently accessed data.
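A hedged sketch of option B, assuming an aggregated sink at the organization level; the bucket name, location, organization ID, and filter are placeholders:

# Coldline bucket for the archived audit logs
gsutil mb -c coldline -l us-central1 gs://my-audit-log-archive

# Organization-wide sink that captures audit logs from every project
gcloud logging sinks create audit-archive-sink \
    storage.googleapis.com/my-audit-log-archive \
    --organization=123456789012 --include-children \
    --log-filter='logName:"cloudaudit.googleapis.com"'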

Question #98

You want to run a single caching HTTP reverse proxy on GCP for a latency-sensitive website. This specific reverse proxy consumes almost no CPU. You want to have a 30-GB in-memory cache, and need an additional 2 GB of memory for the rest of the processes. You want to minimize cost.

How should you run this reverse proxy?

  • A . Create a Cloud Memorystore for Redis instance with 32-GB capacity.
  • B . Run it on Compute Engine, and choose a custom instance type with 6 vCPUs and 32 GB of memory.
  • C . Package it in a container image, and run it on Kubernetes Engine, using n1-standard-32 instances as nodes.
  • D . Run it on Compute Engine, choose the instance type n1-standard-1, and add an SSD persistent disk of 32 GB.

Reveal Solution Hide Solution

Correct Answer: A
A

Explanation:

What is Google Cloud Memorystore?

Overview. Cloud Memorystore for Redis is a fully managed Redis service for Google Cloud Platform. Applications running on Google Cloud Platform can achieve extreme performance by leveraging the highly scalable, highly available, and secure Redis service without the burden of managing complex Redis deployments.
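A minimal gcloud sketch of option A; the instance name, region, and tier are placeholders:

gcloud redis instances create proxy-cache \
    --size=32 \
    --region=us-central1 \
    --tier=basic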

Question #107

In your on-premises network, configure your DNS server to resolve *.googleapis.com as a CNAME to restricted.googleapis.com.

Reveal Solution Hide Solution

Correct Answer: D

Explanation:

Our requirement is to follow Google recommended practices to achieve the end result. Configuring Private Google Access for On-Premises Hosts is best achieved by VPN/Interconnect + Advertise Routes + Use restricted Google IP Range.

✑ Using Cloud VPN or Interconnect, create a tunnel to a VPC in GCP

✑ Use Cloud Router to create a custom route advertisement for 199.36.153.4/30, and announce that range to your on-premises network through the VPN tunnel.

✑ "In your on-premises network, configure your DNS server to resolve *.googleapis.com as a CNAME to restricted.googleapis.com" is the right answer, and it is what Google recommends.

Ref: https://cloud.google.com/vpc/docs/configure-private-google-access-hybrid

✑ You must configure routes so that Google API traffic is forwarded through your Cloud VPN or Cloud Interconnect connection, firewall rules on your on-premises firewall to allow the outgoing traffic, and DNS so that traffic to Google APIs resolves to the IP range you've added to your routes.

✑ You can use Cloud Router Custom Route Advertisement to announce the Restricted Google APIs IP addresses through Cloud Router to your on-premises network. The Restricted Google APIs IP range is 199.36.153.4/30. While this is technically a public IP range, Google does not announce it publicly. This IP range is only accessible to hosts that can reach your Google Cloud projects through internal IP ranges, such as through a Cloud VPN or Cloud Interconnect connection. Without a public IP address or access to the internet, the only way you could connect to Cloud Storage is if you have an internal route to it.

✑ So "Negotiate with the security team to be able to give public IP addresses to the servers" is not right. Following Google-recommended practices is synonymous with using Google's services (not quite, but it is at least for the exam!).

✑ So "In this VPC, create a Compute Engine instance and install the Squid proxy server on this instance" is not right.

✑ Migrating the VMs to Compute Engine is a bit drastic when Google says it is perfectly fine to have hybrid connectivity architectures (https://cloud.google.com/hybrid-connectivity).

✑ So "Use Migrate for Compute Engine (formerly known as Velostrata) to migrate these servers to Compute Engine" is not right.
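A hedged sketch of the Cloud Router piece (router name and region are placeholder assumptions); it advertises the restricted range to the on-premises network:

gcloud compute routers update on-prem-router --region=us-central1 --advertisement-mode=custom --set-advertisement-ranges=199.36.153.4/30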

Question #115

In the same Google Kubernetes Engine cluster as your application, deploy a container that takes the messages and sends them to your application.

Reveal Solution Hide Solution

Correct Answer: C

Explanation:

https://cloud.google.com/run/docs/tutorials/pubsub#integrating-pubsub
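Following that tutorial, a minimal sketch of wiring Pub/Sub push delivery to a Cloud Run service could look like this (topic, subscription, service URL, and service account are placeholder assumptions):

gcloud pubsub subscriptions create myapp-sub --topic=myapp-topic --push-endpoint=https://myapp-abc123-uc.a.run.app/ --push-auth-service-account=pubsub-invoker@my-project.iam.gserviceaccount.com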

Question #119

You need to deploy an application, which is packaged in a container image, in a new project. The application exposes an HTTP endpoint and receives very few requests per day. You want to minimize costs.

What should you do?

  • A . Deploy the container on Cloud Run.
  • B . Deploy the container on Cloud Run on GKE.
  • C . Deploy the container on App Engine Flexible.
  • D . Deploy the container on Google Kubernetes Engine, with cluster autoscaling and horizontal pod autoscaling enabled.

Reveal Solution Hide Solution

Correct Answer: A
A

Explanation:

Cloud Run takes any container image and pairs well with the container ecosystem: Cloud Build, Artifact Registry, Docker. … No infrastructure to manage: once deployed, Cloud Run manages your services so you can sleep well. Fast autoscaling: Cloud Run automatically scales up or down from zero to N depending on traffic.

https://cloud.google.com/run
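A minimal deployment sketch (service name, image path, and region are placeholder assumptions):

gcloud run deploy my-service --image=gcr.io/my-project/my-image --platform=managed --region=us-central1 --allow-unauthenticated

Because Cloud Run scales to zero between requests, an endpoint that receives very few requests per day incurs almost no cost.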

Question #120

Your company has an existing GCP organization with hundreds of projects and a billing account. Your company recently acquired another company that also has hundreds of projects and its own billing account. You would like to consolidate all GCP costs of both GCP organizations onto a single invoice. You would like to consolidate all costs as of tomorrow.

What should you do?

  • A . Link the acquired company’s projects to your company’s billing account.
  • B . Configure the acquired company’s billing account and your company’s billing account to export the billing data into the same BigQuery dataset.
  • C . Migrate the acquired company’s projects into your company’s GCP organization. Link the migrated projects to your company’s billing account.
  • D . Create a new GCP organization and a new billing account. Migrate the acquired company’s projects and your company’s projects into the new GCP organization and link the projects to the new billing account.

Reveal Solution Hide Solution

Correct Answer: A
A

Explanation:

https://cloud.google.com/resource-manager/docs/project-migration#oauth_consent_screen

https://cloud.google.com/resource-manager/docs/project-migration
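A hedged sketch of linking one of the acquired company's projects to your billing account (project ID and billing account ID are placeholders):

gcloud billing projects link acquired-project-id --billing-account=0X0X0X-0X0X0X-0X0X0X

Repeating this for each acquired project consolidates all charges onto your existing billing account and therefore onto a single invoice.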

Question #121

You built an application on Google Cloud Platform that uses Cloud Spanner. Your support team needs to monitor the environment but should not have access to table data. You need a streamlined solution to grant the correct permissions to your support team, and you want to follow Google-recommended practices.

What should you do?

  • A . Add the support team group to the roles/monitoring.viewer role
  • B . Add the support team group to the roles/spanner.databaseUser role.
  • C . Add the support team group to the roles/spanner.databaseReader role.
  • D . Add the support team group to the roles/stackdriver.accounts.viewer role.

Reveal Solution Hide Solution

Correct Answer: A
A

Explanation:

roles/monitoring.viewer provides read-only access to get and list information about all monitoring data and configurations. This role provides monitoring access and fits our requirements, so roles/monitoring.viewer is the right answer.

Ref: https://cloud.google.com/iam/docs/understanding-roles#cloud-spanner-roles
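A hedged sketch of the grant (project ID and group address are placeholder assumptions):

gcloud projects add-iam-policy-binding my-project --member="group:support-team@example.com" --role="roles/monitoring.viewer"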

Question #129

Use Cloud Scheduler to trigger this Cloud Function once a day.

Reveal Solution Hide Solution

Correct Answer: C

Explanation:
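A hedged sketch of a daily Cloud Scheduler trigger for an HTTP Cloud Function (job name, schedule, location, and function URL are placeholder assumptions):

gcloud scheduler jobs create http daily-job --schedule="0 2 * * *" --uri="https://us-central1-my-project.cloudfunctions.net/myFunction" --http-method=GET --location=us-central1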

Question #133

You are using Deployment Manager to create a Google Kubernetes Engine cluster. Using the same Deployment Manager deployment, you also want to create a DaemonSet in the kube-system namespace of the cluster. You want a solution that uses the fewest possible services.

What should you do?

  • A . Add the cluster’s API as a new Type Provider in Deployment Manager, and use the new type to create the DaemonSet.
  • B . Use the Deployment Manager Runtime Configurator to create a new Config resource that contains the DaemonSet definition.
  • C . With Deployment Manager, create a Compute Engine instance with a startup script that uses kubectl to create the DaemonSet.
  • D . In the cluster’s definition in Deployment Manager, add a metadata that has kube-system as key and the DaemonSet manifest as value.

Reveal Solution Hide Solution

Correct Answer: A
A

Explanation:

Adding an API as a type provider

This page describes how to add an API to Google Cloud Deployment Manager as a type provider. To learn more about types and type providers, read the Types overview documentation.

A type provider exposes all of the resources of a third-party API to Deployment Manager as base types that you can use in your configurations. These types must be directly served by a RESTful API that supports Create, Read, Update, and Delete (CRUD).

If you want to use an API that is not automatically provided by Google with Deployment Manager, you must add the API as a type provider.

https://cloud.google.com/deployment-manager/docs/configuration/type-providers/creating-type-provider
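A hedged sketch of registering such a type provider (all names are placeholders, and the descriptor URL is assumed to point at the cluster's Kubernetes OpenAPI/Swagger endpoint once the GKE cluster exists):

gcloud beta deployment-manager type-providers create my-cluster-provider --descriptor-url="https://<CLUSTER_ENDPOINT>/swaggerapi/api/v1" --api-options-file=options.yaml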

Question #134

You are building an application that will run in your data center. The application will use Google Cloud Platform (GCP) services like AutoML. You created a service account that has appropriate access to AutoML. You need to enable authentication to the APIs from your on-premises environment.

What should you do?

  • A . Use service account credentials in your on-premises application.
  • B . Use gcloud to create a key file for the service account that has appropriate permissions.
  • C . Set up direct interconnect between your data center and Google Cloud Platform to enable authentication for your on-premises applications.
  • D . Go to the IAM & admin console, grant a user account permissions similar to the service account permissions, and use this user account for authentication from your data center.

Reveal Solution Hide Solution

Correct Answer: B
B

Explanation:

Reference: https://cloud.google.com/vision/automl/docs/before-you-begin

To use a service account outside of Google Cloud, such as on other platforms or on-premises, you must first establish the identity of the service account. Public/private key pairs provide a secure way of accomplishing this goal. You can create a service account key using the Cloud Console, the gcloud tool, the serviceAccounts.keys.create() method, or one of the client libraries.

Ref: https://cloud.google.com/iam/docs/creating-managing-service-account-keys
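A hedged sketch of creating such a key (service account email is a placeholder assumption):

gcloud iam service-accounts keys create key.json --iam-account=automl-client@my-project.iam.gserviceaccount.com

The resulting key.json can then be referenced on-premises, for example via the GOOGLE_APPLICATION_CREDENTIALS environment variable.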

Question #135

You are using Container Registry to centrally store your company’s container images in a separate project. In another project, you want to create a Google Kubernetes Engine (GKE) cluster. You want to ensure that Kubernetes can download images from Container Registry.

What should you do?

  • A . In the project where the images are stored, grant the Storage Object Viewer IAM role to the service account used by the Kubernetes nodes.
  • B . When you create the GKE cluster, choose the Allow full access to all Cloud APIs option under ‘Access scopes’.
  • C . Create a service account, and give it access to Cloud Storage. Create a P12 key for this service account and use it as an imagePullSecrets in Kubernetes.
  • D . Configure the ACLs on each image in Cloud Storage to give read-only access to the default Compute Engine service account.

Reveal Solution Hide Solution

Correct Answer: A
A

Explanation:

Container Registry stores images in a Cloud Storage bucket in the project that hosts the registry, so granting the Storage Object Viewer role on that project to the service account used by the GKE nodes lets Kubernetes pull the images.

"Configure the ACLs on each image in Cloud Storage to give read-only access to the default Compute Engine service account" is not right: Container Registry ignores permissions set on individual objects within the storage bucket, so this isn't going to work.

Ref: https://cloud.google.com/container-registry/docs/access-control
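A hedged sketch of the grant (project IDs and the node service account are placeholder assumptions):

gcloud projects add-iam-policy-binding registry-project --member="serviceAccount:gke-nodes@gke-project.iam.gserviceaccount.com" --role="roles/storage.objectViewer"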

Question #136

You deployed a new application inside your Google Kubernetes Engine cluster using the YAML file specified below.

You check the status of the deployed pods and notice that one of them is still in PENDING status:

You want to find out why the pod is stuck in pending status.

What should you do?

  • A . Review details of the myapp-service Service object and check for error messages.
  • B . Review details of the myapp-deployment Deployment object and check for error messages.
  • C . Review details of myapp-deployment-58ddbbb995-lp86m Pod and check for warning messages.
  • D . View logs of the container in myapp-deployment-58ddbbb995-lp86m pod and check for warning messages.

Reveal Solution Hide Solution

Correct Answer: C
C

Explanation:

https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/#debugging-pods
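To inspect the pending Pod, a command along these lines surfaces the scheduling warnings in its Events section:

kubectl describe pod myapp-deployment-58ddbbb995-lp86m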

Question #137

You are setting up a Windows VM on Compute Engine and want to make sure you can log in to the VM via RDP.

What should you do?

  • A . After the VM has been created, use your Google Account credentials to log in into the VM.
  • B . After the VM has been created, use gcloud compute reset-windows-password to retrieve the login credentials for the VM.
  • C . When creating the VM, add metadata to the instance using ‘windows-password’ as the key and a password as the value.
  • D . After the VM has been created, download the JSON private key for the default Compute Engine service account. Use the credentials in the JSON file to log in to the VM.

Reveal Solution Hide Solution

Correct Answer: B
B

Explanation:

You can generate Windows passwords using either the Google Cloud Console or the gcloud command-line tool. This option uses the right syntax to reset the Windows password: gcloud compute reset-windows-password windows-instance

Ref: https://cloud.google.com/compute/docs/instances/windows/creating-passwords-for-windows-instances#gcloud

Question #138

You want to configure an SSH connection to a single Compute Engine instance for users in the dev1 group. This instance is the only resource in this particular Google Cloud Platform project that the dev1 users should be able to connect to.

What should you do?

  • A . Set metadata to enable-oslogin=true for the instance. Grant the dev1 group the compute.osLogin role. Direct them to use the Cloud Shell to ssh to that instance.
  • B . Set metadata to enable-oslogin=true for the instance. Set the service account to no service account for that instance. Direct them to use the Cloud Shell to ssh to that instance.
  • C . Enable block project wide keys for the instance. Generate an SSH key for each user in the dev1 group. Distribute the keys to dev1 users and direct them to use their third-party tools to connect.
  • D . Enable block project wide keys for the instance. Generate an SSH key and associate the key with that instance. Distribute the key to dev1 users and direct them to use their third-party tools to connect.

Reveal Solution Hide Solution

Correct Answer: A
A

Explanation:

Reference: https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys

After you enable OS Login on one or more instances in your project, those VMs accept connections only from user accounts that have the necessary IAM roles in your project or organization. In this case, we are granting the group compute.osLogin, which lets them log in as a non-administrator account. And since we are directing them to use Cloud Shell to ssh, we don't need to add their SSH keys to the instance metadata.

Ref: https://cloud.google.com/compute/docs/instances/managing-instance-access#configure_users

Ref: https://cloud.google.com/compute/docs/instances/managing-instance-access#add_oslogin_keys
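A hedged sketch of the setup (instance name, zone, and group address are placeholder assumptions; the role is shown at the instance level since that instance is the only resource dev1 should reach):

# Enable OS Login on just this instance
gcloud compute instances add-metadata dev-instance --zone=us-central1-a --metadata=enable-oslogin=TRUE

# Grant the dev1 group login access on this instance only
gcloud compute instances add-iam-policy-binding dev-instance --zone=us-central1-a --member="group:dev1@example.com" --role="roles/compute.osLogin"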

Question #139

You need to produce a list of the enabled Google Cloud Platform APIs for a GCP project using the gcloud command line in the Cloud Shell. The project name is my-project.

What should you do?

  • A . Run gcloud projects list to get the project ID, and then run gcloud services list --project <project ID>.
  • B . Run gcloud init to set the current project to my-project, and then run gcloud services list --available.
  • C . Run gcloud info to view the account value, and then run gcloud services list --account <Account>.
  • D . Run gcloud projects describe <project ID> to verify the project value, and then run gcloud services list --available.

Reveal Solution Hide Solution

Correct Answer: A
A

Explanation:

`gcloud services list --available` returns not only the enabled services in the project but also services that CAN be enabled.

https://cloud.google.com/sdk/gcloud/reference/services/list#--available

Run the following command to list the enabled APIs and services in your current project:

gcloud services list

whereas, run the following command to list the APIs and services available to you in your current project:

gcloud services list --available

https://cloud.google.com/sdk/gcloud/reference/services/list#--available

--available: Return the services available to the project to enable. This list will include any services that the project has already enabled.

To list the services the current project has enabled for consumption, run:

gcloud services list --enabled

To list the services the current project can enable for consumption, run:

gcloud services list --available

Question #140

You are building a new version of an application hosted in an App Engine environment. You want to test the new version with 1% of users before you completely switch your application over to the new version.

What should you do?

  • A . Deploy a new version of your application in Google Kubernetes Engine instead of App Engine and then use GCP Console to split traffic.
  • B . Deploy a new version of your application in a Compute Engine instance instead of App Engine and then use GCP Console to split traffic.
  • C . Deploy a new version as a separate app in App Engine. Then configure App Engine using GCP Console to split traffic between the two apps.
  • D . Deploy a new version of your application in App Engine. Then go to App Engine settings in GCP Console and split traffic between the current version and newly deployed versions accordingly.

Reveal Solution Hide Solution

Correct Answer: D
D

Explanation:

GCP App Engine natively offers traffic splitting functionality between versions. You can use traffic splitting to specify a percentage distribution of traffic across two or more of the versions within a service. Splitting traffic allows you to conduct A/B testing between your versions and provides control over the pace when rolling out features.

Ref: https://cloud.google.com/appengine/docs/standard/python/splitting-traffic
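A hedged sketch of the split (service and version IDs are placeholder assumptions):

gcloud app services set-traffic default --splits=v1=0.99,v2=0.01 --split-by=random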

Question #141

You need to provide a cost estimate for a Kubernetes cluster using the GCP pricing calculator for Kubernetes. Your workload requires high IOPs, and you will also be using disk snapshots. You start by entering the number of nodes, average hours, and average days.

What should you do next?

  • A . Fill in local SSD. Fill in persistent disk storage and snapshot storage.
  • B . Fill in local SSD. Add estimated cost for cluster management.
  • C . Select Add GPUs. Fill in persistent disk storage and snapshot storage.
  • D . Select Add GPUs. Add estimated cost for cluster management.

Reveal Solution Hide Solution

Correct Answer: A
A

Explanation:

https://cloud.google.com/compute/docs/disks/local-ssd

Question #142

You are using Google Kubernetes Engine with autoscaling enabled to host a new application. You want to expose this new application to the public, using HTTPS on a public IP address.

What should you do?

  • A . Create a Kubernetes Service of type NodePort for your application, and a Kubernetes Ingress to expose this Service via a Cloud Load Balancer.
  • B . Create a Kubernetes Service of type ClusterIP for your application. Configure the public DNS name of your application using the IP of this Service.
  • C . Create a Kubernetes Service of type NodePort to expose the application on port 443 of each node of the Kubernetes cluster. Configure the public DNS name of your application with the IP of every node of the cluster to achieve load-balancing.
  • D . Create a HAProxy pod in the cluster to load-balance the traffic to all the pods of the application. Forward the public traffic to HAProxy with an iptable rule. Configure the DNS name of your application using the public IP of the node HAProxy is running on.

Reveal Solution Hide Solution

Correct Answer: A
A

Explanation:

Reference: https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer

"Create a Kubernetes Service of type ClusterIP for your application. Configure the public DNS name of your application using the IP of this Service" is not right.

Kubernetes Service of type ClusterIP exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster so you can not route external traffic to this IP.

Ref: https://kubernetes.io/docs/concepts/services-networking/service/

Question #143

You need to enable traffic between multiple groups of Compute Engine instances that are currently running two different GCP projects. Each group of Compute Engine instances is running in its own VPC.

What should you do?

  • A . Verify that both projects are in a GCP Organization. Create a new VPC and add all instances.
  • B . Verify that both projects are in a GCP Organization. Share the VPC from one project and request that the Compute Engine instances in the other project use this shared VPC.
  • C . Verify that you are the Project Administrator of both projects. Create two new VPCs and add all instances.
  • D . Verify that you are the Project Administrator of both projects. Create a new VPC and add all instances.

Reveal Solution Hide Solution

Correct Answer: B
B

Explanation:

Shared VPC allows an organization to connect resources from multiple projects to a common Virtual Private Cloud (VPC) network, so that they can communicate with each other securely and efficiently using internal IPs from that network. When you use Shared VPC, you designate a project as a host project and attach one or more other service projects to it. The VPC networks in the host project are called Shared VPC networks. Eligible resources from service projects can use subnets in the Shared VPC network

https://cloud.google.com/vpc/docs/shared-vpc

"For example, an existing instance in a service project cannot be reconfigured to use a Shared VPC network, but a new instance can be created to use available subnets in a Shared VPC network."

Question #144

You want to add a new auditor to a Google Cloud Platform project. The auditor should be allowed to read, but not modify, all project items.

How should you configure the auditor’s permissions?

  • A . Create a custom role with view-only project permissions. Add the user’s account to the custom role.
  • B . Create a custom role with view-only service permissions. Add the user’s account to the custom role.
  • C . Select the built-in IAM project Viewer role. Add the user’s account to this role.
  • D . Select the built-in IAM service Viewer role. Add the user’s account to this role.

Reveal Solution Hide Solution

Correct Answer: C
C

Explanation:

Reference: https://cloud.google.com/resource-manager/docs/access-control-proj

The primitive role roles/viewer provides read access to all resources in the project. The permissions in this role are limited to Get and list access for all resources. As we have an out of the box role that exactly fits our requirement, we should use this.

Ref: https://cloud.google.com/resource-manager/docs/access-control-proj

It is advisable to use the existing GCP provided roles over creating custom roles with similar permissions as this becomes a maintenance overhead. If GCP modifies how permissions are handled or adds/removes permissions, the default GCP provided roles are automatically updated by Google whereas if they were custom roles, the responsibility is with us and this adds to the operational overhead and needs to be avoided.
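A hedged sketch of the grant (project ID and auditor address are placeholder assumptions):

gcloud projects add-iam-policy-binding my-project --member="user:auditor@example.com" --role="roles/viewer"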

Question #145

You are operating a Google Kubernetes Engine (GKE) cluster for your company where different teams can run non-production workloads. Your Machine Learning (ML) team needs access to Nvidia Tesla P100 GPUs to train their models. You want to minimize effort and cost.

What should you do?

  • A . Ask your ML team to add the “accelerator: gpu” annotation to their pod specification.
  • B . Recreate all the nodes of the GKE cluster to enable GPUs on all of them.
  • C . Create your own Kubernetes cluster on top of Compute Engine with nodes that have GPUs.
    Dedicate this cluster to your ML team.
  • D . Add a new, GPU-enabled, node pool to the GKE cluster. Ask your ML team to add the cloud.google.com/gke-accelerator: nvidia-tesla-p100 nodeSelector to their pod specification.

Reveal Solution Hide Solution

Correct Answer: D
D

Explanation:

This is the most optimal solution. Rather than recreating all nodes, you create a new node pool with GPUs enabled. You then target particular GPU types by adding a nodeSelector to your workload's Pod specification. You still have a single cluster, so you pay the Kubernetes cluster management fee for just one cluster, thus minimizing cost.

Ref: https://cloud.google.com/kubernetes-engine/docs/how-to/gpus

Ref: https://cloud.google.com/kubernetes-engine/pricing

Example:

apiVersion: v1
kind: Pod
metadata:
  name: my-gpu-pod
spec:
  containers:
  - name: my-gpu-container
    image: nvidia/cuda:10.0-runtime-ubuntu18.04
    command: ["/bin/bash"]
    resources:
      limits:
        nvidia.com/gpu: 2
  nodeSelector:
    cloud.google.com/gke-accelerator: nvidia-tesla-k80 # or nvidia-tesla-p100 or nvidia-tesla-p4 or nvidia-tesla-v100 or nvidia-tesla-t4
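For the node pool itself, a hedged sketch (cluster name, zone, and node count are placeholder assumptions):

gcloud container node-pools create gpu-pool --cluster=my-cluster --zone=us-central1-a --accelerator=type=nvidia-tesla-p100,count=1 --num-nodes=1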

Question #146

Your VMs are running in a subnet that has a subnet mask of 255.255.255.240. The current subnet has no more free IP addresses and you require an additional 10 IP addresses for new VMs. The existing and new VMs should all be able to reach each other without additional routes.

What should you do?

  • A . Use gcloud to expand the IP range of the current subnet.
  • B . Delete the subnet, and recreate it using a wider range of IP addresses.
  • C . Create a new project. Use Shared VPC to share the current network with the new project.
  • D . Create a new subnet with the same starting IP but a wider range to overwrite the current subnet.

Reveal Solution Hide Solution

Correct Answer: A
A

Explanation:

https://cloud.google.com/sdk/gcloud/reference/compute/networks/subnets/expand-ip-range

gcloud compute networks subnets expand-ip-range expands the IP range of a Compute Engine subnetwork:

gcloud compute networks subnets expand-ip-range NAME --prefix-length=PREFIX_LENGTH [--region=REGION] [GCLOUD_WIDE_FLAG …]
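In this scenario the 255.255.255.240 mask is a /28 (16 addresses), so expanding to a /27 (32 addresses) comfortably covers the 10 additional VMs. A hedged sketch (subnet name and region are placeholder assumptions):

gcloud compute networks subnets expand-ip-range my-subnet --region=us-central1 --prefix-length=27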

Question #147

Your organization uses G Suite for communication and collaboration. All users in your organization have a G Suite account. You want to grant some G Suite users access to your Cloud Platform project.

What should you do?

  • A . Enable Cloud Identity in the GCP Console for your domain.
  • B . Grant them the required IAM roles using their G Suite email address.
  • C . Create a CSV sheet with all users’ email addresses. Use the gcloud command line tool to convert them into Google Cloud Platform accounts.
  • D . In the G Suite console, add the users to a special group called cloud-console-users@yourdomain.com. Rely on the default behavior of the Cloud Platform to grant users access if they are members of this group.

Reveal Solution Hide Solution

Correct Answer: B
B

Explanation:

Reference: https://cloud.google.com/resource-manager/docs/creating-managing-organization

The default behavior does not grant access to your GCP project; it only allows creating billing accounts and projects. When the organization is created, all users in your domain are automatically granted the Project Creator and Billing Account Creator IAM roles at the organization level. This enables users in your domain to continue creating projects with no disruption.

Question #148

You have a Google Cloud Platform account with access to both production and development projects. You need to create an automated process to list all compute instances in development and production projects on a daily basis.

What should you do?

  • A . Create two configurations using gcloud config. Write a script that sets configurations as active, individually. For each configuration, use gcloud compute instances list to get a list of compute resources.
  • B . Create two configurations using gsutil config. Write a script that sets configurations as active, individually. For each configuration, use gsutil compute instances list to get a list of compute resources.
  • C . Go to Cloud Shell and export this information to Cloud Storage on a daily basis.
  • D . Go to GCP Console and export this information to Cloud SQL on a daily basis.

Reveal Solution Hide Solution

Correct Answer: A
A

Explanation:

You can create two configurations, one for the development project and another for the production project, by running the “gcloud config configurations create” command. https://cloud.google.com/sdk/gcloud/reference/config/configurations/create

In your custom script, you can load these configurations one at a time and execute gcloud compute instances list to list Google Compute Engine instances in the project that is active in the gcloud configuration.

Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances/list

Once you have this information, you can export it in a suitable format to a suitable target, e.g. as CSV or to Cloud Storage, BigQuery, Cloud SQL, etc.
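A hedged sketch of such a daily script (configuration names and project IDs are placeholder assumptions):

# One-time setup
gcloud config configurations create dev
gcloud config set project dev-project-id
gcloud config configurations create prod
gcloud config set project prod-project-id

# Daily listing
for cfg in dev prod; do
  gcloud config configurations activate "$cfg"
  gcloud compute instances list
done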

Question #149

You have a large 5-TB AVRO file stored in a Cloud Storage bucket. Your analysts are proficient only in SQL and need access to the data stored in this file. You want to find a cost-effective way to complete their request as soon as possible.

What should you do?

  • A . Load data in Cloud Datastore and run a SQL query against it.
  • B . Create a BigQuery table and load data in BigQuery. Run a SQL query on this table and drop this table after you complete your request.
  • C . Create external tables in BigQuery that point to Cloud Storage buckets and run a SQL query on these external tables to complete your request.
  • D . Create a Hadoop cluster and copy the AVRO file to NDFS by compressing it. Load the file in a hive table and provide access to your analysts so that they can run SQL queries.

Reveal Solution Hide Solution

Correct Answer: C
C

Explanation:

https://cloud.google.com/bigquery/external-data-sources

An external data source is a data source that you can query directly from BigQuery, even though the data is not stored in BigQuery storage.

BigQuery supports the following external data sources:

Amazon S3

Azure Storage

Cloud Bigtable

Cloud Spanner

Cloud SQL

Cloud Storage

Drive
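A hedged sketch using the bq tool (bucket, dataset, and table names are placeholder assumptions):

# Build an external table definition for the AVRO file and create the external table
bq mkdef --source_format=AVRO "gs://my-bucket/large-file.avro" > table_def.json
bq mk --external_table_definition=table_def.json mydataset.avro_external

# Analysts can now query it with standard SQL
bq query --use_legacy_sql=false 'SELECT COUNT(*) FROM mydataset.avro_external'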

Question #150

You need to verify that a Google Cloud Platform service account was created at a particular time.

What should you do?

  • A . Filter the Activity log to view the Configuration category. Filter the Resource type to Service Account.
  • B . Filter the Activity log to view the Configuration category. Filter the Resource type to Google Project.
  • C . Filter the Activity log to view the Data Access category. Filter the Resource type to Service Account.
  • D . Filter the Activity log to view the Data Access category. Filter the Resource type to Google Project.

Reveal Solution Hide Solution

Correct Answer: A
A

Explanation:

https://developers.google.com/cloud-search/docs/guides/audit-logging-manual
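If you prefer the command line, an equivalent check (a hedged sketch, not part of the exam answer) is to read the Admin Activity audit log for the service account creation method:

gcloud logging read 'protoPayload.methodName="google.iam.admin.v1.CreateServiceAccount"' --limit=10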

Question #151

You deployed an LDAP server on Compute Engine that is reachable via TLS through port 636 using UDP. You want to make sure it is reachable by clients over that port.

What should you do?

  • A . Add the network tag allow-udp-636 to the VM instance running the LDAP server.
  • B . Create a route called allow-udp-636 and set the next hop to be the VM instance running the LDAP server.
  • C . Add a network tag of your choice to the instance. Create a firewall rule to allow ingress on UDP port 636 for that network tag.
  • D . Add a network tag of your choice to the instance running the LDAP server. Create a firewall rule to allow egress on UDP port 636 for that network tag.

Reveal Solution Hide Solution

Correct Answer: C
C

Explanation:

A tag is simply a character string added to a tags field in a resource, such as Compute Engine virtual machine (VM) instances or instance templates. A tag is not a separate resource, so you cannot create it separately. All resources with that string are considered to have that tag. Tags enable you to make firewall rules and routes applicable to specific VM instances.
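
A minimal sketch of option C (tag, rule, instance, and zone names are hypothetical):

# Tag the instance running the LDAP server.
gcloud compute instances add-tags ldap-server --zone=us-central1-a --tags=ldap-udp

# Allow ingress on UDP 636 for instances carrying that tag.
gcloud compute firewall-rules create allow-ldap-udp-636 \
  --network=default --direction=INGRESS --action=ALLOW \
  --rules=udp:636 --target-tags=ldap-udp --source-ranges=0.0.0.0/0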

Question #152

You need to set a budget alert for use of Compute Engine services on one of the three Google Cloud Platform projects that you manage. All three projects are linked to a single billing account.

What should you do?

  • A . Verify that you are the project billing administrator. Select the associated billing account and create a budget and alert for the appropriate project.
  • B . Verify that you are the project billing administrator. Select the associated billing account and create a budget and a custom alert.
  • C . Verify that you are the project administrator. Select the associated billing account and create a budget for the appropriate project.
  • D . Verify that you are project administrator. Select the associated billing account and create a budget and a custom alert.

Reveal Solution Hide Solution

Correct Answer: A
A

Explanation:

https://cloud.google.com/iam/docs/understanding-roles#billing-roles
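
Budgets can also be created from the command line; a hedged sketch (billing account ID, project, and amount are placeholders, and --filter-projects may expect the project number rather than the project ID):

gcloud billing budgets create \
  --billing-account=0X0X0X-0X0X0X-0X0X0X \
  --display-name="App project monthly budget" \
  --budget-amount=1000USD \
  --filter-projects=projects/my-app-project \
  --threshold-rule=percent=0.9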

Question #153

You are migrating a production-critical on-premises application that requires 96 vCPUs to perform its task. You want to make sure the application runs in a similar environment on GCP.

What should you do?

  • A . When creating the VM, use machine type n1-standard-96.
  • B . When creating the VM, use Intel Skylake as the CPU platform.
  • C . Create the VM using Compute Engine default settings. Use gcloud to modify the running instance to have 96 vCPUs.
  • D . Start the VM using Compute Engine default settings, and adjust as you go based on Rightsizing Recommendations.

Reveal Solution Hide Solution

Correct Answer: A
A

Explanation:

Ref: https://cloud.google.com/compute/docs/machine-types#n1_machine_type
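
A sketch of option A (instance name, zone, and image are hypothetical):

gcloud compute instances create legacy-app-vm \
  --zone=us-central1-b \
  --machine-type=n1-standard-96 \
  --image-family=debian-11 --image-project=debian-cloud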

Question #154

You want to configure a solution for archiving data in a Cloud Storage bucket. The solution must be cost-effective. Data with multiple versions should be archived after 30 days. Previous versions are accessed once a month for reporting. This archive data is also occasionally updated at month-end.

What should you do?

  • A . Add a bucket lifecycle rule that archives data with newer versions after 30 days to Coldline Storage.
  • B . Add a bucket lifecycle rule that archives data with newer versions after 30 days to Nearline Storage.
  • C . Add a bucket lifecycle rule that archives data from regional storage after 30 days to Coldline Storage.
  • D . Add a bucket lifecycle rule that archives data from regional storage after 30 days to Nearline Storage.

Reveal Solution Hide Solution

Correct Answer: B
B

Explanation:

Reference: https://cloud.google.com/storage/docs/managing-lifecycles

Nearline Storage is ideal for data you plan to read or modify on average once per month or less, and this option archives just the noncurrent versions, which is what we want.

Ref: https://cloud.google.com/storage/docs/storage-classes#nearline
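
A minimal sketch of option B using a gsutil lifecycle configuration (the bucket name is hypothetical; "isLive": false restricts the rule to noncurrent versions):

cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
      "condition": {"age": 30, "isLive": false}
    }
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://my-archive-bucket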

Question #155

Your company’s infrastructure is on-premises, but all machines are running at maximum capacity. You want to burst to Google Cloud. The workloads on Google Cloud must be able to directly communicate to the workloads on-premises using a private IP range.

What should you do?

  • A . In Google Cloud, configure the VPC as a host for Shared VPC.
  • B . In Google Cloud, configure the VPC for VPC Network Peering.
  • C . Create bastion hosts both in your on-premises environment and on Google Cloud. Configure both as proxy servers using their public IP addresses.
  • D . Set up Cloud VPN between the infrastructure on-premises and Google Cloud.

Reveal Solution Hide Solution

Correct Answer: D
D

Explanation:

"Google Cloud VPC Network Peering allows internal IP address connectivity across two Virtual

Private Cloud (VPC) networks regardless of whether they belong to the same project or the same

organization."

https://cloud.google.com/vpc/docs/vpc-peering

while

"Cloud Interconnect provides low latency, high availability connections that enable you to reliably transfer data between your on-premises and Google Cloud Virtual Private Cloud (VPC) networks." https://cloud.google.com/network-connectivity/docs/interconnect/concepts/overview

and

"HA VPN is a high-availability (HA) Cloud VPN solution that lets you securely connect your on-premises network to your VPC network through an IPsec VPN connection in a single region." https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview

Question #156

You want to select and configure a solution for storing and archiving data on Google Cloud Platform. You need to support compliance objectives for data from one geographic location. This data is archived after 30 days and needs to be accessed annually.

What should you do?

  • A . Select Multi-Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Coldline Storage.
  • B . Select Multi-Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Nearline Storage.
  • C . Select Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Nearline Storage.
  • D . Select Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Coldline Storage.

Reveal Solution Hide Solution

Correct Answer: D
D

Explanation:

Google Cloud Coldline is a cold-tier storage class for archival data that is accessed roughly once a year or less. Unlike many competing cold storage offerings, Coldline has no retrieval delay before data access.

The official description of the Coldline storage class:

Coldline Storage

Coldline Storage is a very-low-cost, highly durable storage service for storing infrequently accessed data. Coldline Storage is a better choice than Standard Storage or Nearline Storage in scenarios where slightly lower availability, a 90-day minimum storage duration, and higher costs for data access are acceptable trade-offs for lowered at-rest storage costs.

Coldline Storage is ideal for data you plan to read or modify at most once a quarter. Note, however, that for data being kept entirely for backup or archiving purposes, Archive Storage is more cost-effective, as it offers the lowest storage costs.

https://cloud.google.com/storage/docs/storage-classes#coldline
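
Analogous to the earlier lifecycle sketch, but with a regional bucket and a transition to Coldline (bucket name and location are hypothetical):

gsutil mb -l us-central1 -c regional gs://my-compliance-bucket
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
      "condition": {"age": 30}
    }
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://my-compliance-bucket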

Question #157

Your company uses BigQuery for data warehousing. Over time, many different business units in your company have created 1000+ datasets across hundreds of projects. Your CIO wants you to examine all datasets to find tables that contain an employee_ssn column. You want to minimize effort in performing this task.

What should you do?

  • A . Go to Data Catalog and search for employee_ssn in the search box.
  • B . Write a shell script that uses the bq command line tool to loop through all the projects in your organization.
  • C . Write a script that loops through all the projects in your organization and runs a query on INFORMATION_SCHEMA.COLUMNS view to find the employee_ssn column.
  • D . Write a Cloud Dataflow job that loops through all the projects in your organization and runs a query on INFORMATION_SCHEMA.COLUMNS view to find employee_ssn column.

Reveal Solution Hide Solution

Correct Answer: A
A

Explanation:

Data Catalog automatically indexes metadata for BigQuery datasets and tables across all projects you have access to, including column names, so a single search such as column:employee_ssn finds every table that contains that column with minimal effort.

https://cloud.google.com/data-catalog/docs/how-to/search
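
Data Catalog search is also available from the command line; a hedged sketch (the organization ID is a placeholder, and the column: qualifier restricts matches to column names):

gcloud data-catalog search 'column:employee_ssn' --include-organization-ids=123456789012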

Question #158

You create a Deployment with 2 replicas in a Google Kubernetes Engine cluster that has a single preemptible node pool. After a few minutes, you use kubectl to examine the status of your Pods and observe that one of them is still in Pending status:

What is the most likely cause?

  • A . The pending Pod’s resource requests are too large to fit on a single node of the cluster.
  • B . Too many Pods are already running in the cluster, and there are not enough resources left to schedule the pending Pod.
  • C . The node pool is configured with a service account that does not have permission to pull the container image used by the pending Pod.
  • D . The pending Pod was originally scheduled on a node that has been preempted between the creation of the Deployment and your verification of the Pods’ status. It is currently being rescheduled on a new node.

Reveal Solution Hide Solution

Correct Answer: B
B

Explanation:

"Too many Pods are already running in the cluster, and there are not enough resources left to schedule the pending Pod" is the right answer.

When a Deployment has some Pods running and others stuck in Pending, it is most often a resource problem on the nodes: the scheduler reports insufficient CPU or memory for the pending Pod, so you either enable autoscaling or manually scale up the node pool.
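
A typical way to confirm this (the Pod name below is a placeholder):

kubectl get pods
# The Events section usually shows a FailedScheduling message such as "Insufficient cpu".
kubectl describe pod my-app-7d9c6b5c4-xk2lp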

Question #159

You want to find out when users were added to Cloud Spanner Identity Access Management (IAM) roles on your Google Cloud Platform (GCP) project.

What should you do in the GCP Console?

  • A . Open the Cloud Spanner console to review configurations.
  • B . Open the IAM & admin console to review IAM policies for Cloud Spanner roles.
  • C . Go to the Stackdriver Monitoring console and review information for Cloud Spanner.
  • D . Go to the Stackdriver Logging console, review admin activity logs, and filter them for Cloud Spanner IAM roles.

Reveal Solution Hide Solution

Correct Answer: D
D

Explanation:

https://cloud.google.com/monitoring/audit-logging
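
The same information can be pulled with gcloud; a hedged sketch (the filter values follow the standard audit-log naming, so verify them against your project's logs):

gcloud logging read \
  'logName:"cloudaudit.googleapis.com%2Factivity" AND protoPayload.serviceName="spanner.googleapis.com" AND protoPayload.methodName:"SetIamPolicy"' \
  --limit=20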

Question #160

Your company implemented BigQuery as an enterprise data warehouse. Users from multiple business units run queries on this data warehouse. However, you notice that query costs for BigQuery are very high, and you need to control costs.

Which two methods should you use? (Choose two.)

  • A . Split the users from business units to multiple projects.
  • B . Apply a user- or project-level custom query quota for BigQuery data warehouse.
  • C . Create separate copies of your BigQuery data warehouse for each business unit.
  • D . Split your BigQuery data warehouse into multiple data warehouses for each business unit.
  • E . Change your BigQuery query model from on-demand to flat rate. Apply the appropriate number of slots to each Project.

Reveal Solution Hide Solution

Correct Answer: B,E
B,E

Explanation:

https://cloud.google.com/bigquery/docs/custom-quotas

https://cloud.google.com/bigquery/pricing#flat_rate_pricing
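
Custom query quotas (QueryUsagePerDay and QueryUsagePerUserPerDay) are configured in the console under IAM & Admin > Quotas. As an additional per-query safety net you can also cap bytes billed; a sketch with a hypothetical table name:

# Fails the query instead of billing more than ~1 GB.
bq query --use_legacy_sql=false --maximum_bytes_billed=1000000000 \
  'SELECT COUNT(*) FROM warehouse.sales'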

Question #161

You are building a product on top of Google Kubernetes Engine (GKE). You have a single GKE cluster. For each of your customers, a Pod is running in that cluster, and your customers can run arbitrary code inside their Pod. You want to maximize the isolation between your customers’ Pods.

What should you do?

  • A . Use Binary Authorization and whitelist only the container images used by your customers’ Pods.
  • B . Use the Container Analysis API to detect vulnerabilities in the containers used by your customers’ Pods.
  • C . Create a GKE node pool with a sandbox type configured to gvisor. Add the parameter runtimeClassName: gvisor to the specification of your customers’ Pods.
  • D . Use the cos_containerd image for your GKE nodes. Add a nodeSelector with the value cloud.google.com/gke-os-distribution: cos_containerd to the specification of your customers’ Pods.

Reveal Solution Hide Solution

Correct Answer: C
C

Explanation:

Reference: https://cloud.google.com/kubernetes-engine/sandbox/

GKE Sandbox provides an extra layer of security to prevent untrusted code from affecting the host kernel on your cluster nodes when containers in the Pod execute unknown or untrusted code. Multi-tenant clusters and clusters whose containers run untrusted workloads are more exposed to security vulnerabilities than other clusters. Examples include SaaS providers, web-hosting providers, or other organizations that allow their users to upload and run code. When you enable GKE Sandbox on a node pool, a sandbox is created for each Pod running on a node in that node pool. In addition, nodes running sandboxed Pods are prevented from accessing other Google Cloud services or cluster metadata. Each sandbox uses its own userspace kernel. With this in mind, you can make decisions about how to group your containers into Pods, based on the level of isolation you require and the characteristics of your applications.

Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/sandbox-pods
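
A hedged sketch of option C (cluster, node pool, project, and image names are hypothetical):

# Node pool with GKE Sandbox (gVisor) enabled; cos_containerd is required.
gcloud container node-pools create sandbox-pool \
  --cluster=my-cluster --zone=us-central1-a \
  --image-type=cos_containerd --sandbox type=gvisor

# Customer Pods opt into the sandboxed runtime class.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: customer-pod
spec:
  runtimeClassName: gvisor
  containers:
  - name: app
    image: gcr.io/my-project/customer-app:latest
EOF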

Question #162

Your customer has implemented a solution that uses Cloud Spanner and notices some read latency-related performance issues on one table. This table is accessed only by their users using a primary key.

The table schema is shown below.

You want to resolve the issue.

What should you do?

  • A . Option A
  • B . Option B
  • C . Option C
  • D . Option D

Reveal Solution Hide Solution

Correct Answer: C
C

Explanation:

As mentioned in Schema and data model, you should be careful when choosing a primary key to not accidentally create hotspots in your database. One cause of hotspots is having a column whose value monotonically increases as the first key part, because this results in all inserts occurring at the end of your key space. This pattern is undesirable because Cloud Spanner divides data among servers by key ranges, which means all your inserts will be directed at a single server that will end up doing all the work. https://cloud.google.com/spanner/docs/schema-design#primary-key-prevent-hotspots
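
Since the option texts are not reproduced here, a hedged illustration of the underlying fix is to key the table on a value that is not monotonically increasing, for example a UUIDv4 (database, instance, table, and column names are hypothetical):

gcloud spanner databases ddl update my-db --instance=my-instance \
  --ddl='CREATE TABLE UserAccess (UserId STRING(36) NOT NULL, LastAccess TIMESTAMP) PRIMARY KEY (UserId)'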

Question #163

Your finance team wants to view the billing report for your projects. You want to make sure that the finance team does not get additional permissions to the project.

What should you do?

  • A . Add the group for the finance team to the roles/billing.user role.
  • B . Add the group for the finance team to the roles/billing.admin role.
  • C . Add the group for the finance team to the roles/billing.viewer role.
  • D . Add the group for the finance team to the roles/billing.projectManager role.

Reveal Solution Hide Solution

Correct Answer: C
C

Explanation:

"Billing Account Viewer access would usually be granted to finance teams, it provides access to spend information, but does not confer the right to link or unlink projects or otherwise manage the properties of the billing account." https://cloud.google.com/billing/docs/how-to/billing-access

Question #164

Your organization has strict requirements to control access to Google Cloud projects. You need to enable your Site Reliability Engineers (SREs) to approve requests from the Google Cloud support team when an SRE opens a support case. You want to follow Google-recommended practices.

What should you do?

  • A . Add your SREs to roles/iam.roleAdmin role.
  • B . Add your SREs to the roles/accessapproval.approver role.
  • C . Add your SREs to a group and then add this group to the roles/iam.roleAdmin role.
  • D . Add your SREs to a group and then add this group to the roles/accessapproval.approver role.

Reveal Solution Hide Solution

Correct Answer: D
D

Explanation:

Google-recommended practice is to grant roles to groups rather than to individual users, and roles/accessapproval.approver is the narrowly scoped role that allows approving or dismissing Access Approval requests from Google support.
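
A sketch of option D (project ID and group address are placeholders):

gcloud projects add-iam-policy-binding my-project \
  --member=group:sre-oncall@example.com \
  --role=roles/accessapproval.approver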