Google Professional Cloud Architect: Google Certified Professional – Cloud Architect (GCP) Online Training
The questions for Professional Cloud Architect were last updated on Jul 23, 2025.
- Exam Code: Professional Cloud Architect
- Exam Name: Google Certified Professional – Cloud Architect (GCP)
- Certification Provider: Google
- Latest update: Jul 23, 2025
Your company’s test suite is a custom C++ application that runs tests throughout each day on Linux virtual machines. The full test suite takes several hours to complete, running on a limited number of on-premises servers reserved for testing. Your company wants to move the testing infrastructure to the cloud to reduce the amount of time it takes to fully test a change to the system, while changing the tests as little as possible.
Which cloud infrastructure should you recommend?
- A . Google Compute Engine unmanaged instance groups and Network Load Balancer
- B . Google Compute Engine managed instance groups with auto-scaling
- C . Google Cloud Dataproc to run Apache Hadoop jobs to process each test
- D . Google App Engine with Google Stackdriver for logging
The application reliability team at your company has added a debug feature to their backend service to send all server events to Google Cloud Storage for eventual analysis. The event records are at least 50 KB and at most 15 MB and are expected to peak at 3,000 events per second. You want to minimize data loss.
Which process should you implement?
- A . • Append metadata to file body.
• Compress individual files.
• Name files with serverName-Timestamp.
• Create a new bucket if bucket is older than 1 hour and save individual files to the new bucket. Otherwise, save files to the existing bucket.
- B . • Batch every 10,000 events with a single manifest file for metadata.
• Compress event files and manifest file into a single archive file.
• Name files using serverName-EventSequence.
• Create a new bucket if bucket is older than 1 day and save the single archive file to the new bucket. Otherwise, save the single archive file to the existing bucket.
- C . • Compress individual files.
• Name files with serverName-EventSequence.
• Save files to one bucket.
• Set custom metadata headers for each object after saving.
- D . • Append metadata to file body.
• Compress individual files.
• Name files with a random prefix pattern.
• Save files to one bucket.
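The naming schemes in these options matter because Cloud Storage distributes load across its index by object key range, so strictly sequential names (such as serverName-EventSequence) can concentrate a high write rate on one part of the key range. A minimal sketch of the random-prefix idea from option D, assuming a hypothetical `name_object` helper and Linux's `md5sum`:

```shell
#!/bin/sh
# Hypothetical helper: derive a short, evenly distributed prefix from the
# logical name so that high-rate sequential uploads spread across the
# Cloud Storage key range instead of hotspotting one index shard.
name_object() {
  server="$1"
  seq="$2"
  # First 6 hex characters of an MD5 hash give a stable pseudo-random prefix.
  prefix=$(printf '%s-%s' "$server" "$seq" | md5sum | cut -c1-6)
  printf '%s-%s-%s\n' "$prefix" "$server" "$seq"
}

name_object web1 000123   # prints "<6-hex-prefix>-web1-000123"
```

Because the prefix is derived from the name itself, the scheme stays deterministic: the same server and sequence number always map to the same object name.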
You need to deploy an application on Google Cloud that must run on a Debian Linux environment. The application requires extensive configuration in order to operate correctly. You want to ensure that you can install Debian distribution updates with minimal manual intervention whenever they become available.
What should you do?
- A . Create a Compute Engine instance template using the most recent Debian image. Create an instance from this template, and install and configure the application as part of the startup script. Repeat this process whenever a new Google-managed Debian image becomes available.
- B . Create a Debian-based Compute Engine instance, install and configure the application, and use OS patch management to install available updates.
- C . Create an instance with the latest available Debian image. Connect to the instance via SSH, and install and configure the application on the instance. Repeat this process whenever a new Google-managed Debian image becomes available.
- D . Create a Docker container with Debian as the base image. Install and configure the application as part of the Docker image creation process. Host the container on Google Kubernetes Engine and restart the container whenever a new update is available.
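For context on option B: OS patch management is part of VM Manager, and patch jobs can be triggered from the CLI once the OS Config API and agent are enabled. A hedged sketch (the display name is illustrative):

```
# Run a patch job against all instances in the project; requires the
# OS Config API and the OS Config agent on each VM.
gcloud compute os-config patch-jobs execute \
    --instance-filter-all \
    --display-name="debian-updates"
```

Patch jobs can also be scheduled as recurring patch deployments, which is what makes the "minimal manual intervention" requirement achievable.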
You want to establish a Compute Engine application in a single VPC across two regions. The application must communicate over VPN to an on-premises network.
How should you deploy the VPN?
- A . Use VPC Network Peering between the VPC and the on-premises network.
- B . Expose the VPC to the on-premises network using IAM and VPC Sharing.
- C . Create a global Cloud VPN Gateway with VPN tunnels from each region to the on-premises peer gateway.
- D . Deploy Cloud VPN Gateway in each region. Ensure that each region has at least one VPN tunnel to the on-premises peer gateway.
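Option D reflects how Cloud VPN actually works: VPN gateways are regional resources, so a two-region deployment needs a gateway (and at least one tunnel) in each region. A hedged sketch with illustrative names:

```
# HA VPN gateways are regional; create one per region.
gcloud compute vpn-gateways create vpn-gw-us \
    --network=my-vpc --region=us-central1
gcloud compute vpn-gateways create vpn-gw-eu \
    --network=my-vpc --region=europe-west1
# Each gateway then needs at least one tunnel to the on-premises peer gateway.
```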
Your company uses the Firewall Insights feature in the Google Network Intelligence Center. You have several firewall rules applied to Compute Engine instances. You need to evaluate the efficiency of the applied firewall ruleset. When you bring up the Firewall Insights page in the Google Cloud Console, you notice that there are no log rows to display.
What should you do to troubleshoot the issue?
- A . Enable Virtual Private Cloud (VPC) flow logging.
- B . Enable Firewall Rules Logging for the firewall rules you want to monitor.
- C . Verify that your user account is assigned the compute.networkAdmin Identity and Access Management (IAM) role.
- D . Install the Google Cloud SDK, and verify that there are no Firewall logs in the command line output.
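Relevant to option B: Firewall Insights only has data for rules with Firewall Rules Logging enabled, and logging is off by default. Enabling it for an existing rule is a one-line change (the rule name is illustrative):

```
# Turn on Firewall Rules Logging for an existing rule (disabled by default).
gcloud compute firewall-rules update allow-http --enable-logging
```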
A development team at your company has created a dockerized HTTPS web application. You need to deploy the application on Google Kubernetes Engine (GKE) and make sure that the application scales automatically.
How should you deploy to GKE?
- A . Use the Horizontal Pod Autoscaler and enable cluster autoscaling. Use an Ingress resource to load-balance the HTTPS traffic.
- B . Use the Horizontal Pod Autoscaler and enable cluster autoscaling on the Kubernetes cluster. Use a Service resource of type LoadBalancer to load-balance the HTTPS traffic.
- C . Enable autoscaling on the Compute Engine instance group. Use an Ingress resource to load-balance the HTTPS traffic.
- D . Enable autoscaling on the Compute Engine instance group. Use a Service resource of type LoadBalancer to load-balance the HTTPS traffic.
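The two autoscaling layers in options A and B are enabled independently: the cluster autoscaler adds or removes nodes, while the Horizontal Pod Autoscaler adds or removes Pod replicas. A sketch using illustrative names:

```
# Node level: let GKE resize the default node pool as Pods demand capacity.
gcloud container clusters update my-cluster \
    --enable-autoscaling --min-nodes=1 --max-nodes=5 \
    --node-pool=default-pool --zone=us-central1-a

# Pod level: create an HPA that scales the Deployment on CPU utilization.
kubectl autoscale deployment web --cpu-percent=60 --min=2 --max=10
```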
You need to upload files from your on-premises environment to Cloud Storage. You want the files to be encrypted on Cloud Storage using customer-supplied encryption keys.
What should you do?
- A . Supply the encryption key in a .boto configuration file. Use gsutil to upload the files.
- B . Supply the encryption key using gcloud config. Use gsutil to upload the files to that bucket.
- C . Use gsutil to upload the files, and use the flag --encryption-key to supply the encryption key.
- D . Use gsutil to create a bucket, and use the flag --encryption-key to supply the encryption key. Use gsutil to upload the files to that bucket.
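Regarding option A: gsutil reads a customer-supplied encryption key from the `.boto` configuration file. A minimal fragment, assuming you have already generated a base64-encoded 256-bit AES key (placeholder shown, not a real key):

```
[GSUtil]
# Base64-encoded AES-256 key; gsutil sends it with each request so objects
# are encrypted with your key rather than a Google-managed one.
encryption_key = <your-base64-encoded-256-bit-key>
```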
You are migrating third-party applications from optimized on-premises virtual machines to Google Cloud. You are unsure about the optimum CPU and memory options. The applications have consistent usage patterns across multiple weeks. You want to optimize resource usage for the lowest cost.
What should you do?
- A . Create a Compute Engine instance with CPU and memory options similar to your application’s current on-premises virtual machine. Install the Cloud Monitoring agent, and deploy the third-party application. Run a load test with normal traffic levels on the third-party application, and follow the rightsizing recommendations in the Cloud Console.
- B . Create an App Engine flexible environment, and deploy the third-party application using a Dockerfile and a custom runtime. Set CPU and memory options similar to your application’s current on-premises virtual machine in the app.yaml file.
- C . Create an instance template with the smallest available machine type, and use an image of the third-party application taken from the current on-premises virtual machine. Create a managed instance group that uses average CPU to autoscale the number of instances in the group. Modify the average CPU utilization threshold to optimize the number of instances running.
- D . Create multiple Compute Engine instances with varying CPU and memory options. Install the cloud monitoring agent and deploy the third-party application on each of them. Run a load test with high traffic levels on the application and use the results to determine the optimal settings.
Your company is designing its data lake on Google Cloud and wants to develop different ingestion pipelines to collect unstructured data from different sources. After the data is stored in Google Cloud, it will be processed in several data pipelines to build a recommendation engine for end users on the website. The structure of the data retrieved from the source systems can change at any time. The data must be stored exactly as it was retrieved for reprocessing purposes in case the data structure is incompatible with the current processing pipelines. You need to design an architecture to support the use case after you retrieve the data.
What should you do?
- A . Send the data through the processing pipeline, and then store the processed data in a BigQuery table for reprocessing.
- B . Store the data in a BigQuery table. Design the processing pipelines to retrieve the data from the table.
- C . Send the data through the processing pipeline, and then store the processed data in a Cloud Storage bucket for reprocessing.
- D . Store the data in a Cloud Storage bucket. Design the processing pipelines to retrieve the data from the bucket.
You have an application that makes HTTP requests to Cloud Storage. Occasionally the requests fail with HTTP status codes of 5xx and 429.
How should you handle these types of errors?
- A . Use gRPC instead of HTTP for better performance.
- B . Implement retry logic using a truncated exponential backoff strategy.
- C . Make sure the Cloud Storage bucket is multi-regional for geo-redundancy.
- D . Monitor https://status.cloud.google.com/feed.atom and only make requests if Cloud Storage is not reporting an incident.
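Option B's truncated exponential backoff doubles the wait after each failed attempt up to a cap, which is the retry strategy Google recommends for 5xx and 429 responses. A minimal sketch of the loop in POSIX shell, using a hypothetical `retry` helper (real implementations also add random jitter to each delay):

```shell
#!/bin/sh
# Retry a command with truncated exponential backoff: wait 1s, 2s, 4s, ...
# capped at max_delay, giving up after max_attempts tries.
retry() {
  attempt=1
  delay=1
  max_delay=4
  max_attempts=5
  while :; do
    if "$@"; then
      return 0                                          # command succeeded
    fi
    [ "$attempt" -ge "$max_attempts" ] && return 1      # give up
    sleep "$delay"
    delay=$((delay * 2))                                # exponential growth
    [ "$delay" -gt "$max_delay" ] && delay=$max_delay   # truncate at the cap
    attempt=$((attempt + 1))
  done
}
```

In practice "$@" would be the failing Cloud Storage request, and adding random jitter to `delay` prevents many clients from retrying in lockstep after a shared outage.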