Exam4Training

Google Professional Data Engineer Google Certified Professional – Data Engineer Online Training

Question #1

Topic 1, Main Questions Set A

Your company built a TensorFlow neural-network model with a large number of neurons and layers. The model fits the training data well. However, when tested against new data, it performs poorly.

What method can you employ to address this?

  • A . Threading
  • B . Serialization
  • C . Dropout Methods
  • D . Dimensionality Reduction


Correct Answer: C

Explanation:

Reference https://medium.com/mlreview/a-simple-deep-learning-model-for-stock-price-prediction-using-tensorflow-30505541d877
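Dropout (answer C) randomly disables a fraction of activations during training so neurons cannot co-adapt, which combats exactly this symptom: good training fit, poor generalization. A minimal sketch of inverted dropout in plain Python; a real TensorFlow model would use the framework's own dropout layer instead:

```python
import random

def dropout(activations, rate=0.5, training=True, rng=None):
    """Inverted dropout: zero each activation with probability `rate`
    during training and scale survivors by 1/(1-rate), so the expected
    activation matches inference, where dropout is a no-op."""
    if not training or rate == 0.0:
        return list(activations)
    rng = rng or random.Random()
    keep = 1.0 - rate
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

acts = [1.0, 2.0, 3.0, 4.0]
# At inference time the layer passes values through unchanged:
assert dropout(acts, training=False) == acts
```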

Question #2

You are building a model to make clothing recommendations. You know a user’s fashion preference is likely to change over time, so you build a data pipeline to stream new data back to the model as it becomes available.

How should you use this data to train the model?

  • A . Continuously retrain the model on just the new data.
  • B . Continuously retrain the model on a combination of existing data and the new data.
  • C . Train on the existing data while using the new data as your test set.
  • D . Train on the new data while using the existing data as your test set.


Correct Answer: B

Explanation:

https://cloud.google.com/automl-tables/docs/prepare

Question #3

You designed a database for patient records as a pilot project to cover a few hundred patients in three clinics. Your design used a single database table to represent all patients and their visits, and you used self-joins to generate reports. The server resource utilization was at 50%. Since then, the scope of the project has expanded. The database must now store 100 times more patient records. You can no longer run the reports, because they either take too long or they encounter errors with insufficient compute resources.

How should you adjust the database design?

  • A . Add capacity (memory and disk space) to the database server by the order of 200.
  • B . Shard the tables into smaller ones based on date ranges, and only generate reports with prespecified date ranges.
  • C . Normalize the master patient-record table into the patient table and the visits table, and create other necessary tables to avoid self-join.
  • D . Partition the table into smaller tables, with one for each clinic. Run queries against the smaller table pairs, and use unions for consolidated reports.


Correct Answer: C
Question #4

You create an important report for your large team in Google Data Studio 360. The report uses Google BigQuery as its data source. You notice that visualizations are not showing data that is less than 1 hour old.

What should you do?

  • A . Disable caching by editing the report settings.
  • B . Disable caching in BigQuery by editing table details.
  • C . Refresh your browser tab showing the visualizations.
  • D . Clear your browser history for the past hour, then reload the tab showing the visualizations.


Correct Answer: A

Explanation:

Reference https://support.google.com/datastudio/answer/7020039?hl=en

Question #5

An external customer provides you with a daily dump of data from their database. The data flows into Google Cloud Storage (GCS) as comma-separated values (CSV) files. You want to analyze this data in Google BigQuery, but the data could have rows that are formatted incorrectly or corrupted.

How should you build this pipeline?

  • A . Use federated data sources, and check data in the SQL query.
  • B . Enable BigQuery monitoring in Google Stackdriver and create an alert.
  • C . Import the data into BigQuery using the gcloud CLI and set max_bad_records to 0.
  • D . Run a Google Cloud Dataflow batch pipeline to import the data into BigQuery, and push errors to another dead-letter table for analysis.


Correct Answer: D
Question #6

Your weather app queries a database every 15 minutes to get the current temperature. The frontend is powered by Google App Engine and serves millions of users.

How should you design the frontend to respond to a database failure?

  • A . Issue a command to restart the database servers.
  • B . Retry the query with exponential backoff, up to a cap of 15 minutes.
  • C . Retry the query every second until it comes back online to minimize staleness of data.
  • D . Reduce the query frequency to once every hour until the database comes back online.


Correct Answer: B

Explanation:

https://cloud.google.com/sql/docs/mysql/manage-connections#backoff
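Answer B's retry schedule can be sketched as below; `do_query` and the five-attempt cap are hypothetical stand-ins for the app's real database call and retry policy:

```python
import time

def backoff_delay(attempt, base=1.0, cap=900.0):
    """Seconds to wait before retry `attempt` (0-based): the delay
    doubles each attempt and is capped at 15 minutes (900 s)."""
    return min(cap, base * (2 ** attempt))

def query_with_retry(do_query, max_attempts=5, sleep=time.sleep):
    """Retry a flaky call, backing off exponentially between attempts."""
    for attempt in range(max_attempts):
        try:
            return do_query()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise
            sleep(backoff_delay(attempt))

# 1 s, 2 s, 4 s, 8 s, ... then flat at the 900 s cap:
assert [backoff_delay(n) for n in range(4)] == [1.0, 2.0, 4.0, 8.0]
assert backoff_delay(20) == 900.0
```

Production retry loops usually also add random jitter so that many clients do not retry in lockstep.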

Question #7

You are creating a model to predict housing prices. Due to budget constraints, you must run it on a single resource-constrained virtual machine.

Which learning algorithm should you use?

  • A . Linear regression
  • B . Logistic classification
  • C . Recurrent neural network
  • D . Feedforward neural network


Correct Answer: A
Question #8

You are building a new real-time data warehouse for your company and will use Google BigQuery streaming inserts. There is no guarantee that data will be sent only once, but you do have a unique ID for each row of data and an event timestamp. You want to ensure that duplicates are not included while interactively querying data.

Which query type should you use?

  • A . Include ORDER BY DESC on timestamp column and LIMIT to 1.
  • B . Use GROUP BY on the unique ID column and timestamp column and SUM on the values.
  • C . Use the LAG window function with PARTITION by unique ID along with WHERE LAG IS NOT NULL.
  • D . Use the ROW_NUMBER window function with PARTITION by unique ID along with WHERE row equals 1.


Correct Answer: D

Explanation:

https://cloud.google.com/bigquery/docs/reference/standard-sql/analytic-function-concepts
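The ROW_NUMBER pattern of answer D looks like the sketch below (the table path and the column names `unique_id` and `event_ts` are illustrative, not from the question), followed by the same keep-the-newest-row logic in plain Python:

```python
# Hypothetical table/column names; the pattern is what matters.
DEDUP_QUERY = """
SELECT * EXCEPT(row_num) FROM (
  SELECT *,
         ROW_NUMBER() OVER (PARTITION BY unique_id
                            ORDER BY event_ts DESC) AS row_num
  FROM `myproject.mydataset.events`)
WHERE row_num = 1
"""

def dedupe(rows):
    """Keep only the newest row per unique ID, mirroring the query."""
    latest = {}
    for row in rows:
        uid = row["unique_id"]
        if uid not in latest or row["event_ts"] > latest[uid]["event_ts"]:
            latest[uid] = row
    return sorted(latest.values(), key=lambda r: r["unique_id"])

rows = [
    {"unique_id": "a", "event_ts": 1, "value": 10},
    {"unique_id": "a", "event_ts": 2, "value": 11},  # re-sent, newer
    {"unique_id": "b", "event_ts": 1, "value": 20},
]
assert [r["value"] for r in dedupe(rows)] == [11, 20]
```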

Question #9

Your company is using wildcard tables to query data across multiple tables with similar names.

The SQL statement is currently failing with the following error:

# Syntax error: Expected end of statement but got "-" at [4:11]

SELECT age

FROM bigquery-public-data.noaa_gsod.gsod

WHERE age != 99

AND _TABLE_SUFFIX = '1929'

ORDER BY age DESC

Which table name will make the SQL statement work correctly?

  • A . `bigquery-public-data.noaa_gsod.gsod`
  • B . bigquery-public-data.noaa_gsod.gsod*
  • C . `bigquery-public-data.noaa_gsod.gsod`*
  • D . `bigquery-public-data.noaa_gsod.gsod*`


Correct Answer: D
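For reference, a corrected form of the statement (only the backticks around the hyphenated wildcard table name and the space in `AND _TABLE_SUFFIX` change):

```python
# Standard SQL requires backticks around the table path because the
# project ID contains hyphens; _TABLE_SUFFIX then filters the matched
# wildcard tables down to the 1929 table.
QUERY = """
SELECT age
FROM `bigquery-public-data.noaa_gsod.gsod*`
WHERE age != 99
  AND _TABLE_SUFFIX = '1929'
ORDER BY age DESC
"""
assert "`bigquery-public-data.noaa_gsod.gsod*`" in QUERY
```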
Question #10

Your company is in a highly regulated industry. One of your requirements is to ensure individual users have access only to the minimum amount of information required to do their jobs. You want to enforce this requirement with Google BigQuery.

Which three approaches can you take? (Choose three.)

  • A . Disable writes to certain tables.
  • B . Restrict access to tables by role.
  • C . Ensure that the data is encrypted at all times.
  • D . Restrict BigQuery API access to approved users.
  • E . Segregate data across multiple tables or databases.
  • F . Use Google Stackdriver Audit Logging to determine policy violations.


Correct Answer: B,D,F

Question #11

You are designing a basket abandonment system for an ecommerce company.

The system will send a message to a user based on these rules:

– No interaction by the user on the site for 1 hour

– Has added more than $30 worth of products to the basket

– Has not completed a transaction

You use Google Cloud Dataflow to process the data and decide if a message should be sent.

How should you design the pipeline?

  • A . Use a fixed-time window with a duration of 60 minutes.
  • B . Use a sliding time window with a duration of 60 minutes.
  • C . Use a session window with a gap time duration of 60 minutes.
  • D . Use a global window with a time based trigger with a delay of 60 minutes.


Correct Answer: C
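Session windows (answer C) are keyed per user and close only after a gap of inactivity, which matches the "no interaction for 1 hour" rule; fixed and sliding windows would cut a basket's activity at arbitrary boundaries. The grouping rule can be sketched in plain Python:

```python
from datetime import datetime, timedelta

def sessionize(event_times, gap=timedelta(minutes=60)):
    """Group sorted timestamps into sessions: a new session starts
    whenever the pause since the previous event exceeds `gap`."""
    sessions = []
    for t in sorted(event_times):
        if sessions and t - sessions[-1][-1] <= gap:
            sessions[-1].append(t)
        else:
            sessions.append([t])
    return sessions

t0 = datetime(2024, 1, 1, 12, 0)
events = [t0, t0 + timedelta(minutes=10), t0 + timedelta(minutes=90)]
# 12:10 continues the first session; 13:30 is 80 minutes after the
# previous event, so it opens a second session.
assert [len(s) for s in sessionize(events)] == [2, 1]
```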
Question #12

Your company handles data processing for a number of different clients. Each client prefers to use their own suite of analytics tools, with some allowing direct query access via Google BigQuery. You need to secure the data so that clients cannot see each other’s data. You want to ensure appropriate access to the data.

Which three steps should you take? (Choose three.)

  • A . Load data into different partitions.
  • B . Load data into a different dataset for each client.
  • C . Put each client’s BigQuery dataset into a different table.
  • D . Restrict a client’s dataset to approved users.
  • E . Only allow a service account to access the datasets.
  • F . Use the appropriate identity and access management (IAM) roles for each client’s users.


Correct Answer: B,D,F
Question #13

You want to process payment transactions in a point-of-sale application that will run on Google Cloud Platform. Your user base could grow exponentially, but you do not want to manage infrastructure scaling.

Which Google database service should you use?

  • A . Cloud SQL
  • B . BigQuery
  • C . Cloud Bigtable
  • D . Cloud Datastore


Correct Answer: D
Question #14

You want to use a database of information about tissue samples to classify future tissue samples as either normal or mutated. You are evaluating an unsupervised anomaly detection method for classifying the tissue samples.

Which two characteristics support this method? (Choose two.)

  • A . There are very few occurrences of mutations relative to normal samples.
  • B . There are roughly equal occurrences of both normal and mutated samples in the database.
  • C . You expect future mutations to have different features from the mutated samples in the database.
  • D . You expect future mutations to have similar features to the mutated samples in the database.
  • E . You already have labels for which samples are mutated and which are normal in the database.


Correct Answer: A,D

Explanation:

Unsupervised anomaly detection techniques detect anomalies in an unlabeled test data set under the assumption that the majority of the instances in the data set are normal by looking for instances that seem to fit least to the remainder of the data set. https://en.wikipedia.org/wiki/Anomaly_detection
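A toy illustration of that assumption: with mostly-normal data, even a simple z-score rule isolates the points that fit the data least. The 3-sigma threshold and the numbers are arbitrary choices for the sketch:

```python
from statistics import mean, stdev

def zscore_anomalies(values, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the
    mean -- a minimal unsupervised detector that needs no labels and
    works only when anomalies are rare."""
    m, s = mean(values), stdev(values)
    return [v for v in values if abs(v - m) / s > threshold]

readings = [10.0] * 50 + [10.5, 9.5] * 25 + [42.0]
assert zscore_anomalies(readings) == [42.0]
```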

Question #15

You need to store and analyze social media postings in Google BigQuery at a rate of 10,000 messages per minute in near real-time. Initially, you design the application to use streaming inserts for individual postings. Your application also performs data aggregations right after the streaming inserts. You discover that the queries after streaming inserts do not exhibit strong consistency, and reports from the queries might miss in-flight data.

How can you adjust your application design?

  • A . Re-write the application to load accumulated data every 2 minutes.
  • B . Convert the streaming insert code to batch load for individual messages.
  • C . Load the original message to Google Cloud SQL, and export the table every hour to BigQuery via streaming inserts.
  • D . Estimate the average latency for data availability after streaming inserts, and always run queries after waiting twice as long.


Correct Answer: D

Explanation:

Streamed data first lands in BigQuery's streaming buffer before it is written to managed storage. Queries that run while rows are still in the buffer can miss that in-flight data, so waiting until the data has been written to storage avoids the inconsistency.

Question #16

Your startup has never implemented a formal security policy. Currently, everyone in the company has access to the datasets stored in Google BigQuery. Teams have freedom to use the service as they see fit, and they have not documented their use cases. You have been asked to secure the data warehouse. You need to discover what everyone is doing.

What should you do first?

  • A . Use Google Stackdriver Audit Logs to review data access.
  • B . Get the identity and access management (IAM) policy of each table.
  • C . Use Stackdriver Monitoring to see the usage of BigQuery query slots.
  • D . Use the Google Cloud Billing API to see what account the warehouse is being billed to.


Correct Answer: A
Question #17

Your company is migrating their 30-node Apache Hadoop cluster to the cloud. They want to re-use Hadoop jobs they have already created and minimize the management of the cluster as much as possible. They also want to be able to persist data beyond the life of the cluster.

What should you do?

  • A . Create a Google Cloud Dataflow job to process the data.
  • B . Create a Google Cloud Dataproc cluster that uses persistent disks for HDFS.
  • C . Create a Hadoop cluster on Google Compute Engine that uses persistent disks.
  • D . Create a Cloud Dataproc cluster that uses the Google Cloud Storage connector.
  • E . Create a Hadoop cluster on Google Compute Engine that uses Local SSD disks.


Correct Answer: D
Question #18

Business owners at your company have given you a database of bank transactions. Each row contains the user ID, transaction type, transaction location, and transaction amount. They ask you to investigate what type of machine learning can be applied to the data.

Which three machine learning applications can you use? (Choose three.)

  • A . Supervised learning to determine which transactions are most likely to be fraudulent.
  • B . Unsupervised learning to determine which transactions are most likely to be fraudulent.
  • C . Clustering to divide the transactions into N categories based on feature similarity.
  • D . Supervised learning to predict the location of a transaction.
  • E . Reinforcement learning to predict the location of a transaction.
  • F . Unsupervised learning to predict the location of a transaction.


Correct Answer: B,C,D
Question #19

Your company’s on-premises Apache Hadoop servers are approaching end-of-life, and IT has decided to migrate the cluster to Google Cloud Dataproc. A like-for-like migration of the cluster would require 50 TB of Google Persistent Disk per node. The CIO is concerned about the cost of using that much block storage. You want to minimize the storage cost of the migration.

What should you do?

  • A . Put the data into Google Cloud Storage.
  • B . Use preemptible virtual machines (VMs) for the Cloud Dataproc cluster.
  • C . Tune the Cloud Dataproc cluster so that there is just enough disk for all data.
  • D . Migrate some of the cold data into Google Cloud Storage, and keep only the hot data in Persistent Disk.


Correct Answer: A
Question #20

You work for a car manufacturer and have set up a data pipeline using Google Cloud Pub/Sub to capture anomalous sensor events. You are using a push subscription in Cloud Pub/Sub that calls a custom HTTPS endpoint that you have created to take action on these anomalous events as they occur. Your custom HTTPS endpoint keeps getting an inordinate amount of duplicate messages.

What is the most likely cause of these duplicate messages?

  • A . The message body for the sensor event is too large.
  • B . Your custom endpoint has an out-of-date SSL certificate.
  • C . The Cloud Pub/Sub topic has too many messages published to it.
  • D . Your custom endpoint is not acknowledging messages within the acknowledgement deadline.


Correct Answer: D

Question #21

Your company uses a proprietary system to send inventory data every 6 hours to a data ingestion service in the cloud. Transmitted data includes a payload of several fields and the timestamp of the transmission. If there are any concerns about a transmission, the system re-transmits the data.

How should you deduplicate the data most efficiently?

  • A . Assign global unique identifiers (GUID) to each data entry.
  • B . Compute the hash value of each data entry, and compare it with all historical data.
  • C . Store each data entry as the primary key in a separate database and apply an index.
  • D . Maintain a database table to store the hash value and other metadata for each data entry.


Correct Answer: D
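Answer D can be sketched as follows, with an in-memory set standing in for the database table of hash values; hashing the payload while excluding the transmission timestamp lets a re-transmission of the same data match its first copy:

```python
import hashlib
import json

def make_deduper():
    """Return an `accept(payload)` function that admits each distinct
    payload once, using a SHA-256 content hash as the lookup key."""
    seen = set()
    def accept(payload):
        digest = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest()
        if digest in seen:
            return False          # re-transmission: drop it
        seen.add(digest)
        return True
    return accept

accept = make_deduper()
assert accept({"sku": "A1", "qty": 3}) is True
assert accept({"sku": "A1", "qty": 3}) is False  # duplicate dropped
assert accept({"sku": "A1", "qty": 4}) is True
```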
Question #22

Your company has hired a new data scientist who wants to perform complicated analyses across very large datasets stored in Google Cloud Storage and in a Cassandra cluster on Google Compute Engine. The scientist primarily wants to create labelled data sets for machine learning projects, along with some visualization tasks. She reports that her laptop is not powerful enough to perform her tasks and it is slowing her down. You want to help her perform her tasks.

What should you do?

  • A . Run a local version of Jupyter on the laptop.
  • B . Grant the user access to Google Cloud Shell.
  • C . Host a visualization tool on a VM on Google Compute Engine.
  • D . Deploy Google Cloud Datalab to a virtual machine (VM) on Google Compute Engine.


Correct Answer: D
Question #23

You are deploying 10,000 new Internet of Things devices to collect temperature data in your warehouses globally. You need to process, store and analyze these very large datasets in real time.

What should you do?

  • A . Send the data to Google Cloud Datastore and then export to BigQuery.
  • B . Send the data to Google Cloud Pub/Sub, stream Cloud Pub/Sub to Google Cloud Dataflow, and store the data in Google BigQuery.
  • C . Send the data to Cloud Storage and then spin up an Apache Hadoop cluster as needed in Google Cloud Dataproc whenever analysis is required.
  • D . Export logs in batch to Google Cloud Storage and then spin up a Google Cloud SQL instance, import the data from Cloud Storage, and run an analysis as needed.


Correct Answer: B
Question #24

You have spent a few days loading data from comma-separated values (CSV) files into the Google BigQuery table CLICK_STREAM. The column DT stores the epoch time of click events. For convenience, you chose a simple schema where every field is treated as the STRING type. Now, you want to compute web session durations of users who visit your site, and you want to change the column's data type to TIMESTAMP. You want to minimize the migration effort without making future queries computationally expensive.

What should you do?

  • A . Delete the table CLICK_STREAM, and then re-create it such that the column DT is of the TIMESTAMP type. Reload the data.
  • B . Add a column TS of the TIMESTAMP type to the table CLICK_STREAM, and populate the numeric values from the column DT for each row. Reference the column TS instead of the column DT from now on.
  • C . Create a view CLICK_STREAM_V, where strings from the column DT are cast into TIMESTAMP values. Reference the view CLICK_STREAM_V instead of the table CLICK_STREAM from now on.
  • D . Add two columns to the table CLICK_STREAM: TS of the TIMESTAMP type and IS_NEW of the BOOLEAN type. Reload all data in append mode. For each appended row, set the value of IS_NEW to true. For future queries, reference the column TS instead of the column DT, with the WHERE clause ensuring that the value of IS_NEW must be true.
  • E . Construct a query to return every row of the table CLICK_STREAM, while using the built-in function to cast strings from the column DT into TIMESTAMP values. Run the query into a destination table NEW_CLICK_STREAM, in which the column TS is the TIMESTAMP type. Reference the table NEW_CLICK_STREAM instead of the table CLICK_STREAM from now on. In the future, new data is loaded into the table NEW_CLICK_STREAM.


Correct Answer: E
Question #25

You want to use Google Stackdriver Logging to monitor Google BigQuery usage. You need an instant notification to be sent to your monitoring tool when new data is appended to a certain table using an insert job, but you do not want to receive notifications for other tables.

What should you do?

  • A . Make a call to the Stackdriver API to list all logs, and apply an advanced filter.
  • B . In the Stackdriver logging admin interface, enable a log sink export to BigQuery.
  • C . In the Stackdriver logging admin interface, enable a log sink export to Google Cloud Pub/Sub, and subscribe to the topic from your monitoring tool.
  • D . Using the Stackdriver API, create a project sink with advanced log filter to export to Pub/Sub, and subscribe to the topic from your monitoring tool.


Correct Answer: D
Question #26

You are working on a sensitive project involving private user data. You have set up a project on Google Cloud Platform to house your work internally. An external consultant is going to assist with coding a complex transformation in a Google Cloud Dataflow pipeline for your project.

How should you maintain users’ privacy?

  • A . Grant the consultant the Viewer role on the project.
  • B . Grant the consultant the Cloud Dataflow Developer role on the project.
  • C . Create a service account and allow the consultant to log on with it.
  • D . Create an anonymized sample of the data for the consultant to work with in a different project.


Correct Answer: D
Question #27

You are building a model to predict whether or not it will rain on a given day. You have thousands of input features and want to see if you can improve training speed by removing some features while having a minimum effect on model accuracy.

What can you do?

  • A . Eliminate features that are highly correlated to the output labels.
  • B . Combine highly co-dependent features into one representative feature.
  • C . Instead of feeding in each feature individually, average their values in batches of 3.
  • D . Remove the features that have null values for more than 50% of the training records.


Correct Answer: B
Question #28

Your company is performing data preprocessing for a learning algorithm in Google Cloud Dataflow. Numerous data logs are being generated during this step, and the team wants to analyze them. Due to the dynamic nature of the campaign, the data is growing exponentially every hour.

The data scientists have written the following code to read the data for new key features in the logs:

BigQueryIO.Read
    .named("ReadLogData")
    .from("clouddataflow-readonly:samples.log_data")

You want to improve the performance of this data read.

What should you do?

  • A . Specify the TableReference object in the code.
  • B . Use .fromQuery operation to read specific fields from the table.
  • C . Use of both the Google BigQuery TableSchema and TableFieldSchema classes.
  • D . Call a transform that returns TableRow objects, where each element in the PCollection represents a single row in the table.


Correct Answer: B
Question #29

Your company is streaming real-time sensor data from their factory floor into Bigtable and they have noticed extremely poor performance.

How should the row key be redesigned to improve Bigtable performance on queries that populate real-time dashboards?

  • A . Use a row key of the form <timestamp>.
  • B . Use a row key of the form <sensorid>.
  • C . Use a row key of the form <timestamp>#<sensorid>.
  • D . Use a row key of the form <sensorid>#<timestamp>.


Correct Answer: D
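Whichever option is chosen, Bigtable's general guidance is to avoid row keys that begin with a monotonically increasing timestamp, since sequential writes then hotspot a single node. A sensor-first key spreads writes across sensors while keeping each sensor's readings contiguous and time-ordered for dashboard scans (the exact format below is an illustrative sketch):

```python
from datetime import datetime, timezone

def row_key(sensor_id, ts):
    """Build a row key of the form <sensorid>#<timestamp>. Keys sort
    lexicographically, so zero-padded UTC timestamps keep each sensor's
    rows in chronological order."""
    return f"{sensor_id}#{ts.strftime('%Y%m%d%H%M%S')}"

t = datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
assert row_key("sensor042", t) == "sensor042#20240101120000"
```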
Question #30

Your company’s customer and order databases are often under heavy load. This makes performing analytics against them difficult without harming operations. The databases are in a MySQL cluster, with nightly backups taken using MySQL dump. You want to perform analytics with minimal impact on operations.

What should you do?

  • A . Add a node to the MySQL cluster and build an OLAP cube there.
  • B . Use an ETL tool to load the data from MySQL into Google BigQuery.
  • C . Connect an on-premises Apache Hadoop cluster to MySQL and perform ETL.
  • D . Mount the backups to Google Cloud SQL, and then process the data using Google Cloud Dataproc.


Correct Answer: C

Question #31

You have a Google Cloud Dataflow streaming pipeline running with a Google Cloud Pub/Sub subscription as the source. You need to make an update to the code that will make the new Cloud Dataflow pipeline incompatible with the current version. You do not want to lose any data when making this update.

What should you do?

  • A . Update the current pipeline and use the drain flag.
  • B . Update the current pipeline and provide the transform mapping JSON object.
  • C . Create a new pipeline that has the same Cloud Pub/Sub subscription and cancel the old pipeline.
  • D . Create a new pipeline that has a new Cloud Pub/Sub subscription and cancel the old pipeline.


Correct Answer: D
Question #32

Your company is running their first dynamic campaign, serving different offers by analyzing real-time data during the holiday season. The data scientists are collecting terabytes of data that rapidly grows every hour during their 30-day campaign. They are using Google Cloud Dataflow to preprocess the data and collect the feature (signals) data that is needed for the machine learning model in Google Cloud Bigtable. The team is observing suboptimal performance with reads and writes of their initial load of 10 TB of data. They want to improve this performance while minimizing cost.

What should they do?

  • A . Redefine the schema by evenly distributing reads and writes across the row space of the table.
  • B . The performance issue should be resolved over time as the size of the Bigtable cluster is increased.
  • C . Redesign the schema to use a single row key to identify values that need to be updated frequently in the cluster.
  • D . Redesign the schema to use row keys based on numeric IDs that increase sequentially per user viewing the offers.


Correct Answer: A
Question #33

Your software uses a simple JSON format for all messages. These messages are published to Google Cloud Pub/Sub, then processed with Google Cloud Dataflow to create a real-time dashboard for the CFO. During testing, you notice that some messages are missing in the dashboard. You check the logs, and all messages are being published to Cloud Pub/Sub successfully.

What should you do next?

  • A . Check the dashboard application to see if it is not displaying correctly.
  • B . Run a fixed dataset through the Cloud Dataflow pipeline and analyze the output.
  • C . Use Google Stackdriver Monitoring on Cloud Pub/Sub to find the missing messages.
  • D . Switch Cloud Dataflow to pull messages from Cloud Pub/Sub instead of Cloud Pub/Sub pushing messages to Cloud Dataflow.


Correct Answer: B
Question #34

Topic 2, Flowlogistic Case Study

Company Overview

Flowlogistic is a leading logistics and supply chain provider. They help businesses throughout the world manage their resources and transport them to their final destination. The company has grown rapidly, expanding their offerings to include rail, truck, aircraft, and oceanic shipping.

Company Background

The company started as a regional trucking company, and then expanded into other logistics market. Because they have not updated their infrastructure, managing and tracking orders and shipments has become a bottleneck. To improve operations, Flowlogistic developed proprietary technology for tracking shipments in real time at the parcel level. However, they are unable to deploy it because their technology stack, based on Apache Kafka, cannot support the processing volume. In addition, Flowlogistic wants to further analyze their orders and shipments to determine how best to deploy their resources.

Solution Concept

Flowlogistic wants to implement two concepts using the cloud:

Use their proprietary technology in a real-time inventory-tracking system that indicates the location of their loads

Perform analytics on all their orders and shipment logs, which contain both structured and unstructured data, to determine how best to deploy resources and which markets to expand into. They also want to use predictive analytics to learn earlier when a shipment will be delayed.

Existing Technical Environment

The Flowlogistic architecture resides in a single data center:

Databases:

– 8 physical servers in 2 clusters: SQL Server – user data, inventory, static data

– 3 physical servers: Cassandra – metadata, tracking messages

– 10 Kafka servers – tracking message aggregation and batch insert

Application servers – customer front end, middleware for order/customs:

– 60 virtual machines across 20 physical servers

– Tomcat – Java services

– Nginx – static content

– Batch servers

Storage appliances:

– iSCSI for virtual machine (VM) hosts

– Fibre Channel storage area network (FC SAN) – SQL Server storage

– Network-attached storage (NAS) – image storage, logs, backups

Apache Hadoop/Spark servers:

– Core Data Lake

– Data analysis workloads

20 miscellaneous servers:

– Jenkins, monitoring, bastion hosts,

Business Requirements

Build a reliable and reproducible environment with scaled parity of production.

Aggregate data in a centralized Data Lake for analysis

Use historical data to perform predictive analytics on future shipments

Accurately track every shipment worldwide using proprietary technology

Improve business agility and speed of innovation through rapid provisioning of new resources

Analyze and optimize architecture for performance in the cloud

Migrate fully to the cloud if all other requirements are met

Technical Requirements

Handle both streaming and batch data

Migrate existing Hadoop workloads

Ensure architecture is scalable and elastic to meet the changing demands of the company.

Use managed services whenever possible

Encrypt data in flight and at rest

Connect a VPN between the production data center and cloud environment

CEO Statement

We have grown so quickly that our inability to upgrade our infrastructure is really hampering further growth and efficiency. We are efficient at moving shipments around the world, but we are inefficient at moving data around.

We need to organize our information so we can more easily understand where our customers are and what they are shipping.

CTO Statement

IT has never been a priority for us, so as our data has grown, we have not invested enough in our technology. I have a good staff to manage IT, but they are so busy managing our infrastructure that I cannot get them to do the things that really matter, such as organizing our data, building the analytics, and figuring out how to implement the CFO’s tracking technology.

CFO Statement

Part of our competitive advantage is that we penalize ourselves for late shipments and deliveries. Knowing where our shipments are at all times has a direct correlation to our bottom line and profitability. Additionally, I don’t want to commit capital to building out a server environment.

Flowlogistic wants to use Google BigQuery as their primary analysis system, but they still have Apache Hadoop and Spark workloads that they cannot move to BigQuery. Flowlogistic does not know how to store the data that is common to both workloads.

What should they do?

  • A . Store the common data in BigQuery as partitioned tables.
  • B . Store the common data in BigQuery and expose authorized views.
  • C . Store the common data encoded as Avro in Google Cloud Storage.
  • D . Store the common data in the HDFS storage for a Google Cloud Dataproc cluster.


Correct Answer: C
Question #35

Flowlogistic’s management has determined that the current Apache Kafka servers cannot handle the data volume for their real-time inventory tracking system. You need to build a new system on Google Cloud Platform (GCP) that will feed the proprietary tracking software. The system must be able to ingest data from a variety of global sources, process and query in real-time, and store the data reliably.

Which combination of GCP products should you choose?

  • A . Cloud Pub/Sub, Cloud Dataflow, and Cloud Storage
  • B . Cloud Pub/Sub, Cloud Dataflow, and Local SSD
  • C . Cloud Pub/Sub, Cloud SQL, and Cloud Storage
  • D . Cloud Load Balancing, Cloud Dataflow, and Cloud Storage

Correct Answer: A
Question #36

Flowlogistic’s CEO wants to gain rapid insight into their customer base so his sales team can be better informed in the field. This team is not very technical, so they’ve purchased a visualization tool to simplify the creation of BigQuery reports. However, they’ve been overwhelmed by all the data in the table, and are spending a lot of money on queries trying to find the data they need. You want to solve their problem in the most cost-effective way.

What should you do?

  • A . Export the data into a Google Sheet for visualization.
  • B . Create an additional table with only the necessary columns.
  • C . Create a view on the table to present to the visualization tool.
  • D . Create identity and access management (IAM) roles on the appropriate columns, so only they appear in a query.

Correct Answer: C
Question #37

Flowlogistic is rolling out their real-time inventory tracking system. The tracking devices will all send package-tracking messages, which will now go to a single Google Cloud Pub/Sub topic instead of the Apache Kafka cluster. A subscriber application will then process the messages for real-time reporting and store them in Google BigQuery for historical analysis. You want to ensure the package data can be analyzed over time.

Which approach should you take?

  • A . Attach the timestamp on each message in the Cloud Pub/Sub subscriber application as they are received.
  • B . Attach the timestamp and Package ID on the outbound message from each publisher device as they are sent to Cloud Pub/Sub.
  • C . Use the NOW() function in BigQuery to record the event’s time.
  • D . Use the automatically generated timestamp from Cloud Pub/Sub to order the data.

Correct Answer: B
Question #38

Topic 3, MJTelco Case Study

Company Overview

MJTelco is a startup that plans to build networks in rapidly growing, underserved markets around the world. The company has patents for innovative optical communications hardware. Based on these patents, they can create many reliable, high-speed backbone links with inexpensive hardware.

Company Background

Founded by experienced telecom executives, MJTelco uses technologies originally developed to overcome communications challenges in space. Fundamental to their operation, they need to create a distributed data infrastructure that drives real-time analysis and incorporates machine learning to continuously optimize their topologies. Because their hardware is inexpensive, they plan to overdeploy the network allowing them to account for the impact of dynamic regional politics on location availability and cost.

Their management and operations teams are situated all around the globe, creating a many-to-many relationship between data consumers and providers in their system. After careful consideration, they decided public cloud is the perfect environment to support their needs.

Solution Concept

MJTelco is running a successful proof-of-concept (PoC) project in its labs. They have two primary needs:

Scale and harden their PoC to support significantly more data flows generated when they ramp to more than 50,000 installations.

Refine their machine-learning cycles to verify and improve the dynamic models they use to control topology definition.

MJTelco will also use three separate operating environments (development/test, staging, and production) to meet the needs of running experiments, deploying new features, and serving production customers.

Business Requirements

Scale up their production environment with minimal cost, instantiating resources when and where needed in an unpredictable, distributed telecom user community.

Ensure security of their proprietary data to protect their leading-edge machine learning and analysis.

Provide reliable and timely access to data for analysis from distributed research workers.

Maintain isolated environments that support rapid iteration of their machine-learning models without affecting their customers.

Technical Requirements

Ensure secure and efficient transport and storage of telemetry data

Rapidly scale instances to support between 10,000 and 100,000 data providers with multiple flows each.

Allow analysis and presentation against data tables tracking up to 2 years of data storing approximately 100m records/day

Support rapid iteration of monitoring infrastructure focused on awareness of data pipeline problems both in telemetry flows and in production learning cycles.

CEO Statement

Our business model relies on our patents, analytics and dynamic machine learning. Our inexpensive hardware is organized to be highly reliable, which gives us cost advantages. We need to quickly stabilize our large distributed data pipelines to meet our reliability and capacity commitments.

CTO Statement

Our public cloud services must operate as advertised. We need resources that scale and keep our data secure. We also need environments in which our data scientists can carefully study and quickly adapt our models. Because we rely on automation to process our data, we also need our development and test environments to work as we iterate.

CFO Statement

The project is too large for us to maintain the hardware and software required for the data and analysis. Also, we cannot afford to staff an operations team to monitor so many data feeds, so we will rely on automation and infrastructure. Google Cloud’s machine learning will allow our quantitative researchers to work on our high-value problems instead of problems with our data pipelines.

MJTelco’s Google Cloud Dataflow pipeline is now ready to start receiving data from the 50,000 installations. You want to allow Cloud Dataflow to scale its compute power up as required.

Which Cloud Dataflow pipeline configuration setting should you update?

  • A . The zone
  • B . The number of workers
  • C . The disk size per worker
  • D . The maximum number of workers

Correct Answer: D
Question #39

You need to compose visualizations for operations teams with the following requirements:

Which approach meets the requirements?

  • A . Load the data into Google Sheets, use formulas to calculate a metric, and use filters/sorting to show only suboptimal links in a table.
  • B . Load the data into Google BigQuery tables, write Google Apps Script that queries the data, calculates the metric, and shows only suboptimal rows in a table in Google Sheets.
  • C . Load the data into Google Cloud Datastore tables, write a Google App Engine Application that queries all rows, applies a function to derive the metric, and then renders results in a table using the Google charts and visualization API.
  • D . Load the data into Google BigQuery tables, write a Google Data Studio 360 report that connects to your data, calculates a metric, and then uses a filter expression to show only suboptimal rows in a table.

Correct Answer: D
Question #40

You create a new report for your large team in Google Data Studio 360. The report uses Google BigQuery as its data source. It is company policy to ensure employees can view only the data associated with their region, so you create and populate a table for each region. You need to enforce the regional access policy to the data.

Which two actions should you take? (Choose two.)

  • A . Ensure all the tables are included in global dataset.
  • B . Ensure each table is included in a dataset for a region.
  • C . Adjust the settings for each table to allow a related region-based security group view access.
  • D . Adjust the settings for each view to allow a related region-based security group view access.
  • E . Adjust the settings for each dataset to allow a related region-based security group view access.

Correct Answer: B,E

Question #41

MJTelco needs you to create a schema in Google Bigtable that will allow for the historical analysis of the last 2 years of records. Each record that comes in is sent every 15 minutes, and contains a unique identifier of the device and a data record. The most common query is for all the data for a given device for a given day.

Which schema should you use?

  • A . Rowkey: date#device_id; Column data: data_point
  • B . Rowkey: date; Column data: device_id, data_point
  • C . Rowkey: device_id; Column data: date, data_point
  • D . Rowkey: data_point; Column data: device_id, date
  • E . Rowkey: date#data_point; Column data: device_id

Correct Answer: A
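Option A's `date#device_id` schema works because Bigtable stores rows in lexicographic key order, so the most common query ("all data for a given device on a given day") becomes a single contiguous prefix scan. A minimal in-memory sketch of that idea (this is an illustration only, not the real Cloud Bigtable client API):

```python
# Toy sketch of Bigtable-style lexicographic rowkey scans (not the real
# Bigtable client API): rows are kept sorted by key, and a query for one
# device on one day becomes a single contiguous prefix scan.

def make_rowkey(date: str, device_id: str) -> str:
    # Option A's schema: date first, then device id, e.g. "20240101#dev1"
    return f"{date}#{device_id}"

def prefix_scan(rows: dict, prefix: str) -> list:
    # Bigtable stores rows sorted by key; a prefix scan reads one
    # contiguous range instead of the whole table.
    return [v for k, v in sorted(rows.items()) if k.startswith(prefix)]

rows = {
    make_rowkey("20240101", "dev1"): "point-a",
    make_rowkey("20240101", "dev2"): "point-b",
    make_rowkey("20240102", "dev1"): "point-c",
}

# All data for device dev1 on 2024-01-01:
day_for_device = prefix_scan(rows, "20240101#dev1")
```

Because the date and device id are concatenated into the key, every record for one device on one day lands in one contiguous key range.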
Question #42

MJTelco is building a custom interface to share data. They have these requirements:

They need to do aggregations over their petabyte-scale datasets.

They need to scan specific time range rows with a very fast response time (milliseconds).

Which combination of Google Cloud Platform products should you recommend?

  • A . Cloud Datastore and Cloud Bigtable
  • B . Cloud Bigtable and Cloud SQL
  • C . BigQuery and Cloud Bigtable
  • D . BigQuery and Cloud Storage

Correct Answer: C
Question #43

You need to compose visualization for operations teams with the following requirements:

Telemetry must include data from all 50,000 installations for the most recent 6 weeks (sampling once every minute)

The report must not be more than 3 hours delayed from live data.

The actionable report should only show suboptimal links.

Most suboptimal links should be sorted to the top.

Suboptimal links can be grouped and filtered by regional geography.

User response time to load the report must be <5 seconds.

You create a data source to store the last 6 weeks of data, and create visualizations that allow viewers to see multiple date ranges, distinct geographic regions, and unique installation types. You always show the latest data without any changes to your visualizations. You want to avoid creating and updating new visualizations each month.

What should you do?

  • A . Look through the current data and compose a series of charts and tables, one for each possible combination of criteria.
  • B . Look through the current data and compose a small set of generalized charts and tables bound to criteria filters that allow value selection.
  • C . Export the data to a spreadsheet, compose a series of charts and tables, one for each possible combination of criteria, and spread them across multiple tabs.
  • D . Load the data into relational database tables, write a Google App Engine application that queries all rows, summarizes the data across each criteria, and then renders results using the Google Charts and visualization API.

Correct Answer: B
Question #44

Given the record streams MJTelco is interested in ingesting per day, they are concerned about the cost of Google BigQuery increasing. MJTelco asks you to provide a design solution. They require a single large data table called tracking_table. Additionally, they want to minimize the cost of daily queries while performing fine-grained analysis of each day’s events. They also want to use streaming ingestion.

What should you do?

  • A . Create a table called tracking_table and include a DATE column.
  • B . Create a partitioned table called tracking_table and include a TIMESTAMP column.
  • C . Create sharded tables for each day following the pattern tracking_table_YYYYMMDD.
  • D . Create a table called tracking_table with a TIMESTAMP column to represent the day.

Correct Answer: B
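A partitioned table keeps daily-query cost low because a query filtered to one day only reads that day's partition, regardless of how large tracking_table grows. A toy model of that pruning behavior (illustrative Python, not the BigQuery API):

```python
from collections import defaultdict

# Toy model of BigQuery date partitioning (not the BigQuery API): rows are
# bucketed by the date of their TIMESTAMP value, and a query for one day
# touches only that bucket instead of scanning tracking_table end to end.

partitions = defaultdict(list)

def insert(event_ts: str, payload: str) -> None:
    day = event_ts[:10]          # "YYYY-MM-DD" prefix of the timestamp
    partitions[day].append(payload)

def query_day(day: str) -> list:
    # Partition pruning: only one bucket is read, whatever the table size.
    return partitions[day]

insert("2024-01-01T09:30:00", "event-1")
insert("2024-01-01T18:00:00", "event-2")
insert("2024-01-02T01:15:00", "event-3")
```

Streaming inserts land in the right partition automatically, which is why a single partitioned table beats per-day sharded tables here.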
Question #45

Topic 4, Main Questions Set B

Your company has recently grown rapidly and is now ingesting data at a significantly higher rate than before. You manage the daily batch MapReduce analytics jobs in Apache Hadoop. However, the recent increase in data has meant the batch jobs are falling behind. You were asked to recommend ways the development team could increase the responsiveness of the analytics without increasing costs.

What should you recommend they do?

  • A . Rewrite the job in Pig.
  • B . Rewrite the job in Apache Spark.
  • C . Increase the size of the Hadoop cluster.
  • D . Decrease the size of the Hadoop cluster but also rewrite the job in Hive.

Correct Answer: B
Question #46

You work for a large fast food restaurant chain with over 400,000 employees. You store employee information in Google BigQuery in a Users table consisting of a FirstName field and a LastName field. A member of IT is building an application and asks you to modify the schema and data in BigQuery so the application can query a FullName field consisting of the value of the FirstName field concatenated with a space, followed by the value of the LastName field for each employee.

How can you make that data available while minimizing cost?

  • A . Create a view in BigQuery that concatenates the FirstName and LastName field values to produce the FullName.
  • B . Add a new column called FullName to the Users table. Run an UPDATE statement that updates the FullName column for each user with the concatenation of the FirstName and LastName values.
  • C . Create a Google Cloud Dataflow job that queries BigQuery for the entire Users table, concatenates the FirstName value and LastName value for each user, and loads the proper values for FirstName, LastName, and FullName into a new table in BigQuery.
  • D . Use BigQuery to export the data for the table to a CSV file. Create a Google Cloud Dataproc job to process the CSV file and output a new CSV file containing the proper values for FirstName, LastName and FullName. Run a BigQuery load job to load the new CSV file into BigQuery.

Correct Answer: A
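Option A's view computes FullName at query time, so nothing extra is stored or copied. The following Python stand-in only illustrates what such a view would derive per row (the real fix is a one-line BigQuery view over the Users table):

```python
# Sketch of what a logical view over the Users table computes: FullName is
# derived at query time from FirstName and LastName, so no extra column or
# table copy needs to be stored. (Illustrative only, not BigQuery itself.)

users = [
    {"FirstName": "Ada", "LastName": "Lovelace"},
    {"FirstName": "Alan", "LastName": "Turing"},
]

def full_name_view(rows):
    # Equivalent of: SELECT FirstName, LastName,
    #   CONCAT(FirstName, ' ', LastName) AS FullName FROM Users
    return [dict(r, FullName=f"{r['FirstName']} {r['LastName']}") for r in rows]

names = [r["FullName"] for r in full_name_view(users)]
```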
Question #47

You are deploying a new storage system for your mobile application, which is a media streaming service. You decide the best fit is Google Cloud Datastore. You have entities with multiple properties, some of which can take on multiple values. For example, in the entity ‘Movie’ the property ‘actors’ and the property ‘tags’ have multiple values but the property ‘date released’ does not. A typical query would ask for all movies with actor=<actorname> ordered by date_released or all movies with tag=Comedy ordered by date_released.

How should you avoid a combinatorial explosion in the number of indexes?

  • A . Option A
  • B . Option B
  • C . Option C
  • D . Option D

Correct Answer: A
Question #48

You work for a manufacturing plant that batches application log files together into a single log file once a day at 2:00 AM. You have written a Google Cloud Dataflow job to process that log file. You need to make sure the log file is processed once per day as inexpensively as possible.

What should you do?

  • A . Change the processing job to use Google Cloud Dataproc instead.
  • B . Manually start the Cloud Dataflow job each morning when you get into the office.
  • C . Create a cron job with Google App Engine Cron Service to run the Cloud Dataflow job.
  • D . Configure the Cloud Dataflow job as a streaming job so that it processes the log data immediately.

Correct Answer: C
Question #49

You work for an economic consulting firm that helps companies identify economic trends as they happen. As part of your analysis, you use Google BigQuery to correlate customer data with the average prices of the 100 most common goods sold, including bread, gasoline, milk, and others. The average prices of these goods are updated every 30 minutes. You want to make sure this data stays up to date so you can combine it with other data in BigQuery as cheaply as possible.

What should you do?

  • A . Load the data every 30 minutes into a new partitioned table in BigQuery.
  • B . Store and update the data in a regional Google Cloud Storage bucket and create a federated data source in BigQuery
  • C . Store the data in Google Cloud Datastore. Use Google Cloud Dataflow to query BigQuery and combine the data programmatically with the data stored in Cloud Datastore
  • D . Store the data in a file in a regional Google Cloud Storage bucket. Use Cloud Dataflow to query BigQuery and combine the data programmatically with the data stored in Google Cloud Storage.

Correct Answer: B
Question #50

You are designing the database schema for a machine learning-based food ordering service that will predict what users want to eat.

Here is some of the information you need to store:

The user profile: What the user likes and doesn’t like to eat

The user account information: Name, address, preferred meal times

The order information: When orders are made, from where, to whom

The database will be used to store all the transactional data of the product.

You want to optimize the data schema.

Which Google Cloud Platform product should you use?

  • A . BigQuery
  • B . Cloud SQL
  • C . Cloud Bigtable
  • D . Cloud Datastore

Correct Answer: B

Question #51

Your company is loading comma-separated values (CSV) files into Google BigQuery. The data is fully imported successfully; however, the imported data is not matching byte-to-byte to the source file.

What is the most likely cause of this problem?

  • A . The CSV data loaded in BigQuery is not flagged as CSV.
  • B . The CSV data has invalid rows that were skipped on import.
  • C . The CSV data loaded in BigQuery is not using BigQuery’s default encoding.
  • D . The CSV data has not gone through an ETL phase before loading into BigQuery.

Correct Answer: C
Question #52

Your company produces 20,000 files every hour. Each data file is formatted as a comma separated values (CSV) file that is less than 4 KB. All files must be ingested on Google Cloud Platform before they can be processed. Your company site has a 200 ms latency to Google Cloud, and your Internet connection bandwidth is limited to 50 Mbps. You currently deploy a secure FTP (SFTP) server on a virtual machine in Google Compute Engine as the data ingestion point. A local SFTP client runs on a dedicated machine to transmit the CSV files as is. The goal is to make reports with data from the previous day available to the executives by 10:00 a.m. each day. This design is barely able to keep up with the current volume, even though the bandwidth utilization is rather low.

You are told that due to seasonality, your company expects the number of files to double for the next three months.

Which two actions should you take? (choose two.)

  • A . Introduce data compression for each file to increase the rate of file transfer.
  • B . Contact your internet service provider (ISP) to increase your maximum bandwidth to at least 100 Mbps.
  • C . Redesign the data ingestion process to use the gsutil tool to send the CSV files to a storage bucket in parallel.
  • D . Assemble 1,000 files into a tape archive (TAR) file. Transmit the TAR files instead, and disassemble the CSV files in the cloud upon receiving them.
  • E . Create an S3-compatible storage endpoint in your network, and use Google Cloud Storage Transfer Service to transfer on-premises data to the designated storage bucket.

Correct Answer: C,D
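The batching idea in option D works because one large transfer amortizes the 200 ms round-trip cost that dominates when sending thousands of sub-4 KB files individually. A small sketch of bundling and unbundling with Python's standard tarfile module (in-memory here for illustration; real file paths would differ):

```python
import io
import tarfile

# Sketch of option D's batching: many tiny CSVs are assembled into a single
# TAR stream so one transfer amortizes per-file round-trip latency, then the
# individual files are recovered on the receiving side.

def bundle(files: dict) -> bytes:
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, data in files.items():
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

def unbundle(blob: bytes) -> dict:
    out = {}
    with tarfile.open(fileobj=io.BytesIO(blob), mode="r") as tar:
        for member in tar.getmembers():
            out[member.name] = tar.extractfile(member).read()
    return out

# 1,000 tiny CSV files become one archive and back again, intact.
csvs = {f"batch_{i}.csv": b"id,value\n1,42\n" for i in range(1000)}
restored = unbundle(bundle(csvs))
```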
Question #53

You are choosing a NoSQL database to handle telemetry data submitted from millions of Internet-of-Things (IoT) devices. The volume of data is growing at 100 TB per year, and each data entry has about 100 attributes. The data processing pipeline does not require atomicity, consistency, isolation, and durability (ACID). However, high availability and low latency are required.

You need to analyze the data by querying against individual fields.

Which three databases meet your requirements? (Choose three.)

  • A . Redis
  • B . HBase
  • C . MySQL
  • D . MongoDB
  • E . Cassandra
  • F . HDFS with Hive

Correct Answer: B,D,E
Question #54

Topic 5, Practice Questions

Suppose you have a table that includes a nested column called "city" inside a column called "person", but when you try to submit the following query in BigQuery, it gives you an error:

SELECT person FROM `project1.example.table1` WHERE city = "London"

How would you correct the error?

  • A . Add ", UNNEST(person)" before the WHERE clause.
  • B . Change "person" to "person.city".
  • C . Change "person" to "city.person".
  • D . Add ", UNNEST(city)" before the WHERE clause.

Correct Answer: A

Explanation:

To access the person.city column, you need to "UNNEST(person)" and JOIN it to table1 using a comma.

Reference: https://cloud.google.com/bigquery/docs/reference/standard-sql/migrating-from-legacy-sql#nested_repeated_results
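Conceptually, `, UNNEST(person)` joins each element of the repeated person field back to its parent row, which makes fields like city filterable. A Python stand-in for that flattening step (an illustration of the semantics, not BigQuery itself):

```python
# Toy illustration of what ", UNNEST(person)" does: each element of the
# repeated person field is joined back to its parent row, so fields such as
# city become filterable. (Python stand-in, not BigQuery itself.)

table1 = [
    {"id": 1, "person": [{"name": "Amy", "city": "London"},
                         {"name": "Bob", "city": "Paris"}]},
    {"id": 2, "person": [{"name": "Cat", "city": "London"}]},
]

def unnest_person(rows):
    # FROM table1, UNNEST(person): one output row per repeated element.
    return [dict(row_id=row["id"], **p) for row in rows for p in row["person"]]

# WHERE city = "London" now works on the flattened rows:
londoners = [p["name"] for p in unnest_person(table1) if p["city"] == "London"]
```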

Question #55

What are two of the benefits of using denormalized data structures in BigQuery?

  • A . Reduces the amount of data processed, reduces the amount of storage required
  • B . Increases query speed, makes queries simpler
  • C . Reduces the amount of storage required, increases query speed
  • D . Reduces the amount of data processed, increases query speed

Correct Answer: B

Explanation:

Denormalization increases query speed for tables with billions of rows because BigQuery’s performance degrades when doing JOINs on large tables, but with a denormalized data structure, you don’t have to use JOINs, since all of the data has been combined into one table.

Denormalization also makes queries simpler because you do not have to use JOIN clauses.

Denormalization increases the amount of data processed and the amount of storage required because it creates redundant data.

Reference: https://cloud.google.com/solutions/bigquery-data-warehouse#denormalizing_data

Question #56

Which of these statements about exporting data from BigQuery is false?

  • A . To export more than 1 GB of data, you need to put a wildcard in the destination filename.
  • B . The only supported export destination is Google Cloud Storage.
  • C . Data can only be exported in JSON or Avro format.
  • D . The only compression option available is GZIP.

Correct Answer: C

Explanation:

Data can be exported in CSV, JSON, or Avro format. If you are exporting nested or repeated data, then CSV format is not supported.

Reference: https://cloud.google.com/bigquery/docs/exporting-data

Question #57

What are all of the BigQuery operations that Google charges for?

  • A . Storage, queries, and streaming inserts
  • B . Storage, queries, and loading data from a file
  • C . Storage, queries, and exporting data
  • D . Queries and streaming inserts

Correct Answer: A

Explanation:

Google charges for storage, queries, and streaming inserts. Loading data from a file and exporting data are free operations.

Reference: https://cloud.google.com/bigquery/pricing

Question #58

Which of the following is not possible using primitive roles?

  • A . Give a user viewer access to BigQuery and owner access to Google Compute Engine instances.
  • B . Give UserA owner access and UserB editor access for all datasets in a project.
  • C . Give a user access to view all datasets in a project, but not run queries on them.
  • D . Give GroupA owner access and GroupB editor access for all datasets in a project.

Correct Answer: C

Explanation:

Primitive roles can be used to give owner, editor, or viewer access to a user or group, but they can’t be used to separate data access permissions from job-running permissions.

Reference: https://cloud.google.com/bigquery/docs/access-control#primitive_iam_roles

Question #59

Which of these statements about BigQuery caching is true?

  • A . By default, a query’s results are not cached.
  • B . BigQuery caches query results for 48 hours.
  • C . Query results are cached even if you specify a destination table.
  • D . There is no charge for a query that retrieves its results from cache.

Correct Answer: D

Explanation:

When query results are retrieved from a cached results table, you are not charged for the query.

BigQuery caches query results for 24 hours, not 48 hours.

Query results are not cached if you specify a destination table.

A query’s results are always cached except under certain conditions, such as if you specify a destination table.

Reference: https://cloud.google.com/bigquery/querying-data#query-caching
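The caching rules above (24-hour lifetime, bypassed when a destination table is specified, free on a hit) can be modeled with a tiny in-memory cache. This is a behavioral sketch only, not BigQuery's implementation:

```python
import time

# Toy sketch of the BigQuery result-cache rules described above (not the
# real service): results are keyed by query text, expire after 24 hours,
# and are bypassed entirely when a destination table is specified.

CACHE_TTL = 24 * 3600
_cache = {}

def run_query(sql, execute, destination_table=None, now=None):
    now = time.time() if now is None else now
    if destination_table is None:
        hit = _cache.get(sql)
        if hit and now - hit[0] < CACHE_TTL:
            return hit[1], True          # (result, served_from_cache) - free
    result = execute(sql)                # billed: the query actually runs
    if destination_table is None:
        _cache[sql] = (now, result)
    return result, False

calls = []
def execute(sql):
    calls.append(sql)
    return "rows"

r1 = run_query("SELECT 1", execute, now=0)           # miss: runs, then cached
r2 = run_query("SELECT 1", execute, now=3600)        # hit within 24 h: free
r3 = run_query("SELECT 1", execute, now=25 * 3600)   # expired: runs again
```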

Question #60

Which of these sources can you not load data into BigQuery from?

  • A . File upload
  • B . Google Drive
  • C . Google Cloud Storage
  • D . Google Cloud SQL

Correct Answer: D

Explanation:

You can load data into BigQuery from a file upload, Google Cloud Storage, Google Drive, or Google Cloud Bigtable. It is not possible to load data into BigQuery directly from Google Cloud SQL. One way to get data from Cloud SQL to BigQuery would be to export data from Cloud SQL to Cloud Storage and then load it from there.

Reference: https://cloud.google.com/bigquery/loading-data

Question #61

Which of the following statements about Legacy SQL and Standard SQL is not true?

  • A . Standard SQL is the preferred query language for BigQuery.
  • B . If you write a query in Legacy SQL, it might generate an error if you try to run it with Standard SQL.
  • C . One difference between the two query languages is how you specify fully-qualified table names (i.e. table names that include their associated project name).
  • D . You need to set a query language for each dataset and the default is Standard SQL.

Correct Answer: D

Explanation:

You do not set a query language for each dataset. It is set each time you run a query and the default query language is Legacy SQL.

Standard SQL has been the preferred query language since BigQuery 2.0 was released.

In legacy SQL, to query a table with a project-qualified name, you use a colon, :, as a separator. In standard SQL, you use a period, ., instead.

Due to the differences in syntax between the two query languages (such as with project-qualified table names), if you write a query in Legacy SQL, it might generate an error if you try to run it with Standard SQL.

Reference: https://cloud.google.com/bigquery/docs/reference/standard-sql/migrating-from-legacy-sql

Question #62

How would you query specific partitions in a BigQuery table?

  • A . Use the DAY column in the WHERE clause
  • B . Use the EXTRACT(DAY) clause
  • C . Use the _PARTITIONTIME pseudo-column in the WHERE clause
  • D . Use DATE BETWEEN in the WHERE clause

Correct Answer: C

Explanation:

Partitioned tables include a pseudo column named _PARTITIONTIME that contains a date-based timestamp for data loaded into the table.

To limit a query to particular partitions (such as Jan 1st and 2nd of 2017), use a clause similar to this:

WHERE _PARTITIONTIME BETWEEN TIMESTAMP('2017-01-01') AND TIMESTAMP('2017-01-02')

Reference: https://cloud.google.com/bigquery/docs/partitioned-tables#the_partitiontime_pseudo_column

Question #63

Which SQL keyword can be used to reduce the number of columns processed by BigQuery?

  • A . BETWEEN
  • B . WHERE
  • C . SELECT
  • D . LIMIT

Correct Answer: C

Explanation:

SELECT allows you to query specific columns rather than the whole table.

LIMIT, BETWEEN, and WHERE clauses will not reduce the number of columns processed by BigQuery.

Reference: https://cloud.google.com/bigquery/launch-checklist#architecture_design_and_development_checklist

Question #64

To give a user read permission for only the first three columns of a table, which access control method would you use?

  • A . Primitive role
  • B . Predefined role
  • C . Authorized view
  • D . It’s not possible to give access to only the first three columns of a table.

Correct Answer: C

Explanation:

An authorized view allows you to share query results with particular users and groups without giving them read access to the underlying tables. Authorized views can only be created in a dataset that does not contain the tables queried by the view.

When you create an authorized view, you use the view’s SQL query to restrict access to only the rows and columns you want the users to see.

Reference: https://cloud.google.com/bigquery/docs/views#authorized-views

Question #65

What are two methods that can be used to denormalize tables in BigQuery?

  • A . 1) Split table into multiple tables;
    2) Use a partitioned table
  • B . 1) Join tables into one table;
    2) Use nested repeated fields
  • C . 1) Use a partitioned table;
    2) Join tables into one table
  • D . 1) Use nested repeated fields;
    2) Use a partitioned table

Correct Answer: B

Explanation:

The conventional method of denormalizing data involves simply writing a fact, along with all its dimensions, into a flat table structure. For example, if you are dealing with sales transactions, you would write each individual fact to a record, along with the accompanying dimensions such as order and customer information.

The other method for denormalizing data takes advantage of BigQuery’s native support for nested and repeated structures in JSON or Avro input data. Expressing records using nested and repeated structures can provide a more natural representation of the underlying data. In the case of the sales order, the outer part of a JSON structure would contain the order and customer information, and the inner part of the structure would contain the individual line items of the order, which would be represented as nested, repeated elements.

Reference: https://cloud.google.com/solutions/bigquery-data-warehouse#denormalizing_data
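The "conventional" flat form described above can be sketched with a toy sales order: each line item becomes its own row, with the order and customer dimensions duplicated onto every row (the nested form simply keeps line_items as a repeated field inside one record):

```python
# Sketch of the conventional denormalization shape described above, using a
# hypothetical sales order. The flat form repeats order/customer columns per
# line item; the nested form keeps line items inside one record as-is.

order = {
    "order_id": "o-1",
    "customer": "Acme",
    "line_items": [
        {"sku": "bread", "qty": 2},
        {"sku": "milk", "qty": 1},
    ],
}

def flatten(order):
    # One flat row per line item, with the order and customer dimensions
    # duplicated onto every row (hence the extra storage denormalization costs).
    return [
        {"order_id": order["order_id"], "customer": order["customer"], **item}
        for item in order["line_items"]
    ]

flat_rows = flatten(order)
```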

Question #66

Which of these is not a supported method of putting data into a partitioned table?

  • A . If you have existing data in a separate file for each day, then create a partitioned table and upload each file into the appropriate partition.
  • B . Run a query to get the records for a specific day from an existing table and for the destination table, specify a partitioned table ending with the day in the format "$YYYYMMDD".
  • C . Create a partitioned table and stream new records to it every day.
  • D . Use ORDER BY to put a table’s rows into chronological order and then change the table’s type to "Partitioned".

Correct Answer: D

Explanation:

You cannot change an existing table into a partitioned table. You must create a partitioned table from scratch. Then you can either stream data into it every day and the data will automatically be put in the right partition, or you can load data into a specific partition by using "$YYYYMMDD" at the end of the table name.

Reference: https://cloud.google.com/bigquery/docs/partitioned-tables

Question #67

Which of these operations can you perform from the BigQuery Web UI?

  • A . Upload a file in SQL format.
  • B . Load data with nested and repeated fields.
  • C . Upload a 20 MB file.
  • D . Upload multiple files using a wildcard.

Correct Answer: B

Explanation:

You can load data with nested and repeated fields using the Web UI.

You cannot use the Web UI to:

– Upload a file greater than 10 MB in size

– Upload multiple files at the same time

– Upload a file in SQL format

All three of the above operations can be performed using the "bq" command.

Reference: https://cloud.google.com/bigquery/loading-data

Question #68

Which methods can be used to reduce the number of rows processed by BigQuery?

  • A . Splitting tables into multiple tables; putting data in partitions
  • B . Splitting tables into multiple tables; putting data in partitions; using the LIMIT clause
  • C . Putting data in partitions; using the LIMIT clause
  • D . Splitting tables into multiple tables; using the LIMIT clause

Reveal Solution Hide Solution

Correct Answer: A
A

Explanation:

If you split a table into multiple tables (such as one table for each day), then you can limit your query to the data in specific tables (such as for particular days). A better method is to use a partitioned table, as long as your data can be separated by the day.

If you use the LIMIT clause, BigQuery will still process the entire table.

Reference: https://cloud.google.com/bigquery/docs/partitioned-tables

Question #69

Why do you need to split a machine learning dataset into training data and test data?

  • A . So you can try two different sets of features
  • B . To make sure your model is generalized for more than just the training data
  • C . To allow you to create unit tests in your code
  • D . So you can use one dataset for a wide model and one for a deep model

Reveal Solution Hide Solution

Correct Answer: B
B

Explanation:

The flaw in evaluating a predictive model on its training data is that it tells you nothing about how well the model generalizes to new, unseen data. A model selected for its accuracy on the training dataset rather than on an unseen test dataset is very likely to be less accurate on new data, because it has specialized to the structure of the training dataset instead of generalizing. This is called overfitting.

Reference: https://machinelearningmastery.com/a-simple-intuition-for-overfitting/
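
The held-out split described above can be sketched in a few lines of plain Python (the 80/20 fraction and fixed seed are illustrative choices, not a prescription):

```python
import random

def train_test_split(rows, test_fraction=0.2, seed=42):
    """Shuffle the dataset once, then hold out a fixed fraction as the
    test set, so the model is evaluated on data it never trained on."""
    shuffled = rows[:]
    random.Random(seed).shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]  # (train, test)

train, test = train_test_split(list(range(100)))
assert len(train) == 80 and len(test) == 20
assert set(train).isdisjoint(test)  # no leakage between the splits
```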

Question #70

Which of these numbers are adjusted by a neural network as it learns from a training dataset (select 2 answers)?

  • A . Weights
  • B . Biases
  • C . Continuous features
  • D . Input values

Reveal Solution Hide Solution

Correct Answer: AB
AB

Explanation:

A neural network is a simple mechanism that’s implemented with basic math. The only difference between the traditional programming model and a neural network is that you let the computer determine the parameters (weights and bias) by learning from training datasets.

Reference: https://cloud.google.com/blog/big-data/2016/07/understanding-neural-networks-with-tensorflow-playground
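
A minimal sketch of the idea: for a single "neuron" computing `w*x + b`, training adjusts only the weight `w` and the bias `b`; the input values themselves are never modified (the toy dataset and learning rate below are illustrative):

```python
# One neuron, one input: the output is w*x + b. Training nudges w and b
# down the gradient of the squared error on each sample.
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # samples of y = 2x + 1
w, b, lr = 0.0, 0.0, 0.05

for _ in range(2000):
    for x, y in data:
        err = (w * x + b) - y   # prediction error
        w -= lr * err * x       # gradient step on the weight
        b -= lr * err           # gradient step on the bias

print(round(w, 2), round(b, 2))  # ≈ 2.0 and 1.0
```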

Question #71

The CUSTOM tier for Cloud Machine Learning Engine allows you to specify the number of which types of cluster nodes?

  • A . Workers
  • B . Masters, workers, and parameter servers
  • C . Workers and parameter servers
  • D . Parameter servers

Reveal Solution Hide Solution

Correct Answer: C
C

Explanation:

The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines:

You must set TrainingInput.masterType to specify the type of machine to use for your master node.

You may set TrainingInput.workerCount to specify the number of workers to use.

You may set TrainingInput.parameterServerCount to specify the number of parameter servers to use. You can specify the type of machine for the master node, but you can’t specify more than one master node.

Reference: https://cloud.google.com/ml-engine/docs/training-overview#job_configuration_parameters

Question #72

Which software libraries are supported by Cloud Machine Learning Engine?

  • A . Theano and TensorFlow
  • B . Theano and Torch
  • C . TensorFlow
  • D . TensorFlow and Torch

Reveal Solution Hide Solution

Correct Answer: C
C

Explanation:

Cloud ML Engine mainly does two things:

– Enables you to train machine learning models at scale by running TensorFlow training applications in the cloud.

– Hosts those trained models for you in the cloud so that you can use them to get predictions about new data.

Reference: https://cloud.google.com/ml-engine/docs/technical-overview#what_it_does

Question #73

Which TensorFlow function can you use to configure a categorical column if you don’t know all of the possible values for that column?

  • A . categorical_column_with_vocabulary_list
  • B . categorical_column_with_hash_bucket
  • C . categorical_column_with_unknown_values
  • D . sparse_column_with_keys

Reveal Solution Hide Solution

Correct Answer: B
B

Explanation:

If you know the set of all possible feature values of a column and there are only a few of them, you can use categorical_column_with_vocabulary_list. Each key in the list will get assigned an auto-incremental ID starting from 0.

What if we don’t know the set of possible values in advance? Not a problem. We can use categorical_column_with_hash_bucket instead.

What will happen is that each possible value in the feature column occupation will be hashed to an integer ID as we encounter them in training.

Reference: https://www.tensorflow.org/tutorials/wide
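
The hashing trick above can be illustrated in plain Python. TensorFlow uses its own internal fingerprint function; `md5` here just demonstrates the idea of a stable hash, so a category never seen during training still maps to a usable feature ID:

```python
import hashlib

def hash_bucket(value: str, num_buckets: int = 1000) -> int:
    """Map an arbitrary category string to a stable integer ID in
    [0, num_buckets), with no vocabulary known in advance."""
    digest = hashlib.md5(value.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_buckets

# The same string always lands in the same bucket, even across runs.
assert hash_bucket("engineer") == hash_bucket("engineer")
assert 0 <= hash_bucket("a-job-title-never-seen-in-training") < 1000
```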

Question #74

Which of the following statements about the Wide & Deep Learning model are true? (Select 2 answers.)

  • A . The wide model is used for memorization, while the deep model is used for generalization.
  • B . A good use for the wide and deep model is a recommender system.
  • C . The wide model is used for generalization, while the deep model is used for memorization.
  • D . A good use for the wide and deep model is a small-scale linear regression problem.

Reveal Solution Hide Solution

Correct Answer: AB
AB

Explanation:

Can we teach computers to learn like humans do, by combining the power of memorization and generalization? It’s not an easy question to answer, but by jointly training a wide linear model (for memorization) alongside a deep neural network (for generalization), one can combine the strengths of both to bring us one step closer. At Google, we call it Wide & Deep Learning. It’s useful for generic large-scale regression and classification problems with sparse inputs (categorical features with a large number of possible feature values), such as recommender systems, search, and ranking problems.

Reference: https://research.googleblog.com/2016/06/wide-deep-learning-better-together-with.html

Question #75

To run a TensorFlow training job on your own computer using Cloud Machine Learning Engine, what would your command start with?

  • A . gcloud ml-engine local train
  • B . gcloud ml-engine jobs submit training
  • C . gcloud ml-engine jobs submit training local
  • D . You can’t run a TensorFlow program on your own computer using Cloud ML Engine .

Reveal Solution Hide Solution

Correct Answer: A
A

Explanation:

gcloud ml-engine local train – run a Cloud ML Engine training job locally

This command runs the specified module in an environment similar to that of a live Cloud ML Engine Training Job.

This is especially useful in the case of testing distributed models, as it allows you to validate that you are properly interacting with the Cloud ML Engine cluster configuration.

Reference: https://cloud.google.com/sdk/gcloud/reference/ml-engine/local/train

Question #76

If you want to create a machine learning model that predicts the price of a particular stock based on its recent price history, what type of estimator should you use?

  • A . Unsupervised learning
  • B . Regressor
  • C . Classifier
  • D . Clustering estimator

Reveal Solution Hide Solution

Correct Answer: B
B

Explanation:

Regression is the supervised learning task for modeling and predicting continuous, numeric variables. Examples include predicting real-estate prices, stock price movements, or student test scores.

Classification is the supervised learning task for modeling and predicting categorical variables. Examples include predicting employee churn, email spam, financial fraud, or student letter grades.

Clustering is an unsupervised learning task for finding natural groupings of observations (i.e. clusters) based on the inherent structure within your dataset. Examples include customer segmentation, grouping similar items in e-commerce, and social network analysis.

Reference: https://elitedatascience.com/machine-learning-algorithms
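
As a toy illustration of a regressor (not the exam's method, just a sketch): fitting a least-squares line to a short price history and extrapolating one step predicts a continuous number, which is exactly what makes this a regression task:

```python
def fit_line(prices):
    """Ordinary least squares for price = a + b*t over t = 0..n-1.
    The target is a continuous number, so this is a regressor."""
    n = len(prices)
    ts = list(range(n))
    mean_t = sum(ts) / n
    mean_p = sum(prices) / n
    b = sum((t - mean_t) * (p - mean_p) for t, p in zip(ts, prices)) \
        / sum((t - mean_t) ** 2 for t in ts)
    a = mean_p - b * mean_t
    return a, b

a, b = fit_line([10.0, 12.0, 14.0, 16.0])  # perfectly linear history
next_price = a + b * 4                     # predict the next time step
assert abs(next_price - 18.0) < 1e-9
```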

Question #77

Suppose you have a dataset of images that are each labeled as to whether or not they contain a human face.

To create a neural network that recognizes human faces in images using this labeled dataset, what approach would likely be the most effective?

  • A . Use K-means Clustering to detect faces in the pixels.
  • B . Use feature engineering to add features for eyes, noses, and mouths to the input data.
  • C . Use deep learning by creating a neural network with multiple hidden layers to automatically detect features of faces.
  • D . Build a neural network with an input layer of pixels, a hidden layer, and an output layer with two categories.

Reveal Solution Hide Solution

Correct Answer: C
C

Explanation:

Traditional machine learning relies on shallow nets, composed of one input and one output layer, and at most one hidden layer in between. More than three layers (including input and output) qualifies as “deep” learning. So deep is a strictly defined, technical term that means more than one hidden layer.

In deep-learning networks, each layer of nodes trains on a distinct set of features based on the previous layer’s output. The further you advance into the neural net, the more complex the features your nodes can recognize, since they aggregate and recombine features from the previous layer.

A neural network with only one hidden layer would be unable to automatically recognize high-level features of faces, such as eyes, because it wouldn’t be able to "build" these features using previous hidden layers that detect low-level features, such as lines. Feature engineering is difficult to perform on raw image data.

K-means Clustering is an unsupervised learning method used to categorize unlabeled data.

Reference: https://deeplearning4j.org/neuralnet-overview

Question #78

What are two of the characteristics of using online prediction rather than batch prediction?

  • A . It is optimized to handle a high volume of data instances in a job and to run more complex models.
  • B . Predictions are returned in the response message.
  • C . Predictions are written to output files in a Cloud Storage location that you specify.
  • D . It is optimized to minimize the latency of serving predictions.

Reveal Solution Hide Solution

Correct Answer: BD
BD

Explanation:

Online prediction

– Optimized to minimize the latency of serving predictions.

– Predictions returned in the response message.

Batch prediction

– Optimized to handle a high volume of instances in a job and to run more complex models.

– Predictions written to output files in a Cloud Storage location that you specify.

Reference: https://cloud.google.com/ml-engine/docs/prediction-overview#online_prediction_versus_batch_prediction

Question #79

Which of these are examples of a value in a sparse vector? (Select 2 answers.)

  • A . [0, 5, 0, 0, 0, 0]
  • B . [0, 0, 0, 1, 0, 0, 1]
  • C . [0, 1]
  • D . [1, 0, 0, 0, 0, 0, 0]

Reveal Solution Hide Solution

Correct Answer: CD
CD

Explanation:

Categorical features in linear models are typically translated into a sparse vector in which each possible value has a corresponding index or id. For example, if there are only three possible eye colors you can represent ‘eye_color’ as a length 3 vector: ‘brown’ would become [1, 0, 0], ‘blue’ would become [0, 1, 0] and ‘green’ would become [0, 0, 1]. These vectors are called "sparse" because they may be very long, with many zeros, when the set of possible values is very large (such as all English words).

[0, 0, 0, 1, 0, 0, 1] is not a valid example here because it has two 1s in it; in this one-hot encoding of a single categorical value, exactly one element is 1.

[0, 5, 0, 0, 0, 0] is not a valid example because it contains a 5; these one-hot vectors contain only 0s and 1s.

Reference: https://www.tensorflow.org/tutorials/linear#feature_columns_and_transformations
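
The eye-color encoding above is easy to reproduce as a small helper (a sketch of the representation, not a TensorFlow API):

```python
def one_hot(index: int, length: int) -> list:
    """Encode a categorical value as a sparse vector: all zeros
    except a single 1 at the category's index."""
    vec = [0] * length
    vec[index] = 1
    return vec

EYE_COLORS = ["brown", "blue", "green"]
assert one_hot(EYE_COLORS.index("brown"), len(EYE_COLORS)) == [1, 0, 0]
assert one_hot(EYE_COLORS.index("blue"), len(EYE_COLORS)) == [0, 1, 0]
```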

Question #80

How can you get a neural network to learn about relationships between categories in a categorical feature?

  • A . Create a multi-hot column
  • B . Create a one-hot column
  • C . Create a hash bucket
  • D . Create an embedding column

Reveal Solution Hide Solution

Correct Answer: D
D

Explanation:

There are two problems with one-hot encoding. First, it has high dimensionality, meaning that instead of having just one value, like a continuous feature, it has many values, or dimensions. This makes computation more time-consuming, especially if a feature has a very large number of categories. The second problem is that it doesn’t encode any relationships between the categories. They are completely independent from each other, so the network has no way of knowing which ones are similar to each other.

Both of these problems can be solved by representing a categorical feature with an embedding column. The idea is that each category has a smaller vector with, let’s say, 5 values in it. But unlike a one-hot vector, the values are not usually 0. The values are weights, similar to the weights that are used for basic features in a neural network. The difference is that each category has a set of weights (5 of them in this case).

You can think of each value in the embedding vector as a feature of the category. So, if two categories are very similar to each other, then their embedding vectors should be very similar too.

Reference: https://cloudacademy.com/google/introduction-to-google-cloud-machine-learning-engine-course/a-wide-and-deep-model.html
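
The "similar categories get similar vectors" property can be illustrated with hand-made vectors (the embedding values below are entirely hypothetical; in practice they are learned during training):

```python
import math

# Hypothetical 3-value embedding vectors for three occupation categories.
embeddings = {
    "doctor":  [0.9, 0.1, 0.3],
    "surgeon": [0.8, 0.2, 0.3],
    "plumber": [0.1, 0.9, 0.7],
}

def cosine(u, v):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# Related categories end up with more similar embedding vectors.
assert cosine(embeddings["doctor"], embeddings["surgeon"]) > \
       cosine(embeddings["doctor"], embeddings["plumber"])
```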

Question #81

If a dataset contains rows with individual people and columns for year of birth, country, and income, how many of the columns are continuous and how many are categorical?

  • A . 1 continuous and 2 categorical
  • B . 3 categorical
  • C . 3 continuous
  • D . 2 continuous and 1 categorical

Reveal Solution Hide Solution

Correct Answer: D
D

Explanation:

The columns can be grouped into two types, categorical and continuous:

A column is called categorical if its value can only be one of the categories in a finite set. For example, the native country of a person (U.S., India, Japan, etc.) or the education level (high school, college, etc.) are categorical columns.

A column is called continuous if its value can be any numerical value in a continuous range. For example, the capital gain of a person (e.g. $14,084) is a continuous column.

Year of birth and income are continuous columns. Country is a categorical column.

You could use bucketization to turn year of birth and/or income into categorical features, but the raw columns are continuous.

Reference: https://www.tensorflow.org/tutorials/wide#reading_the_census_data

Question #82

Which of the following are examples of hyperparameters? (Select 2 answers.)

  • A . Number of hidden layers
  • B . Number of nodes in each hidden layer
  • C . Biases
  • D . Weights

Reveal Solution Hide Solution

Correct Answer: AB
AB

Explanation:

If model parameters are variables that get adjusted by training with existing data, your hyperparameters are the variables about the training process itself. For example, part of setting up a deep neural network is deciding how many "hidden" layers of nodes to use between the input layer and the output layer, as well as how many nodes each layer should use. These variables are not directly related to the training data at all. They are configuration variables. Another difference is that parameters change during a training job, while the hyperparameters are usually constant during a job.

Weights and biases are variables that get adjusted during the training process, so they are not hyperparameters.

Reference: https://cloud.google.com/ml-engine/docs/hyperparameter-tuning-overview

Question #83

Which of the following are feature engineering techniques? (Select 2 answers)

  • A . Hidden feature layers
  • B . Feature prioritization
  • C . Crossed feature columns
  • D . Bucketization of a continuous feature

Reveal Solution Hide Solution

Correct Answer: CD
CD

Explanation:

Selecting and crafting the right set of feature columns is key to learning an effective model. Bucketization is a process of dividing the entire range of a continuous feature into a set of consecutive bins/buckets, and then converting the original numerical feature into a bucket ID (as a categorical feature) depending on which bucket that value falls into.

Using each base feature column separately may not be enough to explain the data. To learn the differences between different feature combinations, we can add crossed feature columns to the model.

Reference: https://www.tensorflow.org/tutorials/wide#selecting_and_engineering_features_for_the_model
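
Both techniques are easy to sketch in plain Python (the age boundaries and the `age × country` cross are illustrative choices, not taken from the exam):

```python
import bisect

def bucketize(value, boundaries):
    """Turn a continuous value into a bucket ID (a categorical feature)
    by finding which of the consecutive bins it falls into."""
    return bisect.bisect_right(boundaries, value)

AGE_BOUNDARIES = [18, 25, 35, 50, 65]  # 6 buckets, IDs 0..5

def crossed_feature(age, country):
    """Cross two features so the model can learn their combinations,
    not just each base feature on its own."""
    return f"{bucketize(age, AGE_BOUNDARIES)}_x_{country}"

assert bucketize(30, AGE_BOUNDARIES) == 2   # falls in the 25..35 bin
assert crossed_feature(30, "US") == "2_x_US"
```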

Question #84

You want to use a BigQuery table as a data sink. In which writing mode(s) can you use BigQuery as a sink?

  • A . Both batch and streaming
  • B . BigQuery cannot be used as a sink
  • C . Only batch
  • D . Only streaming

Reveal Solution Hide Solution

Correct Answer: A
A

Explanation:

When you apply a BigQueryIO.Write transform in batch mode to write to a single table, Dataflow invokes a BigQuery load job. When you apply a BigQueryIO.Write transform in streaming mode or in batch mode using a function to specify the destination table, Dataflow uses BigQuery’s streaming inserts

Reference: https://cloud.google.com/dataflow/model/bigquery-io

Question #85

You have a job that you want to cancel. It is a streaming pipeline, and you want to ensure that any data that is in-flight is processed and written to the output.

Which of the following commands can you use on the Dataflow monitoring console to stop the pipeline job?

  • A . Cancel
  • B . Drain
  • C . Stop
  • D . Finish

Reveal Solution Hide Solution

Correct Answer: B
B

Explanation:

Using the Drain option to stop your job tells the Dataflow service to finish your job in its current state. Your job will immediately stop ingesting new data from input sources, but the Dataflow service will preserve any existing resources (such as worker instances) to finish processing and writing any buffered data in your pipeline.

Reference: https://cloud.google.com/dataflow/pipelines/stopping-a-pipeline

Question #86

When running a pipeline that has a BigQuery source, on your local machine, you continue to get permission denied errors.

What could be the reason for that?

  • A . Your gcloud does not have access to the BigQuery resources
  • B . BigQuery cannot be accessed from local machines
  • C . You are missing gcloud on your machine
  • D . Pipelines cannot be run locally

Reveal Solution Hide Solution

Correct Answer: A
A

Explanation:

When reading from a Dataflow source or writing to a Dataflow sink using DirectPipelineRunner, the Cloud Platform account that you configured with the gcloud executable will need access to the corresponding source/sink

Reference: https://cloud.google.com/dataflow/java-sdk/JavaDoc/com/google/cloud/dataflow/sdk/runners/DirectPipelineRunner

Question #87

What Dataflow concept determines when a Window’s contents should be output based on certain criteria being met?

  • A . Sessions
  • B . OutputCriteria
  • C . Windows
  • D . Triggers

Reveal Solution Hide Solution

Correct Answer: D
D

Explanation:

Triggers control when the elements for a specific key and window are output. As elements arrive, they are put into one or more windows by a Window transform and its associated WindowFn, and then passed to the associated Trigger to determine if the Windows contents should be output.

Reference: https://cloud.google.com/dataflow/java-sdk/JavaDoc/com/google/cloud/dataflow/sdk/transforms/windowing/Trigger

Question #88

Which of the following is NOT one of the three main types of triggers that Dataflow supports?

  • A . Trigger based on element size in bytes
  • B . Trigger that is a combination of other triggers
  • C . Trigger based on element count
  • D . Trigger based on time

Reveal Solution Hide Solution

Correct Answer: A
A

Explanation:

There are three major kinds of triggers that Dataflow supports: time-based triggers, data-driven triggers (such as triggers based on element count), and composite triggers that combine other triggers.

Question #92

Which Java SDK class can you use to run your Dataflow programs locally?

  • A . LocalRunner
  • B . DirectPipelineRunner
  • C . MachineRunner
  • D . LocalPipelineRunner

Reveal Solution Hide Solution

Correct Answer: B
B

Explanation:

DirectPipelineRunner allows you to execute operations in the pipeline directly, without any optimization. Useful for small local execution and tests

Reference: https://cloud.google.com/dataflow/java-sdk/JavaDoc/com/google/cloud/dataflow/sdk/runners/DirectPipelineRunner

Question #93

The Dataflow SDKs have been recently transitioned into which Apache service?

  • A . Apache Spark
  • B . Apache Hadoop
  • C . Apache Kafka
  • D . Apache Beam

Reveal Solution Hide Solution

Correct Answer: D
D

Explanation:

Dataflow SDKs are being transitioned to Apache Beam, as per the latest Google directive

Reference: https://cloud.google.com/dataflow/docs/

Question #94

The _________ for Cloud Bigtable makes it possible to use Cloud Bigtable in a Cloud Dataflow pipeline.

  • A . Cloud Dataflow connector
  • B . DataFlow SDK
  • C . BigQuery API
  • D . BigQuery Data Transfer Service

Reveal Solution Hide Solution

Correct Answer: A
A

Explanation:

The Cloud Dataflow connector for Cloud Bigtable makes it possible to use Cloud Bigtable in a Cloud Dataflow pipeline. You can use the connector for both batch and streaming operations.

Reference: https://cloud.google.com/bigtable/docs/dataflow-hbase

Question #95

Does Dataflow process batch data pipelines or streaming data pipelines?

  • A . Only Batch Data Pipelines
  • B . Both Batch and Streaming Data Pipelines
  • C . Only Streaming Data Pipelines
  • D . None of the above

Reveal Solution Hide Solution

Correct Answer: B
B

Explanation:

Dataflow is a unified processing model, and can execute both streaming and batch data pipelines

Reference: https://cloud.google.com/dataflow/

Question #96

You are planning to use Google’s Dataflow SDK to analyze customer data such as displayed below.

Your project requirement is to extract only the customer name from the data source and then write to an output PCollection.

Tom,555 X street

Tim,553 Y street

Sam, 111 Z street

Which operation is best suited for the above data processing requirement?

  • A . ParDo
  • B . Sink API
  • C . Source API
  • D . Data extraction

Reveal Solution Hide Solution

Correct Answer: A
A

Explanation:

In Google Cloud dataflow SDK, you can use the ParDo to extract only a customer name of each element in your PCollection.

Reference: https://cloud.google.com/dataflow/model/par-do
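
The per-element logic that the ParDo would apply can be sketched in plain Python (a stand-in for the DoFn body, using the sample records above; the list stands in for the PCollection):

```python
def extract_name(record: str) -> str:
    """Keep only the customer name before the first comma;
    this is the per-element work a ParDo would do on each record."""
    return record.split(",", 1)[0].strip()

records = ["Tom,555 X street", "Tim,553 Y street", "Sam, 111 Z street"]
names = [extract_name(r) for r in records]
assert names == ["Tom", "Tim", "Sam"]
```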

Question #97

Which Cloud Dataflow / Beam feature should you use to aggregate data in an unbounded data source every hour based on the time when the data entered the pipeline?

  • A . An hourly watermark
  • B . An event time trigger
  • C . The withAllowedLateness method
  • D . A processing time trigger

Reveal Solution Hide Solution

Correct Answer: D
D

Explanation:

When collecting and grouping data into windows, Beam uses triggers to determine when to emit the aggregated results of each window.

Processing time triggers. These triggers operate on the processing time: the time when the data element is processed at any given stage in the pipeline.

Event time triggers. These triggers operate on the event time, as indicated by the timestamp on each data element. Beam’s default trigger is event time-based.

Reference: https://beam.apache.org/documentation/programming-guide/#triggers
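
The hourly, processing-time grouping can be illustrated without Beam (a plain-Python sketch of the windowing arithmetic; the timestamps and values are made up). Each element is assigned to a window based on when it *arrived*, not on any event-time field it carries:

```python
from collections import defaultdict

WINDOW_SECONDS = 3600  # aggregate once per hour

def window_start(processing_ts: int) -> int:
    """Align an arrival timestamp to the start of its hourly window."""
    return processing_ts - processing_ts % WINDOW_SECONDS

# (arrival_time, value) pairs, grouped by when they entered the pipeline.
arrivals = [(7200, 1), (7260, 2), (10800, 5)]
windows = defaultdict(int)
for ts, value in arrivals:
    windows[window_start(ts)] += value

assert dict(windows) == {7200: 3, 10800: 5}
```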

Question #98

Which of the following is NOT true about Dataflow pipelines?

  • A . Dataflow pipelines are tied to Dataflow, and cannot be run on any other runner
  • B . Dataflow pipelines can consume data from other Google Cloud services
  • C . Dataflow pipelines can be programmed in Java
  • D . Dataflow pipelines use a unified programming model, so can work both with streaming and batch data sources

Reveal Solution Hide Solution

Correct Answer: A
A

Explanation:

Dataflow pipelines can also run on alternate runtimes like Spark and Flink, as they are built using the Apache Beam SDKs

Reference: https://cloud.google.com/dataflow/

Question #99

You are developing a software application using Google’s Dataflow SDK, and want to use conditionals, for loops, and other complex programming structures to create a branching pipeline.

Which component will be used for the data processing operation?

  • A . PCollection
  • B . Transform
  • C . Pipeline
  • D . Sink API

Reveal Solution Hide Solution

Correct Answer: B
B

Explanation:

In Google Cloud, the Dataflow SDK provides a transform component. It is responsible for the data processing operation. You can use conditionals, for loops, and other complex programming structures to create a branching pipeline.

Reference: https://cloud.google.com/dataflow/model/programming-model

Question #100

Which of the following IAM roles does your Compute Engine account require to be able to run pipeline jobs?

  • A . dataflow.worker
  • B . dataflow.compute
  • C . dataflow.developer
  • D . dataflow.viewer

Reveal Solution Hide Solution

Correct Answer: A
A

Explanation:

The dataflow.worker role provides the permissions necessary for a Compute Engine service account to execute work units for a Dataflow pipeline

Reference: https://cloud.google.com/dataflow/access-control

Question #101

Which of the following is not true about Dataflow pipelines?

  • A . Pipelines are a set of operations
  • B . Pipelines represent a data processing job
  • C . Pipelines represent a directed graph of steps
  • D . Pipelines can share data between instances

Reveal Solution Hide Solution

Correct Answer: D
D

Explanation:

The data and transforms in a pipeline are unique to, and owned by, that pipeline. While your program can create multiple pipelines, pipelines cannot share data or transforms

Reference: https://cloud.google.com/dataflow/model/pipelines

Question #102

By default, which of the following windowing behavior does Dataflow apply to unbounded data sets?

  • A . Windows at every 100 MB of data
  • B . Single, Global Window
  • C . Windows at every 1 minute
  • D . Windows at every 10 minutes

Reveal Solution Hide Solution

Correct Answer: B
B

Explanation:

Dataflow’s default windowing behavior is to assign all elements of a PCollection to a single, global window, even for unbounded PCollections

Reference: https://cloud.google.com/dataflow/model/pcollection

Question #103

Which of the following job types are supported by Cloud Dataproc (select 3 answers)?

  • A . Hive
  • B . Pig
  • C . YARN
  • D . Spark

Reveal Solution Hide Solution

Correct Answer: ABD
ABD

Explanation:

Cloud Dataproc provides out-of-the-box and end-to-end support for many of the most popular job types, including Spark, Spark SQL, PySpark, MapReduce, Hive, and Pig jobs.

Reference: https://cloud.google.com/dataproc/docs/resources/faq#what_type_of_jobs_can_i_run
