Databricks Certified Data Engineer Associate Exam Online Training
- Exam Code: Databricks Certified Data Engineer Associate
- Exam Name: Databricks Certified Data Engineer Associate Exam
- Certification Provider: Databricks
- Latest update: Jul 07,2025
A data engineering team has two tables. The first table march_transactions is a collection of all retail transactions in the month of March. The second table april_transactions is a collection of all retail
transactions in the month of April. There are no duplicate records between the tables.
Which of the following commands should be run to create a new table all_transactions that contains all records from march_transactions and april_transactions without duplicate records?
- A . CREATE TABLE all_transactions AS SELECT * FROM march_transactions INNER JOIN SELECT * FROM april_transactions;
- B . CREATE TABLE all_transactions AS SELECT * FROM march_transactions UNION SELECT * FROM april_transactions;
- C . CREATE TABLE all_transactions AS SELECT * FROM march_transactions OUTER JOIN SELECT * FROM april_transactions;
- D . CREATE TABLE all_transactions AS SELECT * FROM march_transactions INTERSECT SELECT * FROM april_transactions;
- E . CREATE TABLE all_transactions AS SELECT * FROM march_transactions MERGE SELECT * FROM april_transactions;
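As background on the operators involved: in Spark SQL, UNION (equivalent to UNION DISTINCT) removes duplicate rows from the combined result, INTERSECT returns only rows present in both inputs, and MERGE is not a set operator and cannot combine two SELECTs this way. A minimal sketch of the UNION behavior, runnable in a Databricks notebook assuming the two tables from the question exist:

```python
# Sketch: UNION combines two result sets and removes duplicate rows;
# UNION ALL would keep any duplicates instead.
spark.sql("""
    CREATE TABLE all_transactions AS
    SELECT * FROM march_transactions
    UNION
    SELECT * FROM april_transactions
""")
```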
A data engineer only wants to execute the final block of a Python program if the Python variable day_of_week is equal to 1 and the Python variable review_period is True.
Which of the following control flow statements should the data engineer use to begin this conditionally executed code block?
- A . if day_of_week = 1 and review_period:
- B . if day_of_week = 1 and review_period = "True":
- C . if day_of_week == 1 and review_period == "True":
- D . if day_of_week == 1 and review_period:
- E . if day_of_week = 1 & review_period: = "True":
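As background on the Python syntax involved: = is assignment while == is comparison, & is a bitwise operator rather than boolean "and", and a boolean variable can be tested directly instead of being compared to the string "True". A minimal sketch:

```python
day_of_week = 1
review_period = True

# == compares values; a boolean is tested directly, with no need
# to compare it against the string "True".
if day_of_week == 1 and review_period:
    print("running the final block")
```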
A data engineer is attempting to drop a Spark SQL table my_table. The data engineer wants to delete all table metadata and data.
They run the following command:
DROP TABLE IF EXISTS my_table
While the object no longer appears when they run SHOW TABLES, the data files still exist.
Which of the following describes why the data files still exist and the metadata files were deleted?
- A . The table’s data was larger than 10 GB
- B . The table’s data was smaller than 10 GB
- C . The table was external
- D . The table did not have a location
- E . The table was managed
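As background, DROP TABLE removes the metastore entry for both managed and external tables, but only a managed table's underlying data files are deleted; an external table's files remain at its registered LOCATION. A minimal sketch, assuming a writable path such as /tmp/ext_demo (hypothetical):

```python
# Sketch: an external table's data files survive DROP TABLE.
spark.sql("""
    CREATE TABLE my_table (id INT, amount DOUBLE)
    USING DELTA
    LOCATION '/tmp/ext_demo'  -- LOCATION makes the table external
""")
spark.sql("DROP TABLE IF EXISTS my_table")
# The metastore entry is gone, but the Delta files under /tmp/ext_demo remain.
```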
A data engineer wants to create a data entity from a couple of tables. The data entity must be used by other data engineers in other sessions. It also must be saved to a physical location.
Which of the following data entities should the data engineer create?
- A . Database
- B . Function
- C . View
- D . Temporary view
- E . Table
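As background, a table is persisted to a physical storage location and registered in the metastore, so other users can query it from other sessions; a temporary view lives only in the creating session, and a view stores no physical data of its own. A minimal sketch, with hypothetical table names:

```python
# Sketch: a table is stored physically and registered in the metastore,
# so other engineers can query it from other sessions.
spark.sql("""
    CREATE TABLE combined_entity AS
    SELECT * FROM table_a JOIN table_b USING (id)
""")
```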
A data engineer is maintaining a data pipeline. Upon data ingestion, the data engineer notices that the source data is starting to have a lower level of quality. The data engineer would like to automate the process of monitoring the quality level.
Which of the following tools can the data engineer use to solve this problem?
- A . Unity Catalog
- B . Data Explorer
- C . Delta Lake
- D . Delta Live Tables
- E . Auto Loader
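As background, Delta Live Tables supports declarative data-quality rules called expectations, whose pass/fail counts are recorded in the pipeline event log on every update, which automates quality monitoring. A minimal sketch using the DLT Python API, with hypothetical table and column names:

```python
import dlt

@dlt.table
@dlt.expect_or_drop("valid_amount", "amount > 0")  # quality rule, tracked per update
def clean_transactions():
    # Rows failing the expectation are dropped; pass/fail metrics land in the
    # pipeline event log for monitoring.
    return dlt.read("raw_transactions")
```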
A Delta Live Tables pipeline includes two datasets defined using STREAMING LIVE TABLE. Three datasets are defined against Delta Lake table sources using LIVE TABLE.
The pipeline is configured to run in Production mode using the Continuous Pipeline Mode.
Assuming previously unprocessed data exists and all definitions are valid, what is the expected outcome after clicking Start to update the pipeline?
- A . All datasets will be updated at set intervals until the pipeline is shut down. The compute resources will persist to allow for additional testing.
- B . All datasets will be updated once and the pipeline will persist without any processing. The compute resources will persist but go unused.
- C . All datasets will be updated at set intervals until the pipeline is shut down. The compute resources will be deployed for the update and terminated when the pipeline is stopped.
- D . All datasets will be updated once and the pipeline will shut down. The compute resources will be terminated.
- E . All datasets will be updated once and the pipeline will shut down. The compute resources will persist to allow for additional testing.
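As background, both behaviors in play here are controlled by pipeline settings: the continuous flag keeps all datasets updating until the pipeline is stopped, and production mode (development off) deploys compute for the update and terminates it when the pipeline stops. A minimal sketch of the relevant settings fragment, shown here as a Python dict mirroring the pipeline settings JSON keys:

```python
# Sketch: Delta Live Tables pipeline settings fragment controlling mode.
pipeline_settings = {
    "continuous": True,    # Continuous mode: keep updating until stopped
    "development": False,  # Production mode: compute terminates on stop
}
```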
To reliably track the exact progress of processing so that it can handle any kind of failure by restarting and/or reprocessing, Structured Streaming must record the offset range of the data being processed in each trigger. Which two of the following approaches does Spark use to do this?
- A . Checkpointing and Write-ahead Logs
- B . Structured Streaming cannot record the offset range of the data being processed in each trigger.
- C . Replayable Sources and Idempotent Sinks
- D . Write-ahead Logs and Idempotent Sinks
- E . Checkpointing and Idempotent Sinks
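As background, Structured Streaming persists the offset range of each trigger under the configured checkpoint location, writing it ahead of processing and committing it afterward. A minimal sketch, with hypothetical paths and a toy source:

```python
# Sketch: each trigger's offset range is written ahead to the checkpoint
# location (write-ahead log), then committed, enabling exact recovery.
(spark.readStream
     .format("rate")        # toy built-in source for illustration
     .load()
     .writeStream
     .format("delta")
     .option("checkpointLocation", "/tmp/chk_demo")  # hypothetical path
     .start("/tmp/stream_out"))
```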
Which of the following describes the relationship between Gold tables and Silver tables?
- A . Gold tables are more likely to contain aggregations than Silver tables.
- B . Gold tables are more likely to contain valuable data than Silver tables.
- C . Gold tables are more likely to contain a less refined view of data than Silver tables.
- D . Gold tables are more likely to contain more data than Silver tables.
- E . Gold tables are more likely to contain truthful data than Silver tables.
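As background, in the medallion architecture Gold tables typically hold business-level aggregates computed from the cleaned Silver layer. A minimal sketch, with hypothetical table and column names:

```python
# Sketch: a Gold table as a business-level aggregation of a Silver table.
silver = spark.table("silver_transactions")      # hypothetical Silver table
gold = silver.groupBy("store_id").sum("amount")  # aggregation step
gold.write.mode("overwrite").saveAsTable("gold_sales_by_store")
```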
Which of the following describes the relationship between Bronze tables and raw data?
- A . Bronze tables contain less data than raw data files.
- B . Bronze tables contain more truthful data than raw data.
- C . Bronze tables contain aggregates while raw data is unaggregated.
- D . Bronze tables contain a less refined view of data than raw data.
- E . Bronze tables contain raw data with a schema applied.
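As background, a Bronze table typically lands the raw source records as-is but with a schema applied, so downstream layers can query them. A minimal sketch, with a hypothetical path and schema:

```python
# Sketch: raw JSON files landed in a Bronze Delta table with a schema applied.
schema = "id INT, event_time TIMESTAMP, payload STRING"  # hypothetical schema
bronze = spark.read.schema(schema).json("/tmp/raw_events/")
bronze.write.format("delta").mode("append").saveAsTable("bronze_events")
```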
Which of the following tools is used by Auto Loader to process data incrementally?
- A . Checkpointing
- B . Spark Structured Streaming
- C . Data Explorer
- D . Unity Catalog
- E . Databricks SQL
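As background, Auto Loader is built on Spark Structured Streaming and processes files incrementally, recording which files have been ingested in the stream's checkpoint. A minimal sketch, with hypothetical paths:

```python
# Sketch: Auto Loader is a Structured Streaming source ("cloudFiles");
# the checkpoint tracks which files have already been ingested.
(spark.readStream
     .format("cloudFiles")
     .option("cloudFiles.format", "json")
     .load("/tmp/landing/")                            # hypothetical path
     .writeStream
     .option("checkpointLocation", "/tmp/chk_loader")  # incremental progress
     .trigger(availableNow=True)
     .toTable("bronze_autoloader"))
```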