Official Exam Questions Flashcards

1
Q

Question 1

Which of the following describes a benefit of a data lakehouse that is unavailable in a traditional data warehouse?

A. A data lakehouse provides a relational system of data management.
B. A data lakehouse captures snapshots of data for version control purposes.
C. A data lakehouse couples storage and compute for complete control.
D. A data lakehouse utilizes proprietary storage formats for data.
E. A data lakehouse enables both batch and streaming analytics.

A

E. A data lakehouse enables both batch and streaming analytics.

2
Q

Question 2

Which of the following locations hosts the driver and worker nodes of a
Databricks-managed cluster?

A. Data plane
B. Control plane
C. Databricks Filesystem
D. JDBC data source
E. Databricks web application

A

A. Data plane

3
Q

Question 3

A data architect is designing a data model that works for both video-based machine learning workloads and highly audited batch ETL/ELT workloads.

Which of the following describes how using a data lakehouse can help the data architect meet the needs of both workloads?

A. A data lakehouse requires very little data modeling.
B. A data lakehouse combines compute and storage for simple governance.
C. A data lakehouse provides autoscaling for compute clusters.
D. A data lakehouse stores unstructured data and is ACID-compliant.
E. A data lakehouse fully exists in the cloud.

A

D. A data lakehouse stores unstructured data and is ACID-compliant.

4
Q

Question 4

Which of the following describes a scenario in which a data engineer will want to use a Job cluster instead of an all-purpose cluster?

A. An ad-hoc analytics report needs to be developed while minimizing compute costs.
B. A data team needs to collaborate on the development of a machine learning model.
C. An automated workflow needs to be run every 30 minutes.
D. A Databricks SQL query needs to be scheduled for upward reporting.
E. A data engineer needs to manually investigate a production error.

A

C. An automated workflow needs to be run every 30 minutes.

5
Q

Question 5

A data engineer has created a Delta table as part of a data pipeline. Downstream data analysts now need SELECT permission on the Delta table.

Assuming the data engineer is the Delta table owner, which part of the Databricks Lakehouse Platform can the data engineer use to grant the data analysts the appropriate access?

A. Repos
B. Jobs
C. Data Explorer
D. Databricks Filesystem
E. Dashboards

A

C. Data Explorer

6
Q

Question 6

Two junior data engineers are authoring separate parts of a single data pipeline notebook. They are working on separate Git branches so they can pair program on the same notebook simultaneously. A senior data engineer experienced in Databricks suggests there is a better
alternative for this type of collaboration.

Which of the following supports the senior data engineer’s claim?

A. Databricks Notebooks support automatic change-tracking and versioning
B. Databricks Notebooks support real-time coauthoring on a single notebook
C. Databricks Notebooks support commenting and notification comments
D. Databricks Notebooks support the use of multiple languages in the same notebook
E. Databricks Notebooks support the creation of interactive data visualizations

A

B. Databricks Notebooks support real-time coauthoring on a single notebook

7
Q

Question 7

Which of the following describes how Databricks Repos can help facilitate CI/CD workflows on the Databricks Lakehouse Platform?

A. Databricks Repos can facilitate the pull request, review, and approval process before
merging branches
B. Databricks Repos can merge changes from a secondary Git branch into a main Git
branch
C. Databricks Repos can be used to design, develop, and trigger Git automation
pipelines
D. Databricks Repos can store the single-source-of-truth Git repository
E. Databricks Repos can commit or push code changes to trigger a CI/CD process

A

E. Databricks Repos can commit or push code changes to trigger a CI/CD process

8
Q

Question 8

Which of the following statements describes Delta Lake?

A. Delta Lake is an open source analytics engine used for big data workloads.
B. Delta Lake is an open format storage layer that delivers reliability, security, and performance.
C. Delta Lake is an open source platform to help manage the complete machine
learning lifecycle.
D. Delta Lake is an open source data storage format for distributed data.
E. Delta Lake is an open format storage layer that processes data.

A

B. Delta Lake is an open format storage layer that delivers reliability, security, and performance.

9
Q

Question 9

A data architect has determined that a table of the following format is necessary:

| id | birthDate  | avgRating |
|----|------------|-----------|
| a1 | 1900-01-06 | 5.5       |
| a2 | 1974-11-21 | 7.1       |

Which of the following code blocks uses SQL DDL commands to create an empty Delta table in the above format regardless of whether a table already exists with this name?

A. CREATE OR REPLACE TABLE table_name AS
SELECT
id STRING,
birthDate DATE,
avgRating FLOAT
USING DELTA

B. CREATE OR REPLACE TABLE table_name (
id STRING,
birthDate DATE,
avgRating FLOAT
)

C. CREATE TABLE IF NOT EXISTS table_name (
id STRING,
birthDate DATE,
avgRating FLOAT
)

D. CREATE TABLE table_name AS
SELECT
id STRING,
birthDate DATE,
avgRating FLOAT

E. CREATE OR REPLACE TABLE table_name WITH COLUMNS (
id STRING,
birthDate DATE,
avgRating FLOAT
) USING DELTA

A

B. CREATE OR REPLACE TABLE table_name (
id STRING,
birthDate DATE,
avgRating FLOAT
)

10
Q

Question 10

Which of the following SQL keywords can be used to append new rows to an existing Delta table?

A. UPDATE
B. COPY
C. INSERT INTO
D. DELETE
E. UNION

A

C. INSERT INTO

11
Q

Question 11

A data engineering team needs to query a Delta table to extract rows that all meet the same condition. However, the team has noticed that the query is running slowly. The team has already tuned the size of the data files. Upon investigating, the team has concluded that the rows meeting the condition are sparsely located throughout each of the data files.

Based on the scenario, which of the following optimization techniques could speed up the query?

A. Data skipping
B. Z-Ordering
C. Bin-packing
D. Write as a Parquet file
E. Tuning the file size

A

B. Z-Ordering
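
For reference, Z-Ordering is applied with the OPTIMIZE command. A minimal sketch in PySpark (the table and column names are hypothetical; `spark` is the SparkSession provided in a Databricks notebook):

# Co-locate rows that share values of the filter column so that data files can be skipped.
# my_delta_table and condition_col are hypothetical placeholders.
spark.sql("OPTIMIZE my_delta_table ZORDER BY (condition_col)")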

12
Q

Question 12

A data engineer needs to create a database called customer360 at the location /customer/customer360. The data engineer is unsure if one of their colleagues has already created the database.

Which of the following commands should the data engineer run to complete this task?

A. CREATE DATABASE customer360 LOCATION ‘/customer/customer360’;
B. CREATE DATABASE IF NOT EXISTS customer360;
C. CREATE DATABASE IF NOT EXISTS customer360 LOCATION
‘/customer/customer360’;
D. CREATE DATABASE IF NOT EXISTS customer360 DELTA LOCATION
‘/customer/customer360’;
E. CREATE DATABASE customer360 DELTA LOCATION
‘/customer/customer360’;

A

C. CREATE DATABASE IF NOT EXISTS customer360 LOCATION
‘/customer/customer360’;

13
Q

Question 13

A junior data engineer needs to create a Spark SQL table my_table for which Spark manages both the data and the metadata. The metadata and data should also be stored in the Databricks Filesystem (DBFS).

Which of the following commands should a senior data engineer share with the junior data engineer to complete this task?

A. CREATE TABLE my_table (id STRING, value STRING) USING
org.apache.spark.sql.parquet OPTIONS (PATH “storage-path”);
B. CREATE MANAGED TABLE my_table (id STRING, value STRING) USING
org.apache.spark.sql.parquet OPTIONS (PATH “storage-path”);
C. CREATE MANAGED TABLE my_table (id STRING, value STRING);
D. CREATE TABLE my_table (id STRING, value STRING) USING DBFS;
E. CREATE TABLE my_table (id STRING, value STRING);

A

E. CREATE TABLE my_table (id STRING, value STRING);

14
Q

Question 14

A data engineer wants to create a relational object by pulling data from two tables. The relational object must be used by other data engineers in other sessions. In order to save on
storage costs, the data engineer wants to avoid copying and storing physical data.

Which of the following relational objects should the data engineer create?

A. View
B. Temporary view
C. Delta Table
D. Database
E. Spark SQL Table

A

A. View

15
Q

Question 15

A data engineering team has created a series of tables using Parquet data stored in an external system. The team is noticing that after appending new rows to the data in the external system, their queries within Databricks are not returning the new rows. They identify
the caching of the previous data as the cause of this issue.

Which of the following approaches will ensure that the data returned by queries is always up-to-date?

A. The tables should be converted to the Delta format
B. The tables should be stored in a cloud-based external system
C. The tables should be refreshed in the writing cluster before the next query is run
D. The tables should be altered to include metadata to not cache
E. The tables should be updated before the next query is run

A

A. The tables should be converted to the Delta format
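
As a sketch of the suggested fix (the table name and path are hypothetical placeholders), an existing Parquet table can be converted in place with CONVERT TO DELTA, after which Databricks no longer relies on stale cached file listings for query results:

# Convert an existing Parquet table, or a Parquet directory, to Delta format.
# my_parquet_table and the path below are hypothetical placeholders.
spark.sql("CONVERT TO DELTA my_parquet_table")
spark.sql("CONVERT TO DELTA parquet.`/mnt/external/sales_parquet`")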

16
Q

Question 16

A table customerLocations exists with the following schema:

id STRING,
date STRING,
city STRING,
country STRING

A senior data engineer wants to create a new table from this table using the following command:

CREATE TABLE customersPerCountry AS
SELECT country,
COUNT(*) AS customers
FROM customerLocations
GROUP BY country;

A junior data engineer asks why the schema is not being declared for the new table.

Which of the following responses explains why declaring the schema is not necessary?

A. CREATE TABLE AS SELECT statements adopt schema details from the source
table and query.
B. CREATE TABLE AS SELECT statements infer the schema by scanning the data.
C. CREATE TABLE AS SELECT statements result in tables where schemas are
optional.
D. CREATE TABLE AS SELECT statements assign all columns the type STRING.
E. CREATE TABLE AS SELECT statements result in tables that do not support
schemas.

A

A. CREATE TABLE AS SELECT statements adopt schema details from the source
table and query.

17
Q

Question 17

A data engineer is overwriting data in a table by deleting the table and recreating the table. Another data
engineer suggests that this is inefficient and the table should simply be overwritten instead.

Which of the following reasons to overwrite the table instead of deleting and recreating the table is incorrect?

A. Overwriting a table is efficient because no files need to be deleted.
B. Overwriting a table results in a clean table history for logging and audit purposes.
C. Overwriting a table maintains the old version of the table for Time Travel.
D. Overwriting a table is an atomic operation and will not leave the table in an
unfinished state.
E. Overwriting a table allows for concurrent queries to be completed while in progress.

A

B. Overwriting a table results in a clean table history for logging and audit purposes.

18
Q

Question 18

Which of the following commands will return records from an existing Delta table my_table where duplicates have been removed?

A. DROP DUPLICATES FROM my_table;
B. SELECT * FROM my_table WHERE duplicate = False;
C. SELECT DISTINCT * FROM my_table;
D. MERGE INTO my_table a USING new_records b ON a.id = b.id WHEN
NOT MATCHED THEN INSERT *;
E. MERGE INTO my_table a USING new_records b;

A

C. SELECT DISTINCT * FROM my_table;

19
Q

Question 19

A data engineer wants to horizontally combine two tables as a part of a query. They want to use a shared column as a key column, and they only want the query result to contain rows
whose value in the key column is present in both tables.

Which of the following SQL commands can they use to accomplish this task?

A. INNER JOIN
B. OUTER JOIN
C. LEFT JOIN
D. MERGE
E. UNION

A

A. INNER JOIN
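
A minimal sketch of such a query, wrapped in PySpark (the table and column names are hypothetical):

# Keep only rows whose key value appears in both tables (inner join semantics).
result_df = spark.sql("""
    SELECT o.*, c.country
    FROM orders o
    INNER JOIN customers c
      ON o.customer_id = c.customer_id
""")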

20
Q

Question 20

A junior data engineer has ingested a JSON file into a table raw_table with the following schema:

cart_id STRING,
items ARRAY<item_id:STRING>

The junior data engineer would like to unnest the items column in raw_table to result in a new table with the following schema:

cart_id STRING,
item_id STRING

Which of the following commands should the junior data engineer run to complete this task?

A. SELECT cart_id, filter(items) AS item_id FROM raw_table;
B. SELECT cart_id, flatten(items) AS item_id FROM raw_table;
C. SELECT cart_id, reduce(items) AS item_id FROM raw_table;
D. SELECT cart_id, explode(items) AS item_id FROM raw_table;
E. SELECT cart_id, slice(items) AS item_id FROM raw_table;

A

D. SELECT cart_id, explode(items) AS item_id FROM raw_table;

21
Q

Question 21

A data engineer has ingested a JSON file into a table raw_table with the following schema:

transaction_id STRING,
payload STRUCT<customer_id:STRING, date:TIMESTAMP, store_id:STRING>

The data engineer wants to efficiently extract the date of each transaction into a table with the following schema:

transaction_id STRING,
date TIMESTAMP

Which of the following commands should the data engineer run to complete this task?

A. SELECT transaction_id, explode(payload) FROM raw_table;
B. SELECT transaction_id, payload.date FROM raw_table;
C. SELECT transaction_id, date FROM raw_table;
D. SELECT transaction_id, payload[date] FROM raw_table;
E. SELECT transaction_id, date from payload FROM raw_table;

A

B. SELECT transaction_id, payload.date FROM raw_table;

22
Q

Question 22

A data analyst has provided a data engineering team with the following Spark SQL query:

SELECT district,
avg(sales)
FROM store_sales_20220101
GROUP BY district;

The data analyst would like the data engineering team to run this query every day. The date at the end of the table name (20220101) should automatically be replaced with the current date each time the query is run.

Which of the following approaches could be used by the data engineering team to efficiently automate this process?

A. They could wrap the query using PySpark and use Python’s string variable system to automatically update the table name.
B. They could manually replace the date within the table name with the current day’s
date.
C. They could request that the data analyst rewrites the query to be run less frequently.
D. They could replace the string-formatted date in the table with a
timestamp-formatted date.
E. They could pass the table into PySpark and develop a robustly tested module on the
existing query

A

A. They could wrap the query using PySpark and use Python’s string variable system to automatically update the table name.
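
A minimal sketch of answer A, assuming the query runs in a Databricks notebook where `spark` is the provided SparkSession:

from datetime import date

# Build today's table name, e.g. store_sales_20220101, then run the analyst's query against it.
table_name = f"store_sales_{date.today().strftime('%Y%m%d')}"
result_df = spark.sql(f"""
    SELECT district, avg(sales)
    FROM {table_name}
    GROUP BY district
""")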

23
Q

Question 23

A data engineer has ingested data from an external source into a PySpark DataFrame raw_df. They need to briefly make this data available in SQL for a data analyst to perform a quality assurance check on the data.

Which of the following commands should the data engineer run to make this data available in SQL for only the remainder of the Spark session?

A. raw_df.createOrReplaceTempView(“raw_df”)
B. raw_df.createTable(“raw_df”)
C. raw_df.write.save(“raw_df”)
D. raw_df.saveAsTable(“raw_df”)
E. There is no way to share data between PySpark and SQL.

A

A. raw_df.createOrReplaceTempView(“raw_df”)
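
For illustration, a temporary view registered this way is queryable from SQL for the rest of the Spark session and disappears when the session ends:

# Register the DataFrame as a session-scoped temporary view...
raw_df.createOrReplaceTempView("raw_df")
# ...then the analyst can query it with SQL (for example from a %sql cell) until the session ends.
spark.sql("SELECT count(*) AS row_count FROM raw_df").show()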

24
Q

Question 24

A data engineer needs to dynamically create a table name string using three Python variables: region, store, and year. An example of a table name is below when region =
“nyc”, store = “100”, and year = “2021”:
nyc100_sales_2021

Which of the following commands should the data engineer use to construct the table name in Python?

A. "{region}+{store}+_sales_+{year}"
B. f"{region}+{store}+_sales_+{year}"
C. "{region}{store}_sales_{year}"
D. f"{region}{store}_sales_{year}"
E. {region}+{store}+"_sales_"+{year}

A

D. f"{region}{store}_sales_{year}"

25
Q

Question 25

A data engineer has developed a code block to perform a streaming read on a data source. The code block is below:

(spark
.read
.schema(schema)
.format("cloudFiles")
.option("cloudFiles.format", "json")
.load(dataSource)
)

The code block is returning an error.
Which of the following changes should be made to the code block to configure the block to successfully perform a streaming read?

A. The .read line should be replaced with .readStream.
B. A new .stream line should be added after the .read line.
C. The .format(“cloudFiles”) line should be replaced with .format(“stream”).
D. A new .stream line should be added after the spark line.
E. A new .stream line should be added after the .load(dataSource) line.

A

A. The .read line should be replaced with .readStream.
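
Applied to the question's code, the corrected streaming read would look like the sketch below (schema and dataSource as defined in the question; the variable name is added only for clarity):

streaming_df = (spark
    .readStream                            # streaming read instead of spark.read
    .schema(schema)
    .format("cloudFiles")                  # Auto Loader source
    .option("cloudFiles.format", "json")
    .load(dataSource)
)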

26
Q

Question 26

A data engineer has configured a Structured Streaming job to read from a table, manipulate the data, and then perform a streaming write into a new table.

The code block used by the data engineer is below:

(spark.table("sales")
.withColumn("avg_price", col("sales") / col("units"))
.writeStream
.option("checkpointLocation", checkpointPath)
.outputMode("complete")
._____
.table("new_sales")
)

If the data engineer only wants the query to execute a single micro-batch to process all of the available data, which of the following lines of code should the data engineer use to fill in
the blank?

A. trigger(once=True)
B. trigger(continuous=”once”)
C. processingTime(“once”)
D. trigger(processingTime=”once”)
E. processingTime(1)

A

A. trigger(once=True)
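
With the blank filled in, the full write would look like the sketch below; trigger(once=True) processes everything that is currently available in a single micro-batch and then stops (names reused from the question):

from pyspark.sql.functions import col

(spark.table("sales")
    .withColumn("avg_price", col("sales") / col("units"))
    .writeStream
    .option("checkpointLocation", checkpointPath)
    .outputMode("complete")
    .trigger(once=True)    # one micro-batch over all available data, then stop
    .table("new_sales")
)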

27
Q

Question 27

A data engineer is designing a data pipeline. The source system generates files in a shared directory that is also used by other processes. As a result, the files should be kept as is and will accumulate in the directory. The data engineer needs to identify which files are new since the previous run in the pipeline, and set up the pipeline to only ingest those new files with each run.

Which of the following tools can the data engineer use to solve this problem?

A. Databricks SQL
B. Delta Lake
C. Unity Catalog
D. Data Explorer
E. Auto Loader

A

E. Auto Loader

28
Q

Question 28

A data engineering team is in the process of converting their existing data pipeline to utilize Auto Loader for incremental processing in the ingestion of JSON files. One data engineer
comes across the following code block in the Auto Loader documentation:

streaming_df = (spark.readStream.format("cloudFiles")
.option("cloudFiles.format", "json")
.option("cloudFiles.schemaLocation", schemaLocation)
.load(sourcePath))

Assuming that schemaLocation and sourcePath have been set correctly, which of the following changes does the data engineer need to make to convert this code block to use Auto Loader to ingest the data?

A. The data engineer needs to change the format(“cloudFiles”) line to
format(“autoLoader”).
B. There is no change required. Databricks automatically uses Auto Loader for streaming reads.
C. There is no change required. The inclusion of format(“cloudFiles”) enables the use of Auto Loader.
D. The data engineer needs to add the .autoLoader line before the .load(sourcePath) line.
E. There is no change required. The data engineer needs to ask their administrator to turn on Auto Loader.

A

C. There is no change required. The inclusion of format(“cloudFiles”) enables the use of Auto Loader.

29
Q

Question 29

Which of the following data workloads will utilize a Bronze table as its source?

A. A job that aggregates cleaned data to create standard summary statistics
B. A job that queries aggregated data to publish key insights into a dashboard
C. A job that ingests raw data from a streaming source into the Lakehouse
D. A job that develops a feature set for a machine learning application
E. A job that enriches data by parsing its timestamps into a human-readable format

A

E. A job that enriches data by parsing its timestamps into a human-readable format

30
Q

Question 30

Which of the following data workloads will utilize a Silver table as its source?

A. A job that enriches data by parsing its timestamps into a human-readable format
B. A job that queries aggregated data that already feeds into a dashboard
C. A job that ingests raw data from a streaming source into the Lakehouse
D. A job that aggregates cleaned data to create standard summary statistics
E. A job that cleans data by removing malformatted records

A

D. A job that aggregates cleaned data to create standard summary statistics

31
Q

Question 31

Which of the following Structured Streaming queries is performing a hop from a Bronze table to a Silver table?

A. (spark.table("sales")
.groupBy("store")
.agg(sum("sales"))
.writeStream
.option("checkpointLocation", checkpointPath)
.outputMode("complete")
.table("aggregatedSales")
)

B. (spark.table("sales")
.agg(sum("sales"),
sum("units"))
.writeStream
.option("checkpointLocation", checkpointPath)
.outputMode("complete")
.table("aggregatedSales")
)

C. (spark.table("sales")
.withColumn("avgPrice", col("sales") / col("units"))
.writeStream
.option("checkpointLocation", checkpointPath)
.outputMode("append")
.table("cleanedSales")
)

D. (spark.readStream.load(rawSalesLocation)
.writeStream
.option("checkpointLocation", checkpointPath)
.outputMode("append")
.table("uncleanedSales")
)

E. (spark.read.load(rawSalesLocation)
.writeStream
.option("checkpointLocation", checkpointPath)
.outputMode("append")
.table("uncleanedSales")
)

A

C. (spark.table("sales")
.withColumn("avgPrice", col("sales") / col("units"))
.writeStream
.option("checkpointLocation", checkpointPath)
.outputMode("append")
.table("cleanedSales")
)

32
Q

Question 32

Which of the following benefits does Delta Live Tables provide for ELT pipelines over standard data pipelines that utilize Spark and Delta Lake on Databricks?

A. The ability to declare and maintain data table dependencies
B. The ability to write pipelines in Python and/or SQL
C. The ability to access previous versions of data tables
D. The ability to automatically scale compute resources
E. The ability to perform batch and streaming queries

A

A. The ability to declare and maintain data table dependencies

33
Q

Question 33

A data engineer has three notebooks in an ELT pipeline. The notebooks need to be executed in a specific order for the pipeline to complete successfully. The data engineer would like to use Delta Live Tables to manage this process.

Which of the following steps must the data engineer take as part of implementing this pipeline using Delta Live Tables?

A. They need to create a Delta Live Tables pipeline from the Data page.
B. They need to create a Delta Live Tables pipeline from the Jobs page.
C. They need to create a Delta Live tables pipeline from the Compute page.
D. They need to refactor their notebook to use Python and the dlt library.
E. They need to refactor their notebook to use SQL and CREATE LIVE TABLE keyword.

A

B. They need to create a Delta Live Tables pipeline from the Jobs page.

34
Q

Question 34

A data engineer has written the following query:

SELECT *
FROM json.`/path/to/json/file.json`;

The data engineer asks a colleague for help to convert this query for use in a Delta Live Tables (DLT) pipeline. The query should create the first table in the DLT pipeline.

Which of the following describes the change the colleague needs to make to the query?

A. They need to add a COMMENT line at the beginning of the query.
B. They need to add a CREATE LIVE TABLE table_name AS line at the beginning of the query.
C. They need to add a live. prefix prior to json. in the FROM line.
D. They need to add a CREATE DELTA LIVE TABLE table_name AS line at the
beginning of the query.
E. They need to add the cloud_files(…) wrapper to the JSON file path.

A

B. They need to add a CREATE LIVE TABLE table_name AS line at the beginning of the query.
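
For comparison only, the same first table could also be declared with the Python dlt API instead of the SQL CREATE LIVE TABLE form described in answer B (the table name below is a hypothetical placeholder):

import dlt

@dlt.table(name="raw_json_data")   # hypothetical table name
def raw_json_data():
    # Python equivalent of: CREATE LIVE TABLE raw_json_data AS SELECT * FROM json.`/path/to/json/file.json`
    return spark.read.format("json").load("/path/to/json/file.json")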

35
Q

Question 35

A dataset has been defined using Delta Live Tables and includes an expectations clause:

CONSTRAINT valid_timestamp EXPECT (timestamp > '2020-01-01')

What is the expected behavior when a batch of data containing data that violates these constraints is processed?

A. Records that violate the expectation are added to the target dataset and recorded as invalid in the event log.
B. Records that violate the expectation are dropped from the target dataset and
recorded as invalid in the event log.
C. Records that violate the expectation cause the job to fail.
D. Records that violate the expectation are added to the target dataset and flagged as
invalid in a field added to the target dataset.
E. Records that violate the expectation are dropped from the target dataset and loaded into a quarantine table.

A

A. Records that violate the expectation are added to the target dataset and recorded as invalid in the event log.
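
In the Python dlt API, the behavior in answer A (keep the violating rows but record them in the event log) corresponds to dlt.expect; dlt.expect_or_drop and dlt.expect_or_fail give the drop and fail behaviors instead. A minimal sketch with hypothetical dataset names:

import dlt

@dlt.table
@dlt.expect("valid_timestamp", "timestamp > '2020-01-01'")   # violating rows are kept and logged
def events_validated():
    return dlt.read("events_raw")   # hypothetical upstream dataset in the same pipeline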

36
Q

Question 36

A Delta Live Table pipeline includes two datasets defined using STREAMING LIVE TABLE. Three datasets are defined against Delta Lake table sources using LIVE TABLE.

The pipeline is configured to run in Development mode using the Triggered Pipeline Mode.

Assuming previously unprocessed data exists and all definitions are valid, what is the expected outcome after clicking Start to update the pipeline?

A. All datasets will be updated once and the pipeline will shut down. The compute resources will be terminated.
B. All datasets will be updated at set intervals until the pipeline is shut down. The compute resources will be deployed for the update and terminated when the
pipeline is stopped.
C. All datasets will be updated at set intervals until the pipeline is shut down. The compute resources will persist after the pipeline is stopped to allow for additional testing.
D. All datasets will be updated once and the pipeline will shut down. The compute resources will persist to allow for additional testing.
E. All datasets will be updated continuously and the pipeline will not shut down. The compute resources will persist with the pipeline.

A

D. All datasets will be updated once and the pipeline will shut down. The compute resources will persist to allow for additional testing.

Bing Copilot:

The correct answer is D. All datasets will be updated once and the pipeline will shut down. The compute resources will persist to allow for additional testing.

According to the documentation, a Delta Live Tables pipeline can run in two modes: triggered or continuous. In triggered mode, the pipeline runs once and then stops. In continuous mode, the pipeline runs continuously until it is manually stopped or encounters an error. The pipeline mode can be specified in the pipeline configuration or in the UI when creating or editing a pipeline.

When a pipeline runs in development mode, the cluster used for the update is not terminated automatically when the update finishes. Instead, it is kept running and reused, which lets the user test and debug the pipeline iteratively without waiting for a new cluster to start on every run.

Therefore, if a Delta Live Tables pipeline is configured to run in development mode and triggered mode, it will update all datasets once and then shut down, and it will use interactive clusters that will persist after the pipeline is stopped to allow for additional testing.

37
Q

Question 37

A data engineer has a Job with multiple tasks that runs nightly. One of the tasks
unexpectedly fails during 10 percent of the runs.

Which of the following actions can the data engineer perform to ensure the Job completes each night while minimizing compute costs?

A. They can institute a retry policy for the entire Job
B. They can observe the task as it runs to try and determine why it is failing
C. They can set up the Job to run multiple times ensuring that at least one will
complete
D. They can institute a retry policy for the task that periodically fails
E. They can utilize a Jobs cluster for each of the tasks in the Job

A

D. They can institute a retry policy for the task that periodically fails

38
Q

Question 38

A data engineer has set up two Jobs that each run nightly. The first Job starts at 12:00 AM, and it usually completes in about 20 minutes. The second Job depends on the first Job, and it starts at 12:30 AM. Sometimes, the second Job
fails when the first Job does not complete by 12:30 AM.

Which of the following approaches can the data engineer use to avoid this problem?

A. They can utilize multiple tasks in a single job with a linear dependency
B. They can use cluster pools to help the Jobs run more efficiently
C. They can set up a retry policy on the first Job to help it run more quickly
D. They can limit the size of the output in the second Job so that it will not fail as easily
E. They can set up the data to stream from the first Job to the second Job

A

A. They can utilize multiple tasks in a single job with a linear dependency

39
Q

Question 39

A data engineer has set up a notebook to automatically process using a Job. The data engineer’s manager wants to version control the schedule due to its complexity.

Which of the following approaches can the data engineer use to obtain a version-controllable configuration of the Job’s schedule?

A. They can link the Job to notebooks that are a part of a Databricks Repo.
B. They can submit the Job once on a Job cluster.
C. They can download the JSON description of the Job from the Job’s page.
D. They can submit the Job once on an all-purpose cluster.
E. They can download the XML description of the Job from the Job’s page.

A

C. They can download the JSON description of the Job from the Job’s page.

40
Q

Question 40

A data analyst has noticed that their Databricks SQL queries are running too slowly. They claim that this issue is affecting all of their sequentially run queries. They ask the data
engineering team for help. The data engineering team notices that each of the queries uses the same SQL endpoint, but the SQL endpoint is not used by any other user.

Which of the following approaches can the data engineering team use to improve the latency of the data analyst’s queries?

A. They can turn on the Serverless feature for the SQL endpoint.
B. They can increase the maximum bound of the SQL endpoint’s scaling range.
C. They can increase the cluster size of the SQL endpoint.
D. They can turn on the Auto Stop feature for the SQL endpoint.
E. They can turn on the Serverless feature for the SQL endpoint and change the Spot
Instance Policy to “Reliability Optimized.”

A

C. They can increase the cluster size of the SQL endpoint.

41
Q

Question 41

An engineering manager uses a Databricks SQL query to monitor their team’s progress on fixes related to customer-reported bugs. The manager checks the results of the query every day, but they are manually rerunning the query each day and waiting for the results.

Which of the following approaches can the manager use to ensure the results of the query are updated each day?

A. They can schedule the query to run every 1 day from the Jobs UI.
B. They can schedule the query to refresh every 1 day from the query’s page in Databricks SQL.
C. They can schedule the query to run every 12 hours from the Jobs UI.
D. They can schedule the query to refresh every 1 day from the SQL endpoint’s page in Databricks SQL.
E. They can schedule the query to refresh every 12 hours from the SQL endpoint’s page in Databricks SQL.

A

B. They can schedule the query to refresh every 1 day from the query’s page in Databricks SQL.

42
Q

Question 42

A data engineering team has been using a Databricks SQL query to monitor the
performance of an ELT job. The ELT job is triggered when a specific number of input records are ready to process. The Databricks SQL query returns the number of minutes since the job's most recent runtime.

Which of the following approaches can enable the data engineering team to be notified if the ELT job has not been run in an hour?

A. They can set up an Alert for the accompanying dashboard to notify them if the returned value is greater than 60.
B. They can set up an Alert for the query to notify when the ELT job fails.
C. They can set up an Alert for the accompanying dashboard to notify when it has not refreshed in 60 minutes.
D. They can set up an Alert for the query to notify them if the returned value is greater than 60.
E. This type of alerting is not possible in Databricks.

A

D. They can set up an Alert for the query to notify them if the returned value is greater than 60.

43
Q

Question 43

A data engineering manager has noticed that each of the queries in a Databricks SQL dashboard takes a few minutes to update when they manually click the “Refresh” button. They are curious why this might be occurring, so a team member provides a variety of
reasons on why the delay might be occurring.

Which of the following reasons fails to explain why the dashboard might be taking a few minutes to update?

A. The SQL endpoint being used by each of the queries might need a few minutes to start up.
B. The queries attached to the dashboard might take a few minutes to run under normal circumstances.
C. The queries attached to the dashboard might first be checking to determine if new data is available.
D. The Job associated with updating the dashboard might be using a non-pooled
endpoint.
E. The queries attached to the dashboard might all be connected to their own, unstarted Databricks clusters.

A

D. The Job associated with updating the dashboard might be using a non-pooled
endpoint.

44
Q

Question 44

A new data engineer has started at a company. The data engineer has recently been added to the company’s Databricks workspace as new.engineer@company.com. The data
engineer needs to be able to query the table sales in the database retail. The new data engineer already has been granted USAGE on the database retail.

Which of the following commands can be used to grant the appropriate permissions to the
new data engineer?

A. GRANT USAGE ON TABLE sales TO new.engineer@company.com;
B. GRANT CREATE ON TABLE sales TO new.engineer@company.com;
C. GRANT SELECT ON TABLE sales TO new.engineer@company.com;
D. GRANT USAGE ON TABLE new.engineer@company.com TO sales;
E. GRANT SELECT ON TABLE new.engineer@company.com TO sales;

A

C. GRANT SELECT ON TABLE sales TO new.engineer@company.com;

45
Q

Question 45

A new data engineer
new.engineer@company.com has been assigned to an ELT project. The new data engineer will need full privileges on the table sales to fully manage the project.

Which of the following commands can be used to grant full permissions on the table to the new data engineer?

A. GRANT ALL PRIVILEGES ON TABLE sales TO new.engineer@company.com;
B. GRANT USAGE ON TABLE sales TO new.engineer@company.com;
C. GRANT ALL PRIVILEGES ON TABLE new.engineer@company.com TO
sales;
D. GRANT SELECT ON TABLE sales TO new.engineer@company.com;
E. GRANT SELECT CREATE MODIFY ON TABLE sales TO
new.engineer@company.com;

A

A. GRANT ALL PRIVILEGES ON TABLE sales TO new.engineer@company.com;

46
Q

True or False

Is it possible to write a SQL query directly against a file or a directory of files in Databricks?

A

True. Using the following syntax, you can run a query directly against a file or a directory of files:

SELECT * FROM file_format.`/path/to/file`

The file path, which must be wrapped in backticks, can point to a single file or to a directory.

File format examples include json, csv, and parquet.

47
Q

Which of the following statements would read from a JSON file and filter for records where country = "SWE"?

A. SELECT * FROM json.`wasbs://some_account/some_container/countrydata.json` WHERE country = "SWE"

B. Create table as select * from
(wasbs://some_account/some_container/countrydata.json).filter("country = 'SWE'")

C. From (create table using json, location =
wasbs://some_account/some_container/countrydata.json) as table1, select * where country = 'swe'

A

A. SELECT * FROM json.`wasbs://some_account/some_container/countrydata.json` WHERE country = "SWE"

DatabricksCertifiedAssociateDataEngineerExam.pdf

48
Q

When reading directly from a file in SQL, how does Spark determine the schema?

A. The schema must be supplied
B. The schema is inferred
C. The default schema of _c0 STRING, _c1 STRING, _c2 STRING, ... is always used

A

B. The schema is inferred

When reading from a CSV file, the file's header row is used for the column names.

If the file is JSON, the JSON is parsed to determine the schema.

When reading from Parquet, the schema is taken from the Parquet file's own metadata.

49
Q

You are tasked with reading user data in which a small but significant percentage of dates are formatted incorrectly and, when parsed, end up in the future. What strategy might you employ to avoid reading those records?

  1. Add a check constraint to the table: ADD CONSTRAINT not_in_future CHECK (date <= current_date())
  2. SELECT * FROM source WHERE date <= current_date()
  3. Use a Foreign Key constraint
  4. Quarantine the source table
A

Correct answers 1,2

Discussion:

Check constraints can be added to Delta tables to enforce rules that can be expressed as a SQL expression.

A filter as described in answer 2 would also work.

Delta does not currently enforce foreign keys; besides, it is hard to imagine how they would prevent date-format issues.

Quarantining the source table would prevent any of the records from being read, instead of just those with incorrectly formatted dates.
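
A minimal sketch of both correct approaches, using a hypothetical table name; Delta enforces the check constraint on every future write, while the filter simply excludes the bad rows at read time:

# Answer 1: reject future-dated rows at write time (user_events is a hypothetical table name).
spark.sql("ALTER TABLE user_events ADD CONSTRAINT not_in_future CHECK (date <= current_date())")

# Answer 2: filter out future-dated rows when reading.
clean_df = spark.sql("SELECT * FROM user_events WHERE date <= current_date()")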

50
Q

Comments can be added as informational fields to which of the following?

A. Databases(also known as schemas)
B. Tables
C. Columns
D. All of the above

A

Discussion:

Correct answer: D, all of the above. Comments can be added to tables, columns, and databases (schemas).

51
Q

Cloning Delta Tables

Which of the following statements is correct:

Definitions used in this question:
"source" = the table to be cloned
"clone" = a table created using CREATE TABLE table_name DEEP|SHALLOW CLONE source

A. Modifying the clone may conflict with writes in progress on the source.
B. Time travel on the clone is available to versions of the source created before the clone was
created
C. Delta tables with constraints can not be cloned
D. Modification of the clone will never lead to data change on the source

A

Answer = D

No operation on the clone will affect the source: it will not conflict with writes in progress on the source, and constraints on the source will also exist on the clone.
Time travel on the clone, however, is limited to the version that existed at the time the clone was created and to any later changes made to the clone itself, including incremental applications of a deep clone, which only copy new data over from the source.
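
A minimal sketch of both clone types (table names are hypothetical): a deep clone copies the data files, while a shallow clone copies only the metadata and keeps referencing the source's data files.

# Deep clone: an independent copy of data and metadata.
spark.sql("CREATE TABLE sales_deep_clone DEEP CLONE sales")
# Shallow clone: metadata only; cheap to create, but reads still point at the source's data files.
spark.sql("CREATE TABLE sales_shallow_clone SHALLOW CLONE sales")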

52
Q

You have two tables, one is a delta table named conveniently enough as “delta_table” and the other is a parquet table named once again quite descriptively as parquet_table. Some error in ETL upstream has led to
source_table having zero records, when it is supposed to have new records generated daily.

If I run the following statements.

Insert overwrite delta_table select * from source_table;
Insert overwrite parquet_table select * from source_table;

Which statement below is correct?

A. Both tables can be restored using "RESTORE TABLE table_name VERSION AS OF <previous version>"
B. Both tables, delta_table and parquet_table, have been completely deleted, with no option to restore
C. The current version of the delta table is a full replacement of the previous version, but it can be recovered
through time travel or a restore statement
D. If the table is an external table, the data is recoverable for the parquet table

A

Answer C.

The Delta table can be recovered. The Parquet table could only be recovered if it was stored in a
location that was backed up in some way. Whether the table is external or managed makes
no difference in this case.
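
A minimal sketch of the recovery described in answer C; the version number below is illustrative and would be read from the table history first:

# Inspect the table history to find the version written just before the bad overwrite.
spark.sql("DESCRIBE HISTORY delta_table").show(truncate=False)

# Query the previous version via time travel...
previous_df = spark.sql("SELECT * FROM delta_table VERSION AS OF 1")

# ...or restore the table in place to that version.
spark.sql("RESTORE TABLE delta_table TO VERSION AS OF 1")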

53
Q

A data engineer has developed a code block to perform a streaming read on a data source. The
code is below:

(spark
.read
.schema(schema)
.format("cloudFiles")
.option("cloudFiles.format", "json")
.load(dataSource)
)

The code is returning an error.

Which of the following changes should be made to the code block to configure it to successfully perform a streaming read?

A. The .read line should be replaced with .readStream.
B. A new .stream line should be added after the .read line.
C. The .format(“cloudFiles”) line should be replaced with .format(“stream”).
D. A new .stream line should be added after the spark line.
E. A new .stream line should be added after the .load(dataSource) line.

A

A. The .read line should be replaced with .readStream.

54
Q

A data engineer has three notebooks in an ELT pipeline. The notebooks need to be executed in a specific order for the pipeline to complete successfully. The data engineer would like to use Delta Live Tables to manage this process.

Which of the following steps must the data engineer take as part of implementing this pipeline using Delta Live Tables?

A. They need to create a Delta Live Tables pipeline from the Data page.
B. They need to create a Delta Live Tables pipeline from the Jobs page.
C. They need to refactor their notebook to use Python and the dlt library.
D. They need to refactor their notebook to use SQL and CREATE LIVE TABLE keyword.

A

B is the closest.

Since DLT pipelines can be written in either SQL or Python, C and D are excluded as the single required step.

That said, the notebooks do have to be refactored to use either Python with the dlt library or SQL with the CREATE LIVE TABLE statement. In the current UI, the pipeline is created from the Workflows page, under the "Delta Live Tables" tab.

55
Q

You have written a notebook to generate a summary data set for reporting. The notebook was scheduled
using a job cluster, but you realized it takes 8 minutes to start the cluster. What feature can be used to start the cluster in a timely fashion so your job can run immediately?

A. Setup an additional job to run ahead of the actual job so the cluster is running when the second
job starts
B. Use the Databricks cluster pool feature to reduce the startup time
C. Use Databricks Premium Edition instead of Databricks Standard Edition
D. Pin the cluster in the Cluster UI page so it is always available to the jobs
E. Disable auto termination so the cluster is always running.

A

B. Use the Databricks cluster pool feature to reduce the startup time

Cluster pools allow us to reserve VMs ahead of time; when a new job cluster is created, its VMs are grabbed from the pool, which cuts the startup time.

Note: while the VMs sit idle in the pool waiting to be used by a cluster, the only cost incurred is the cloud provider's (e.g., Azure) VM cost. The Databricks runtime (DBU) cost is only billed once a VM is allocated to a cluster.

56
Q

Which of the following approaches can the data engineer use to obtain a version-controllable configuration of the Job’s schedule and configuration?

A. They can link the job to notebooks that are a part of a Databricks Repo
B. They can submit the job once on a Job Cluster
C. They can download the JSON equivalent of the job from the Job’s page
D. They can submit the Job once on a All-Purpose Cluster
E. They can download the XML description of the job from the Job’s Page

A

C. They can download the JSON equivalent of the job from the Job’s page

57
Q

A data analyst has noticed that their Databricks SQL queries are running too slowly. They claim that
this issue is affecting all of their sequentially run queries. They ask the data engineering team for help. The data engineering team notices that each of the queries uses the same SQL endpoint, but the SQL endpoint is not used by any other user.

Which of the following approaches can the data engineering team use to improve the latency of the
data analyst’s queries?

A. They can turn on the Serverless feature for the SQL endpoint.
B. They can increase the maximum bound of the SQL endpoint’s scaling range.
C. They can increase the cluster size of the SQL endpoint.
D. They can turn on the Auto Stop feature for the SQL endpoint.
E. They can turn on the Serverless feature for the SQL endpoint and change the Spot
Instance Policy to “Reliability Optimized.”

A

C. They can increase the cluster size of the SQL endpoint.

58
Q

An engineering manager uses a Databricks SQL query to monitor their team’s progress on fixes related to customer-reported bugs. The manager checks the results of the query every day, but they are manually rerunning the query each day and waiting for the results.

Which of the following approaches can the manager use to ensure the results of the query are
updated each day?

A. They can schedule the query to run every 1 day from the Jobs UI.
B. They can schedule the query to refresh every 1 day from the query’s page in Databricks SQL.
C. They can schedule the query to run every 12 hours from the Jobs UI.
D. They can schedule the query to refresh every 1 day from the SQL endpoint’s page in Databricks SQL.
E. They can schedule the query to refresh every 12 hours from the SQL endpoint’s page in Databricks SQL.

A

B. They can schedule the query to refresh every 1 day from the query’s page in Databricks SQL.

59
Q

The data engineering team has provided 10 queries and asked the data analyst team to build a dashboard and refresh its data every day at 8 AM. Which of the following is the best approach to set up the data refresh for this dashboard?

A. Each query requires a separate task and setup 10 tasks under a single job to run at 8 AM to refresh the dashboard
B. The entire dashboard with 10 queries can be refreshed at once, single schedule needs to be setup to refresh at 8 AM.
C. Setup Job with Linear Dependency to load all 10 queries into a table so the dashboard can be refreshed at once.
D. A Dashboard can only refresh one query at a time, 10 schedules to set up the refresh.
E. Use Incremental refresh to run at 8 AM every day

A

B. The entire dashboard with 10 queries can be refreshed at once, single schedule needs to be setup to refresh at 8 AM.

60
Q

What can you do with the Data Explorer?
A. Navigate databases, tables, and views
B. Explore data schemas, metadata, and history
C. Set and modify permissions
D. All of the above

A

D. All of the above

Bing Copilot:

You can do all of the above with the data explorer in Databricks. The data explorer is a feature that allows you to explore, visualize, and manage data in tables. You can use the data explorer to:

  • Navigate databases, tables, views, and other data objects in the Catalog Explorer.
  • Explore data schemas, metadata, history, sample data, and details in the Schema, Details, History, and Sample Data tabs.
  • Set and modify permissions, owners, external locations, and storage credentials for data objects in the Permissions, Owner, External Location, and Storage Credentials tabs.

The data explorer is a powerful tool for data discovery and management in Databricks. For more information on how to use the data explorer, see Discover and manage data and other AI assets using Catalog Explorer.

61
Q

The permissions of the following objects can be configured:

A. CATALOG, DATABASE, TABLE, VIEW, FUNCTION, ANY FILE
B. CATALOG, DATABASE, TABLE, VIEW, FUNCTION
C. CATALOG, DATABASE
D. DATABASE, TABLE, VIEW, FUNCTION

A

B. CATALOG, DATABASE, TABLE, VIEW, FUNCTION

62
Q

The MODIFY permission gives the ability to:

A. Add, delete and modify
B. Modify
C. Modify and delete
D. Modify and Add

A

A. Add, delete and modify

That is, on a table, MODIFY gives the ability to INSERT, UPDATE, and DELETE rows.

63
Q

The USAGE permission gives

A. Ability to Add, delete and modify
B. No ability, it is an additional requirement to perform any action on a database object
C. Modify and delete
D. Modify and Add

A

“USAGE: does not give any abilities, but is an additional requirement to perform any action on a schema object.”

“USAGE privilege

To perform an action on a schema object in the Hive metastore, a user must have the USAGE privilege on that schema in addition to the privilege to perform that action. Any one of the following satisfies the USAGE requirement:

  • Be a workspace admin
  • Have the USAGE privilege on the schema or be in a group that has the USAGE privilege on the schema
  • Have the USAGE privilege on the CATALOG or be in a group that has the USAGE privilege
  • Be the owner of the schema or be in a group that owns the schema

Even the owner of an object inside a schema must have the USAGE privilege in order to use it.”

(https://docs.databricks.com/en/data-governance/table-acls/object-privileges.html)
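
For illustration, granting query access in the Hive metastore therefore typically pairs USAGE on the schema with SELECT on the table; this sketch reuses the names from question 44:

# USAGE on the schema is a prerequisite; SELECT grants the actual ability to read the table.
spark.sql("GRANT USAGE ON SCHEMA retail TO `new.engineer@company.com`")
spark.sql("GRANT SELECT ON TABLE retail.sales TO `new.engineer@company.com`")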
