ExamTopics Flashcards
Question #5
Which of the following describes the storage organization of a Delta table?
A. Delta tables are stored in a single file that contains data, history, metadata, and other attributes.
B. Delta tables store their data in a single file and all metadata in a collection of files in a separate location.
C. Delta tables are stored in a collection of files that contain data, history, metadata, and other attributes.
D. Delta tables are stored in a collection of files that contain only the data stored within the table.
E. Delta tables are stored in a single file that contains only the data stored within the table.
C. Delta tables are stored in a collection of files that contain data, history, metadata, and other attributes.
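For illustration, a minimal notebook sketch (Python) of this layout; it assumes an existing Delta table named my_table, which is hypothetical:

# Look up the table's storage location, then list the files that make up the table.
location = spark.sql("DESCRIBE DETAIL my_table").first()["location"]
display(dbutils.fs.ls(location))                   # data files (Parquet)
display(dbutils.fs.ls(location + "/_delta_log"))   # transaction log: history, metadata, and other attributes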
Which of the following Git operations must be performed outside of Databricks Repos?
A. Commit
B. Pull
C. Push
D. Clone
E. Merge
Outdated question: all of these operations can now be performed within Databricks Repos. When the question was written, E. Merge was the correct answer.
According to https://learn.microsoft.com/en-us/azure/databricks/repos/:
In Databricks Repos, you can use Git functionality to:
- Clone, push to, and pull from a remote Git repository.
- Create and manage branches for development work, including merging, rebasing, and resolving conflicts.
- Create notebooks—including IPYNB notebooks—and edit them and other files.
- Visually compare differences upon commit and resolve merge conflicts.
A data engineer has configured a Structured Streaming job to read from a table, manipulate the data, and then perform a streaming write into a new table.
The code block used by the data engineer is below:
(spark.table("sales")
    .withColumn("avg_price", col("sales") / col("units"))
    .writeStream
    .option("checkpointLocation", checkpointPath)
    .outputMode("complete")
    .____
    .table("new_sales")
)
If the data engineer only wants the query to execute a micro-batch to process data every 5 seconds, which of the following lines of code should the data engineer use to fill in the blank?
A. trigger("5 seconds")
B. trigger()
C. trigger(once="5 seconds")
D. trigger(processingTime="5 seconds")
E. trigger(continuous="5 seconds")
D. trigger(processingTime="5 seconds")
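For illustration, a sketch of the completed query (assuming a Delta table sales and a checkpointPath variable; the source is read with readStream so that writeStream is valid, and append mode is shown because complete mode requires an aggregation):

from pyspark.sql.functions import col

(spark.readStream.table("sales")
    .withColumn("avg_price", col("sales") / col("units"))
    .writeStream
    .option("checkpointLocation", checkpointPath)
    .outputMode("append")
    .trigger(processingTime="5 seconds")   # execute one micro-batch every 5 seconds
    .table("new_sales")
)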
A dataset has been defined using Delta Live Tables and includes an expectations clause:
CONSTRAINT valid_timestamp EXPECT (timestamp > '2020-01-01') ON VIOLATION DROP ROW
What is the expected behavior when a batch of data containing data that violates these constraints is processed?
A. Records that violate the expectation are dropped from the target dataset and loaded into a quarantine table.
B. Records that violate the expectation are added to the target dataset and flagged as invalid in a field added to the target dataset.
C. Records that violate the expectation are dropped from the target dataset and recorded as invalid in the event log.
D. Records that violate the expectation are added to the target dataset and recorded as invalid in the event log.
E. Records that violate the expectation cause the job to fail.
C. Records that violate the expectation are dropped from the target dataset and recorded as invalid in the event log.
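For comparison, a hedged sketch of the same expectation declared with the Python DLT API (the source dataset name raw_events is hypothetical):

import dlt

@dlt.table
@dlt.expect_or_drop("valid_timestamp", "timestamp > '2020-01-01'")
def events_clean():
    # Rows failing the expectation are dropped from the target table and
    # counted as invalid in the pipeline's event log / data quality metrics.
    return dlt.read_stream("raw_events")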
Which of the following describes when to use the CREATE STREAMING LIVE TABLE (formerly CREATE INCREMENTAL LIVE TABLE) syntax over the CREATE LIVE TABLE syntax when creating Delta Live Tables (DLT) tables using SQL?
A. CREATE STREAMING LIVE TABLE should be used when the subsequent step in the DLT pipeline is static.
B. CREATE STREAMING LIVE TABLE should be used when data needs to be processed incrementally.
C. CREATE STREAMING LIVE TABLE is redundant for DLT and it does not need to be used.
D. CREATE STREAMING LIVE TABLE should be used when data needs to be processed through complicated aggregations.
E. CREATE STREAMING LIVE TABLE should be used when the previous step in the DLT pipeline is static.
B. CREATE STREAMING LIVE TABLE should be used when data needs to be processed incrementally.
Bing Copilot:
The correct answer is B. CREATE STREAMING LIVE TABLE should be used when data needs to be processed incrementally. According to the documentation, streaming tables allow you to process a growing dataset, handling each row only once. Streaming tables are optimal for pipelines that require data freshness and low latency. Streaming tables can also be useful for massive scale transformations, as results can be incrementally calculated as new data arrives, keeping results up to date without needing to fully recompute all source data with each update.
The other options are incorrect for the following reasons:
A. CREATE STREAMING LIVE TABLE should be used when the subsequent step in the DLT pipeline is static. This is not true, as streaming tables can be followed by either static or streaming steps in the pipeline. The choice of streaming or static depends on the nature of the data source and the desired latency of the results.
C. CREATE STREAMING LIVE TABLE is redundant for DLT and it does not need to be used. This is not true, as streaming tables are a distinct type of dataset in DLT that have different processing semantics than static tables. Streaming tables are useful for handling append-only sources that need to be processed in real-time.
D. CREATE STREAMING LIVE TABLE should be used when data needs to be processed through complicated aggregations. This is not true, as streaming tables are not suitable for data sources that have updates, deletions, or aggregations. For these cases, materialized views should be used instead, as they can handle complex transformations and maintain data consistency.
E. CREATE STREAMING LIVE TABLE should be used when the previous step in the DLT pipeline is static. This is not true, as streaming tables can be preceded by either static or streaming steps in the pipeline. The choice of streaming or static depends on the nature of the data source and the desired latency of the results.
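For illustration, a hedged Python-API sketch of the same distinction (dataset names are hypothetical): a streaming table processes each new row once, while a live table is fully recomputed on each update.

import dlt

# Incremental processing: Python equivalent of CREATE STREAMING LIVE TABLE.
@dlt.table
def orders_bronze():
    return dlt.read_stream("orders_raw")

# Full recompute on each update: Python equivalent of CREATE LIVE TABLE.
@dlt.table
def orders_by_status():
    return dlt.read("orders_bronze").groupBy("status").count()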
A data engineer is designing a data pipeline. The source system generates files in a shared directory that is also used by other processes. As a result, the files should be kept as is and will accumulate in the directory. The data engineer needs to identify which files are new since the previous run in the pipeline, and set up the pipeline to only ingest those new files with each run.
Which of the following tools can the data engineer use to solve this problem?
A. Unity Catalog
B. Delta Lake
C. Databricks SQL
D. Data Explorer
E. Auto Loader
E. Auto Loader
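For illustration, a minimal Auto Loader sketch (paths and table names are hypothetical); the checkpoint tracks which files have already been ingested, so each run picks up only the new ones:

(spark.readStream
    .format("cloudFiles")                                     # Auto Loader source
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/tmp/schemas/sales")
    .load("/mnt/landing/shared_dir/")                         # shared directory where files accumulate
    .writeStream
    .option("checkpointLocation", "/tmp/checkpoints/sales")   # remembers already-processed files
    .trigger(availableNow=True)                               # ingest new files, then stop
    .table("sales_bronze")
)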
Which of the following Structured Streaming queries is performing a hop from a Silver table to a Gold table?
A
(spark.readStream.load(rawSalesLocation)
    .writeStream
    .option("checkpointLocation", checkpointPath)
    .outputMode("append")
    .table("newSales")
)
B
(spark.read.load(rawSalesLocation)
    .writeStream
    .option("checkpointLocation", checkpointPath)
    .outputMode("append")
    .table("newSales")
)
C
(spark.table("sales")
    .withColumn("avgPrice", col("sales") / col("units"))
    .writeStream
    .option("checkpointLocation", checkpointPath)
    .outputMode("append")
    .table("newSales")
)
D
(spark.table("sales")
    .filter(col("units") > 0)
    .writeStream
    .option("checkpointLocation", checkpointPath)
    .outputMode("append")
    .table("newSales")
)
E
(spark.table("sales")
    .groupBy("store")
    .agg(sum("sales"))
    .writeStream
    .option("checkpointLocation", checkpointPath)
    .outputMode("complete")
    .table("newSales")
)
E. The query performs an aggregation (total sales by store), which is characteristic of a Silver-to-Gold hop, and uses the complete output mode to rewrite the whole result table rather than appending to it.
A data engineer has three tables in a Delta Live Tables (DLT) pipeline. They have configured the pipeline to drop invalid records at each table. They notice that some data is being dropped due to quality concerns at some point in the DLT pipeline. They would like to determine at which table in their pipeline the data is being dropped.
Which of the following approaches can the data engineer take to identify the table that is dropping the records?
A. They can set up separate expectations for each table when developing their DLT pipeline.
B. They cannot determine which table is dropping the records.
C. They can set up DLT to notify them via email when records are dropped.
D. They can navigate to the DLT pipeline page, click on each table, and view the data quality statistics.
E. They can navigate to the DLT pipeline page, click on the “Error” button, and review the present errors.
D. They can navigate to the DLT pipeline page, click on each table, and view the data quality statistics.
To identify the table in a Delta Live Tables (DLT) pipeline where data is being dropped due to quality concerns, the data engineer can navigate to the DLT pipeline page, click on each table in the pipeline, and view the data quality statistics. These statistics often include information about records dropped, violations of expectations, and other data quality metrics. By examining the data quality statistics for each table in the pipeline, the data engineer can determine at which table the data is being dropped.
A data engineer has a single-task Job that runs each morning before they begin working. After identifying an upstream data issue, they need to set up another task to run a new notebook prior to the original task.
Which of the following approaches can the data engineer use to set up the new task?
A. They can clone the existing task in the existing Job and update it to run the new notebook.
B. They can create a new task in the existing Job and then add it as a dependency of the original task.
C. They can create a new task in the existing Job and then add the original task as a dependency of the new task.
D. They can create a new job from scratch and add both tasks to run concurrently.
E. They can clone the existing task to a new Job and then edit it to run the new notebook.
B. They can create a new task in the existing Job and then add it as a dependency of the original task.
Making the new task a dependency of the original task means the new task must complete before the original task runs.
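For illustration, a hedged sketch of the relevant part of a Jobs API 2.1 job definition (task keys and notebook paths are hypothetical); the original task lists the new task under depends_on, so the new notebook runs first:

job_settings = {
    "name": "morning-job",
    "tasks": [
        {
            "task_key": "new_upstream_fix",
            "notebook_task": {"notebook_path": "/Repos/etl/fix_upstream_issue"},
        },
        {
            "task_key": "original_task",
            "notebook_task": {"notebook_path": "/Repos/etl/morning_notebook"},
            "depends_on": [{"task_key": "new_upstream_fix"}],  # runs only after the new task succeeds
        },
    ],
}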
An engineering manager wants to monitor the performance of a recent project using a Databricks SQL query. For the first week following the project’s release, the manager wants the query results to be updated every minute. However, the manager is concerned that the compute resources used for the query will be left running and cost the organization a lot of money beyond the first week of the project’s release.
Which of the following approaches can the engineering team use to ensure the query does not cost the organization any money beyond the first week of the project’s release?
A. They can set a limit to the number of DBUs that are consumed by the SQL Endpoint.
B. They can set the query’s refresh schedule to end after a certain number of refreshes.
C. They cannot ensure the query does not cost the organization money beyond the first week of the project’s release.
D. They can set a limit to the number of individuals that are able to manage the query’s refresh schedule.
E. They can set the query’s refresh schedule to end on a certain date in the query scheduler.
C. They cannot ensure the query does not cost the organization money beyond the first week of the project’s release.
A data analysis team has noticed that their Databricks SQL queries are running too slowly when connected to their always-on SQL endpoint. They claim that this issue is present when many members of the team are running small queries simultaneously. They ask the data engineering team for help. The data engineering team notices that each of the team’s queries uses the same SQL endpoint.
Which of the following approaches can the data engineering team use to improve the latency of the team’s queries?
A. They can increase the cluster size of the SQL endpoint.
B. They can increase the maximum bound of the SQL endpoint’s scaling range.
C. They can turn on the Auto Stop feature for the SQL endpoint.
D. They can turn on the Serverless feature for the SQL endpoint.
E. They can turn on the Serverless feature for the SQL endpoint and change the Spot Instance Policy to “Reliability Optimized.”
A. They can increase the cluster size of the SQL endpoint.
(This is similar to question 40 in the official exam dump.)
A data engineer wants to schedule their Databricks SQL dashboard to refresh once per day, but they only want the associated SQL endpoint to be running when it is necessary.
Which of the following approaches can the data engineer use to minimize the total running time of the SQL endpoint used in the refresh schedule of their dashboard?
A. They can ensure the dashboard’s SQL endpoint matches each of the queries’ SQL endpoints.
B. They can set up the dashboard’s SQL endpoint to be serverless.
C. They can turn on the Auto Stop feature for the SQL endpoint.
D. They can reduce the cluster size of the SQL endpoint.
E. They can ensure the dashboard’s SQL endpoint is not one of the included query’s SQL endpoint.
C. They can turn on the Auto Stop feature for the SQL endpoint.
Bing Copilot agrees:
The correct answer is C. They can turn on the Auto Stop feature for the SQL endpoint.
According to the documentation, the Auto Stop feature determines whether the SQL endpoint stops if it's idle for the specified number of minutes. Idle SQL endpoints continue to accumulate DBU and cloud instance charges until they are stopped. The default is 120 minutes; set it to 0 to disable auto stop.
The other options are not valid because:
A. Ensuring the dashboard’s SQL endpoint matches each of the queries’ SQL endpoints will not affect the running time of the SQL endpoint, but only ensure consistency and avoid errors.
B. Setting up the dashboard’s SQL endpoint to be serverless will not minimize the running time of the SQL endpoint, but only enable it to scale up and down automatically based on demand.
D. Reducing the cluster size of the SQL endpoint will not minimize the running time of the SQL endpoint, but only affect its performance and capacity.
E. Ensuring the dashboard’s SQL endpoint is not one of the included query’s SQL endpoint will not minimize the running time of the SQL endpoint, but only create unnecessary complexity and confusion.
“Warehouse settings
[…]
- Auto Stop determines whether the warehouse stops if it’s idle for the specified number of minutes. Idle SQL warehouses continue to accumulate DBU and cloud instance charges until they are stopped.
– Pro and classic SQL warehouses: The default is 45 minutes, which is recommended for typical use. The minimum is 10 minutes.
– Serverless SQL warehouses: The default is 10 minutes, which is recommended for typical use. The minimum is 5 minutes when you use the UI. Note that you can create a serverless SQL warehouse using the SQL warehouses API, in which case you can set the Auto Stop value as low as 1 minute.”
https://docs.databricks.com/en/sql/admin/create-sql-warehouse.html
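For illustration, a hedged sketch of a SQL warehouse definition with Auto Stop enabled (SQL Warehouses API; the name and size are illustrative):

warehouse_settings = {
    "name": "dashboard-refresh-warehouse",
    "cluster_size": "Small",
    "auto_stop_mins": 10,   # stop after 10 idle minutes so the warehouse only runs around the scheduled refresh
}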
A data engineer has been using a Databricks SQL dashboard to monitor the cleanliness of the input data to an ELT job. The ELT job has a Databricks SQL query that returns the number of input records containing unexpected NULL values. The data engineer wants their entire team to be notified via a messaging webhook whenever this value reaches 100.
Which of the following approaches can the data engineer use to notify their entire team via a messaging webhook whenever the number of NULL values reaches 100?
A. They can set up an Alert with a custom template.
B. They can set up an Alert with a new email alert destination.
C. They can set up an Alert with a new webhook alert destination.
D. They can set up an Alert with one-time notifications.
E. They can set up an Alert without notifications.
C. They can set up an Alert with a new webhook alert destination.
A single Job runs two notebooks as two separate tasks. A data engineer has noticed that one of the notebooks is running slowly in the Job’s current run. The data engineer asks a tech lead for help in identifying why this might be the case.
Which of the following approaches can the tech lead use to identify why the notebook is running slowly as part of the Job?
A. They can navigate to the Runs tab in the Jobs UI to immediately review the processing notebook.
B. They can navigate to the Tasks tab in the Jobs UI and click on the active run to review the processing notebook.
C. They can navigate to the Runs tab in the Jobs UI and click on the active run to review the processing notebook.
D. There is no way to determine why a Job task is running slowly.
E. They can navigate to the Tasks tab in the Jobs UI to immediately review the processing notebook.
C. They can navigate to the Runs tab in the Jobs UI and click on the active run to review the processing notebook.
A data engineer has a Job with multiple tasks that runs nightly. Each of the tasks runs slowly because the clusters take a long time to start.
Which of the following actions can the data engineer perform to improve the start up time for the clusters used for the Job?
A. They can use endpoints available in Databricks SQL
B. They can use jobs clusters instead of all-purpose clusters
C. They can configure the clusters to be single-node
D. They can use clusters that are from a cluster pool
E. They can configure the clusters to autoscale for larger data sizes
D. They can use clusters that are from a cluster pool
Bing Copilot:
The correct answer is D. They can use clusters that are from a cluster pool.
According to the documentation, Databricks pools are a set of idle, ready-to-use instances. When cluster nodes are created using the idle instances, cluster start and auto-scaling times are reduced. If the pool has no idle instances, the pool expands by allocating a new instance from the instance provider in order to accommodate the cluster's request.
The other options are not valid because:
A. Using endpoints available in Databricks SQL will not improve the start up time for the clusters used for the Job, but only enable the data engineer to run SQL queries and visualize results in a web-based interface.
B. Using jobs clusters instead of all-purpose clusters will not improve the start up time for the clusters used for the Job, but only allow the data engineer to run scheduled or triggered tasks on isolated clusters.
C. Configuring the clusters to be single-node will not improve the start up time for the clusters used for the Job, but only limit the scalability and performance of the clusters.
E. Configuring the clusters to autoscale for larger data sizes will not improve the start up time for the clusters used for the Job, but only enable the clusters to dynamically add or remove nodes based on the workload.
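For illustration, a hedged sketch of a job cluster specification that draws its nodes from a pool (the pool ID and runtime version are hypothetical):

job_cluster_spec = {
    "spark_version": "13.3.x-scala2.12",
    "num_workers": 4,
    "instance_pool_id": "pool-0123456789abcdef",  # nodes come from idle pool instances, reducing start-up time
}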
A new data engineering team, with the group name team, has been assigned to an ELT project. The new data engineering team will need full privileges on the database customers to fully manage the project.
Which of the following commands can be used to grant full permissions on the database to the new data engineering team?
A. GRANT USAGE ON DATABASE customers TO team;
B. GRANT ALL PRIVILEGES ON DATABASE team TO customers;
C. GRANT SELECT PRIVILEGES ON DATABASE customers TO teams;
D. GRANT SELECT CREATE MODIFY USAGE PRIVILEGES ON DATABASE customers TO team;
E. GRANT ALL PRIVILEGES ON DATABASE customers TO team;
E. GRANT ALL PRIVILEGES ON DATABASE customers TO team;
A new data engineering team has been assigned to work on a project. The team will need access to database customers in order to see what tables already exist. The team has its own group team.
Which of the following commands can be used to grant the necessary permission on the entire database to the new team?
A. GRANT VIEW ON CATALOG customers TO team;
B. GRANT CREATE ON DATABASE customers TO team;
C. GRANT USAGE ON CATALOG team TO customers;
D. GRANT CREATE ON DATABASE team TO customers;
E. GRANT USAGE ON DATABASE customers TO team;
E. GRANT USAGE ON DATABASE customers TO team;
Bing Copilot:
The correct answer is E. GRANT USAGE ON DATABASE customers TO team;
According to the documentation, the USAGE privilege allows a principal to access an object. For example, to access a table, the principal must have the USAGE privilege on the database and schema that contain the table, as well as the SELECT privilege on the table.
The other options are not valid because:
A. GRANT VIEW ON CATALOG customers TO team; is not a valid syntax, as there is no VIEW privilege or CATALOG object in Databricks SQL.
B. GRANT CREATE ON DATABASE customers TO team; will grant the team the privilege to create new objects in the customers database, but not to see what tables already exist.
C. GRANT USAGE ON CATALOG team TO customers; is not a valid syntax, as there is no CATALOG object in Databricks SQL. Moreover, this command would grant the customers database the privilege to access the team group, which is not the intended outcome.
D. GRANT CREATE ON DATABASE team TO customers; will grant the customers database the privilege to create new objects in the team group, which is not a valid securable object or the intended outcome.
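For illustration, a hedged sketch of the combination described above, run from a notebook (the table name is illustrative; the group name is wrapped in backticks):

spark.sql("GRANT USAGE ON DATABASE customers TO `team`")
spark.sql("GRANT SELECT ON TABLE customers.orders TO `team`")  # additionally needed to read a specific table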
A data engineer is running code in a Databricks Repo that is cloned from a central Git repository. A colleague of the data engineer informs them that changes have been made and synced to the central Git repository. The data engineer now needs to sync their Databricks Repo to get the changes from the central Git repository.
Which of the following Git operations does the data engineer need to run to accomplish this task?
A. Merge
B. Push
C. Pull
D. Commit
E. Clone
C. Pull
Which of the following is a benefit of the Databricks Lakehouse Platform embracing open source technologies?
A. Cloud-specific integrations
B. Simplified governance
C. Ability to scale storage
D. Ability to scale workloads
E. Avoiding vendor lock-in
E. Avoiding vendor lock-in
A data engineer needs to use a Delta table as part of a data pipeline, but they do not know if they have the appropriate permissions.
In which of the following locations can the data engineer review their permissions on the table?
A. Databricks Filesystem
B. Jobs
C. Dashboards
D. Repos
E. Data Explorer
E. Data Explorer
Which of the following describes a scenario in which a data engineer will want to use a single-node cluster?
A. When they are working interactively with a small amount of data
B. When they are running automated reports to be refreshed as quickly as possible
C. When they are working with SQL within Databricks SQL
D. When they are concerned about the ability to automatically scale with larger data
E. When they are manually running reports with a large amount of data
A. When they are working interactively with a small amount of data
A data engineer has been given a new record of data:
id STRING = 'a1'
rank INTEGER = 6
rating FLOAT = 9.4
Which of the following SQL commands can be used to append the new record to an existing Delta table my_table?
A. INSERT INTO my_table VALUES ('a1', 6, 9.4)
B. my_table UNION VALUES ('a1', 6, 9.4)
C. INSERT VALUES ('a1', 6, 9.4) INTO my_table
D. UPDATE my_table VALUES ('a1', 6, 9.4)
E. UPDATE VALUES ('a1', 6, 9.4) my_table
A. INSERT INTO my_table VALUES ('a1', 6, 9.4)
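For illustration, an equivalent append via the DataFrame API (a sketch; assumes my_table already exists with columns id, rank, and rating):

from pyspark.sql.types import StructType, StructField, StringType, IntegerType, FloatType

schema = StructType([
    StructField("id", StringType()),
    StructField("rank", IntegerType()),
    StructField("rating", FloatType()),
])
# Append the single new record to the existing Delta table.
spark.createDataFrame([("a1", 6, 9.4)], schema).write.mode("append").saveAsTable("my_table")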
A data engineer has realized that the data files associated with a Delta table are incredibly small. They want to compact the small files to form larger files to improve performance.
Which of the following keywords can be used to compact the small files?
A. REDUCE
B. OPTIMIZE
C. COMPACTION
D. REPARTITION
E. VACUUM
B. OPTIMIZE
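For illustration, a short sketch (table and column names are hypothetical); the optional ZORDER BY clause co-locates related data within the compacted files:

spark.sql("OPTIMIZE my_table")
spark.sql("OPTIMIZE my_table ZORDER BY (customer_id)")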
In which of the following file formats is data from Delta Lake tables primarily stored?
A. Delta
B. CSV
C. Parquet
D. JSON
E. A proprietary, optimized format specific to Databricks
C. Parquet