Build Data Pipelines with Delta Live Tables and Spark SQL (Databricks Q) Flashcards

1
Q

Which of the following correctly describes how code from one library notebook can be referenced by code from another library notebook? Select one response.

Within a DLT Pipeline, code in a notebook library can reference tables and views created in another notebook library that is running on the same cluster.

Within a DLT Pipeline, code in notebook libraries cannot reference tables and views created in a different notebook library.

Within a DLT Pipeline, code in a notebook library can reference tables and views created in another notebook library as long as one notebook library references the other notebook library.

Within a DLT Pipeline, code in any notebook library can reference tables and views created in any other notebook library.

Within a DLT Pipeline, code in a notebook library can reference tables and views created in another notebook library as long as the referenced notebook library is installed on the other notebook library’s cluster.

A

Within a DLT Pipeline, code in any notebook library can reference tables and views created in any other notebook library.

2
Q

A data engineer needs to add a file path to their DLT pipeline. They want to use the file path throughout the pipeline as a parameter for various statements and functions.

Which of the following options can be specified during the configuration of a DLT pipeline in order to allow this? Select one response.

They can add a parameter when scheduling the pipeline job and then perform a variable substitution of the file path.

They can specify the file path in the job scheduler when deploying the pipeline.

They can set the variable in a notebook command and then perform a variable substitution of the file path.

They can add a key-value pair in the Configurations field and then perform a string substitution of the file path.

They can add a widget to the notebook and then perform a string substitution of the file path.

A

They can add a key-value pair in the Configurations field and then perform a string substitution of the file path.
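
For reference, a minimal sketch of that pattern. It assumes a hypothetical configuration key source_path added in the pipeline's Configurations field; the key name, path, and table name are illustrative, not part of the question:

-- Pipeline settings > Configurations:
--   key:   source_path
--   value: /mnt/raw/loans
-- The key is then referenced in the notebook via string substitution:
CREATE OR REFRESH STREAMING LIVE TABLE loans_raw
AS SELECT * FROM cloud_files("${source_path}", "json");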

3
Q

A data engineer needs to ensure the table updated_history, which is derived from the table history, contains all records from history. Each record in both tables contains a value for the column user_id.

Which of the following approaches can the data engineer use to create a new data object from updated_history and history containing the records with matching user_id values in both tables? Select one response.

The data engineer can create a new common table expression from the history table that queries the updated_history table.

The data engineer can merge the history and updated_history tables on user_id.

The data engineer can create a new view by joining the history and updated_history tables.

The data engineer can create a new temporary view by querying the history and updated_history tables.

The data engineer can create a new dynamic view by querying the history and updated_history tables.

A

The data engineer can create a new view by joining the history and updated_history tables.
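
A minimal sketch of that approach in Spark SQL, keeping the table and column names from the question (the view name is illustrative):

-- The inner join keeps only records whose user_id appears in both tables.
CREATE OR REPLACE VIEW history_matched AS
SELECT h.*
FROM history AS h
INNER JOIN updated_history AS u
  ON h.user_id = u.user_id;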

4
Q

A data engineer has a Delta Live Tables (DLT) pipeline that uses a change data capture (CDC) data source. They need to write a quality enforcement rule that ensures that values in the column operation do not contain null values. If the constraint is violated, the associated records cannot be included in the dataset.

Which of the following constraints does the data engineer need to use to enforce this rule? Select one response.

CONSTRAINT valid_operation EXPECT (operation IS NOT NULL)

CONSTRAINT valid_operation EXCEPT (operation) ON VIOLATION DROP ROW

CONSTRAINT valid_operation EXCEPT (operation != null) ON VIOLATION FAIL UPDATE

CONSTRAINT valid_operation EXPECT (operation IS NOT NULL) ON VIOLATION DROP ROW

CONSTRAINT valid_operation ON VIOLATION FAIL UPDATE

A

CONSTRAINT valid_operation EXPECT (operation IS NOT NULL) ON VIOLATION DROP ROW
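
In context, the expectation is declared as part of the table definition so that violating records are dropped before they reach the target; the table and source names below are illustrative:

CREATE OR REFRESH STREAMING LIVE TABLE operations_clean
(CONSTRAINT valid_operation EXPECT (operation IS NOT NULL) ON VIOLATION DROP ROW)
AS SELECT * FROM STREAM(LIVE.operations_bronze);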

5
Q

A data engineer needs to review the events related to their pipeline and the pipeline’s configurations.

Which of the following approaches can the data engineer take to accomplish this? Select one response.

The data engineer can select events of type user_action in the output table of the pipeline.

The data engineer can query events of type user_action from the checkpoint directory.

The data engineer can query events of type user_action from the event log.

The data engineer can query events of type user_action from the configured storage location.

The data engineer can select events of type user_action in the resultant DAG.

A

The data engineer can query events of type user_action from the event log.
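
A sketch of that query. The event log is stored as a Delta table under the pipeline's storage location (the path below is illustrative), and the user_action details typically include the action and the user name:

SELECT timestamp,
       details:user_action:action,
       details:user_action:user_name
FROM delta.`/pipelines/loans/system/events`
WHERE event_type = 'user_action';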

6
Q

A data engineer is configuring a new DLT pipeline and is unsure what mode to choose. They are working with a small batch of unchanging data and need to minimize the costs associated with the pipeline.

Which of the following modes do they need to use and why? Select one response.

Continuous; continuous pipelines run at set intervals and then shut down until the next manual or scheduled update.

Triggered; triggered pipelines update once and cannot be updated again until they are manually run.

Continuous; continuous pipelines ingest new data as it arrives.

Triggered; triggered pipelines update once and cannot be updated again for 24 hours.

Triggered; triggered pipelines run once and then shut down until the next manual or scheduled update.

A

Triggered; triggered pipelines run once and then shut down until the next manual or scheduled update.

7
Q

Which of the following are guaranteed when processing a change data capture (CDC) feed with APPLY CHANGES INTO? Select three responses.

APPLY CHANGES INTO automatically orders late-arriving records using a user-provided sequencing key.

APPLY CHANGES INTO supports insert-only and append-only data.

APPLY CHANGES INTO defaults to creating a Type 1 SCD table.

APPLY CHANGES INTO automatically quarantines late-arriving data in a separate table.

APPLY CHANGES INTO assumes by default that rows will contain inserts and updates.

A

APPLY CHANGES INTO automatically orders late-arriving records using a user-provided sequencing key.

APPLY CHANGES INTO defaults to creating a Type 1 SCD table.

APPLY CHANGES INTO assumes by default that rows will contain inserts and updates.
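
A sketch of APPLY CHANGES INTO that illustrates those guarantees; the table, key, and sequencing column names are illustrative. SEQUENCE BY supplies the user-provided key used to order late-arriving records, and the default output is a Type 1 SCD table:

CREATE OR REFRESH STREAMING LIVE TABLE customers_silver;

APPLY CHANGES INTO LIVE.customers_silver
FROM STREAM(LIVE.customers_bronze)
KEYS (customer_id)
APPLY AS DELETE WHEN operation = "DELETE"
SEQUENCE BY sequence_num
COLUMNS * EXCEPT (operation, sequence_num);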

8
Q

A data engineer needs to query a Delta Live Table (DLT) in a notebook. The notebook is not attached to a DLT pipeline.

Which of the following correctly describes the form of results that the query returns? Select one response.

Queries outside of DLT will return snapshot results from DLT tables only if they were defined as a streaming table.

Queries outside of DLT will return the most recent version from DLT tables only if they were defined as a streaming table.

Queries outside of DLT will return the most recent version from DLT tables, regardless of how they were defined.

Queries outside of DLT will return snapshot results from DLT tables, regardless of how they were defined.

Live queries outside of DLT will return snapshot results from DLT tables only if they were defined as a batch table.

A

Queries outside of DLT will return snapshot results from DLT tables, regardless of how they were defined.

9
Q

Which of the following data quality metrics are captured through row_expectations in a pipeline’s event log? Select three responses.

Dataset
Failed records
Update ID
Name
Flow progress

A

Dataset
Failed records
Name
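
A sketch of the documented pattern for pulling those metrics out of the event log. It assumes an event_log_raw view over the pipeline's event log (see the later event-log questions); the dataset, name, and failed_records fields come from the flow_progress data quality details:

SELECT row_expectations.dataset,
       row_expectations.name,
       SUM(row_expectations.failed_records) AS failed_records
FROM (
  SELECT explode(
           from_json(details:flow_progress:data_quality:expectations,
                     "array<struct<name: string, dataset: string, passed_records: int, failed_records: int>>")
         ) AS row_expectations
  FROM event_log_raw
  WHERE event_type = 'flow_progress'
)
GROUP BY row_expectations.dataset, row_expectations.name;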

10
Q

A data engineer has built and deployed a DLT pipeline. They want to see the output for each individual task.

Which of the following describes how to explore the output for each task in the pipeline? Select one response.

They can go to the Job Runs page and click on the individual tables in the job run history.

They can display the output for each individual command from within the notebook using the %run command.

They can go to the Pipeline Details page and click on the individual tables in the resultant Directed Acyclic Graph (DAG).

They can run the commands connected to each task from within the DLT notebook after deploying the pipeline.

They can specify a folder for the task run details during pipeline configuration.

A

They can go to the Pipeline Details page and click on the individual tables in the resultant Directed Acyclic Graph (DAG).

11
Q

A data engineer needs to identify the cloud provider and region of origin for each event within their DLT pipeline.

Which of the following approaches allows the data engineer to view this information? Select one response.

The data engineer can load the contents of the event log into a view and display the view.

The data engineer can view this information in the Task Details page for each task in the pipeline.

The data engineer can use a utility command in Python to list information about each update made to a particular data object.

The data engineer can use a SELECT query to directly query the cloud_details field of the event.

The data engineer can view the event details for the pipeline from the resultant Directed Acyclic Graph (DAG).

A

The data engineer can load the contents of the event log into a view and display the view.
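
A sketch of that approach, assuming the pipeline's storage location (the path is illustrative). In the event log schema, the origin struct carries the cloud provider and region for each event:

CREATE OR REPLACE TEMP VIEW event_log_raw AS
SELECT * FROM delta.`/pipelines/loans/system/events`;

SELECT origin.cloud, origin.region, event_type, timestamp
FROM event_log_raw;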

12
Q

A data engineer has created the following query to create a streaming live table from transactions.

Code block:

CREATE OR REFRESH STREAMING LIVE TABLE transactions

AS SELECT timestamp(transaction_timestamp) AS transaction_timestamp, * EXCEPT (transaction_timestamp, source)

________________________

Which of the following lines of code correctly fills in the blank? Select two responses.

FROM DELTA STREAM(LIVE.transactions)
FROM LIVE.transactions
FROM STREAMING LIVE (transactions)
FROM STREAM(LIVE.transactions)
FROM STREAMING LIVE.transactions

A

FROM LIVE.transactions

FROM STREAM(LIVE.transactions)
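
For reference, a filled-in sketch using the FROM STREAM(LIVE.…) option; the target table is renamed transactions_cleaned here purely for illustration, to keep the source and target distinct:

CREATE OR REFRESH STREAMING LIVE TABLE transactions_cleaned
AS SELECT timestamp(transaction_timestamp) AS transaction_timestamp,
          * EXCEPT (transaction_timestamp, source)
FROM STREAM(LIVE.transactions);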

13
Q

Which of the following statements accurately describes the difference in behavior between live views and live tables? Select one response.

Metrics for live tables can be collected and reported, while data quality metrics for views are abstracted to the user.

The results of live tables are stored to disk, while the results of views can only be referenced from within the DLT pipeline in which they are defined.

The results of live tables can be viewed through a Directed Acyclic Graph (DAG), while the results for live views cannot.

Live tables can be used with a stream as its source, while live views are incompatible with structured streaming.

Live tables can be used to enforce data quality, while views do not have the same guarantees in schema enforcement.

A

The results of live tables are stored to disk, while the results of views can only be referenced from within the DLT pipeline in which they are defined.

14
Q

A data engineer is running a Delta Live Tables (DLT) notebook. They notice that several commands display the following message:

This Delta Live Tables query is syntactically valid, but you must create a pipeline in order to define and populate your table.

Which of the following statements explains this message? Select one response.

DLT is not intended for interactive execution in a notebook.

DLT notebooks must be run at scheduled intervals using the job scheduler.

DLT does not support the execution of Python commands.

DLT does not support the execution of Python and SQL notebooks within a single pipeline.

DLT queries must be connected to a pipeline using the pipeline scheduler.

A

DLT is not intended for interactive execution in a notebook.

15
Q

A data engineer is creating a streaming live table to be used by other members of their team. They want to indicate that the table contains silver quality data.

Which of the following describes how the data engineer can clarify this to other members of their team? Select two responses.

EXPECT QUALITY = SILVER
COMMENT "This is a silver table"
TBLPROPERTIES ("quality" = "silver")
None of these answer choices are correct.
WHEN QUALITY = SILVER THEN PASS

A

COMMENT "This is a silver table"
TBLPROPERTIES ("quality" = "silver")
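
Both can be attached directly to the table definition; the table name and source below are illustrative:

CREATE OR REFRESH STREAMING LIVE TABLE transactions_silver
COMMENT "This is a silver table"
TBLPROPERTIES ("quality" = "silver")
AS SELECT * FROM STREAM(LIVE.transactions_bronze);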

16
Q

A data engineer has configured and deployed a DLT pipeline that contains an error. The error is thrown at the third stage of the pipeline, but because DLT itself resolves the order in which tables are processed, they are not sure whether the first stage succeeded.

Which of the following is considered a good practice to determine this? Select one response.

The data engineer can fix the tables using iterative logic, starting at their earliest dataset.

The data engineer can fix the tables from the Directed Acyclic Graph (DAG), starting at their earliest dataset.

The data engineer can fix the tables from the Directed Acyclic Graph (DAG), starting at the dataset containing the error.

The data engineer can fix the tables using iterative logic, starting at the dataset containing the error.

The data engineer can fix one table at a time, starting at their earliest dataset.

A

The data engineer can fix one table at a time, starting at their earliest dataset.

17
Q

Which of the following correctly describes how Auto Loader ingests data? Select one response.

Auto Loader automatically detects new data files during manual or scheduled updates.

Auto Loader automatically writes new data files continuously as they land.

Auto Loader incrementally ingests new data files in batches.

Auto Loader only detects new data files during scheduled update intervals.

Auto Loader writes new data files in incremental batches.

A

Auto Loader incrementally ingests new data files in batches.
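
In DLT SQL, Auto Loader is invoked through cloud_files(); a minimal sketch with an illustrative landing path and format:

CREATE OR REFRESH STREAMING LIVE TABLE orders_raw
AS SELECT * FROM cloud_files("/mnt/landing/orders", "json");

Because Auto Loader tracks which files have already been ingested, each update only picks up newly arrived files in incremental batches.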

18
Q

A data engineer has built and deployed a DLT pipeline. They want to perform an update that writes a batch of data to the output directory.

Which of the following statements about performing this update is true? Select one response.

With each triggered update, all newly arriving data will be processed through their pipeline. Metrics will not be reported for the current run.

All newly arriving data will be continuously processed through their pipeline. Metrics will be reported for the current run if specified during pipeline deployment.

With each triggered update, all newly arriving data will be processed through their pipeline. Metrics will always be reported for the current run.

With each triggered update, all newly arriving data will be processed through their pipeline. Metrics will be reported if specified during pipeline deployment.

All newly arriving data will be continuously processed through their pipeline. Metrics will always be reported for the current run.

A

With each triggered update, all newly arriving data will be processed through their pipeline. Metrics will always be reported for the current run.

19
Q

A data engineer is using the code below to create a new table transactions_silver from the table transactions_bronze. However, when running the code, an error is thrown.

CREATE OR REFRESH STREAMING LIVE TABLE transactions_silver

(CONSTRAINT valid_date EXPECT (order_timestamp > "2022-01-01") ON VIOLATION DROP ROW)

FROM LIVE.transactions_bronze

Which of the following statements correctly identifies the error and the stage at which the error was thrown? Select one response.

LIVE.transactions_bronze needs to be changed to STREAM(LIVE.transactions_bronze). The error will be detected during the Setting Up Tables stage.

The EXPECT statement needs to be changed to EXPECT (order_timestamp is NOT NULL). The error will be detected during the Setting Up Tables stage.

A SELECT statement needs to be added to create the columns for the transactions_silver table. The error will be detected during the Setting Up Tables stage.

A SELECT statement needs to be added to create the columns for the transactions_silver table. The error will be detected during the Initializing stage.

LIVE.transactions_bronze needs to be changed to STREAM(LIVE.transactions_bronze). The error will be detected during the Initializing stage.

A

A SELECT statement needs to be added to create the columns for the transactions_silver table. The error will be detected during the Setting Up Tables stage.
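
For reference, the statement with the missing SELECT added, per the answer; the rest of the query is kept as written in the question:

CREATE OR REFRESH STREAMING LIVE TABLE transactions_silver
(CONSTRAINT valid_date EXPECT (order_timestamp > "2022-01-01") ON VIOLATION DROP ROW)
AS SELECT * FROM LIVE.transactions_bronze;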

20
Q

Which of the following correctly describes how to access contents of the table directory? Select one response.

The contents of the table directory can be viewed through the event log.

The contents of the table directory can be viewed through the flow definition’s output dataset.

The contents of the table directory can be viewed through the Auto Loader directory.

The contents of the table directory can be viewed through the checkpointing directory.

The contents of the table directory can be viewed through the metastore.

A

The contents of the table directory can be viewed through the metastore.

21
Q

A data engineer needs to examine how data is flowing through tables within their pipeline.

Which of the following correctly describes how they can accomplish this? Select one response.

The data engineer can view the flow definition of each table in the pipeline from the Pipeline Events log.

The data engineer can query the flow definition for each table and then combine the results.

The data engineer can query the flow definition for the direct predecessor of each table and then combine the results.

The data engineer can query the flow definition for the direct successor of the table and then combine the results.

The data engineer can combine the flow definitions for all of the tables into one query.

A

The data engineer can query the flow definition for the direct predecessor of each table and then combine the results.

22
Q

A data engineer wants to query metrics on the latest update made to their pipeline. They need to be able to see the event type and timestamp for each update.

Which of the following approaches allows the data engineer to complete this task? Select one response.

The data engineer can query the update ID from the events log where the event type is create_update.

The data engineer can query the update ID from the events log where the event type is last_update.

The data engineer can view the update ID from the Pipeline Details page in the create_update table.

The data engineer can query the update ID from the events log where the action type is user_action.

The data engineer can view the update ID from the Pipeline Details page in the user_action table.

A

The data engineer can query the update ID from the events log where the event type is create_update.
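
A sketch of that query, again assuming an event_log_raw view over the pipeline's event log; ordering by timestamp surfaces the latest update:

SELECT origin.update_id, event_type, timestamp
FROM event_log_raw
WHERE event_type = 'create_update'
ORDER BY timestamp DESC
LIMIT 1;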

23
Q

Which of the following are advantages of using a Delta Live Tables (DLT) pipeline over a traditional ETL pipeline in Databricks? Select two responses.

DLT provides granular observability into pipeline operations and automatic error handling.

DLT automates data management through physical data optimizations and schema evolution.

DLT has built-in quality controls and data quality monitoring.

DLT decouples compute and storage costs regardless of scale.

DLT leverages additional metadata over other open source formats such as JSON, CSV, and Parquet.

A

DLT provides granular observability into pipeline operations and automatic error handling.

DLT has built-in quality controls and data quality monitoring.

24
Q

A data engineer wants to query metrics on the latest update made to their pipeline. The pipeline has multiple data sources. Despite the input data sources having low data retention, the data engineer needs to retain the results of the query indefinitely.

Which of the following statements identifies the type of table that needs to be used and why? Select one response.

Streaming live table; streaming live tables record live metrics on the query.

Live table; live tables only support reading from “append-only” streaming sources.

Streaming live table; streaming live tables are always “correct”, meaning their contents will match their definition after any update.

Live table; live tables retain the results of a query for up to 30 days.

Streaming live table; streaming live tables can preserve data indefinitely.

A

Streaming live table; streaming live tables can preserve data indefinitely.

25
Q

A data engineer has a Delta Live Tables (DLT) pipeline that uses a change data capture (CDC) data source. They need to write a quality enforcement rule that ensures that records containing the values INSERT or UPDATE in the operation column cannot contain a null value in the name column. The operation column can contain one of three values: INSERT, UPDATE, and DELETE. If the constraint is violated, then the entire transaction needs to fail.

Which of the following constraints can the data engineer use to enforce this rule? Select one response.

CONSTRAINT valid_name EXPECT (name IS NOT NULL or operation = "DELETE") ON VIOLATION FAIL UPDATE

CONSTRAINT valid_operation EXPECT (operation IS NOT NULL) ON VIOLATION DROP ROW

CONSTRAINT valid_operation EXPECT (valid_operation IS NOT NULL and valid_operation = "INSERT") ON VIOLATION DROP ROW

CONSTRAINT valid_name EXPECT (name IS NOT NULL or operation = "DELETE") ON VIOLATION DROP ROW

CONSTRAINT valid_id_not_null EXPECT (valid_id IS NOT NULL or operation = "INSERT") ON VIOLATION FAIL UPDATE

A

CONSTRAINT valid_name EXPECT (name IS NOT NULL or operation = "DELETE") ON VIOLATION FAIL UPDATE
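
In context, the constraint fails the entire update when an INSERT or UPDATE record arrives with a null name; the table and source names below are illustrative:

CREATE OR REFRESH STREAMING LIVE TABLE customers_clean
(CONSTRAINT valid_name EXPECT (name IS NOT NULL or operation = "DELETE") ON VIOLATION FAIL UPDATE)
AS SELECT * FROM STREAM(LIVE.customers_cdc);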