SET - 3 Flashcards
(50 cards)
A data engineer has realized that the data files associated with a Delta table are incredibly small. They want to compact the small files to form larger files to improve performance.
Which keyword can be used to compact the small files?
A. OPTIMIZE
B. VACUUM
C. COMPACTION
D. REPARTITION
A. OPTIMIZE
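A minimal sketch of the answer, assuming a Delta table named sales (hypothetical name):

```sql
-- Compact small files into larger ones (Databricks targets roughly 1 GB per file by default)
OPTIMIZE sales;

-- Optionally co-locate related records while compacting (ZORDER BY is optional)
OPTIMIZE sales ZORDER BY (customer_id);
```

VACUUM, by contrast, removes files no longer referenced by the table; it does not compact them.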
A data engineer runs a statement every day to copy the previous day’s sales into the table transactions. Each day’s sales are in their own file in the location "/transactions/raw".
Today, the data engineer runs the following command to complete this task:
After running the command today, the data engineer notices that the number of records in table transactions has not changed.
What explains why the statement might not have copied any new records into the table?
A. The format of the files to be copied was not included with the FORMAT_OPTIONS keyword.
B. The COPY INTO statement requires the table to be refreshed to view the copied rows.
C. The previous day’s file has already been copied into the table.
D. The PARQUET file format does not support COPY INTO.
C. The previous day’s file has already been copied into the table.
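COPY INTO is idempotent: files that have already been loaded are skipped on subsequent runs. A sketch of the statement, using the table and path from the question and assuming the daily files are Parquet (as the answer options suggest):

```sql
-- Re-running this loads only files not already ingested;
-- if yesterday's file was copied by an earlier run, nothing new is loaded.
COPY INTO transactions
FROM '/transactions/raw'
FILEFORMAT = PARQUET;
```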
Which command can be used to write data into a Delta table while avoiding the writing of duplicate records?
A. DROP
B. INSERT
C. MERGE
D. APPEND
C. MERGE
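MERGE matches incoming rows against existing rows on a key, so records already present can be updated or skipped instead of duplicated. A hedged sketch with hypothetical table and column names:

```sql
MERGE INTO sales AS target
USING sales_updates AS source
ON target.sale_id = source.sale_id
WHEN MATCHED THEN UPDATE SET *      -- existing record: update, don't duplicate
WHEN NOT MATCHED THEN INSERT *;     -- genuinely new record: insert
```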
A data analyst has created a Delta table sales that is used by the entire data analysis team. They want help from the data engineering team to implement a series of tests to ensure the data is clean. However, the data engineering team uses Python for its tests rather than SQL.
Which command could the data engineering team use to access sales in PySpark?
A. SELECT * FROM sales
B. spark.table("sales")
C. spark.sql("sales")
D. spark.delta.table("sales")
B. spark.table("sales")
A data engineer has created a new database using the following command:
CREATE DATABASE IF NOT EXISTS customer360;
In which location will the customer360 database be located?
A. dbfs:/user/hive/database/customer360
B. dbfs:/user/hive/warehouse
C. dbfs:/user/hive/customer360
D. dbfs:/user/hive/database
B. dbfs:/user/hive/warehouse
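Databases created without a LOCATION clause land under the default Hive warehouse directory. The resulting path can be confirmed with:

```sql
CREATE DATABASE IF NOT EXISTS customer360;

-- Reports the database location, which under the default warehouse is a
-- customer360.db directory inside dbfs:/user/hive/warehouse
DESCRIBE DATABASE customer360;
```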
A data engineer is attempting to drop a Spark SQL table my_table and runs the following command:
DROP TABLE IF EXISTS my_table;
After running this command, the engineer notices that the data files and metadata files have been deleted from the file system.
What is the reason behind the deletion of all these files?
A. The table was managed
B. The table’s data was smaller than 10 GB
C. The table did not have a location
D. The table was external
A. The table was managed
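Managed tables are created without a LOCATION clause, and dropping them deletes both metadata and data; external tables keep their data files on DROP. A sketch with hypothetical names:

```sql
-- Managed: DROP TABLE deletes the metadata AND the underlying data files
CREATE TABLE my_table (id INT);

-- External: DROP TABLE deletes only the metadata; files at the path remain
CREATE TABLE my_external_table (id INT)
LOCATION '/path/to/external/data';
```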
A data engineer needs to create a table in Databricks using data from a CSV file at location /path/to/csv.
They run the following command:
Which of the following lines of code fills in the above blank to successfully complete the task?
A. FROM "path/to/csv"
B. USING CSV
C. FROM CSV
D. USING DELTA
B. USING CSV
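One plausible shape of the full statement, assuming a table name and CSV options that are not shown in the question:

```sql
CREATE TABLE my_csv_table
USING CSV
OPTIONS (header = 'true', inferSchema = 'true')
LOCATION '/path/to/csv';
```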
Which SQL keyword can be used to convert a table from a long format to a wide format?
A. TRANSFORM
B. PIVOT
C. SUM
D. CONVERT
B. PIVOT
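A minimal PIVOT sketch with hypothetical columns: it turns one row per (customer, category) pair into one row per customer with a column per category.

```sql
SELECT *
FROM (SELECT customer_id, category, spend FROM sales)
PIVOT (
  SUM(spend) FOR category IN ('food', 'electronics')
);
```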
A data engineer has a Python variable table_name that they would like to use in a SQL query. They want to construct a Python code block that will run the query using table_name.
They have the following incomplete code block:
____(f"SELECT customer_id, spend FROM {table_name}")
What can be used to fill in the blank to successfully complete the task?
A. spark.delta.sql
B. spark.sql
C. spark.table
D. dbutils.sql
B. spark.sql
A data engineer has configured a Structured Streaming job to read from a table, manipulate the data, and then perform a streaming write into a new table.
The code block used by the data engineer is below:
Which line of code should the data engineer use to fill in the blank if the data engineer only wants the query to execute a micro-batch to process data every 5 seconds?
A. trigger("5 seconds")
B. trigger(continuous="5 seconds")
C. trigger(once="5 seconds")
D. trigger(processingTime="5 seconds")
D. trigger(processingTime="5 seconds")
A data engineer is maintaining a data pipeline. Upon data ingestion, the data engineer notices that the source data is starting to have a lower level of quality. The data engineer would like to automate the process of monitoring the quality level.
Which of the following tools can the data engineer use to solve this problem?
A. Auto Loader
B. Unity Catalog
C. Delta Lake
D. Delta Live Tables
D. Delta Live Tables
A data engineer has three tables in a Delta Live Tables (DLT) pipeline. They have configured the pipeline to drop invalid records at each table. They notice that some data is being dropped due to quality concerns at some point in the DLT pipeline. They would like to determine at which table in their pipeline the data is being dropped.
Which approach can the data engineer take to identify the table that is dropping the records?
A. They can set up separate expectations for each table when developing their DLT pipeline.
B. They can navigate to the DLT pipeline page, click on the “Error” button, and review the present errors.
C. They can set up DLT to notify them via email when records are dropped.
D. They can navigate to the DLT pipeline page, click on each table, and view the data quality statistics.
D. They can navigate to the DLT pipeline page, click on each table, and view the data quality statistics.
What does Spark use to record the offset range of the data processed in each trigger, so that Structured Streaming can reliably track progress and recover from any failure by restarting and/or reprocessing?
A. Checkpointing and Write-ahead Logs
B. Replayable Sources and Idempotent Sinks
C. Write-ahead Logs and Idempotent Sinks
D. Checkpointing and Idempotent Sinks
A. Checkpointing and Write-ahead Logs
What describes the relationship between Gold tables and Silver tables?
A. Gold tables are more likely to contain aggregations than Silver tables.
B. Gold tables are more likely to contain valuable data than Silver tables.
C. Gold tables are more likely to contain a less refined view of data than Silver tables.
D. Gold tables are more likely to contain truthful data than Silver tables.
A. Gold tables are more likely to contain aggregations than Silver tables.
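For example, a Gold table is often a business-level aggregate computed from a Silver table (hypothetical names):

```sql
-- Gold: aggregated, consumption-ready view built from cleaned Silver data
CREATE TABLE gold_daily_revenue AS
SELECT order_date, SUM(amount) AS total_revenue
FROM silver_orders
GROUP BY order_date;
```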
What describes when to use the CREATE STREAMING LIVE TABLE (formerly CREATE INCREMENTAL LIVE TABLE) syntax over the CREATE LIVE TABLE syntax when creating Delta Live Tables (DLT) tables using SQL?
A. CREATE STREAMING LIVE TABLE should be used when the subsequent step in the DLT pipeline is static.
B. CREATE STREAMING LIVE TABLE should be used when data needs to be processed incrementally.
C. CREATE STREAMING LIVE TABLE should be used when data needs to be processed through complicated aggregations.
D. CREATE STREAMING LIVE TABLE should be used when the previous step in the DLT pipeline is static.
B. CREATE STREAMING LIVE TABLE should be used when data needs to be processed incrementally.
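In DLT SQL, the streaming variant reads incrementally from its source via STREAM(), while the plain LIVE TABLE form recomputes from its sources on each update. A sketch with hypothetical dataset names:

```sql
-- Incremental: processes only newly arrived records on each update
CREATE STREAMING LIVE TABLE orders_bronze
AS SELECT * FROM STREAM(LIVE.orders_raw);

-- Full recomputation over the current contents of its source
CREATE LIVE TABLE orders_summary
AS SELECT customer_id, COUNT(*) AS order_count
FROM LIVE.orders_bronze
GROUP BY customer_id;
```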
A Delta Live Table pipeline includes two datasets defined using STREAMING LIVE TABLE. Three datasets are defined against Delta Lake table sources using LIVE TABLE.
The pipeline is configured to run in Production mode using the Continuous Pipeline Mode.
What is the expected outcome after clicking Start to update the pipeline assuming previously unprocessed data exists and all definitions are valid?
A. All datasets will be updated at set intervals until the pipeline is shut down. The compute resources will persist to allow for additional testing.
B. All datasets will be updated once and the pipeline will shut down. The compute resources will persist to allow for additional testing.
C. All datasets will be updated at set intervals until the pipeline is shut down. The compute resources will be deployed for the update and terminated when the pipeline is stopped.
D. All datasets will be updated once and the pipeline will shut down. The compute resources will be terminated.
C. All datasets will be updated at set intervals until the pipeline is shut down. The compute resources will be deployed for the update and terminated when the pipeline is stopped.
Which type of workloads are compatible with Auto Loader?
A. Streaming workloads
B. Machine learning workloads
C. Serverless workloads
D. Batch workloads
A. Streaming workloads
A data engineer has developed a data pipeline to ingest data from a JSON source using Auto Loader, but the engineer has not provided any type inference or schema hints in their pipeline. Upon reviewing the data, the data engineer has noticed that all of the columns in the target table are of the string type despite some of the fields only including float or boolean values.
Why has Auto Loader inferred all of the columns to be of the string type?
A. Auto Loader cannot infer the schema of ingested data
B. JSON data is a text-based format
C. Auto Loader only works with string data
D. All of the fields had at least one null value
B. JSON data is a text-based format
Which statement regarding the relationship between Silver tables and Bronze tables is always true?
A. Silver tables contain a less refined, less clean view of data than Bronze tables.
B. Silver tables contain aggregates while Bronze data is unaggregated.
C. Silver tables contain more data than Bronze tables.
D. Silver tables contain less data than Bronze tables.
D. Silver tables contain less data than Bronze tables.