DEA-C01 Flashcards
(141 cards)
A data engineer is configuring an AWS Glue job to read data from an Amazon S3 bucket. The data engineer has set up the necessary AWS Glue connection details and an associated IAM role. However, when the data engineer attempts to run the AWS Glue job, the data engineer receives an error message that indicates that there are problems with the Amazon S3 VPC gateway endpoint.
The data engineer must resolve the error and connect the AWS Glue job to the S3 bucket.
Which solution will meet this requirement?
A.
Update the AWS Glue security group to allow inbound traffic from the Amazon S3 VPC gateway endpoint.
B.
Configure an S3 bucket policy to explicitly grant the AWS Glue job permissions to access the S3 bucket.
C.
Review the AWS Glue job code to ensure that the AWS Glue connection details include a fully qualified domain name.
D.
Verify that the VPC’s route table includes inbound and outbound routes for the Amazon S3 VPC gateway endpoint.
Verify that the VPC’s route table includes inbound and outbound routes for the Amazon S3 VPC gateway endpoint.
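For reference, a rough boto3 sketch of that check: it finds the S3 gateway endpoint in the Glue connection's VPC and confirms each associated route table has a route that targets the endpoint (the VPC ID, Region, and service name are placeholders).

```python
import boto3

ec2 = boto3.client('ec2')

# Find the S3 gateway endpoint in the VPC that the Glue connection uses.
endpoints = ec2.describe_vpc_endpoints(
    Filters=[
        {'Name': 'vpc-id', 'Values': ['vpc-0abc1234def567890']},             # placeholder VPC ID
        {'Name': 'service-name', 'Values': ['com.amazonaws.us-east-1.s3']},  # placeholder Region
        {'Name': 'vpc-endpoint-type', 'Values': ['Gateway']},
    ]
)['VpcEndpoints']

for ep in endpoints:
    # Each associated route table should contain a route whose target is the
    # gateway endpoint (a GatewayId that starts with "vpce-").
    for rt in ec2.describe_route_tables(RouteTableIds=ep['RouteTableIds'])['RouteTables']:
        has_route = any(r.get('GatewayId', '').startswith('vpce-') for r in rt['Routes'])
        print(rt['RouteTableId'], 'routes to the S3 endpoint:', has_route)
```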
A data engineer needs to create an AWS Lambda function that converts the format of data from .csv to Apache Parquet. The Lambda function must run only if a user uploads a .csv file to an Amazon S3 bucket.
Which solution will meet these requirements with the LEAST operational overhead?
A.
Create an S3 event notification that has an event type of s3:ObjectCreated:*. Use a filter rule to generate notifications only when the suffix includes .csv. Set the Amazon Resource Name (ARN) of the Lambda function as the destination for the event notification.
B.
Create an S3 event notification that has an event type of s3:ObjectTagging:* for objects that have a tag set to .csv. Set the Amazon Resource Name (ARN) of the Lambda function as the destination for the event notification.
C.
Create an S3 event notification that has an event type of s3:*. Use a filter rule to generate notifications only when the suffix includes .csv. Set the Amazon Resource Name (ARN) of the Lambda function as the destination for the event notification.
D.
Create an S3 event notification that has an event type of s3:ObjectCreated:*. Use a filter rule to generate notifications only when the suffix includes .csv. Set an Amazon Simple Notification Service (Amazon SNS) topic as the destination for the event notification. Subscribe the Lambda function to the SNS topic.
Create an S3 event notification that has an event type of s3:ObjectCreated:*. Use a filter rule to generate notifications only when the suffix includes .csv. Set the Amazon Resource Name (ARN) of the Lambda function as the destination for the event notification.
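A minimal boto3 sketch of that configuration (the bucket name and function ARN are placeholders); note the function also needs a resource-based permission that allows s3.amazonaws.com to invoke it.

```python
import boto3

s3 = boto3.client('s3')

# Notify the Lambda function only when an object whose key ends in ".csv" is created.
s3.put_bucket_notification_configuration(
    Bucket='my-upload-bucket',  # placeholder bucket name
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [{
            'LambdaFunctionArn': 'arn:aws:lambda:us-east-1:123456789012:function:csv-to-parquet',
            'Events': ['s3:ObjectCreated:*'],
            'Filter': {'Key': {'FilterRules': [{'Name': 'suffix', 'Value': '.csv'}]}},
        }]
    },
)
```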
An insurance company stores transaction data that the company compressed with gzip.
The company needs to query the transaction data for occasional audits.
Which solution will meet this requirement in the MOST cost-effective way?
A.
Store the data in Amazon S3 Glacier Flexible Retrieval. Use Amazon S3 Glacier Select to query the data.
B.
Store the data in Amazon S3. Use Amazon S3 Select to query the data.
C.
Store the data in Amazon S3. Use Amazon Athena to query the data.
D.
Store the data in Amazon S3 Glacier Instant Retrieval. Use Amazon Athena to query the data.
Store the data in Amazon S3. Use Amazon S3 Select to query the data.
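As an illustration, S3 Select can query a gzip-compressed CSV object in place and return only the matching rows (the bucket, key, and column are placeholders).

```python
import boto3

s3 = boto3.client('s3')

resp = s3.select_object_content(
    Bucket='transaction-archive',        # placeholder bucket
    Key='2023/12/transactions.csv.gz',   # placeholder key
    ExpressionType='SQL',
    Expression="SELECT * FROM s3object s WHERE CAST(s.amount AS FLOAT) > 10000",
    InputSerialization={'CSV': {'FileHeaderInfo': 'USE'}, 'CompressionType': 'GZIP'},
    OutputSerialization={'CSV': {}},
)

# The response is an event stream; Records events carry the query output.
for event in resp['Payload']:
    if 'Records' in event:
        print(event['Records']['Payload'].decode())
```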
A data engineer finished testing an Amazon Redshift stored procedure that processes and inserts data into a table that is not mission critical. The engineer wants to automatically run the stored procedure on a daily basis.
Which solution will meet this requirement in the MOST cost-effective way?
A.
Create an AWS Lambda function to schedule a cron job to run the stored procedure.
B.
Schedule and run the stored procedure by using the Amazon Redshift Data API on an Amazon EC2 Spot Instance.
C.
Use query editor v2 to run the stored procedure on a schedule.
D.
Schedule an AWS Glue Python shell job to run the stored procedure.
Use query editor v2 to run the stored procedure on a schedule.
A marketing company collects clickstream data. The company sends the clickstream data to Amazon Kinesis Data Firehose and stores the clickstream data in Amazon S3. The company wants to build a series of dashboards that hundreds of users from multiple departments will use.
The company will use Amazon QuickSight to develop the dashboards. The company wants a solution that can scale and provide daily updates about clickstream activity.
Which combination of steps will meet these requirements MOST cost-effectively? (Choose two.)
A.
Use Amazon Redshift to store and query the clickstream data.
B.
Use Amazon Athena to query the clickstream data.
C.
Use Amazon S3 analytics to query the clickstream data.
D.
Access the query data through a QuickSight direct SQL query.
E.
Access the query data through QuickSight SPICE (Super-fast, Parallel, In-memory Calculation Engine). Configure a daily refresh for the dataset.
Use Amazon Athena to query the clickstream data.
Access the query data through QuickSight SPICE (Super-fast, Parallel, In-memory Calculation Engine). Configure a daily refresh for the dataset.
A data engineer is building a data orchestration workflow. The data engineer plans to use a hybrid model that includes some on-premises resources and some resources that are in the cloud. The data engineer wants to prioritize portability and open source resources.
Which service should the data engineer use in both the on-premises environment and the cloud-based environment?
A.
AWS Data Exchange
B.
Amazon Simple Workflow Service (Amazon SWF)
C.
Amazon Managed Workflows for Apache Airflow (Amazon MWAA)
D.
AWS Glue
Amazon Managed Workflows for Apache Airflow (Amazon MWAA)
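The portability argument is that the same DAG file runs unchanged on a self-managed Apache Airflow deployment on premises and on Amazon MWAA; a minimal sketch (the script path and task logic are placeholders).

```python
# dags/nightly_ingest.py
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def transform_and_load():
    print('transforming extracted data')  # placeholder for real logic


with DAG(
    dag_id='nightly_ingest',
    start_date=datetime(2024, 1, 1),
    schedule_interval='@daily',
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id='extract',
        bash_command='bash /opt/pipelines/extract.sh',  # placeholder script
    )
    load = PythonOperator(task_id='transform_and_load', python_callable=transform_and_load)
    extract >> load
```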
A gaming company uses a NoSQL database to store customer information. The company is planning to migrate to AWS.
The company needs a fully managed AWS solution that will handle a high online transaction processing (OLTP) workload, provide single-digit millisecond performance, and provide high availability around the world.
Which solution will meet these requirements with the LEAST operational overhead?
A.
Amazon Keyspaces (for Apache Cassandra)
B.
Amazon DocumentDB (with MongoDB compatibility)
C.
Amazon DynamoDB
D.
Amazon Timestream
Amazon DynamoDB
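For context, adding a replica Region to an existing DynamoDB table turns it into a global table (version 2019.11.21), which is what provides the worldwide availability; a hedged sketch (the table name is a placeholder, and the table is assumed to already meet the global-table prerequisites such as on-demand or auto-scaled capacity).

```python
import boto3

ddb = boto3.client('dynamodb', region_name='us-east-1')

# Create a replica in another Region; DynamoDB keeps the replicas in sync.
ddb.update_table(
    TableName='player_profiles',  # placeholder table name
    ReplicaUpdates=[{'Create': {'RegionName': 'eu-west-1'}}],
)
```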
A data engineer creates an AWS Lambda function that an Amazon EventBridge event will invoke. When the data engineer tries to invoke the Lambda function by using an EventBridge event, an AccessDeniedException message appears.
How should the data engineer resolve the exception?
A.
Ensure that the trust policy of the Lambda function execution role allows EventBridge to assume the execution role.
B.
Ensure that both the IAM role that EventBridge uses and the Lambda function’s resource-based policy have the necessary permissions.
C.
Ensure that the subnet where the Lambda function is deployed is configured to be a private subnet.
D.
Ensure that EventBridge schemas are valid and that the event mapping configuration is correct.
Ensure that both the IAM role that EventBridge uses and the Lambda function’s resource-based policy have the necessary permissions.
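The resource-based policy side of that fix is a Lambda permission statement that lets the EventBridge rule invoke the function; a minimal sketch (the function name and rule ARN are placeholders).

```python
import boto3

lambda_client = boto3.client('lambda')

# Allow the specific EventBridge rule to invoke the function.
lambda_client.add_permission(
    FunctionName='process-events',  # placeholder function name
    StatementId='allow-eventbridge-rule',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn='arn:aws:events:us-east-1:123456789012:rule/daily-trigger',  # placeholder rule ARN
)
```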
A company uses a data lake that is based on an Amazon S3 bucket. To comply with regulations, the company must apply two layers of server-side encryption to files that are uploaded to the S3 bucket. The company wants to use an AWS Lambda function to apply the necessary encryption.
Which solution will meet these requirements?
A.
Use both server-side encryption with AWS KMS keys (SSE-KMS) and the Amazon S3 Encryption Client.
B.
Use dual-layer server-side encryption with AWS KMS keys (DSSE-KMS).
C.
Use server-side encryption with customer-provided keys (SSE-C) before files are uploaded.
D.
Use server-side encryption with AWS KMS keys (SSE-KMS).
Use dual-layer server-side encryption with AWS KMS keys (DSSE-KMS).
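In the Lambda function, applying DSSE-KMS comes down to a single put_object parameter; a minimal sketch (the bucket, key, and KMS key ARN are placeholders).

```python
import boto3

s3 = boto3.client('s3')

# Two layers of server-side encryption are applied with one setting.
s3.put_object(
    Bucket='regulated-data-lake',          # placeholder bucket
    Key='uploads/claims-2024-06.parquet',  # placeholder key
    Body=b'...',                           # placeholder object body
    ServerSideEncryption='aws:kms:dsse',
    SSEKMSKeyId='arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555',
)
```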
A data engineer notices that Amazon Athena queries are held in a queue before the queries run.
How can the data engineer prevent the queries from queueing?
A.
Increase the query result limit.
B.
Configure provisioned capacity for an existing workgroup.
C.
Use federated queries.
D.
Add the users who run the Athena queries to an existing workgroup.
Configure provisioned capacity for an existing workgroup.
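A rough boto3 sketch of that setup: create a capacity reservation and assign the workgroup to it so its queries run on dedicated DPUs instead of waiting in the shared queue (the names are placeholders).

```python
import boto3

athena = boto3.client('athena')

# Reserve dedicated query capacity (24 DPUs is the minimum reservation size).
athena.create_capacity_reservation(Name='analytics-capacity', TargetDpus=24)

# Route the workgroup's queries onto the reservation.
athena.put_capacity_assignment_configuration(
    CapacityReservationName='analytics-capacity',
    CapacityAssignments=[{'WorkGroupNames': ['primary']}],
)
```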
A data engineer needs to debug an AWS Glue job that reads from Amazon S3 and writes to Amazon Redshift. The data engineer enabled the bookmark feature for the AWS Glue job.
The data engineer has set the maximum concurrency for the AWS Glue job to 1.
The AWS Glue job is successfully writing the output to Amazon Redshift. However, the Amazon S3 files that were loaded during previous runs of the AWS Glue job are being reprocessed by subsequent runs.
What is the likely reason the AWS Glue job is reprocessing the files?
A.
The AWS Glue job does not have the s3:GetObjectAcl permission that is required for bookmarks to work correctly.
B.
The maximum concurrency for the AWS Glue job is set to 1.
C.
The data engineer incorrectly specified an older version of AWS Glue for the Glue job.
D.
The AWS Glue job does not have a required commit statement.
The AWS Glue job does not have the s3:GetObjectAcl permission that is required for bookmarks to work correctly.
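Whatever the root cause in a given job, bookmarks only persist state when the script initializes the job, tags each source with a transformation_ctx, and calls job.commit(); a minimal script sketch (the database and table names are placeholders).

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ['JOB_NAME'])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args['JOB_NAME'], args)  # bookmarks require the job to be initialized

source = glue_context.create_dynamic_frame.from_catalog(
    database='sales_db',              # placeholder database
    table_name='raw_transactions',    # placeholder table
    transformation_ctx='source_ctx',  # bookmark state is tracked per transformation_ctx
)

# ... transform and write to Amazon Redshift ...

job.commit()  # without this call, bookmark state is never saved and files are reprocessed
```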
An ecommerce company wants to use AWS to migrate data pipelines from an on-premises environment into the AWS Cloud. The company currently uses a third-party tool in the on-premises environment to orchestrate data ingestion processes.
The company wants a migration solution that does not require the company to manage servers. The solution must be able to orchestrate Python and Bash scripts. The solution must not require the company to refactor any code.
Which solution will meet these requirements with the LEAST operational overhead?
A.
AWS Lambda
B.
Amazon Managed Workflows for Apache Airflow (Amazon MWAA)
C.
AWS Step Functions
D.
AWS Glue
Amazon Managed Workflows for Apache Airflow (Amazon MWAA)
A data engineer needs Amazon Athena queries to finish faster. The data engineer notices that all the files the Athena queries use are currently stored in uncompressed .csv format. The data engineer also notices that users perform most queries by selecting a specific column.
Which solution will MOST speed up the Athena query performance?
A.
Change the data format from .csv to JSON format. Apply Snappy compression.
B.
Compress the .csv files by using Snappy compression.
C.
Change the data format from .csv to Apache Parquet. Apply Snappy compression.
D.
Compress the .csv files by using gzip compression.
Change the data format from .csv to Apache Parquet. Apply Snappy compression.
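A small illustration of the conversion itself with pandas (the paths are placeholders; reading and writing s3:// paths needs the s3fs package and Parquet output needs pyarrow, and at scale the same conversion is typically an AWS Glue job or an Athena CTAS query).

```python
import pandas as pd

# Read the uncompressed CSV and rewrite it as Snappy-compressed, columnar Parquet.
df = pd.read_csv('s3://my-bucket/raw/events.csv')
df.to_parquet('s3://my-bucket/curated/events.parquet', compression='snappy')
```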
A retail company stores data from a product lifecycle management (PLM) application in an on-premises MySQL database. The PLM application frequently updates the database when transactions occur.
The company wants to gather insights from the PLM application in near real time. The company wants to integrate the insights with other business datasets and to analyze the combined dataset by using an Amazon Redshift data warehouse.
The company has already established an AWS Direct Connect connection between the on-premises infrastructure and AWS.
Which solution will meet these requirements with the LEAST development effort?
A.
Run a scheduled AWS Glue extract, transform, and load (ETL) job to get the MySQL database updates by using a Java Database Connectivity (JDBC) connection. Set Amazon Redshift as the destination for the ETL job.
B.
Run a full load plus CDC task in AWS Database Migration Service (AWS DMS) to continuously replicate the MySQL database changes. Set Amazon Redshift as the destination for the task.
C.
Use the Amazon AppFlow SDK to build a custom connector for the MySQL database to continuously replicate the database changes. Set Amazon Redshift as the destination for the connector.
D.
Run scheduled AWS DataSync tasks to synchronize data from the MySQL database. Set Amazon Redshift as the destination for the tasks.
Run a full load plus CDC task in AWS Database Migration Service (AWS DMS) to continuously replicate the MySQL database changes. Set Amazon Redshift as the destination for the task.
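A hedged boto3 sketch of that task definition (the endpoint and replication instance ARNs, schema name, and identifiers are placeholders).

```python
import json

import boto3

dms = boto3.client('dms')

dms.create_replication_task(
    ReplicationTaskIdentifier='plm-mysql-to-redshift',
    SourceEndpointArn='arn:aws:dms:eu-west-1:123456789012:endpoint:SOURCE',    # placeholder
    TargetEndpointArn='arn:aws:dms:eu-west-1:123456789012:endpoint:TARGET',    # placeholder
    ReplicationInstanceArn='arn:aws:dms:eu-west-1:123456789012:rep:INSTANCE',  # placeholder
    MigrationType='full-load-and-cdc',  # initial full load followed by ongoing change data capture
    TableMappings=json.dumps({
        'rules': [{
            'rule-type': 'selection',
            'rule-id': '1',
            'rule-name': 'include-plm-schema',
            'object-locator': {'schema-name': 'plm', 'table-name': '%'},  # placeholder schema
            'rule-action': 'include',
        }]
    }),
)
```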
A marketing company uses Amazon S3 to store clickstream data. The company queries the data at the end of each day by using a SQL JOIN clause on S3 objects that are stored in separate buckets.
The company creates key performance indicators (KPIs) based on the objects. The company needs a serverless solution that will give users the ability to query data by partitioning the data. The solution must maintain the atomicity, consistency, isolation, and durability (ACID) properties of the data.
Which solution will meet these requirements MOST cost-effectively?
A.
Amazon S3 Select
B.
Amazon Redshift Spectrum
C.
Amazon Athena
D.
Amazon EMR
Amazon Athena
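One way Athena covers both the partitioning and the ACID requirement is an Apache Iceberg table; a hedged DDL sketch submitted through boto3 (the bucket paths, database, and columns are placeholders).

```python
import boto3

athena = boto3.client('athena')

ddl = """
CREATE TABLE clickstream.daily_kpis (
  event_time timestamp,
  user_id    string,
  page       string
)
PARTITIONED BY (day(event_time))
LOCATION 's3://my-clickstream-bucket/iceberg/daily_kpis/'
TBLPROPERTIES ('table_type' = 'ICEBERG')
"""

athena.start_query_execution(
    QueryString=ddl,
    ResultConfiguration={'OutputLocation': 's3://my-athena-results/'},  # placeholder results bucket
)
```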
A company wants to migrate data from an Amazon RDS for PostgreSQL DB instance in the eu-east-1 Region of an AWS account named Account_A. The company will migrate the data to an Amazon Redshift cluster in the eu-west-1 Region of an AWS account named Account_B.
Which solution will give AWS Database Migration Service (AWS DMS) the ability to replicate data between two data stores?
A.
Set up an AWS DMS replication instance in Account_B in eu-west-1.
B.
Set up an AWS DMS replication instance in Account_B in eu-east-1.
C.
Set up an AWS DMS replication instance in a new AWS account in eu-west-1.
D.
Set up an AWS DMS replication instance in Account_A in eu-east-1.
Set up an AWS DMS replication instance in Account_B in eu-west-1.
A company uses Amazon S3 as a data lake. The company sets up a data warehouse by using a multi-node Amazon Redshift cluster. The company organizes the data files in the data lake based on the data source of each data file.
The company loads all the data files into one table in the Redshift cluster by using a separate COPY command for each data file location. This approach takes a long time to load all the data files into the table. The company must increase the speed of the data ingestion. The company does not want to increase the cost of the process.
Which solution will meet these requirements?
A.
Use a provisioned Amazon EMR cluster to copy all the data files into one folder. Use a COPY command to load the data into Amazon Redshift.
B.
Load all the data files in parallel into Amazon Aurora. Run an AWS Glue job to load the data into Amazon Redshift.
C.
Use an AWS Glue job to copy all the data files into one folder. Use a COPY command to load the data into Amazon Redshift.
D.
Create a manifest file that contains the data file locations. Use a COPY command to load the data into Amazon Redshift.
Create a manifest file that contains the data file locations. Use a COPY command to load the data into Amazon Redshift.
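A rough sketch of the manifest-based load (the bucket names, table, role ARN, and cluster identifier are placeholders): one COPY with MANIFEST loads every listed file in parallel across the cluster's slices.

```python
import json

import boto3

# Write a manifest that lists every data file location to load.
manifest = {
    'entries': [
        {'url': 's3://my-data-lake/source_a/part-0001.csv', 'mandatory': True},
        {'url': 's3://my-data-lake/source_b/part-0001.csv', 'mandatory': True},
    ]
}
boto3.client('s3').put_object(
    Bucket='my-data-lake',
    Key='manifests/daily-load.manifest',
    Body=json.dumps(manifest),
)

# Issue a single COPY that reads the manifest.
boto3.client('redshift-data').execute_statement(
    ClusterIdentifier='analytics-cluster',  # placeholder cluster
    Database='dev',
    DbUser='awsuser',
    Sql="""
        COPY analytics.events
        FROM 's3://my-data-lake/manifests/daily-load.manifest'
        IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
        MANIFEST
        FORMAT AS CSV;
    """,
)
```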
A company plans to use Amazon Kinesis Data Firehose to store data in Amazon S3. The source data consists of 2 MB .csv files. The company must convert the .csv files to JSON format. The company must store the files in Apache Parquet format.
Which solution will meet these requirements with the LEAST development effort?
A.
Use Kinesis Data Firehose to convert the .csv files to JSON. Use an AWS Lambda function to store the files in Parquet format.
B.
Use Kinesis Data Firehose to convert the .csv files to JSON and to store the files in Parquet format.
C.
Use Kinesis Data Firehose to invoke an AWS Lambda function that transforms the .csv files to JSON and stores the files in Parquet format.
D.
Use Kinesis Data Firehose to invoke an AWS Lambda function that transforms the .csv files to JSON. Use Kinesis Data Firehose to store the files in Parquet format.
Use Kinesis Data Firehose to convert the .csv files to JSON and to store the files in Parquet format.
A company is using an AWS Transfer Family server to migrate data from an on-premises environment to AWS. Company policy mandates the use of TLS 1.2 or above to encrypt the data in transit.
Which solution will meet these requirements?
A.
Generate new SSH keys for the Transfer Family server. Make the old keys and the new keys available for use.
B.
Update the security group rules for the on-premises network to allow only connections that use TLS 1.2 or above.
C.
Update the security policy of the Transfer Family server to specify a minimum protocol version of TLS 1.2.
D.
Install an SSL certificate on the Transfer Family server to encrypt data transfers by using TLS 1.2.
Update the security policy of the Transfer Family server to specify a minimum protocol version of TLS 1.2.
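A minimal sketch of that update with boto3 (the server ID is a placeholder; TransferSecurityPolicy-2020-06 is one of the managed policies that requires TLS 1.2).

```python
import boto3

transfer = boto3.client('transfer')

# Attach a security policy that only negotiates TLS 1.2 or above with clients.
transfer.update_server(
    ServerId='s-1234567890abcdef0',  # placeholder server ID
    SecurityPolicyName='TransferSecurityPolicy-2020-06',
)
```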
A company wants to migrate an application and an on-premises Apache Kafka server to AWS. The application processes incremental updates that an on-premises Oracle database sends to the Kafka server. The company wants to use the replatform migration strategy instead of the refactor strategy.
Which solution will meet these requirements with the LEAST management overhead?
A.
Amazon Kinesis Data Streams
B.
Amazon Managed Streaming for Apache Kafka (Amazon MSK) provisioned cluster
C.
Amazon Kinesis Data Firehose
D.
Amazon Managed Streaming for Apache Kafka (Amazon MSK) Serverless
Amazon Managed Streaming for Apache Kafka (Amazon MSK) Serverless
A data engineer is building an automated extract, transform, and load (ETL) ingestion pipeline by using AWS Glue. The pipeline ingests compressed files that are in an Amazon S3 bucket. The ingestion pipeline must support incremental data processing.
Which AWS Glue feature should the data engineer use to meet this requirement?
A.
Workflows
B.
Triggers
C.
Job bookmarks
D.
Classifiers
Job bookmarks
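Bookmarks are switched on per job run with a job argument; a minimal sketch (the job name is a placeholder).

```python
import boto3

glue = boto3.client('glue')

# Run the job with bookmarks enabled so previously processed S3 objects are skipped.
glue.start_job_run(
    JobName='ingest-compressed-files',  # placeholder job name
    Arguments={'--job-bookmark-option': 'job-bookmark-enable'},
)
```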
A banking company uses an application to collect large volumes of transactional data. The company uses Amazon Kinesis Data Streams for real-time analytics. The company’s application uses the PutRecord action to send data to Kinesis Data Streams.
A data engineer has observed network outages during certain times of day. The data engineer wants to configure exactly-once delivery for the entire processing pipeline.
Which solution will meet this requirement?
A.
Design the application so it can remove duplicates during processing by embedding a unique ID in each record at the source.
B.
Update the checkpoint configuration of the Amazon Managed Service for Apache Flink (previously known as Amazon Kinesis Data Analytics) data collection application to avoid duplicate processing of events.
C.
Design the data source so events are not ingested into Kinesis Data Streams multiple times.
D.
Stop using Kinesis Data Streams. Use Amazon EMR instead. Use Apache Flink and Apache Spark Streaming in Amazon EMR.
Design the application so it can remove duplicates during processing by embedding a unique ID in each record at the source.
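On the producer side, the deduplication key is simply embedded before PutRecord; a minimal sketch (the stream and field names are placeholders).

```python
import json
import uuid

import boto3

kinesis = boto3.client('kinesis')


def put_transaction(record: dict) -> None:
    # A unique ID added at the source lets downstream consumers drop duplicates
    # that network retries of PutRecord can introduce.
    record['event_id'] = str(uuid.uuid4())
    kinesis.put_record(
        StreamName='transactions',          # placeholder stream name
        Data=json.dumps(record).encode(),
        PartitionKey=record['account_id'],  # placeholder partition key field
    )
```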
A company stores logs in an Amazon S3 bucket. When a data engineer attempts to access several log files, the data engineer discovers that some files have been unintentionally deleted.
The data engineer needs a solution that will prevent unintentional file deletion in the future.
Which solution will meet this requirement with the LEAST operational overhead?
A.
Manually back up the S3 bucket on a regular basis.
B.
Enable S3 Versioning for the S3 bucket.
C.
Configure replication for the S3 bucket.
D.
Use an Amazon S3 Glacier storage class to archive the data that is in the S3 bucket.
Enable S3 Versioning for the S3 bucket.
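Enabling it is a single API call; a minimal sketch (the bucket name is a placeholder).

```python
import boto3

s3 = boto3.client('s3')

# With versioning on, a delete only adds a delete marker; prior versions remain recoverable.
s3.put_bucket_versioning(
    Bucket='my-log-bucket',  # placeholder bucket name
    VersioningConfiguration={'Status': 'Enabled'},
)
```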
A manufacturing company collects sensor data from its factory floor to monitor and enhance operational efficiency. The company uses Amazon Kinesis Data Streams to publish the data that the sensors collect to a data stream. Then Amazon Kinesis Data Firehose writes the data to an Amazon S3 bucket.
The company needs to display a real-time view of operational efficiency on a large screen in the manufacturing facility.
Which solution will meet these requirements with the LOWEST latency?
A.
Use Amazon Managed Service for Apache Flink (previously known as Amazon Kinesis Data Analytics) to process the sensor data. Use a connector for Apache Flink to write data to an Amazon Timestream database. Use the Timestream database as a source to create a Grafana dashboard.
B.
Configure the S3 bucket to send a notification to an AWS Lambda function when any new object is created. Use the Lambda function to publish the data to Amazon Aurora. Use Aurora as a source to create an Amazon QuickSight dashboard.
C.
Use Amazon Managed Service for Apache Flink (previously known as Amazon Kinesis Data Analytics) to process the sensor data. Create a new Data Firehose delivery stream to publish data directly to an Amazon Timestream database. Use the Timestream database as a source to create an Amazon QuickSight dashboard.
D.
Use AWS Glue bookmarks to read sensor data from the S3 bucket in real time. Publish the data to an Amazon Timestream database. Use the Timestream database as a source to create a Grafana dashboard.
Use Amazon Managed Service for Apache Flink (previously known as Amazon Kinesis Data Analytics) to process the sensor data. Use a connector for Apache Flink to write data to an Amazon Timestream database. Use the Timestream database as a source to create a Grafana dashboard.