Incorrect From Test Flashcards

1
Q

A Developer is creating a new web application that will be deployed using AWS Elastic Beanstalk from the AWS Management Console. The Developer is about to create a source bundle which will be uploaded using the console.

Which of the following are valid requirements for creating the source bundle? (Select TWO.)

Must consist of one or more ZIP files.

Must not exceed 512 MB.

Must not include a parent folder or top-level directory.

Must include the cron.yaml file.

Must include a parent folder or top-level directory.
A

Must not exceed 512 MB.
Must not include a parent folder or top-level directory.

  • Consist of a single ZIP file or WAR file (you can include multiple WAR files inside your ZIP file)
  • Not exceed 512 MB
  • Not include a parent folder or top-level directory (subdirectories are fine)
2
Q
An application uses AWS Lambda which makes remote calls to several downstream services. A developer wishes to add data to custom subsegments in AWS X-Ray that can be used with filter expressions. Which type of data should be used?

Annotations
Trace ID
Daemon
Metadata
A

Annotations

Annotations are key-value pairs with string, number, or Boolean values. Annotations are indexed for use with filter expressions. Use annotations to record data that you want to use to group traces in the console, or when calling the GetTraceSummaries API.

INCORRECT: “Metadata” is incorrect. Metadata are key-value pairs that can have values of any type, including objects and lists, but are not indexed for use with filter expressions. Use metadata to record additional data that you want stored in the trace but don’t need to use with search.

3
Q

A serverless application uses an AWS Lambda function to process Amazon S3 events. The Lambda function executes 20 times per second and takes 20 seconds to complete each execution.

How many concurrent executions will the Lambda function require?

5
40
20
400

A

400

To calculate the concurrency requirement for the Lambda function, multiply the number of executions per second (20) by the average duration in seconds (20): 20 x 20 = 400 concurrent executions.
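As a quick sanity check, the arithmetic in plain Python:

requests_per_second = 20
average_duration_seconds = 20

# Required concurrency = request rate x average duration.
required_concurrency = requests_per_second * average_duration_seconds
print(required_concurrency)  # 400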

4
Q

A Development team currently uses a GitHub repository and would like to migrate their application code to AWS CodeCommit.
What needs to be created before they can migrate a cloned repository to CodeCommit over HTTPS?


A set of Git credentials generated with IAM

An Amazon EC2 IAM role with CodeCommit permissions

A public and private SSH key file

A GitHub secure authentication token

A

A set of Git credentials generated with IAM. Git credentials are an IAM-generated user name and password pair that you can use to communicate with CodeCommit repositories over HTTPS.

5
Q

A developer is planning to use a Lambda function to process incoming requests from an Application Load Balancer (ALB). How can this be achieved?


Create an Auto Scaling Group (ASG) and register the Lambda function in the launch configuration

Setup an API in front of the ALB using API Gateway and use an integration request to map the request to the Lambda function

Configure an event-source mapping between the ALB and the Lambda function

Create a target group and register the Lambda function using the AWS CLI

A

Create a target group and register the Lambda function using the AWS CLI

You can register your Lambda functions as targets and configure a listener rule to forward requests to the target group for your Lambda function. When the load balancer forwards the request to a target group with a Lambda function as a target, it invokes your Lambda function and passes the content of the request to the Lambda function, in JSON format.
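A boto3 sketch of the same flow (the function ARN and names below are placeholders, not from the question):

import boto3

elbv2 = boto3.client("elbv2")
lam = boto3.client("lambda")

function_arn = "arn:aws:lambda:us-east-1:123456789012:function:my-function"

# 1. Create a target group with the 'lambda' target type.
tg = elbv2.create_target_group(Name="lambda-tg", TargetType="lambda")
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# 2. Allow Elastic Load Balancing to invoke the function.
lam.add_permission(
    FunctionName=function_arn,
    StatementId="alb-invoke",
    Action="lambda:InvokeFunction",
    Principal="elasticloadbalancing.amazonaws.com",
    SourceArn=tg_arn,
)

# 3. Register the function as the target.
elbv2.register_targets(TargetGroupArn=tg_arn, Targets=[{"Id": function_arn}])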

6
Q

A Development team wants to run their container workloads on Amazon ECS. Each application container needs to share data with another container to collect logs and metrics.

What should the Development team do to meet these requirements?

Create two task definitions. Make one to include the application container and the other to include the other container. Mount a shared volume between the two tasks

Create a single pod specification. Include both containers in the specification. Mount a persistent volume to both containers

Create one task definition. Specify both containers in the definition. Mount a shared volume between those two containers

Create two pod specifications. Make one to include the application container and the other to include the other container. Link the two pods together

A

Create one task definition. Specify both containers in the definition. Mount a shared volume between those two containers

To configure a Docker volume, in the task definition volumes section, define a data volume with name and DockerVolumeConfiguration values. In the containerDefinitions section, define multiple containers with mountPoints values that reference the name of the defined volume and the containerPath value to mount the volume at on the container.

The containers should both be specified in the same task definition. Therefore, the Development team should create one task definition, specify both containers in the definition, and then mount a shared volume between those two containers.
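A trimmed-down boto3 sketch of such a task definition (image names, paths, and memory sizes are placeholders):

import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="app-with-log-collector",
    # The shared data volume referenced by both containers.
    volumes=[{"name": "shared-logs",
              "dockerVolumeConfiguration": {"scope": "task", "driver": "local"}}],
    containerDefinitions=[
        {"name": "application",
         "image": "my-app:latest",
         "memory": 512,
         "mountPoints": [{"sourceVolume": "shared-logs",
                          "containerPath": "/var/log/app"}]},
        {"name": "log-collector",
         "image": "my-collector:latest",
         "memory": 256,
         "mountPoints": [{"sourceVolume": "shared-logs",
                          "containerPath": "/input/logs"}]},
    ],
)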

7
Q

A Developer is setting up a code update to Amazon ECS using AWS CodeDeploy. The Developer needs to complete the code update quickly. Which of the following deployment types should the Developer use?

Linear
Canary
In-place
Blue/green

A


Blue/green

Amazon ECS deployments through AWS CodeDeploy use the blue/green deployment type. Traffic can be shifted to the replacement task set all at once, which completes the update quickly, whereas canary and linear configurations shift traffic gradually.

INCORRECT: “In-place” is incorrect as AWS Lambda and Amazon ECS deployments cannot use an in-place deployment type.

8
Q

An application serves customers in several different geographical regions. Information about the location users connect from is written to logs stored in Amazon CloudWatch Logs. The company needs to publish an Amazon CloudWatch custom metric that tracks connections for each location.

Which approach will meet these requirements?

Configure a CloudWatch Events rule that creates a custom metric from the CloudWatch Logs group.

Stream data to an Amazon Elasticsearch cluster in near-real time and export a custom metric.

Create a CloudWatch metric filter to extract metrics from the log files with location as a dimension.

Create a CloudWatch Logs Insights query to extract the location information from the logs and to create a custom metric with location as a dimension.

A

Create a CloudWatch metric filter to extract metrics from the log files with location as a dimension.

When you create a metric from a log filter, you can also choose to assign dimensions and a unit to the metric. In this case, the company can assign a dimension that uses the location information.

INCORRECT: “Create a CloudWatch Logs Insights query to extract the location information from the logs and to create a custom metric with location as a dimension” is incorrect. You cannot create a custom metric through CloudWatch Logs Insights.
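A boto3 sketch of such a metric filter, assuming JSON-formatted log events with a location field (the log group, pattern, and names are placeholders):

import boto3

logs = boto3.client("logs")

logs.put_metric_filter(
    logGroupName="/app/connection-logs",
    filterName="ConnectionsByLocation",
    filterPattern='{ $.eventType = "connect" }',
    metricTransformations=[{
        "metricName": "Connections",
        "metricNamespace": "WebApp",
        "metricValue": "1",
        # Publish the extracted location as a dimension on the custom metric.
        "dimensions": {"Location": "$.location"},
    }],
)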

9
Q

A developer is preparing to deploy a Docker container to Amazon ECS using CodeDeploy. The developer has defined the deployment actions in a file. What should the developer name the file?


appspec.yml

appspec.json

buildspec.yml

cron.yml

A


appspec.yml

The name of the AppSpec file for an EC2/On-Premises deployment must be appspec.yml. The name of the AppSpec file for an Amazon ECS or AWS Lambda deployment must be appspec.yaml.

INCORRECT: “buildspec.yml” is incorrect as this is the file name you should use for the file that defines the build instructions for AWS CodeBuild.

10
Q

A company has created a set of APIs using Amazon API Gateway and exposed them to partner companies. The APIs have caching enabled for all stages. The partners require a method of invalidating the cache that they can build into their applications.

What can the partners use to invalidate the API cache?

They can use the query string parameter INVALIDATE_CACHE

They can pass the HTTP header Cache-Control: max-age=0

They must wait for the TTL to expire

They can invoke an AWS API endpoint which invalidates the cache

A


They can pass the HTTP header Cache-Control: max-age=0

A client of your API can invalidate an existing cache entry and reload it from the integration endpoint for individual requests. The client must send a request that contains the Cache-Control: max-age=0 header.

To grant permission for a client, attach a policy of the following format to an IAM execution role for the user.
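The policy itself is not reproduced in this card; its documented shape grants the execute-api:InvalidateCache action. Expressed as a Python dict for illustration (the resource ARN is a placeholder):

import json

invalidate_cache_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["execute-api:InvalidateCache"],
        # ARN format: api-id/stage-name/METHOD/resource-path
        "Resource": [
            "arn:aws:execute-api:us-east-1:123456789012:api-id/stage/GET/*"
        ],
    }],
}
print(json.dumps(invalidate_cache_policy, indent=2))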

11
Q

A serverless application uses an Amazon API Gateway and AWS Lambda. The application processes data submitted in a form by users of the application and certain data must be stored and available to subsequent function calls.

What is the BEST solution for storing this data?

Store the data in the /tmp directory

Store the data in an Amazon SQS queue

Store the data in an Amazon Kinesis Data Stream

Store the data in an Amazon DynamoDB table
A


Store the data in an Amazon DynamoDB table

Amazon DynamoDB is a good solution for this scenario as it is a low-latency NoSQL database that is often used for storing session state data. Amazon S3 would also be a good fit for this scenario but is not offered as an option.

12
Q

An application component writes thousands of item-level changes to a DynamoDB table per day. The developer requires that a record is maintained of the items before they were modified. What MUST the developer do to retain this information? (Select TWO.)


Create a CloudWatch alarm that sends a notification when an item is modified

Set the StreamViewType to NEW_AND_OLD_IMAGES

Use an AWS Lambda function to extract the item records from the notification and write to an S3 bucket

Set the StreamViewType to OLD_IMAGE

Enable DynamoDB Streams for the table

A

Set the StreamViewType to OLD_IMAGE

Enable DynamoDB Streams for the table

KEYS_ONLY — Only the key attributes of the modified item.

NEW_IMAGE — The entire item, as it appears after it was modified.

OLD_IMAGE — The entire item, as it appeared before it was modified.

NEW_AND_OLD_IMAGES — Both the new and the old images of the item.
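For illustration, enabling a stream that captures the pre-modification image with boto3 (the table name is a placeholder):

import boto3

dynamodb = boto3.client("dynamodb")

# Capture each item as it appeared BEFORE it was modified.
dynamodb.update_table(
    TableName="transactions",
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "OLD_IMAGE",
    },
)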

13
Q

A Developer is building an application that will store data relating to financial transactions in multiple DynamoDB tables. The Developer needs to ensure the transactions provide atomicity, consistency, isolation, and durability (ACID) and that changes are committed following an all-or-nothing paradigm.

What write API should be used for the DynamoDB table?


Strongly consistent

Eventually consistent

Transactional

Standard

A

Transactional
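The transactional write API (TransactWriteItems) groups multiple actions and commits them on an all-or-nothing basis, across one or more tables. A minimal boto3 sketch (table names, keys, and values are hypothetical):

import boto3

dynamodb = boto3.client("dynamodb")

# Both actions succeed together or neither is applied.
dynamodb.transact_write_items(
    TransactItems=[
        {"Put": {
            "TableName": "Transactions",
            "Item": {"TxId": {"S": "tx-1001"}, "Amount": {"N": "250"}},
        }},
        {"Update": {
            "TableName": "Balances",
            "Key": {"AccountId": {"S": "acct-42"}},
            "UpdateExpression": "SET Balance = Balance - :amt",
            "ExpressionAttributeValues": {":amt": {"N": "250"}},
        }},
    ]
)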

14
Q

A company is deploying an on-premises application server that will connect to several AWS services. What is the BEST way to provide the application server with permissions to authenticate to AWS services?


Create an IAM role with the necessary permissions and assign it to the application server

Create an IAM group with the necessary permissions and add the on-premise application server to the group

Create an IAM user and generate access keys. Create a credentials file on the application server

Create an IAM user and generate a key pair. Use the key pair in API calls to AWS services

A

Create an IAM user and generate access keys. Create a credentials file on the application server

The server runs on-premises, so it cannot be assigned an IAM role (roles are assumed by AWS resources such as Amazon EC2 instances). Long-term access keys stored in a credentials file on the server are the appropriate way to authenticate to AWS services.

15
Q

A Developer requires a multi-threaded in-memory cache to place in front of an Amazon RDS database. Which caching solution should the Developer choose?

Amazon DynamoDB DAX

Amazon Redshift

Amazon ElastiCache Memcached

Amazon ElastiCache Redis

A

CORRECT: “Amazon ElastiCache Memcached” is the correct answer.

INCORRECT: “Amazon ElastiCache Redis” is incorrect as Redis is not multi-threaded.

16
Q

A Developer is deploying an AWS Lambda update using AWS CodeDeploy. In the appspec.yaml file, which of the following is a valid structure for the order of hooks that should be specified?


BeforeInstall > AfterInstall > ApplicationStart > ValidateService

BeforeBlockTraffic > AfterBlockTraffic > BeforeAllowTraffic > AfterAllowTraffic

BeforeInstall > AfterInstall > AfterAllowTestTraffic > BeforeAllowTraffic > AfterAllowTraffic

BeforeAllowTraffic > AfterAllowTraffic

A

BeforeAllowTraffic > AfterAllowTraffic

In an AppSpec file for an AWS Lambda deployment, the only lifecycle event hooks available are BeforeAllowTraffic and AfterAllowTraffic.

17
Q

A Developer is building a three-tier web application that must be able to handle a minimum of 10,000 requests per minute. The requirements state that the web tier should be completely stateless while the application maintains session state data for users.

How can the session state data be maintained externally, whilst keeping latency at the LOWEST possible value?

Implement a shared Amazon EFS file system solution across the underlying Amazon EC2 instances, then implement session handling at the application level to leverage the EFS file system for session data storage

Create an Amazon Redshift instance, then implement session handling at the application level to leverage a database inside the Redshift database instance for session data storage

Create an Amazon ElastiCache Redis cluster, then implement session handling at the application level to leverage the cluster for session data storage

Create an Amazon DynamoDB table, then implement session handling at the application level to leverage the table for session data storage

A

CORRECT: “Create an Amazon ElastiCache Redis cluster, then implement session handling at the application level to leverage the cluster for session data storage” is the correct answer.

INCORRECT: “Create an Amazon DynamoDB table, then implement session handling at the application level to leverage the table for session data storage” is incorrect as though this is a good solution for storing session state data, the latency will not be as low as with ElastiCache.

18
Q

A company has a large Amazon DynamoDB table which they scan periodically so they can analyze several attributes. The scans are consuming a lot of provisioned throughput. What technique can a Developer use to minimize the impact of the scan on the table’s provisioned throughput?

Set a smaller page size for the scan
Use parallel scans
Define a range key on the table
Prewarm the table by updating all items
A


Set a smaller page size for the scan

Because a Scan operation reads an entire page (by default, 1 MB), you can reduce the impact of the scan operation by setting a smaller page size. The Scan operation provides a Limit parameter that you can use to set the page size for your request. Each Query or Scan request that has a smaller page size uses fewer read operations and creates a “pause” between each request.
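A boto3 sketch of a paginated scan with a small page size (the table name and Limit value are placeholders):

import boto3

table = boto3.resource("dynamodb").Table("LargeTable")

scan_kwargs = {"Limit": 25}  # small page size = fewer RCUs per request
while True:
    page = table.scan(**scan_kwargs)
    for item in page["Items"]:
        print(item)
    if "LastEvaluatedKey" not in page:
        break
    # Resume where the previous page left off.
    scan_kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]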

19
Q

A company has implemented AWS CodePipeline to automate its release pipelines. The Development team is writing an AWS Lambda function that will send notifications for state changes of each of the actions in the stages.

Which steps must be taken to associate the Lambda function with the event source?


Create an event trigger and specify the Lambda function from the CodePipeline console

Create a trigger that invokes the Lambda function from the Lambda console by selecting CodePipeline as the event source

Create an Amazon CloudWatch Events rule that uses CodePipeline as an event source

Create an Amazon CloudWatch alarm that monitors status changes in CodePipeline and triggers the Lambda function

A


Create an Amazon CloudWatch Events rule that uses CodePipeline as an event source

Amazon CloudWatch Events help you to respond to state changes in your AWS resources. When your resources change state, they automatically send events into an event stream. You can create rules that match selected events in the stream and route them to your AWS Lambda function to take action.

AWS CodePipeline can be configured as an event source in CloudWatch Events, and the rule can then route state change events to targets such as the AWS Lambda function or an Amazon SNS topic.

20
Q

A Developer is creating a DynamoDB table for storing transaction logs. The table has 10 write capacity units (WCUs). The Developer needs to configure the read capacity units (RCUs) for the table in order to MAXIMIZE the number of requests allowed per second. Which of the following configurations should the Developer use?


Strongly consistent reads of 5 RCUs reading items that are 4 KB in size

Eventually consistent reads of 15 RCUs reading items that are 1 KB in size

Strongly consistent reads of 15 RCUs reading items that are 1KB in size

Eventually consistent reads of 5 RCUs reading items that are 4 KB in size

A

Eventually consistent reads of 15 RCUs reading items that are 1 KB in size

One RCU supports one strongly consistent read per second, or two eventually consistent reads per second, for an item up to 4 KB:

· Eventually consistent, 15 RCUs, 1 KB item = 30 items read per second.

· Strongly consistent, 15 RCUs, 1 KB item = 15 items read per second.

· Eventually consistent, 5 RCUs, 4 KB item = 10 items read per second.

· Strongly consistent, 5 RCUs, 4 KB item = 5 items read per second.

21
Q

There are multiple AWS accounts across multiple regions managed by a company. The operations team require a single operational dashboard that displays some key performance metrics from these accounts and regions. What is the SIMPLEST solution?


Create an AWS Lambda function that collects metrics from each account and region and pushes the metrics to the account where the dashboard has been created

Create an Amazon CloudWatch dashboard in one account and region and import the data from the other accounts and regions

Create an Amazon CloudTrail trail that applies to all regions and deliver the logs to a single Amazon S3 bucket. Create a dashboard using the data in the bucket

Create an Amazon CloudWatch cross-account cross-region dashboard

A

Create an Amazon CloudWatch cross-account cross-region dashboard

22
Q

A developer needs to use the attribute of an Amazon S3 object that uniquely identifies the object in a bucket. Which of the following represents an Object Key?

Development/Projects.xls
Project=Blue
s3://dctlabs/Development/Projects.xls
arn:aws:s3:::dctlabs

A

Development/Projects.xls

The object key (or key name) uniquely identifies the object within a bucket. It does not include the bucket name, an S3 URI prefix, or an ARN.

23
Q

A company maintains a REST API service using Amazon API Gateway with native API key validation. The company recently launched a new registration page, which allows users to sign up for the service. The registration page creates a new API key using CreateApiKey and sends the new key to the user. When the user attempts to call the API using this key, the user receives a 403 Forbidden error. Existing users are unaffected and can still call the API.

What code updates will grant these new users access to the API?

The createDeployment method must be called so the API can be redeployed to include the newly created API key

The importApiKeys method must be called to import all newly created API keys into the current stage of the API

The createUsagePlanKey method must be called to associate the newly created API key with the correct usage plan

The updateAuthorizer method must be called to update the API’s authorizer to include the newly created API key

A


The createUsagePlanKey method must be called to associate the newly created API key with the correct usage plan

A usage plan specifies who can access one or more deployed API stages and methods—and also how much and how fast they can access them. The plan uses API keys to identify API clients and meters access to the associated API stages for each key. It also lets you configure throttling limits and quota limits that are enforced on individual client API keys.

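A boto3 sketch of the call (the IDs are placeholders):

import boto3

apigw = boto3.client("apigateway")

# Associate the newly created API key with the usage plan.
apigw.create_usage_plan_key(
    usagePlanId="usage-plan-id",
    keyId="new-api-key-id",
    keyType="API_KEY",
)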

24
Q

A Developer is creating a serverless application that will process sensitive data. The AWS Lambda function must encrypt all data that is written to /tmp storage at rest.

How should the Developer encrypt this data?


Configure Lambda to use an AWS KMS customer managed customer master key (CMK). Use the CMK to generate a data key and encrypt all data prior to writing to /tmp storage.

Attach the Lambda function to a VPC and encrypt Amazon EBS volumes at rest using the AWS managed CMK. Mount the EBS volume to /tmp.

Enable default encryption on an Amazon S3 bucket using an AWS KMS customer managed customer master key (CMK). Mount the S3 bucket to /tmp.

Enable secure connections over HTTPS for the AWS Lambda API endpoints using Transport Layer Security (TLS).

A

CORRECT: “Configure Lambda to use an AWS KMS customer managed customer master key (CMK). Use the CMK to generate a data key and encrypt all data prior to writing to /tmp storage” is the correct answer.
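A minimal envelope-encryption sketch, using the cryptography package as an illustrative symmetric cipher (the key alias and file path are placeholders):

import base64
import boto3
from cryptography.fernet import Fernet

kms = boto3.client("kms")

# Generate a data key under the customer managed CMK.
resp = kms.generate_data_key(KeyId="alias/my-cmk", KeySpec="AES_256")

# Fernet expects a url-safe base64-encoded 32-byte key.
fernet = Fernet(base64.urlsafe_b64encode(resp["Plaintext"]))

with open("/tmp/data.enc", "wb") as f:
    f.write(fernet.encrypt(b"sensitive payload"))

# Keep resp["CiphertextBlob"] so the data key can be decrypted via KMS later;
# the plaintext key is never written to disk.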

25
Q

A company runs an e-commerce website that uses Amazon DynamoDB where pricing for items is dynamically updated in real time. At any given time, multiple updates may occur simultaneously for pricing information on a particular product. This is causing the original editor’s changes to be overwritten without a proper review process.

Which DynamoDB write option should be selected to prevent this overwriting?

Conditional writes
Concurrent writes
Batch writes
Atomic writes
A

Conditional writes

A conditional write succeeds only if the item attributes meet one or more expected conditions. Otherwise, it returns an error. Conditional writes are helpful in many situations. For example, you might want a PutItem operation to succeed only if there is not already an item with the same primary key. Or you could prevent an UpdateItem operation from modifying an item if one of its attributes has a certain value.
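For illustration, an update that fails if another editor has already changed the price (table, key, and values are hypothetical):

import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("Pricing")

try:
    table.update_item(
        Key={"ProductId": "prod-123"},
        UpdateExpression="SET Price = :new",
        # Only write if the price still has the value this editor read.
        ConditionExpression="Price = :expected",
        ExpressionAttributeValues={":new": 25, ":expected": 20},
    )
except ClientError as e:
    if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
        print("Item changed since it was read; re-read and retry.")
    else:
        raise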

26
Q

A Developer is creating a REST service using Amazon API Gateway with AWS Lambda integration. The service adds data to a spreadsheet and the data is sent as query string parameters in the method request.

How should the Developer convert the query string parameters to arguments for the Lambda function?

Enable request validation

Include the Amazon Resource Name (ARN) of the Lambda function

Create a mapping template

Change the integration type
A

CORRECT: “Create a mapping template” is the correct answer.

A mapping template, written in the Velocity Template Language (VTL), transforms the method request (including its query string parameters) into the payload of the integration request that is passed to the Lambda function.
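For illustration, a template could be attached to the integration request with boto3 (the IDs, parameter name, and function ARN are placeholders):

import boto3

apigw = boto3.client("apigateway")

# VTL template: copy the 'name' query string parameter into the Lambda event.
template = '{ "name": "$input.params(\'name\')" }'

apigw.put_integration(
    restApiId="api-id",
    resourceId="resource-id",
    httpMethod="GET",
    type="AWS",  # custom (non-proxy) integration, so templates apply
    integrationHttpMethod="POST",
    uri=("arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/"
         "arn:aws:lambda:us-east-1:123456789012:function:my-fn/invocations"),
    requestTemplates={"application/json": template},
)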

27
Q

An organization has an account for each environment: Production, Testing, Development. A Developer with an IAM user in the Development account needs to launch resources in the Production and Testing accounts. What is the MOST efficient way to provide access?

Create an IAM permissions policy in the Production and Testing accounts and reference the IAM user in the Development account

Create an IAM group in the Production and Testing accounts and add the Developer’s user from the Development account to the groups

Create a role with the required permissions in the Production and Testing accounts and have the Developer assume that role

Create a separate IAM user in each account and have the Developer login separately to each account

A

CORRECT: “Create a role with the required permissions in the Production and Testing accounts and have the Developer assume that role” is the correct answer.
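A minimal boto3 sketch of assuming such a role (the role ARN is a placeholder):

import boto3

sts = boto3.client("sts")

# The Developer's IAM user must be allowed sts:AssumeRole on this role.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/CrossAccountDeployer",
    RoleSessionName="dev-session",
)["Credentials"]

# Temporary credentials scoped to the Production account's role.
prod_ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)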

28
Q

A Developer has created an Amazon S3 bucket and uploaded some objects that will be used for a publicly available static website. What steps MUST be performed to configure the bucket as a static website? (Select TWO.)


Create an object access control list (ACL) granting READ permissions to the AllUsers group

Enable public access and grant everyone the s3:GetObject permissions

Upload a certificate from AWS Certificate Manager

Upload an index and error document and enter the name of the index and error documents when enabling static website hosting

Upload an index document and enter the name of the index document when enabling static website hosting

A

Enable public access and grant everyone the s3:GetObject permissions

Upload an index document and enter the name of the index document when enabling static website hosting

INCORRECT: “Upload an index and error document and enter the name of the index and error documents when enabling static website hosting” is incorrect as the error document is optional and the question specifically asks for the steps that MUST be completed.
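For illustration, the two mandatory steps sketched with boto3 (the bucket name is a placeholder):

import json
import boto3

s3 = boto3.client("s3")
bucket = "my-static-site"

# Grant everyone s3:GetObject on the bucket's objects.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))

# Enable static website hosting with an index document.
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={"IndexDocument": {"Suffix": "index.html"}},
)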

29
Q

A company runs a popular website behind an Amazon CloudFront distribution that uses an Application Load Balancer as the origin. The Developer wants to set up custom HTTP responses to 404 errors for content that has been removed from the origin that redirects the users to another page.

The Developer wants to use an AWS Lambda@Edge function that is associated with the current CloudFront distribution to accomplish this goal. The solution must use a minimum amount of resources.

Which CloudFront event type should the Developer use to invoke the Lambda@Edge function that contains the redirect logic?

Viewer response

Origin response

Viewer request

Origin request
A

CORRECT: “Origin response” is the correct answer.

The origin response event fires after CloudFront receives the response from the origin, and only on cache misses. The function can therefore intercept 404 responses and return a redirect while executing far less often than a viewer-facing trigger, using a minimum amount of resources.

30
Q

A Developer is developing a web application and will maintain separate sets of resources for the alpha, beta, and release stages. Each version runs on Amazon EC2 and uses an Elastic Load Balancer.

How can the Developer create a single page to view and manage all of the resources?


Deploy all resources using a single Amazon CloudFormation stack

Create a resource group

Create a single AWS CodeDeploy deployment

Create an AWS Elastic Beanstalk environment for each stage

A


Create a resource group

In AWS, a resource is an entity that you can work with. Examples include an Amazon EC2 instance, an AWS CloudFormation stack, or an Amazon S3 bucket. If you work with multiple resources, you might find it useful to manage them as a group rather than move from one AWS service to another for each task.

31
Q

An application will be hosted on the AWS Cloud. Developers will be using an Agile software development methodology with regular updates deployed through a continuous integration and delivery (CI/CD) model. Which AWS service can assist the Developers with automating the build, test, and deploy phases of the release process every time there is a code change?

AWS CloudFormation

AWS Elastic Beanstalk

AWS CodeBuild

AWS CodePipeline

A


AWS CodePipeline

INCORRECT: “AWS CodeBuild” is incorrect as CodeBuild is used for compiling code, running unit tests and creating the deployment package. It does not manage the deployment of the code.

32
Q

A Developer is creating a design for an application that will include Docker containers on Amazon ECS with the EC2 launch type. The Developer needs to control the placement of tasks onto groups of container instances organized by availability zone and instance type.

Which Amazon ECS feature provides expressions that can be used to group container instances by the relevant attributes?

Task Group
Task Placement Strategy
Cluster Query Language
Task Placement Constraints
A

Cluster Query Language

Cluster queries are expressions that enable you to group container instances by attributes such as Availability Zone, instance type, or custom metadata. The resulting groups of container instances can then be used for task placement.

33
Q

A company runs multiple microservices that each use their own Amazon DynamoDB table. The “customers” microservice needs data that originates in the “orders” microservice.

What approach represents the SIMPLEST method for the “customers” table to get near real-time updates from the “orders” table?


Enable Amazon DynamoDB streams on the “orders” table, configure the “customers” microservice to read records from the stream

Use Amazon Kinesis Firehose to deliver all changes in the “orders” table to the “customers” table

Use Amazon CloudWatch Events to send notifications every time an item is added or modified in the “orders” table

Enable DynamoDB streams for the “customers” table, trigger an AWS Lambda function to read records from the stream and write them to the “orders” table

A

Enable Amazon DynamoDB streams on the “orders” table, configure the “customers” microservice to read records from the stream

34
Q

An application running on Amazon EC2 generates a large number of small files (1KB each) containing personally identifiable information that must be converted to ciphertext. The data will be stored on a proprietary network-attached file system. What is the SAFEST way to encrypt the data using AWS KMS?


Create a data encryption key from a customer master key and encrypt the data with the data encryption key

Encrypt the data directly with a customer managed customer master key

Encrypt the data directly with an AWS managed customer master key

Create a data encryption key from a customer master key and encrypt the data with the customer master key

A

Encrypt the data directly with a customer managed customer master key

The files are only 1 KB each, which is below the 4 KB limit of the KMS Encrypt API, so they can be encrypted directly under the CMK and no plaintext data key ever needs to leave AWS KMS.

35
Q

A Developer is deploying an application using Docker containers running on the Amazon Elastic Container Service (ECS). The Developer is testing application latency and wants to capture trace information between the microservices.

Which solution will meet these requirements?


Create a Docker image that runs the X-Ray daemon, upload it to a Docker image repository, and then deploy it to the Amazon ECS cluster.

Install the AWS X-Ray daemon on each of the Amazon ECS instances.

Install the Amazon CloudWatch agent on the container image. Use the CloudWatch SDK to publish custom metrics from each of the microservices.

Install the AWS X-Ray daemon locally on an Amazon EC2 instance and instrument the Amazon ECS microservices using the X-Ray SDK.

A

Create a Docker image that runs the X-Ray daemon, upload it to a Docker image repository, and then deploy it to the Amazon ECS cluster.

36
Q

A Developer needs to be notified by email for all new object creation events in a specific Amazon S3 bucket. Amazon SNS will be used for sending the messages. How can the Developer enable these notifications?


Create an event notification for all s3:ObjectCreated:Put API calls

Create an event notification for all s3:ObjectCreated:* API calls

Create an event notification for all s3:ObjectRestore:Post API calls

Create an event notification for all s3:ObjectRemoved:Delete API calls

A


Create an event notification for all s3:ObjectCreated:* API calls

INCORRECT: “Create an event notification for all s3:ObjectCreated:Put API calls” is incorrect as this will not capture all new object creation events (e.g. POST or COPY). The wildcard should be used instead.
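A boto3 sketch of such a configuration (the bucket name and SNS topic ARN are placeholders):

import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="my-bucket",
    NotificationConfiguration={
        "TopicConfigurations": [{
            "TopicArn": "arn:aws:sns:us-east-1:123456789012:new-objects",
            # The wildcard captures PUT, POST, COPY, and multipart uploads.
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)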

37
Q

A Developer is launching an application on Amazon ECS. The application should scale tasks automatically based on load and incoming connections must be spread across the containers.

How should the Developer configure the ECS cluster?


Write statements using the Cluster Query Language to scale the Docker containers

Create an ECS Task Definition that uses Auto Scaling and Elastic Load Balancing

Create a capacity provider and configure cluster auto scaling

Create an ECS Service with Auto Scaling and attach an Elastic Load Balancer

A

Create an ECS Service with Auto Scaling and attach an Elastic Load Balancer

38
Q

A Developer is creating an application that will process some data and generate an image file from it. The application will use an AWS Lambda function which will require 150 MB of temporary storage while executing. The temporary files will not be needed after the function execution is completed.

What is the best location for the Developer to store the files?


Store the files in Amazon S3 and use a lifecycle policy to delete the files automatically

Store the files in the /tmp directory and delete the files when the execution completes

Store the files in an Amazon EFS filesystem and delete the files when the execution completes

Store the files in an Amazon Instance Store and delete the files when the execution completes

A

CORRECT: “Store the files in the /tmp directory and delete the files when the execution completes” is the correct answer.

The /tmp directory provides 512 MB of storage within the execution context, which comfortably fits the 150 MB requirement. It can also be used for storing static assets that can be used by subsequent invocations of the function. If the assets must be deleted before the function is invoked again, the function code should take care of deleting them.

39
Q

A new application will be deployed using AWS CodeDeploy to Amazon Elastic Container Service (ECS). What must be supplied to CodeDeploy to specify the ECS service to deploy?


The AppSpec file

The Policy file

The BuildSpec file

The Template file

A


The AppSpec file

INCORRECT: “The BuildSpec file” is incorrect as this is a file type that is used with AWS CodeBuild.

40
Q

A Developer implemented a static website hosted in Amazon S3 that makes web service requests hosted in Amazon API Gateway and AWS Lambda. The site is showing an error that reads:

“No ‘Access-Control-Allow-Origin’ header is present on the requested resource. Origin ‘null’ is therefore not allowed access.”

What should the Developer do to resolve this issue?

Enable cross-origin resource sharing (CORS) on the S3 bucket

Add the Access-Control-Request-Method header to the request

Enable cross-origin resource sharing (CORS) for the method in API Gateway

Add the Access-Control-Request-Headers header to the request

A


Enable cross-origin resource sharing (CORS) for the method in API Gateway

INCORRECT: “Enable cross-origin resource sharing (CORS) on the S3 bucket” is incorrect as CORS must be enabled on the requested endpoint which is API Gateway, not S3.

41
Q

A company is developing a new online game that will run on top of Amazon ECS. Four distinct Amazon ECS services will be part of the architecture, each requiring specific permissions to various AWS services. The company wants to optimize the use of the underlying Amazon EC2 instances by bin packing the containers based on memory reservation.

Which configuration would allow the Development team to meet these requirements MOST securely?


Create four distinct IAM roles, each containing the required permissions for the associated ECS services, then, create an IAM group and configure the ECS cluster to reference that group

Create four distinct IAM roles, each containing the required permissions for the associated ECS services, then configure each ECS task definition to reference the associated IAM role

Create four distinct IAM roles, each containing the required permissions for the associated ECS services, then configure each ECS service to reference the associated IAM role

Create a new Identity and Access Management (IAM) instance profile containing the required permissions for the various ECS services, then associate that instance role with the underlying EC2 instances

A

Create four distinct IAM roles, each containing the required permissions for the associated ECS services, then configure each ECS task definition to reference the associated IAM role

INCORRECT: “Create four distinct IAM roles, each containing the required permissions for the associated ECS services, then configure each ECS service to reference the associated IAM role” is incorrect as the reference should be made within the task definition.

42
Q

A developer is troubleshooting problems with a Lambda function that is invoked by Amazon SNS and repeatedly fails. How can the developer save discarded events for further processing?

Enable Lambda streams

Configure a Dead Letter Queue (DLQ)

Enable SNS notifications for failed events

Enable CloudWatch Logs for the Lambda function
A

Configure a Dead Letter Queue (DLQ)

You can configure a dead letter queue (DLQ) on AWS Lambda to give you more control over message handling for all asynchronous invocations, including those delivered via AWS events (S3, SNS, IoT, etc.).
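A boto3 sketch of attaching a DLQ (the function name and queue ARN are placeholders):

import boto3

lam = boto3.client("lambda")

# Events that fail all asynchronous-invocation retries go to this SQS queue.
lam.update_function_configuration(
    FunctionName="my-function",
    DeadLetterConfig={
        "TargetArn": "arn:aws:sqs:us-east-1:123456789012:my-function-dlq"
    },
)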

43
Q

A company needs to store sensitive documents on Amazon S3. The documents should be encrypted in transit using SSL/TLS and then be encrypted for storage at the destination. The company does not want to manage any of the encryption infrastructure or customer master keys and requires the most cost-effective solution.

What is the MOST suitable option to encrypt the data?


Client-side encryption with Amazon S3 managed keys

Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS) using customer managed CMKs

Server-Side Encryption with Customer-Provided Keys (SSE-C)

Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)

A

Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)

44
Q

A company has deployed a REST API using Amazon API Gateway with a Lambda authorizer. The company needs to log who has accessed the API and how the caller accessed the API. They also require logs that include errors and execution traces for the Lambda authorizer.

Which combination of actions should the Developer take to meet these requirements? (Select TWO.)

Enable API Gateway access logs.

Create an API Gateway usage plan.

Enable server access logging.

Enable API Gateway execution logging.

Enable detailed logging in Amazon CloudWatch.

A

CORRECT: “Enable API Gateway execution logging” is a correct answer.

CORRECT: “Enable API Gateway access logs” is also a correct answer.

There are two types of API logging in CloudWatch: execution logging and access logging. In execution logging, API Gateway manages the CloudWatch Logs. The process includes creating log groups and log streams, and reporting to the log streams any caller’s requests and responses.

45
Q

A company uses Amazon DynamoDB to store sensitive data that must be encrypted. The company security policy mandates that data must be encrypted before it is submitted to DynamoDB.

How can a Developer meet these requirements?

Use the UpdateTable operation to switch to a customer managed customer master key (CMK).

Use AWS Certificate Manager (ACM) to create one certificate for each DynamoDB table.

Use the UpdateTable operation to switch to an AWS managed customer master key (CMK).

Use the DynamoDB Encryption Client to enable end-to-end protection using client-side encryption.

A

“Use the DynamoDB Encryption Client to enable end-to-end protection using client-side encryption” is the correct answer.

In addition to encryption at rest, which is a server-side encryption feature, AWS provides the Amazon DynamoDB Encryption Client. This client-side encryption library enables you to protect your table data before submitting it to DynamoDB.

46
Q

A Developer has deployed an application that runs on an Auto Scaling group of Amazon EC2 instances. The application data is stored in an Amazon DynamoDB table and records are constantly updated by all instances. An instance sometimes retrieves old data. The Developer wants to correct this by making sure the reads are strongly consistent.

How can the Developer accomplish this?


Create a new DynamoDB Accelerator (DAX) table

Use the GetShardIterator command

Set consistency to strong when calling UpdateTable

Set ConsistentRead to true when calling GetItem

A

“Set ConsistentRead to true when calling GetItem” is the correct answer.

When you request a strongly consistent read, DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful.

The GetItem operation returns a set of attributes for the item with the given primary key. If there is no matching item, GetItem does not return any data and there will be no Item element in the response.

GetItem provides an eventually consistent read by default. If your application requires a strongly consistent read, set ConsistentRead to true. Although a strongly consistent read might take more time than an eventually consistent read, it always returns the last updated value.
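For illustration (the table and key are placeholders):

import boto3

table = boto3.resource("dynamodb").Table("AppData")

# Reflects all writes acknowledged before the read started.
resp = table.get_item(
    Key={"RecordId": "record-1"},
    ConsistentRead=True,
)
item = resp.get("Item")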

47
Q

A Developer created a new AWS account and must create a scalable AWS Lambda function that meets the following requirements for concurrent execution:

· Average execution time of 100 seconds
· 50 requests per second

Which step must be taken prior to deployment to prevent error?

Contact AWS Support to increase the concurrent execution limits

Implement error handling within the application code

Implement dead-letter queues to capture invocation errors

Add an event source from Amazon API Gateway to the Lambda function

A

“Contact AWS Support to increase the concurrent execution limits” is the correct answer.

The average execution time is 100 seconds and 50 requests are received per second. This means the concurrency requirement is 100 x 50 = 5,000, which is well above the default account limit of 1,000 concurrent executions. Therefore, the Developer will need to contact AWS Support to increase the concurrent execution limits.

48
Q

A developer is designing a web application that will run on Amazon EC2 Linux instances using an Auto Scaling Group. The application should scale based on a threshold for the number of users concurrently using the application.

How should the Auto Scaling Group be configured to scale out?


Use the Amazon CloudWatch metric “NetworkIn”

Use a target tracking scaling policy

Create a custom Amazon CloudWatch metric for memory usage

Create a custom Amazon CloudWatch metric for concurrent users

A

“Create a custom Amazon CloudWatch metric for concurrent users” is the correct answer.

You can create a custom CloudWatch metric for your EC2 Linux instance statistics by creating a script through the AWS Command Line Interface (AWS CLI). Then, you can monitor that metric by pushing it to CloudWatch. In this scenario you could then monitor the number of users currently logged in.
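A boto3 sketch of publishing the metric from an instance (the namespace, metric name, and measured value are placeholders):

import boto3

cloudwatch = boto3.client("cloudwatch")

concurrent_users = 42  # would be measured on the instance by a script

cloudwatch.put_metric_data(
    Namespace="WebApp",
    MetricData=[{
        "MetricName": "ConcurrentUsers",
        "Value": concurrent_users,
        "Unit": "Count",
    }],
)

The Auto Scaling group's scaling policy can then alarm on this custom metric.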

49
Q

An application searches a DynamoDB table to return items based on primary key attributes. A developer noticed some ProvisionedThroughputExceeded exceptions being generated by DynamoDB.

How can the application be optimized to reduce the load on DynamoDB and use the LEAST amount of RCU?


Modify the application to issue scan API calls with eventual consistency reads

Modify the application to issue scan API calls with strong consistency reads

Modify the application to issue query API calls with eventual consistency reads

Modify the application to issue query API calls with strong consistency reads

A

“Modify the application to issue query API calls with eventual consistency reads” is the correct answer.

In general, Scan operations are less efficient than other operations in DynamoDB. A Scan operation always scans the entire table or secondary index. It then filters out values to provide the result you want, essentially adding the extra step of removing data from the result set.

If possible, you should avoid using a Scan operation on a large table or index with a filter that removes many results. Also, as a table or index grows, the Scan operation slows. The Scan operation examines every item for the requested values and can use up the provisioned throughput for a large table or index in a single operation. For faster response times, design your tables and indexes so that your applications can use Query instead of Scan. (For tables, you can also consider using the GetItem and BatchGetItem APIs.)

Additionally, eventual consistency consumes fewer RCUs than strong consistency. Therefore, the application should be refactored to use query APIs with eventual consistency.
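For illustration, a Query using the default eventually consistent read (the table and key names are placeholders):

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Items")

# ConsistentRead defaults to False; an eventually consistent read costs half
# the RCUs of a strongly consistent one.
resp = table.query(KeyConditionExpression=Key("ItemId").eq("item-123"))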

50
Q

A Developer is building a WebSocket API using Amazon API Gateway. The payload sent to this API is JSON that includes an action key which can have multiple values. The Developer must integrate with different routes based on the value of the action key of the incoming JSON payload.

How can the Developer accomplish this task with the LEAST amount of configuration?


Set the value of the route selection expression to $default.

Create a mapping template to map the action key to an integration request.

Create a separate stage for each possible value of the action key.

Set the value of the route selection expression to $request.body.action.

A

“Set the value of the route selection expression to $request.body.action” is the correct answer.

In your WebSocket API, incoming JSON messages are directed to backend integrations based on routes that you configure. (Non-JSON messages are directed to a $default route that you configure.)
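For illustration, the expression is supplied when the WebSocket API is created (the API name is a placeholder):

import boto3

apigwv2 = boto3.client("apigatewayv2")

# Incoming JSON messages are routed by the value of their "action" key.
apigwv2.create_api(
    Name="game-ws-api",
    ProtocolType="WEBSOCKET",
    RouteSelectionExpression="$request.body.action",
)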

51
Q

An application running on Amazon EC2 generates a large number of small files (1KB each) containing personally identifiable information that must be converted to ciphertext. The data will be stored on a proprietary network-attached file system. What is the SAFEST way to encrypt the data using AWS KMS?


Create a data encryption key from a customer master key and encrypt the data with the customer master key

Create a data encryption key from a customer master key and encrypt the data with the data encryption key

Encrypt the data directly with a customer managed customer master key

Encrypt the data directly with an AWS managed customer master key

A

Encrypt the data directly with a customer managed customer master key

INCORRECT: “Encrypt the data directly with an AWS managed customer master key” is incorrect. An AWS managed CMK can only be used by the AWS service it was created for; it cannot be used directly in your own Encrypt API calls, so a customer managed CMK is required.

52
Q

A company manages a web application that is deployed on AWS Elastic Beanstalk. A Developer has been instructed to update to a new version of the application code. There is no tolerance for downtime if the update fails and rollback should be fast.

What is the SAFEST deployment method to use?

All at once

Immutable

Rolling with Additional Batch

Rolling
A

CORRECT: “Immutable” is the correct answer.

An immutable update launches a full set of instances running the new version in a separate, temporary Auto Scaling group. If the deployment fails, the new instances are terminated while the original instances continue serving traffic, so there is no downtime and rollback is fast.

INCORRECT: “Rolling with Additional Batch” is incorrect because it requires manual redeployment in the case of failure.

53
Q

A utilities company needs to ensure that documents uploaded by customers through a web portal are securely stored in Amazon S3 with encryption at rest. The company does not want to manage the security infrastructure in-house. However, the company still needs to maintain control over its encryption keys due to industry regulations.

Which encryption strategy should a Developer use to meet these requirements?


Server-side encryption with customer-provided encryption keys (SSE-C)

Server-side encryption with AWS KMS managed keys (SSE-KMS)

Server-side encryption with Amazon S3 managed keys (SSE-S3)

Client-side encryption

A

CORRECT: “Server-side encryption with customer-provided encryption keys (SSE-C)” is the correct answer.

Server-side encryption is about protecting data at rest. Server-side encryption encrypts only the object data, not object metadata. Using server-side encryption with customer-provided encryption keys (SSE-C) allows you to set your own encryption keys.

With the encryption key you provide as part of your request, Amazon S3 manages the encryption as it writes to disks and decryption when you access your objects. Therefore, you don’t need to maintain any code to perform data encryption and decryption. The only thing you do is manage the encryption keys you provide.

54
Q

A developer is making updates to the code for a Lambda function. The developer is keen to test the code updates by directing a small amount of traffic to a new version. How can this BEST be achieved?


Create an alias that points to both the new and previous versions of the function code and assign a weighting for sending a portion of traffic to the new version

Create an API using API Gateway and use stage variables to point to different versions of the Lambda function

Create a new function using the new code and update the application to split requests between the new functions

Create two versions of the function code. Configure the application to direct a subset of requests to the new version

A

CORRECT: “Create an alias that points to both the new and previous versions of the function code and assign a weighting for sending a portion of traffic to the new version”

You can create one or more aliases for your AWS Lambda function. A Lambda alias is like a pointer to a specific Lambda function version. Users can access the function version using the alias ARN.

You can point an alias at two versions of your function code and assign a weighting to direct a portion of traffic to the new version. This enables a blue/green style of deployment and means it’s easy to roll back to the older version by simply updating the weighting if issues occur.
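A boto3 sketch of weighted alias routing (the function name, alias, versions, and weight are placeholders):

import boto3

lam = boto3.client("lambda")

# Send 10% of traffic to version 2; version 1 keeps the remaining 90%.
lam.create_alias(
    FunctionName="my-function",
    Name="live",
    FunctionVersion="1",
    RoutingConfig={"AdditionalVersionWeights": {"2": 0.1}},
)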

55
Q

A Developer is designing a cloud native application. The application will use several AWS Lambda functions that will process items that the functions read from an event source. Which AWS services are supported for Lambda event source mappings? (Select THREE.)

Amazon Simple Notification Service (SNS)

Amazon Simple Queue Service (SQS)

Another Lambda function

Amazon Kinesis

Amazon DynamoDB

Amazon Simple Storage Service (S3)
A

Amazon Simple Queue Service (SQS)
Amazon Kinesis
Amazon DynamoDB

An event source mapping is an AWS Lambda resource that reads from an event source and invokes a Lambda function. You can use event source mappings to process items from a stream or queue in services that don’t invoke Lambda functions directly; supported services include Amazon Kinesis, Amazon DynamoDB (streams), and Amazon SQS.

56
Q

A developer is creating an Auto Scaling group of Amazon EC2 instances. The developer needs to publish a custom metric to Amazon CloudWatch. Which method would be the MOST secure way to authenticate a CloudWatch PUT request?


Create an IAM role with the PutMetricData permission and create a new Auto Scaling launch configuration to launch instances using that role

Create an IAM role with the PutMetricData permission and modify the Amazon EC2 instances to use that role

Modify the CloudWatch metric policies to allow the PutMetricData permission to instances from the Auto Scaling group

Create an IAM user with the PutMetricData permission and modify the Auto Scaling launch configuration to inject the user credentials into the instance user data

A

CORRECT: “Create an IAM role with the PutMetricData permission and create a new Auto Scaling launch configuration to launch instances using that role” is the correct answer.

INCORRECT: “Create an IAM role with the PutMetricData permission and modify the Amazon EC2 instances to use that role” is incorrect as you should create a new launch configuration for the Auto Scaling group rather than updating the instances manually.

57
Q

A company is creating an application that will require users to access AWS services and allow them to reset their own passwords. Which of the following would allow the company to manage users and authorization while allowing users to reset their own passwords?


Amazon Cognito user pools and identity pools

Amazon Cognito identity pools and AWS IAM

Amazon Cognito identity pools and AWS STS

Amazon Cognito user pools and AWS KMS

A

CORRECT: “Amazon Cognito user pools and identity pools” is the correct answer.

INCORRECT: “Amazon Cognito identity pools and AWS IAM” is incorrect as a Cognito user pool should be used as the directory source for creating and managing users. IAM is used for accounts that are used to administer AWS services, not for application user access.

The first requirement is provided by an Amazon Cognito User Pool. With a Cognito user pool you can add sign-up and sign-in to mobile and web apps and it also offers a user directory so user accounts can be created directly within the user pool. Users also have the ability to reset their passwords.

To access AWS services you need a Cognito Identity Pool. An identity pool can be used with a user pool and enables a user to obtain temporary limited-privilege credentials to access AWS services.

58
Q

An application has been instrumented to use the AWS X-Ray SDK to collect data about the requests the application serves. The Developer has set the user field on segments to a string that identifies the user who sent the request.

How can the Developer search for segments associated with specific users?


Use a filter expression to search for the user field in the segment metadata

Use a filter expression to search for the user field in the segment annotations

By using the GetTraceGraph API with a filter expression

By using the GetTraceSummaries API with a filter expression

A

CORRECT: “By using the GetTraceSummaries API with a filter expression” is the correct answer.

A subset of segment fields are indexed by X-Ray for use with filter expressions. For example, if you set the user field on a segment to a unique identifier, you can search for segments associated with specific users in the X-Ray console or by using the GetTraceSummaries API.
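For illustration, a boto3 query for one user's traces (the user value and time window are placeholders):

import boto3
from datetime import datetime, timedelta, timezone

xray = boto3.client("xray")

now = datetime.now(timezone.utc)
resp = xray.get_trace_summaries(
    StartTime=now - timedelta(minutes=10),
    EndTime=now,
    # 'user' is one of the indexed segment fields.
    FilterExpression='user = "user-1234"',
)
for summary in resp["TraceSummaries"]:
    print(summary["Id"])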

59
Q

An application uses an Auto Scaling group of Amazon EC2 instances, an Application Load Balancer (ALB), and an Amazon Simple Queue Service (SQS) queue. An Amazon CloudFront distribution caches content for global users. A Developer needs to add in-transit encryption to the data by configuring end-to-end SSL between the CloudFront Origin and the end users.

How can the Developer meet this requirement? (Select TWO.)


Create an Origin Access Identity (OAI)

Configure the Origin Protocol Policy

Add a certificate to the Auto Scaling Group

Create an encrypted distribution

Configure the Viewer Protocol Policy

A

CORRECT: “Configure the Origin Protocol Policy” is a correct answer.

CORRECT: “Configure the Viewer Protocol Policy” is also a correct answer.

To enable SSL between the origin and the distribution the Developer can configure the Origin Protocol Policy. Depending on the domain name used (CloudFront default or custom), the steps are different. To enable SSL between the end-user and CloudFront the Viewer Protocol Policy should be configured.

60
Q

A Development team has deployed several applications running on an Auto Scaling fleet of Amazon EC2 instances. The Operations team have asked for a display that shows a key performance metric for each application on a single screen for monitoring purposes.

What steps should a Developer take to deliver this capability using Amazon CloudWatch?


Create a custom dimension with a unique metric name for each application

Create a custom event with a unique metric name for each application

Create a custom alarm with a unique metric name for each application

Create a custom namespace with a unique metric name for each application

A

A namespace is a container for CloudWatch metrics. Metrics in different namespaces are isolated from each other, so that metrics from different applications are not mistakenly aggregated into the same statistics.

Therefore, the Developer should create a custom namespace with a unique metric name for each application. This namespace will then allow the metrics for each individual application to be shown in a single view through CloudWatch.

CORRECT: “Create a custom namespace with a unique metric name for each application” is the correct answer.

INCORRECT: “Create a custom dimension with a unique metric name for each application” is incorrect as a dimension further clarifies what a metric is and what data it stores.
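
For illustration, a minimal boto3 sketch of publishing a metric into a per-application custom namespace (the namespace and metric names are hypothetical):

import boto3

cloudwatch = boto3.client("cloudwatch")

# One custom namespace per application keeps each app's metrics isolated.
cloudwatch.put_metric_data(
    Namespace="MyCompany/OrdersApp",
    MetricData=[{"MetricName": "ProcessingTime", "Value": 182.0, "Unit": "Milliseconds"}],
)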

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
61
Q

A batch job runs every 24 hours and writes around 1 million items into a DynamoDB table each day. The batch job completes quickly, and the items are processed within 2 hours and are no longer needed.

What’s the MOST efficient way to provide an empty table each day?


Use the BatchWriteItem API with a DeleteRequest

Use the BatchUpdateItem API with expressions

Issue an AWS CLI aws dynamodb delete-item command with a wildcard

Delete the entire table and recreate it each day

A

Any delete operation will consume RCUs to scan/query the table and WCUs to delete the items. It will be much cheaper and simpler to delete the table and recreate it ahead of the next batch job. This can easily be automated through the API.

INCORRECT: “Issue an AWS CLI aws dynamodb delete-item command with a wildcard” is incorrect as this operation deletes data from a table one item at a time, which is highly inefficient. You also must specify the item’s primary key values; you cannot use a wildcard.
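
A minimal boto3 sketch of the delete-and-recreate approach (the table name and schema are hypothetical):

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.delete_table(TableName="batch-items")
dynamodb.get_waiter("table_not_exists").wait(TableName="batch-items")

dynamodb.create_table(
    TableName="batch-items",
    AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)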

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
62
Q

The source code for an application is stored in a file named index.js that is in a folder along with a template file that includes the following code:

AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Resources:
  LambdaFunctionWithAPI:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs12.x

What does a Developer need to do to prepare the template so it can be deployed using an AWS CLI command?


Run the aws serverless create-package command to embed the source file directly into the existing CloudFormation template

Run the aws lambda zip command to package the source file together with the CloudFormation template and deploy the resulting zip archive

Run the aws cloudformation package command to upload the source code to an Amazon S3 bucket and produce a modified CloudFormation template

Run the aws cloudformation compile command to base64 encode and embed the source file into a modified CloudFormation template

A

CORRECT: “Run the aws cloudformation package command to upload the source code to an Amazon S3 bucket and produce a modified CloudFormation template” is the correct answer.

INCORRECT: “Run the aws serverless create-package command to embed the source file directly into the existing CloudFormation template” is incorrect as the Developer has the choice to run either “aws cloudformation package” or “sam package”, but not “aws serverless create-package”.
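
For illustration, with hypothetical bucket and file names, the packaging step looks like this:

aws cloudformation package --template-file template.yaml --s3-bucket my-artifact-bucket --output-template-file packaged.yaml

The modified template it produces (packaged.yaml) references the uploaded artifact in Amazon S3 and can then be deployed with the aws cloudformation deploy command.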

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
63
Q

A Developer needs to create an instance profile for an Amazon EC2 instance using the AWS CLI. How can this be achieved? (Select THREE.)

Run the AddRoleToInstanceProfile API

Run the AssignInstanceProfile API

Run the aws iam add-role-to-instance-profile command

Run the aws ec2 associate-instance-profile command

Run the CreateInstanceProfile API

Run the aws iam create-instance-profile command

A

Run the aws iam create-instance-profile command
Run the aws iam add-role-to-instance-profile command
Run the aws ec2 associate-instance-profile command

To add a role to an Amazon EC2 instance using the AWS CLI you must first create an instance profile. Then you need to add the role to the instance profile and finally assign the instance profile to the Amazon EC2 instance.
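
A minimal boto3 sketch of the same three steps (the profile, role, and instance identifiers are hypothetical):

import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

# 1. Create the instance profile, 2. add the role, 3. attach it to the instance.
iam.create_instance_profile(InstanceProfileName="web-profile")
iam.add_role_to_instance_profile(InstanceProfileName="web-profile", RoleName="web-role")
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "web-profile"},
    InstanceId="i-0123456789abcdef0",
)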

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
64
Q

A development team is migrating data from various file shares to AWS from on-premises. The data will be migrated into a single Amazon S3 bucket. What is the SIMPLEST method to ensure the data is encrypted at rest in the S3 bucket?


Use SSL to transmit the data over the Internet

Ensure all requests use the x-amz-server-side-encryption-customer-key header

Ensure all requests use the x-amz-server-side-encryption header

Enable default encryption when creating the bucket

A

CORRECT: “Enable default encryption when creating the bucket” is the correct answer.
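
For illustration, a minimal boto3 sketch of enabling default encryption (the bucket name is a hypothetical placeholder):

import boto3

s3 = boto3.client("s3")

# Default encryption: objects uploaded without an encryption header are
# encrypted with SSE-S3 (AES256) automatically.
s3.put_bucket_encryption(
    Bucket="migrated-file-shares",
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)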

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
65
Q

An application uses Amazon API Gateway, an AWS Lambda function and a DynamoDB table. The developer requires that another Lambda function is triggered when an item lifecycle activity occurs in the DynamoDB table.

How can this be achieved?


Configure an Amazon CloudWatch alarm that sends an Amazon SNS notification. Trigger the Lambda function asynchronously from the SNS notification

Configure an Amazon CloudTrail API alarm that sends a message to an Amazon SQS queue. Configure the Lambda function to poll the queue and invoke the function synchronously

Enable a DynamoDB stream and trigger the Lambda function asynchronously from the stream

Enable a DynamoDB stream and trigger the Lambda function synchronously from the stream

A

CORRECT: “Enable a DynamoDB stream and trigger the Lambda function synchronously from the stream” is the correct answer.

INCORRECT: “Enable a DynamoDB stream and trigger the Lambda function asynchronously from the stream” is incorrect as the invocation should be synchronous.

Immediately after an item in the table is modified, a new record appears in the table’s stream. AWS Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records.
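
For illustration, a minimal boto3 sketch of creating the stream-to-function mapping (the stream ARN and function name are hypothetical):

import boto3

lambda_client = boto3.client("lambda")

# Lambda polls the stream and invokes the function synchronously with
# batches of new stream records.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:dynamodb:us-east-1:111122223333:table/items/stream/2024-01-01T00:00:00.000",
    FunctionName="item-lifecycle-handler",
    StartingPosition="LATEST",
)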

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
66
Q

An application is using Amazon DynamoDB as its data store and needs to be able to read 100 items per second as strongly consistent reads. Each item is 5 KB in size.
What value should be set for the table’s provisioned throughput for reads?

250 Read Capacity Units

500 Read Capacity Units

200 Read Capacity Units

50 Read Capacity Units
A

CORRECT: “200 Read Capacity Units” is the correct answer.

To determine the number of RCUs required to handle 100 strongly consistent reads per/second with an average item size of 5KB, perform the following steps:

  1. Round the item size up to the next multiple of 4 KB (5 KB rounds up to 8 KB).
  2. Determine the RCUs per item by dividing the rounded item size by 4 KB (8 KB / 4 KB = 2).
  3. Multiply the value from step 2 by the number of reads required per second (2 x 100 = 200).
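
The calculation can be expressed as a small helper (a sketch; the function name is illustrative):

import math

def strongly_consistent_rcus(item_size_kb, reads_per_second):
    # One RCU supports one strongly consistent read per second of an item
    # up to 4 KB; larger items consume ceil(size / 4 KB) RCUs per read.
    return math.ceil(item_size_kb / 4) * reads_per_second

print(strongly_consistent_rcus(5, 100))  # 200
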
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
67
Q

A company wants to implement authentication for its new REST service using Amazon API Gateway. To authenticate the calls, each request must include HTTP headers with a client ID and user ID. These credentials must be compared to authentication data in an Amazon DynamoDB table.

What MUST the company do to implement this authentication in API Gateway?


Implement an Amazon Cognito authorizer that references the DynamoDB authentication table

Create a model that requires the credentials, then grant API Gateway access to the authentication table

Implement an AWS Lambda authorizer that references the DynamoDB authentication table

Modify the integration requests to require the credentials, then grant API Gateway access to the authentication table

A

CORRECT: “Implement an AWS Lambda authorizer that references the DynamoDB authentication table” is the correct answer.

There are two types of Lambda authorizers:

  • A token-based Lambda authorizer (also called a TOKEN authorizer) receives the caller’s identity in a bearer token, such as a JSON Web Token (JWT) or an OAuth token.
  • A request parameter-based Lambda authorizer (also called a REQUEST authorizer) receives the caller’s identity in a combination of headers, query string parameters, stageVariables, and $context variables.
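
A minimal sketch of a REQUEST authorizer handler for this scenario (the table and header names are hypothetical; authorizers must return an IAM policy for the method ARN):

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("auth-credentials")  # hypothetical table name

def handler(event, context):
    # REQUEST authorizer: the client ID and user ID arrive as HTTP headers.
    headers = event.get("headers") or {}
    client_id = headers.get("clientId")
    user_id = headers.get("userId")

    item = table.get_item(Key={"client_id": client_id}).get("Item") if client_id else None
    allowed = item is not None and item.get("user_id") == user_id

    return {
        "principalId": user_id or "anonymous",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": "Allow" if allowed else "Deny",
                "Resource": event["methodArn"],
            }],
        },
    }
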
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
68
Q

An application will use AWS Lambda and an Amazon RDS database. The Developer needs to secure the database connection string and enable automatic rotation every 30 days. What is the SIMPLEST way to achieve this requirement?


Store a SecureString in Systems Manager Parameter Store and enable automatic rotation every 30 days

Store a secret in AWS Secrets Manager and enable automatic rotation every 30 days

Store the connection string in an encrypted Amazon S3 bucket and use a scheduled CloudWatch Event to update the connection string every 30 days

Store the connection string as an encrypted environment variable in Lambda and create a separate function that rotates the connection string every 30 days

A

CORRECT: “Store a secret in AWS Secrets Manager and enable automatic rotation every 30 days” is the correct answer.
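
For illustration, a minimal boto3 sketch of reading the secret from the Lambda function (the secret name is a hypothetical placeholder; rotation is configured on the secret itself):

import json
import boto3

secrets = boto3.client("secretsmanager")

secret = secrets.get_secret_value(SecretId="prod/app/db-connection")
conn = json.loads(secret["SecretString"])  # e.g. host, port, username, password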

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
69
Q

A company needs to provide additional security for their APIs deployed on Amazon API Gateway. They would like to be able to authenticate their customers with a token. What is the SAFEST way to do this?


Setup usage plans and distribute API keys to the customers


Use AWS Single Sign-on to authenticate the customers


Create an Amazon Cognito identity pool


Create an API Gateway Lambda authorizer

A

CORRECT: “Create an API Gateway Lambda authorizer” is the correct answer.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
70
Q

A developer is creating a serverless application that will use a DynamoDB table. The average item size is 7KB. The application will make 3 strongly consistent reads/sec, and 1 standard write/sec. How many RCUs/WCUs are required?

12 RCU and 14 WCU

6 RCU and 7 WCU

6 RCU and 14 WCU

3 RCU and 7 WCU

A

6 RCU and 7 WCU

Read capacity unit (RCU):

  • Each API call to read data from your table is a read request.
  • Read requests can be strongly consistent, eventually consistent, or transactional.
  • For items up to 4 KB in size, one RCU can perform one strongly consistent read request per second.
  • Items larger than 4 KB require additional RCUs.
  • For items up to 4 KB in size, one RCU can perform two eventually consistent read requests per second.
  • Transactional read requests require two RCUs to perform one read per second for items up to 4 KB.
  • For example, a strongly consistent read of an 8 KB item would require two RCUs, an eventually consistent read of an 8 KB item would require one RCU, and a transactional read of an 8 KB item would require four RCUs.

Write capacity unit (WCU):

  • Each API call to write data to your table is a write request.
  • For items up to 1 KB in size, one WCU can perform one standard write request per second.
  • Items larger than 1 KB require additional WCUs.
  • Transactional write requests require two WCUs to perform one write per second for items up to 1 KB.
  • For example, a standard write request of a 1 KB item would require one WCU, a standard write request of a 3 KB item would require three WCUs, and a transactional write request of a 3 KB item would require six WCUs.

Applying these rules to the scenario: each strongly consistent read of a 7 KB item consumes ceil(7 / 4) = 2 RCUs, so 3 reads/sec require 6 RCUs; each standard write of a 7 KB item consumes ceil(7 / 1) = 7 WCUs, so 1 write/sec requires 7 WCUs.
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
71
Q

An AWS Lambda function requires several environment variables with secret values. The secret values should be obscured in the Lambda console and API output even for users who have permission to use the key.

What is the best way to achieve this outcome and MINIMIZE complexity and latency?


Encrypt the secret values client-side using encryption helpers

Store the encrypted values in an encrypted Amazon S3 bucket and reference them from within the code

Use an external encryption infrastructure to encrypt the values and add them as environment variables

Encrypt the secret values with a customer-managed CMK

A

Encrypt the secret values client-side using encryption helpers

• Encryption helpers – The Lambda console lets you encrypt environment variable values client side, before sending them to Lambda. This enhances security further by preventing secrets from being displayed unencrypted in the Lambda console, or in function configuration that’s returned by the Lambda API. The console also provides sample code that you can adapt to decrypt the values in your function handler.
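
A minimal sketch of decrypting a console-encrypted environment variable in the handler (the variable name is hypothetical; this mirrors the general pattern of the sample code the console provides):

import base64
import os
import boto3

kms = boto3.client("kms")

def handler(event, context):
    # The console-encrypted value is stored base64-encoded in the environment.
    # Depending on how it was encrypted, an EncryptionContext may also be required.
    plaintext = kms.decrypt(
        CiphertextBlob=base64.b64decode(os.environ["DB_PASSWORD"])
    )["Plaintext"].decode("utf-8")
    # Use plaintext to connect to the downstream service; never log it.
    return {"statusCode": 200}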

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
72
Q

A development team are creating a mobile application that customers will use to receive notifications and special offers. Users will not be required to log in.

What is the MOST efficient method to grant users access to AWS resources?

Use Amazon Cognito to associate unauthenticated users with an IAM role that has limited access to resources

Use an IAM SAML 2.0 identity provider to establish trust

Embed access keys in the application that have limited access to resources

Use Amazon Cognito Federated Identities and setup authentication using a Cognito User Pool

A

Use Amazon Cognito to associate unauthenticated users with an IAM role that has limited access to resources

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
73
Q

An organization has a new AWS account and is setting up IAM users and policies. According to AWS best practices, which of the following strategies should be followed? (Select TWO.)


Use user accounts to delegate permissions

Create standalone policies instead of using inline policies

Create user accounts that can be shared for efficiency

Always use customer managed policies instead of AWS managed policies

Use groups to assign permissions to users

A

Create standalone policies instead of using inline policies
Use groups to assign permissions to users

Explanation
AWS provide a number of best practices for AWS IAM that help you to secure your resources. The key best practices referenced in this scenario are as follows:

  • Use groups to assign permissions to users – this is correct as you should create permissions policies and assign them to groups. Users can be added to the groups to get the permissions they need to perform their jobs.
  • Create standalone policies instead of using inline policies (Use Customer Managed Policies Instead of Inline Policies in the AWS best practices) – this refers to creating your own policies that are standalone policies which can be reused multiple times (assigned to multiple entities such as groups, and users). This is better than using inline policies which are directly attached to a single entity.

INCORRECT: “Use user accounts to delegate permissions” is incorrect as you should use roles to delegate permissions.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
74
Q

A Developer is deploying an update to a serverless application that includes AWS Lambda using the AWS Serverless Application Model (SAM). The traffic needs to move from the old Lambda version to the new Lambda version gradually, within the shortest period of time.

Which deployment configuration is MOST suitable for these requirements?


CodeDeployDefault.HalfAtATime

CodeDeployDefault.LambdaCanary10Percent5Minutes

CodeDeployDefault.LambdaLinear10PercentEvery1Minute

CodeDeployDefault.LambdaLinear10PercentEvery2Minutes

A

Explanation
If you use AWS SAM to create your serverless application, it comes built-in with CodeDeploy to provide gradual Lambda deployments. With just a few lines of configuration, AWS SAM does the following for you:

  • Deploys new versions of your Lambda function, and automatically creates aliases that point to the new version.
  • Gradually shifts customer traffic to the new version until you’re satisfied that it’s working as expected, or you roll back the update.
  • Defines pre-traffic and post-traffic test functions to verify that the newly deployed code is configured correctly and your application operates as expected.
  • Rolls back the deployment if CloudWatch alarms are triggered.

There are several options for how CodeDeploy shifts traffic to the new Lambda version. You can choose from the following:

  • Canary: Traffic is shifted in two increments. You can choose from predefined canary options. The options specify the percentage of traffic that’s shifted to your updated Lambda function version in the first increment, and the interval, in minutes, before the remaining traffic is shifted in the second increment.
  • Linear: Traffic is shifted in equal increments with an equal number of minutes between each increment. You can choose from predefined linear options that specify the percentage of traffic that’s shifted in each increment and the number of minutes between each increment.

All-at-once: All traffic is shifted from the original Lambda function to the updated Lambda function version at once.

Therefore CodeDeployDefault.LambdaCanary10Percent5Minutes is the best answer as this will shift 10 percent of the traffic and then after 5 minutes shift the remainder of the traffic. The entire deployment will take 5 minutes to cut over.
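
In a SAM template this is configured with a few properties on the function (a sketch; the function shown is hypothetical):

MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: index.handler
    Runtime: nodejs12.x
    AutoPublishAlias: live
    DeploymentPreference:
      Type: Canary10Percent5Minutes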

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
75
Q

A Developer is deploying an application using Docker containers on Amazon ECS. One of the containers runs a database and should be placed on instances in the “databases” task group.

What should the Developer use to control the placement of the database task?

ECS Container Agent

IAM Group

Task Placement Constraint

Cluster Query Language
A

A task placement constraint is a rule that is considered during task placement. Task placement constraints can be specified when either running a task or creating a new service. The task placement constraints can be updated for existing services as well.

Amazon ECS supports the following types of task placement constraints:

distinctInstance

Place each task on a different container instance. This task placement constraint can be specified when either running a task or creating a new service.

memberOf

Place tasks on container instances that satisfy an expression. For more information about the expression syntax for constraints, see Cluster Query Language.

The memberOf task placement constraint can be specified with the following actions:

  • Running a task
  • Creating a new service
  • Creating a new task definition
  • Creating a new revision of an existing task definition

The example task placement constraint below uses the memberOf constraint to place tasks on instances in the databases task group. It can be specified with the following actions: CreateService, UpdateService, RegisterTaskDefinition, and RunTask.

"placementConstraints": [
{
"expression": "task:group == databases",
"type": "memberOf"
}
]
The Developer should therefore use task placement constraints as in the above example to control the placement of the database task.

INCORRECT: “Cluster Query Language” is incorrect. Cluster queries are expressions that enable you to group objects. For example, you can group container instances by attributes such as Availability Zone, instance type, or custom metadata.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
76
Q

A Developer must deploy a new AWS Lambda function using an AWS CloudFormation template.

Which procedures will deploy a Lambda function? (Select TWO.)


Upload a ZIP file containing the function code to Amazon S3, then add a reference to it in an AWS::Lambda::Function resource in the template

Create an AWS::Lambda::Function resource in the template, then write the code directly inside the CloudFormation template


Upload a ZIP file to AWS CloudFormation containing the function code, then add a reference to it in an AWS::Lambda::Function resource in the template


Upload the function code to a private Git repository, then add a reference to it in an AWS::Lambda::Function resource in the template


Upload the code to an AWS CodeCommit repository, then add a reference to it in an AWS::Lambda::Function resource in the template

A

Upload a ZIP file containing the function code to Amazon S3, then add a reference to it in an AWS::Lambda::Function resource in the template

Create an AWS::Lambda::Function resource in the template, then write the code directly inside the CloudFormation template
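
A minimal sketch of the first approach (the bucket, key, and role ARN are hypothetical):

Resources:
  MyFunction:
    Type: AWS::Lambda::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs12.x
      Role: arn:aws:iam::111122223333:role/lambda-exec-role
      Code:
        S3Bucket: my-artifact-bucket
        S3Key: function.zip

For the second approach, the Code property instead uses ZipFile with the function code embedded inline in the template.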

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
77
Q

A developer is preparing the resources for creating a multicontainer Docker environment on AWS Elastic Beanstalk. How can the developer define the Docker containers?


Define the containers in the Dockerrun.aws.json file in JSON format and save at the root of the source directory

Define the containers in the Dockerrun.aws.json file in YAML format and save at the root of the source directory

Create a buildspec.yml file and save it at the root of the source directory

Create a Docker.config file and save it in the .ebextensions folder at the root of the source directory

A

Define the containers in the Dockerrun.aws.json file in JSON format and save at the root of the source directory

You can launch a cluster of multicontainer instances in a single-instance or autoscaling Elastic Beanstalk environment using the Elastic Beanstalk console. The single container and multicontainer Docker platforms for Elastic Beanstalk support the use of Docker images stored in a public or private online image repository.

You specify images by name in the Dockerrun.aws.json file and save it in the root of your source directory.
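
A minimal Dockerrun.aws.json sketch (version 2, for the multicontainer platform; the container name and image are hypothetical):

{
    "AWSEBDockerrunVersion": 2,
    "containerDefinitions": [
        {
            "name": "web-app",
            "image": "my-registry/web-app:latest",
            "essential": true,
            "memory": 128,
            "portMappings": [
                {
                    "hostPort": 80,
                    "containerPort": 80
                }
            ]
        }
    ]
}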

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
78
Q

A mobile application is being developed that will use AWS Lambda, Amazon API Gateway and Amazon DynamoDB. A developer would like to securely authenticate the users of the mobile application and then grant them access to the API.

What is the BEST way to achieve this?


Create a COGNITO_USER_POOLS authorizer in API Gateway

Create an IAM authorizer in API Gateway

Create a Lambda authorizer in API Gateway

Create a COGNITO_IDENTITY_POOLS authorizer in API Gateway

A

CORRECT: “Create a COGNITO_USER_POOLS authorizer in API Gateway” is the correct answer.

To use an Amazon Cognito user pool with your API, you must first create an authorizer of the COGNITO_USER_POOLS type and then configure an API method to use that authorizer. After the API is deployed, the client must first sign the user in to the user pool, obtain an identity or access token for the user, and then call the API method with one of the tokens, which are typically set to the request’s Authorization header. The API call succeeds only if the required token is supplied and the supplied token is valid, otherwise, the client isn’t authorized to make the call because the client did not have credentials that could be authorized.
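
For illustration, a minimal boto3 sketch of creating the authorizer (the API ID and user pool ARN are hypothetical placeholders):

import boto3

apigateway = boto3.client("apigateway")

apigateway.create_authorizer(
    restApiId="a1b2c3d4e5",
    name="user-pool-authorizer",
    type="COGNITO_USER_POOLS",
    providerARNs=["arn:aws:cognito-idp:us-east-1:111122223333:userpool/us-east-1_EXAMPLE"],
    identitySource="method.request.header.Authorization",
)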

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
79
Q

A serverless application is used to process customer information and outputs a JSON file to an Amazon S3 bucket. AWS Lambda is used for processing the data. The data is sensitive and should be encrypted.

How can a Developer modify the Lambda function to ensure the data is encrypted before it is uploaded to the S3 bucket?


Use the GenerateDataKey API, then use the data key to encrypt the file using the Lambda code

Enable server-side encryption on the S3 bucket and create a policy to enforce encryption

Use the S3 managed key and call the GenerateDataKey API to encrypt the file

Use the default KMS key for S3 and encrypt the file using the Lambda code

A

The GenerateDataKey API is used with the AWS KMS services and generates a unique symmetric data key. This operation returns a plaintext copy of the data key and a copy that is encrypted under a customer master key (CMK) that you specify. You can use the plaintext key to encrypt your data outside of AWS KMS and store the encrypted data key with the encrypted data.

For this scenario we can use GenerateDataKey to obtain an encryption key from KMS that we can then use within the function code to encrypt the file. This ensures that the file is encrypted BEFORE it is uploaded to Amazon S3.

CORRECT: “Use the GenerateDataKey API, then use the data key to encrypt the file using the Lambda code” is the correct answer.
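
A minimal sketch of this pattern, assuming a hypothetical KMS key alias and bucket, and using AES-GCM from the cryptography package for the client-side step:

import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")
s3 = boto3.client("s3")

# KeySpec AES_256 returns a 256-bit data key.
key = kms.generate_data_key(KeyId="alias/app-data-key", KeySpec="AES_256")

aesgcm = AESGCM(key["Plaintext"])  # plaintext key: use, then discard
nonce = os.urandom(12)
ciphertext = nonce + aesgcm.encrypt(nonce, b'{"customer": "example"}', None)

# Store the KMS-encrypted copy of the data key with the object so the file
# can be decrypted later (kms.decrypt on the stored key, then AES-GCM).
s3.put_object(
    Bucket="customer-output",
    Key="record.json.enc",
    Body=ciphertext,
    Metadata={"enc-data-key": key["CiphertextBlob"].hex()},
)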

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
80
Q

A Developer is creating a service on Amazon ECS and needs to ensure that each task is placed on a different container instance.

How can this be achieved?


Create a service on Fargate

Use a task placement constraint

Create a cluster with multiple container instances

Use a task placement strategy

A

CORRECT: “Use a task placement constraint” is the correct answer.

INCORRECT: “Use a task placement strategy” is incorrect as this is used to select instances for task placement using the binpack, random and spread algorithms.
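
For reference, the placementConstraints entry is even simpler than the memberOf example shown earlier, as distinctInstance takes no expression:

"placementConstraints": [
    {
        "type": "distinctInstance"
    }
]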

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
81
Q

A Developer received the following error when attempting to launch an Amazon EC2 instance using the AWS CLI.

An error occurred (UnauthorizedOperation) when calling the RunInstances operation: You are not authorized to perform this operation. Encoded authorization failure message: VNVaHFdCohROkbyT_rIXoRyNTp7vXFJCqnGiwPuyKnsSVf-WSSGK_06….

What action should the Developer perform to make this error more human-readable?


Use the AWS IAM decode-authorization-message API to decode this message

Use an open source decoding library to decode the message

Make a call to AWS KMS to decode the message

Use the AWS STS decode-authorization-message API to decode the message

A

CORRECT: “Use the AWS STS decode-authorization-message API to decode the message” is the correct answer.

Explanation
The AWS STS decode-authorization-message API decodes additional information about the authorization status of a request from an encoded message returned in response to an AWS request. The output is then decoded into a more human-readable output that can be viewed in a JSON editor.

The following example is the decoded output from the error shown in the question:

{
    "DecodedMessage": "{\"allowed\":false,\"explicitDeny\":false,\"matchedStatements\":{\"items\":[]},\"failures\":{\"items\":[]},\"context\":{\"principal\":{\"id\":\"AIDAXP4J2EKU7YXXG3EJ4\",\"name\":\"Paul\",\"arn\":\"arn:aws:iam::515148227241:user/Paul\"},\"action\":\"ec2:RunInstances\",\"resource\":\"arn:aws:ec2:ap-southeast-2:51514822724…"
}
Therefore, the best answer is to use the AWS STS decode-authorization-message API to decode the message.
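
For illustration, a minimal boto3 sketch that decodes and pretty-prints the message (pass the encoded string from the error as the script argument):

import json
import sys
import boto3

sts = boto3.client("sts")

decoded = sts.decode_authorization_message(EncodedMessage=sys.argv[1])
print(json.dumps(json.loads(decoded["DecodedMessage"]), indent=2))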

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
82
Q

A small team of Developers require access to an Amazon S3 bucket. An admin has created a resource-based policy. Which element of the policy should be used to specify the ARNs of the user accounts that will be granted access?


Condition

Id

Principal

Sid

A

Use the Principal element in a policy to specify the principal that is allowed or denied access to a resource. You cannot use the Principal element in an IAM identity-based policy. You can use it in the trust policies for IAM roles and in resource-based policies. Resource-based policies are policies that you embed directly in an IAM resource.

CORRECT: “Principal” is the correct answer.
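
For illustration, a bucket policy sketch granting the team read access (the account ID, user names, and bucket are hypothetical):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowDevTeamRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111122223333:user/dev1",
                    "arn:aws:iam::111122223333:user/dev2"
                ]
            },
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::team-bucket/*"
        }
    ]
}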

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
83
Q

An Amazon DynamoDB table will store authentication credentials for a mobile app. The table must be secured so only a small group of Developers are able to access it.

How can table access be secured according to this requirement and following AWS best practice?


Attach a permissions policy to an IAM group containing the Developers’ IAM user accounts that grants access to the table

Create a shared user account and attach a permissions policy granting access to the table. Instruct the Developers to log in with the user account

Create an AWS KMS resource-based policy on a CMK and grant the Developers’ user accounts the permissions to decrypt data in the table using the CMK

Attach a resource-based policy to the table and add an IAM group containing the Developers’ IAM user accounts as a Principal in the policy

A

Explanation
Amazon DynamoDB supports identity-based policies only. The best practice method to assign permissions to the table is to create a permissions policy that grants access to the table and assign that policy to an IAM group that contains the Developers’ user accounts.

This will provide all users with accounts in the IAM group with the access required to access the DynamoDB table.

CORRECT: “Attach a permissions policy to an IAM group containing the Developers’ IAM user accounts that grants access to the table” is the correct answer.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
84
Q

A Developer is working on an AWS Lambda function that accesses Amazon DynamoDB. The Lambda function must retrieve an item and update some of its attributes or create the item if it does not exist. The Lambda function has access to the primary key.

Which IAM permission should the Developer request for the Lambda function to achieve this functionality?


“dynamodb:UpdateItem”, “dynamodb:GetItem”, and “dynamodb:DescribeTable”

“dynamodb:GetRecords”, “dynamodb:PutItem”, and “dynamodb:UpdateTable”

“dynamodb:DeleteItem”, “dynamodb:GetItem”, and “dynamodb:PutItem”

“dynamodb:UpdateItem”, “dynamodb:GetItem”, and “dynamodb:PutItem”

A

Explanation
The Developer needs the permissions to retrieve items, update/modify items, and create items. Therefore permissions for the following API actions are required:

  • GetItem - The GetItem operation returns a set of attributes for the item with the given primary key.
  • UpdateItem - Edits an existing item’s attributes, or adds a new item to the table if it does not already exist. You can put, delete, or add attribute values.
  • PutItem - Creates a new item, or replaces an old item with a new item. If an item that has the same primary key as the new item already exists in the specified table, the new item completely replaces the existing item.
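
To illustrate why UpdateItem covers both cases, a minimal boto3 sketch (the table, key, and attribute names are hypothetical) — UpdateItem creates the item if the key does not exist, so a single call performs an upsert:

import boto3

table = boto3.resource("dynamodb").Table("items")

table.update_item(
    Key={"id": "item-1"},
    UpdateExpression="SET #s = :s",
    ExpressionAttributeNames={"#s": "status"},
    ExpressionAttributeValues={":s": "processed"},
)
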
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
85
Q

An application is running on a cluster of Amazon EC2 instances. The application has received an error when trying to read objects stored within an Amazon S3 bucket. The bucket is encrypted with server-side encryption and AWS KMS managed keys (SSE-KMS). The error is as follows:

Service: AWSKMS; Status Code: 400, Error Code: ThrottlingException

Which combination of steps should be taken to prevent this failure? (Select TWO.)


Contact AWS support to request an S3 rate limit increase

Import a customer master key (CMK) with a larger key size

Contact AWS support to request an AWS KMS rate limit increase

Perform error retries with exponential backoff in the application code

Use more than one customer master key (CMK) to encrypt S3 data

A

CORRECT: “Contact AWS support to request an AWS KMS rate limit increase” is a correct answer.

CORRECT: “Perform error retries with exponential backoff in the application code” is a correct answer.

AWS KMS establishes quotas for the number of API operations requested in each second. When you exceed an API request quota, AWS KMS throttles the request; that is, it rejects an otherwise valid request and returns a ThrottlingException error like the one shown in the scenario.

As the error indicates, one recommendation is to reduce the frequency of calls, which can be implemented with exponential backoff logic in the application code. It is also possible to contact AWS and request an increase in the quota.
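
A minimal retry sketch with exponential backoff (the bucket and key are supplied by the caller; the error code matches the one in the scenario):

import random
import time
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def get_object_with_backoff(bucket, key, max_attempts=5):
    for attempt in range(max_attempts):
        try:
            return s3.get_object(Bucket=bucket, Key=key)
        except ClientError as err:
            if err.response["Error"]["Code"] != "ThrottlingException":
                raise
            # Exponential backoff with jitter: ~0.1s, 0.2s, 0.4s, ...
            time.sleep((2 ** attempt) * 0.1 + random.random() * 0.1)
    raise RuntimeError("Still throttled after retries")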

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
86
Q

An application needs to read up to 100 items at a time from an Amazon DynamoDB table. Each item is up to 100 KB in size and all attributes must be retrieved.

What is the BEST way to minimize latency?


Use BatchGetItem

Use a Query operation with a FilterExpression

Use GetItem and use a projection expression

Use a Scan operation with pagination

A

The BatchGetItem operation returns the attributes of one or more items from one or more tables. You identify requested items by primary key.

A single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. In order to minimize response latency, BatchGetItem retrieves items in parallel.

By default, BatchGetItem performs eventually consistent reads on every table in the request. If you want strongly consistent reads instead, you can set ConsistentRead to true for any or all tables.
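
For illustration, a minimal boto3 sketch (the table and key names are hypothetical):

import boto3

dynamodb = boto3.resource("dynamodb")

# Up to 100 items / 16 MB per request; items are retrieved in parallel.
response = dynamodb.batch_get_item(
    RequestItems={
        "sensor-data": {
            "Keys": [{"sensor_id": str(i)} for i in range(100)],
        }
    }
)
items = response["Responses"]["sensor-data"]
# Any keys not processed (e.g. due to size limits) appear in response["UnprocessedKeys"].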

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
87
Q

An application is running on a fleet of EC2 instances behind an Elastic Load Balancer (ELB). The EC2 instances store session data in a shared Amazon S3 bucket. Security policy mandates that data must be encrypted in transit.

How can the Developer ensure that all data that is sent to the S3 bucket is encrypted in transit?

Create an S3 bucket policy that denies traffic where SecureTransport is true

Create an S3 bucket policy that denies traffic where SecureTransport is false

Create an S3 bucket policy that denies any S3 Put request that does not include the x-amz-server-side-encryption header

Configure HTTP to HTTPS redirection on the Elastic Load Balancer

A

At the Amazon S3 bucket level, you can configure permissions through a bucket policy. For example, you can limit access to the objects in a bucket by IP address range or specific IP addresses. Alternatively, you can make the objects accessible only through HTTPS.

CORRECT: “Create an S3 bucket policy that denies traffic where SecureTransport is false” is the correct answer.
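
A bucket policy sketch for this requirement (the bucket name is hypothetical):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::session-bucket",
                "arn:aws:s3:::session-bucket/*"
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}}
        }
    ]
}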

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
88
Q

A Developer has joined a team and needs to connect to the AWS CodeCommit repository using SSH. What should the Developer do to configure access using Git?


On the Developer’s IAM account, under security credentials, choose to create an access key and secret ID

Create an account on GitHub and use those login credentials to log in to AWS CodeCommit

Generate an SSH public and private key. Upload the public key to the Developer’s IAM account

On the Developer’s IAM account, under security credentials, choose to create HTTPS Git credentials for AWS CodeCommit

A

You need to configure your Git client to communicate with CodeCommit repositories. As part of this configuration, you provide IAM credentials that CodeCommit can use to authenticate you. IAM supports CodeCommit with three types of credentials:

  • Git credentials, an IAM-generated user name and password pair you can use to communicate with CodeCommit repositories over HTTPS.
  • SSH keys, a locally generated public-private key pair that you can associate with your IAM user to communicate with CodeCommit repositories over SSH.
  • AWS access keys, which you can use with the credential helper included with the AWS CLI to communicate with CodeCommit repositories over HTTPS.
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
89
Q

A Developer is writing an AWS Lambda function that processes records from an Amazon Kinesis Data Stream. The Developer must write the function so that it sends a notice to Administrators if it fails to process a batch of records.

How should the Developer write the function?


Configure an Amazon SNS topic as an on-failure destination

Separate the Lambda handler from the core logic

Use Amazon CloudWatch Events to send the processed data

Push the failed records to an Amazon SQS queue

A

With Destinations, you can route asynchronous function results as an execution record to a destination resource without writing additional code. An execution record contains details about the request and response in JSON format including version, timestamp, request context, request payload, response context, and response payload.

For each execution status such as Success or Failure you can choose one of four destinations: another Lambda function, SNS, SQS, or EventBridge. Lambda can also be configured to route different execution results to different destinations.

In this scenario the Developer can configure an Amazon SNS topic as an on-failure destination so that a notification is sent to the Administrators whenever the function fails to process a batch of records.

CORRECT: “Configure an Amazon SNS topic as an on-failure destination” is the correct answer.

INCORRECT: “Push the failed records to an Amazon SQS queue” is incorrect as SQS will not notify the administrators, SNS should be used.
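
A minimal sketch of wiring the on-failure destination when creating the event source mapping (the ARNs and function name are hypothetical placeholders):

import boto3

lambda_client = boto3.client("lambda")

# OnFailure routes the invocation record for failed batches to the SNS topic.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:kinesis:us-east-1:111122223333:stream/sensor-stream",
    FunctionName="process-records",
    StartingPosition="LATEST",
    DestinationConfig={
        "OnFailure": {"Destination": "arn:aws:sns:us-east-1:111122223333:admin-alerts"}
    },
)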

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
90
Q

A Developer needs to restrict all users and roles from using a list of API actions within a member account in AWS Organizations. The Developer needs to deny access to a few specific API actions.

What is the MOST efficient way to do this?


Create a deny list and specify the API actions to deny

Create an IAM policy that allows only the unrestricted API actions

Create an IAM policy that denies the API actions for all users and roles

Create an allow list and specify the API actions to deny

A

Service control policies (SCPs) are one type of policy that you can use to manage your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization, allowing you to ensure your accounts stay within your organization’s access control guidelines.

You can configure the SCPs in your organization to work as either of the following:

  • A deny list – actions are allowed by default, and you specify what services and actions are prohibited
  • An allow list – actions are prohibited by default, and you specify what services and actions are allowed

CORRECT: “Create a deny list and specify the API actions to deny” is the correct answer.
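
For illustration, a deny-list SCP sketch (the action shown is a hypothetical example of an API to block):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenySpecificActions",
            "Effect": "Deny",
            "Action": ["ec2:TerminateInstances"],
            "Resource": "*"
        }
    ]
}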

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
91
Q

A Developer is deploying an Amazon ECS update using AWS CodeDeploy. In the appspec.yaml file, which of the following is a valid structure for the order of hooks that should be specified?


BeforeInstall > AfterInstall > AfterAllowTestTraffic > BeforeAllowTraffic > AfterAllowTraffic

BeforeBlockTraffic > AfterBlockTraffic > BeforeAllowTraffic > AfterAllowTraffic

BeforeInstall > AfterInstall > ApplicationStart > ValidateService

BeforeAllowTraffic > AfterAllowTraffic

A

CORRECT: “BeforeInstall > AfterInstall > AfterAllowTestTraffic > BeforeAllowTraffic > AfterAllowTraffic” is the correct answer.

INCORRECT: “BeforeAllowTraffic > AfterAllowTraffic” is incorrect as this would be valid for AWS Lambda.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
92
Q

An application is being instrumented to send trace data using AWS X-Ray. A Developer needs to upload segment documents using JSON-formatted strings to X-Ray using the API. Which API action should the developer use?


The GetTraceSummaries API action

The PutTraceSegments API action

The UpdateGroup API action

The PutTelemetryRecords API action

A

You can send trace data to X-Ray in the form of segment documents. A segment document is a JSON formatted string that contains information about the work that your application does in service of a request. Your application can record data about the work that it does itself in segments, or work that uses downstream services and resources in subsegments.

Segments record information about the work that your application does. A segment, at a minimum, records the time spent on a task, a name, and two IDs. The trace ID tracks the request as it travels between services. The segment ID tracks the work done for the request by a single service.

CORRECT: “The PutTraceSegments API action” is the correct answer.

INCORRECT: “The PutTelemetryRecords API action” is incorrect as this is used by the AWS X-Ray daemon to upload telemetry.

INCORRECT: “The UpdateGroup API action” is incorrect as this updates a group resource.

INCORRECT: “The GetTraceSummaries API action” is incorrect as this retrieves IDs and annotations for traces available for a specified time frame using an optional filter.
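
To illustrate the correct action, a minimal boto3 sketch of uploading a segment document (the service name is hypothetical; the IDs are generated to match the documented formats):

import json
import os
import time
import boto3

xray = boto3.client("xray")

now = time.time()
# Trace ID format: 1-<8 hex digit epoch>-<24 hex digit identifier>.
trace_id = f"1-{int(now):08x}-{os.urandom(12).hex()}"

segment = {
    "name": "checkout-service",
    "id": os.urandom(8).hex(),
    "trace_id": trace_id,
    "start_time": now - 0.5,
    "end_time": now,
}
xray.put_trace_segments(TraceSegmentDocuments=[json.dumps(segment)])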

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
93
Q

A Developer created an AWS Lambda function for a serverless application. The Lambda function has been executing for several minutes and the Developer cannot find any log data in CloudWatch Logs.

What is the MOST likely explanation for this issue?


The Lambda function is missing a target CloudWatch Logs group

The execution role for the Lambda function is missing permissions to write log data to the CloudWatch Logs


The Lambda function does not have any explicit log statements for the log data to send it to CloudWatch Logs

The Lambda function is missing CloudWatch Logs as a source trigger to send log data

A

An AWS Lambda function’s execution role grants it permission to access AWS services and resources. You provide this role when you create a function, and Lambda assumes the role when your function is invoked. You can create an execution role for development that has permission to send logs to Amazon CloudWatch and upload trace data to AWS X-Ray.

The most likely cause of this issue is that the execution role assigned to the Lambda function does not have the permissions to write to CloudWatch Logs.

CORRECT: “The execution role for the Lambda function is missing permissions to write log data to the CloudWatch Logs” is the correct answer.
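
For reference, the logging permissions the execution role needs look like this (equivalent to the basic execution policy):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        }
    ]
}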

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
94
Q

An application running on Amazon EC2 is experiencing intermittent technical difficulties. The developer needs to find a solution for tracking the errors that occur in the application logs and setting up a notification when the error rate exceeds a certain threshold.

How can this be achieved with the LEAST complexity?


Use CloudTrail to monitor the application log files and send an SNS notification

Configure Amazon CloudWatch Events to monitor the EC2 instances and configure an SNS topic as a target

Configure the application to send logs to Amazon S3. Use Amazon Kinesis Analytics to analyze the log files and send an SES notification

Use CloudWatch Logs to track the number of errors that occur in the application logs and send an SNS notification

A

You can use CloudWatch Logs to monitor applications and systems using log data. For example, CloudWatch Logs can track the number of errors that occur in your application logs and send you a notification whenever the rate of errors exceeds a threshold you specify.

CloudWatch Logs uses your log data for monitoring; so, no code changes are required. For example, you can monitor application logs for specific literal terms (such as “NullReferenceException”) or count the number of occurrences of a literal term at a particular position in log data (such as “404” status codes in an Apache access log).

When the term you are searching for is found, CloudWatch Logs reports the data to a CloudWatch metric that you specify. Log data is encrypted while in transit and while it is at rest.

CORRECT: “Use CloudWatch Logs to track the number of errors that occur in the application logs and send an SNS notification” is the correct answer.
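
For illustration, a minimal boto3 sketch of the metric filter and alarm (the log group, namespace, threshold, and SNS topic ARN are hypothetical):

import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Count occurrences of "ERROR" in the application log group.
logs.put_metric_filter(
    logGroupName="/app/production",
    filterName="app-errors",
    filterPattern="ERROR",
    metricTransformations=[{
        "metricName": "AppErrorCount",
        "metricNamespace": "MyApp",
        "metricValue": "1",
    }],
)

# Alarm when more than 10 errors occur in 5 minutes; notifies an SNS topic.
cloudwatch.put_metric_alarm(
    AlarmName="app-error-rate",
    Namespace="MyApp",
    MetricName="AppErrorCount",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=10,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],
)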

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
95
Q

An application collects data from sensors in a manufacturing facility. The data is stored in an Amazon SQS Standard queue by an AWS Lambda function, and an Amazon EC2 instance processes the data and stores it in an Amazon Redshift data warehouse. A fault in the sensors’ software is causing occasional duplicate messages to be sent. Timestamps on the duplicate messages show they are generated within a few seconds of the primary message.

How can a Developer prevent duplicate data from being stored in the data warehouse?


Configure a redrive policy, specify a destination Dead-Letter queue, and set the maxReceiveCount to 1

Send a ChangeMessageVisibility call with VisibilityTimeout set to 30 seconds after the receipt of every message from the queue

Use a FIFO queue and configure the Lambda function to add a message group ID to the messages generated by each individual sensor

Use a FIFO queue and configure the Lambda function to add a message deduplication token to the message body

A

Use a FIFO queue and configure the Lambda function to add a message deduplication token to the message body
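
For illustration, a minimal boto3 sketch of sending with a content-derived deduplication ID (the queue URL is a hypothetical placeholder; FIFO queue names must end in .fifo):

import hashlib
import boto3

sqs = boto3.client("sqs")

queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/sensor-data.fifo"

body = '{"sensor": "s1", "reading": 20.5}'
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=body,
    MessageGroupId="sensor-s1",
    # Messages with the same deduplication ID are dropped within the 5-minute
    # deduplication interval, so near-simultaneous duplicates are discarded.
    MessageDeduplicationId=hashlib.sha256(body.encode()).hexdigest(),
)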

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
96
Q

A Developer is deploying an Amazon EC2 update using AWS CodeDeploy. In the appspec.yml file, which of the following is a valid structure for the order of hooks that should be specified?


BeforeBlockTraffic > AfterBlockTraffic > BeforeAllowTraffic > AfterAllowTraffic

BeforeInstall > AfterInstall > AfterAllowTestTraffic > BeforeAllowTraffic > AfterAllowTraffic

BeforeAllowTraffic > AfterAllowTraffic

BeforeInstall > AfterInstall > ApplicationStart > ValidateService

A

CORRECT: “BeforeInstall > AfterInstall > ApplicationStart > ValidateService” is the correct answer.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
97
Q

A Developer is creating a social networking app for games that uses a single Amazon DynamoDB table. All users’ saved game data is stored in the single table, but users should not be able to view each other’s data.

How can the Developer restrict user access so they can only view their own data?


Restrict access to specific items based on certain primary key values


Read records from DynamoDB and discard irrelevant data client-side

Use separate access keys for each user to call the API and restrict access to specific items based on access key ID

Use an identity-based policy that restricts read access to the table to specific principals

A

In DynamoDB, you have the option to specify conditions when granting permissions using an IAM policy. For example, you can:

  • Grant permissions to allow users read-only access to certain items and attributes in a table or a secondary index.
  • Grant permissions to allow users write-only access to certain attributes in a table, based upon the identity of that user.

To implement this kind of fine-grained access control, you write an IAM permissions policy that specifies conditions for accessing security credentials and the associated permissions. You then apply the policy to IAM users, groups, or roles that you create using the IAM console. Your IAM policy can restrict access to individual items in a table, access to the attributes in those items, or both at the same time.

You use the IAM Condition element to implement a fine-grained access control policy. By adding a Condition element to a permissions policy, you can allow or deny access to items and attributes in DynamoDB tables and indexes, based upon your particular business requirements. You can also grant permissions on a table, but restrict access to specific items in that table based on certain primary key values.

CORRECT: “Restrict access to specific items based on certain primary key values” is the correct answer.
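
For illustration, a policy sketch using the dynamodb:LeadingKeys condition key (the account ID and table name are hypothetical; the variable resolves to the caller's Cognito identity):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/GameData",
            "Condition": {
                "ForAllValues:StringEquals": {
                    "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
                }
            }
        }
    ]
}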

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
98
Q

A Developer is creating a serverless website with content that includes HTML files, images, videos, and JavaScript (client-side scripts).

Which combination of services should the Developer use to create the website?


Amazon ECS and Redis

Amazon EC2 and Amazon ElastiCache

AWS Lambda and Amazon API Gateway

Amazon S3 and Amazon CloudFront

A

Amazon S3 and Amazon CloudFront

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
99
Q

An application that is being migrated to AWS and refactored requires a storage service. The storage service should provide a standards-based REST web service interface and store objects based on keys.

Which AWS service would be MOST suitable?


Amazon EFS

Amazon EBS

Amazon S3

Amazon DynamoDB

A

Explanation
Amazon S3 is object storage built to store and retrieve any amount of data from anywhere on the Internet. Amazon S3 uses standards-based REST and SOAP interfaces designed to work with any internet-development toolkit.

Amazon S3 is a simple key-based object store. The key is the name of the object and the value is the actual data itself. Keys can be any string, and they can be constructed to mimic hierarchical attributes.

CORRECT: “Amazon S3” is the correct answer.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
100
Q

An Amazon ElastiCache cluster has been placed in front of a large Amazon RDS database. To reduce cost the ElastiCache cluster should only cache items that are actually requested. How should ElastiCache be optimized?


Use a lazy loading caching strategy

Use a write-through caching strategy

Only cache database writes

Enable a TTL on cached data

A

CORRECT: “Use a lazy loading caching strategy” is the correct answer.

There are two caching strategies available: Lazy Loading and Write-Through:

Lazy Loading

  • Loads the data into the cache only when necessary (if a cache miss occurs).
  • Lazy loading avoids filling up the cache with data that won’t be requested.
  • If requested data is in the cache, ElastiCache returns the data to the application.
  • If the data is not in the cache or has expired, ElastiCache returns a null. The application then fetches the data from the database and writes the data received into the cache so that it is available for next time.
  • Data in the cache can become stale if Lazy Loading is implemented without other strategies (such as TTL).
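
A minimal lazy-loading sketch, assuming an ElastiCache for Redis endpoint (hypothetical) and a caller-supplied database query function:

import json
import redis

cache = redis.Redis(host="my-cache.abc123.0001.use1.cache.amazonaws.com", port=6379)

def get_customer(customer_id, load_from_db):
    # Lazy loading: check the cache first and only hit the database on a miss.
    key = f"customer:{customer_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    record = load_from_db(customer_id)  # caller-supplied database query
    # Populate the cache with a TTL so stale entries eventually expire.
    cache.setex(key, 300, json.dumps(record))
    return record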

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
101
Q

An organization has encrypted a large quantity of data. To protect their data encryption keys they are planning to use envelope encryption. Which of the following processes is a correct implementation of envelope encryption?


Encrypt plaintext data with a master key and then encrypt the master key with a top-level encrypted data key

Encrypt plaintext data with a master key and then encrypt the master key with a top-level plaintext data key

Encrypt plaintext data with a data key and then encrypt the data key with a top-level plaintext master key.

Encrypt plaintext data with a data key and then encrypt the data key with a top-level encrypted master key

A

CORRECT: “Encrypt plaintext data with a data key and then encrypt the data key with a top-level plaintext master key” is the correct answer.

Envelope encryption is the practice of encrypting plaintext data with a data key, and then encrypting the data key under another key.

You can even encrypt the data encryption key under another encryption key and encrypt that encryption key under another encryption key. But, eventually, one key must remain in plaintext so you can decrypt the keys and your data. This top-level plaintext key encryption key is known as the master key.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
102
Q

A company is building an application to track athlete performance using an Amazon DynamoDB table. Each item in the table is identified by a partition key (user_id) and a sort key (sport_name). The table design is shown below:

  • Partition key: user_id
  • Sort Key: sport_name
  • Attributes: score, score_datetime

A Developer is asked to write a leaderboard application to display the top performers (user_id) based on the score for each sport_name.

What process will allow the Developer to extract results MOST efficiently from the DynamoDB table?


Use a DynamoDB query operation with the key attributes of user_id and sport_name and order the results based on the score attribute

Create a global secondary index with a partition key of sport_name and a sort key of score, and get the results

Use a DynamoDB scan operation to retrieve scores and user_id based on sport_name, and order the results based on the score attribute

Create a local secondary index with a primary key of sport_name and a sort key of score and get the results based on the score attribute

A

CORRECT: “Create a global secondary index with a partition key of sport_name and a sort key of score, and get the results” is the correct answer.

INCORRECT: “Use a DynamoDB query operation with the key attributes of user_id and sport_name and order the results based on the score attribute” is incorrect because a query must specify a single user_id partition key, so it cannot rank all users for a sport; a GSI partitioned on sport_name with score as the sort key is far more efficient.
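
For illustration, a minimal boto3 sketch of querying the GSI (the table and index names are hypothetical):

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("athlete-performance")

# Query the GSI: partition on sport_name, results sorted by the score sort key.
response = table.query(
    IndexName="sport_name-score-index",
    KeyConditionExpression=Key("sport_name").eq("tennis"),
    ScanIndexForward=False,  # highest scores first
    Limit=10,
)
top_performers = [item["user_id"] for item in response["Items"]]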

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
103
Q

The manager of a development team is setting up a shared S3 bucket for team members. The manager would like to use a single policy to allow each user to have access to their objects in the S3 bucket. Which feature can be used to generalize the policy?


Condition

Variable

Principal

Resource

A

When this policy is evaluated, IAM replaces the variable ${aws:username} with the friendly name of the actual current user. This means that a single policy applied to a group of users can control access to a bucket by using the username as part of the resource’s name.

CORRECT: “Variable” is the correct answer.

INCORRECT: “Condition” is incorrect. The Condition element (or Condition block) lets you specify conditions for when a policy is in effect.
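
For illustration, a policy sketch using the variable (the bucket name is hypothetical):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::team-bucket/${aws:username}/*"
        }
    ]
}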

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
104
Q

A legacy application is being refactored into a microservices architecture running on AWS. The microservice will include several AWS Lambda functions. A Developer will use AWS Step Functions to coordinate function execution.

How should the Developer proceed?


Create a layer in AWS Lambda and add the functions to the layer

Create an AWS CloudFormation stack using a YAML-formatted template

Create a workflow using the StartExecution API action

Create a state machine using the Amazon States Language

A

CORRECT: “Create a state machine using the Amazon States Language” is the correct answer.

AWS Step Functions is a web service that enables you to coordinate the components of distributed applications and microservices using visual workflows. You build applications from individual components that each perform a discrete function, or task, allowing you to scale and change applications quickly.

The following are key features of AWS Step Functions:

  • Step Functions is based on the concepts of tasks and state machines.
  • You define state machines using the JSON-based Amazon States Language.
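
A minimal Amazon States Language sketch coordinating two Lambda functions (the function ARNs are hypothetical):

{
    "Comment": "A minimal two-step workflow",
    "StartAt": "ProcessOrder",
    "States": {
        "ProcessOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:process-order",
            "Next": "NotifyCustomer"
        },
        "NotifyCustomer": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:notify-customer",
            "End": true
        }
    }
}
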
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
105
Q

A website delivers images stored in an Amazon S3 bucket. The site uses Amazon Cognito for authenticated users, and guest users without logins need to be able to view the images from the S3 bucket.

How can a Developer enable access for guest users to the AWS resources?


Create a new user pool, enable access to unauthenticated identities, and grant access to AWS resources

Create a new identity pool, enable access to unauthenticated identities, and grant access to AWS resources

Create a blank user ID in a user pool, add to the user group, and grant access to AWS resources

Create a new user pool, disable authentication access, and grant access to AWS resources

A

Amazon Cognito identity pools support both authenticated and unauthenticated identities. Authenticated identities belong to users who are authenticated by any supported identity provider. Unauthenticated identities typically belong to guest users.

CORRECT: “Create a new identity pool, enable access to unauthenticated identities, and grant access to AWS resources” is the correct answer.

INCORRECT: “Create a new user pool, enable access to unauthenticated identities, and grant access to AWS resources” is incorrect as you must use identity pools for unauthenticated users.
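
For illustration, a minimal boto3 sketch of obtaining guest credentials (the identity pool ID is a hypothetical placeholder):

import boto3

cognito = boto3.client("cognito-identity")

# Omitting the Logins map requests an unauthenticated (guest) identity.
identity = cognito.get_id(IdentityPoolId="us-east-1:11111111-2222-3333-4444-555555555555")
creds = cognito.get_credentials_for_identity(IdentityId=identity["IdentityId"])["Credentials"]
# creds holds AccessKeyId, SecretKey, and SessionToken scoped to the pool's unauthenticated role.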

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
106
Q

A Development team are creating a financial trading application. The application requires sub-millisecond latency for processing trading requests. Amazon DynamoDB is used to store the trading data. During load testing the Development team found that in periods of high utilization the latency is too high and read capacity must be significantly over-provisioned to avoid throttling.

How can the Developers meet the latency requirements of the application?


Use exponential backoff in the application code for DynamoDB queries

Store the trading data in Amazon S3 and use Transfer Acceleration

Create a Global Secondary Index (GSI) for the trading data

Use Amazon DynamoDB Accelerator (DAX) to cache the data

A

CORRECT: “Use Amazon DynamoDB Accelerator (DAX) to cache the data” is the correct answer.

INCORRECT: “Use exponential backoff in the application code for DynamoDB queries” is incorrect. Backoff may reduce the need to over-provision reads, but it does not reduce latency; it trades performance away to reduce cost, making the application slower.
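Switching to DAX is largely transparent to application code; a minimal sketch using the amazondax client library, assuming the cluster endpoint and table name as placeholders:

  from amazondax import AmazonDaxClient  # pip install amazon-dax-client

  # The DAX resource mirrors the boto3 DynamoDB resource interface
  dax = AmazonDaxClient.resource(
      endpoint_url="daxs://my-cluster.abc123.dax-clusters.us-east-1.amazonaws.com"  # placeholder
  )
  table = dax.Table("TradingData")  # placeholder table name

  # Cache hits are served by DAX in microseconds instead of milliseconds
  item = table.get_item(Key={"TradeId": "t-1001"})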

107
Q

An e-commerce company has developed an API that is hosted on Amazon ECS. Variable traffic spikes on the application are causing order processing to take too long. The application processes orders using Amazon SQS queues. The ApproximateNumberOfMessagesVisible metric spikes at very high values throughout the day, which triggers the CloudWatch alarm. Other ECS metrics for the API containers are well within limits.

As a Developer Associate, which of the following will you recommend for improving performance while keeping costs low?

​
Use ECS service scheduler
​
Use backlog per instance metric with target tracking scaling policy
​
Use ECS step scaling policy
​
Use Docker swarm
A

Use backlog per instance metric with target tracking scaling policy - If you use a target tracking scaling policy based on a custom Amazon SQS queue metric, dynamic scaling can adjust to the demand curve of your application more effectively.

Docker swarm - A Docker swarm is a container orchestration tool, meaning that it allows the user to manage multiple containers deployed across multiple host machines. A swarm consists of multiple Docker hosts which run in swarm mode and act as managers (to manage membership and delegation) and workers (which run swarm services).

ECS service scheduler - Amazon ECS provides a service scheduler (for long-running tasks and applications), the ability to run tasks manually (for batch jobs or single run tasks), with Amazon ECS placing tasks on your cluster for you. You can specify task placement strategies and constraints that allow you to run tasks in the configuration you choose, such as spread out across Availability Zones. It is also possible to integrate with custom or third-party schedulers.

ECS step scaling policy - Although Amazon ECS Service Auto Scaling supports using Application Auto Scaling step scaling policies, AWS recommends using target tracking scaling policies instead. For example, if you want to scale your service when CPU utilization falls below or rises above a certain level, create a target tracking scaling policy based on the CPU utilization metric provided by Amazon ECS.
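The backlog-per-instance metric is not emitted by AWS; you compute and publish it yourself. A minimal sketch, assuming a placeholder queue URL and a task count you have already fetched from the ECS service:

  import boto3

  sqs = boto3.client("sqs")
  cloudwatch = boto3.client("cloudwatch")

  queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # placeholder
  running_tasks = 4  # in practice, read from ecs.describe_services()

  attrs = sqs.get_queue_attributes(
      QueueUrl=queue_url,
      AttributeNames=["ApproximateNumberOfMessagesVisible"],
  )
  backlog = int(attrs["Attributes"]["ApproximateNumberOfMessagesVisible"])

  # Publish the custom metric the target tracking policy will track
  cloudwatch.put_metric_data(
      Namespace="MyApp",  # placeholder namespace
      MetricData=[{
          "MetricName": "BacklogPerInstance",
          "Value": backlog / max(running_tasks, 1),
      }],
  )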

108
Q

An application is hosted by a 3rd party and exposed at yourapp.3rdparty.com. You would like to have your users access your application using www.mydomain.com, which you own and manage under Route 53.

What Route 53 record should you create?

​
Create a PTR record
​
Create an Alias Record
​
Create a CNAME record

Create an A record

A

Create a CNAME record

A CNAME record maps DNS queries for the name of the current record, such as acme.example.com, to another domain (example.com or example.net) or subdomain (acme.example.com or zenith.example.org).

CNAME records can be used to map one domain name to another. Although you should keep in mind that the DNS protocol does not allow you to create a CNAME record for the top node of a DNS namespace, also known as the zone apex. For example, if you register the DNS name example.com, the zone apex is example.com. You cannot create a CNAME record for example.com, but you can create CNAME records for www.example.com, newproduct.example.com, and so on.

Create an A record - Used to point a domain or subdomain to an IP address. ‘A record’ cannot be used to map one domain name to another.
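Sketched with boto3, the CNAME creation would look roughly like this (the hosted zone ID is a placeholder):

  import boto3

  route53 = boto3.client("route53")

  route53.change_resource_record_sets(
      HostedZoneId="Z1D633PJN98FT9",  # placeholder hosted zone ID
      ChangeBatch={
          "Changes": [{
              "Action": "CREATE",
              "ResourceRecordSet": {
                  "Name": "www.mydomain.com",
                  "Type": "CNAME",
                  "TTL": 300,
                  "ResourceRecords": [{"Value": "yourapp.3rdparty.com"}],
              },
          }]
      },
  )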

109
Q

As a developer, you are working on creating an application using AWS Cloud Development Kit (CDK).

Which of the following represents the correct order of steps to be followed for creating an app using AWS CDK?


Create the app from a template provided by AWS CDK -> Add code to the app to create resources within stacks -> Build the app (optional) -> Synthesize one or more stacks in the app -> Deploy stack(s) to your AWS account


Create the app from a template provided by AWS CloudFormation -> Add code to the app to create resources within stacks -> Synthesize one or more stacks in the app -> Deploy stack(s) to your AWS account -> Build the app

Create the app from a template provided by AWS CDK -> Add code to the app to create resources within stacks -> Synthesize one or more stacks in the app -> Deploy stack(s) to your AWS account -> Build the app

Create the app from a template provided by AWS CloudFormation -> Add code to the app to create resources within stacks -> Build the app (optional) -> Synthesize one or more stacks in the app -> Deploy stack(s) to your AWS account

A
  1. Create the app from a template provided by AWS CDK
  2. Add code to the app to create resources within stacks
  3. Build the app (optional)
  4. Synthesize one or more stacks in the app
  5. Deploy stack(s) to your AWS account
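A minimal sketch of step 2 in Python CDK (v2), assuming cdk init app --language python produced the project skeleton; cdk synth and cdk deploy then cover steps 4 and 5:

  from aws_cdk import App, Stack, aws_s3 as s3
  from constructs import Construct

  class WebAppStack(Stack):
      def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
          super().__init__(scope, construct_id, **kwargs)
          # One resource defined in code; CDK synthesizes it to CloudFormation
          s3.Bucket(self, "AssetsBucket")

  app = App()
  WebAppStack(app, "WebAppStack")
  app.synth()  # emits the CloudFormation template (the "synthesize" step)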
110
Q

Your global organization has an IT infrastructure that is deployed using CloudFormation on the AWS Cloud. One employee in the us-east-1 Region has created a stack ‘Application1’ and exported an output with the name ‘ELBDNSName’. Another employee has created a stack for a different application ‘Application2’ in the us-east-2 Region and also exported an output with the name ‘ELBDNSName’. The first employee wanted to deploy the CloudFormation stack ‘Application1’ in us-east-2, but got an error. What is the cause of the error?


Exported Output Values in CloudFormation must have unique names within a single Region

Output Values in CloudFormation must have unique names within a single Region

Output Values in CloudFormation must have unique names across all Regions

Exported Output Values in CloudFormation must have unique names across all Regions

A

Exported Output Values in CloudFormation must have unique names within a single Region
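Because exports are scoped per Region, a small sketch like the following could be used to check the target Region for a name collision before deploying:

  import boto3

  cloudformation = boto3.client("cloudformation", region_name="us-east-2")

  # list_exports is paginated in real accounts; a sketch only
  existing = [e["Name"] for e in cloudformation.list_exports()["Exports"]]
  if "ELBDNSName" in existing:
      print("Export name already taken in us-east-2; pick a unique name")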

111
Q

A company has built its technology stack on AWS serverless architecture for managing all its business functions. To expedite development for a new business requirement, the company is looking at using pre-built serverless applications.

Which AWS service represents the easiest solution to address this use-case?

​
AWS Serverless Application Repository (SAR)
​
AWS Marketplace
​
AWS Service Catalog

AWS AppSync

A

AWS Serverless Application Repository (SAR)

112
Q

A Developer has been entrusted with the job of securing certain S3 buckets that are shared by a large team of users. The last time a bucket policy was changed, the bucket was erroneously made available to everyone, including users outside the organization.

Which feature/service will help the developer identify similar security issues with minimum effort?

​
S3 Object Lock
​
Access Advisor feature on IAM console
​
IAM Access Analyzer
​
S3 Analytics
A

IAM Access Analyzer - AWS IAM Access Analyzer helps you identify the resources in your organization and accounts, such as Amazon S3 buckets or IAM roles, that are shared with an external entity. This lets you identify unintended access to your resources and data, which is a security risk.
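Enabling an analyzer is a one-time action; a hedged boto3 sketch (the analyzer name is a placeholder):

  import boto3

  accessanalyzer = boto3.client("accessanalyzer")

  # Account-scoped analyzer; findings flag resources shared outside the account
  accessanalyzer.create_analyzer(
      analyzerName="account-analyzer",  # placeholder name
      type="ACCOUNT",
  )

Findings can then be retrieved via the list_findings call or reviewed in the console.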

Access Advisor feature on IAM console - To help identify the unused roles, IAM reports the last-used timestamp that represents when a role was last used to make an AWS request. Your security team can use this information to identify, analyze, and then confidently remove unused roles. This helps improve the security posture of your AWS environments.

S3 Object Lock - S3 Object Lock enables you to store objects using a “Write Once Read Many” (WORM) model. S3 Object Lock can help prevent accidental or inappropriate deletion of data

S3 Analytics - By using Amazon S3 analytics Storage Class Analysis you can analyze storage access patterns to help you decide when to transition the right data to the right storage class. You cannot use S3 Analytics to identify unintended access to your S3 resources.

113
Q

You have deployed a Java application to an EC2 instance where it uses the X-Ray SDK. When testing from your personal computer, the application sends data to X-Ray but when the application runs from within EC2, the application fails to send data to X-Ray.

Which of the following does NOT help with debugging the issue?

​
EC2 X-Ray Daemon
​
EC2 Instance Role
​
CloudTrail
​
X-Ray sampling
A

X-Ray sampling

By customizing sampling rules, you can control the amount of data that you record, and modify sampling behavior on the fly without modifying or redeploying your code. Sampling rules tell the X-Ray SDK how many requests to record for a set of criteria, and the SDK applies a sampling algorithm to determine which requests get traced. However, because our application is failing to send any data to X-Ray, sampling does not help in determining the cause of the failure.

Incorrect options:

EC2 X-Ray Daemon - The AWS X-Ray daemon is a software application that listens for traffic on UDP port 2000, gathers raw segment data, and relays it to the AWS X-Ray API. The daemon logs could help with figuring out the problem.

114
Q

A development team lead is configuring policies for his team at an IT company.

Which of the following policy types only limit permissions but cannot grant permissions (Select two)?


Access control list (ACL)

Identity-based policy
​
Permissions boundary
​
Resource-based policy
​
AWS Organizations Service Control Policy (SCP)
A

AWS Organizations Service Control Policy (SCP) – Use an AWS Organizations Service Control Policy (SCP) to define the maximum permissions for account members of an organization or organizational unit (OU). SCPs limit permissions that identity-based policies or resource-based policies grant to entities (users or roles) within the account, but do not grant permissions.

Permissions boundary - Permissions boundary is a managed policy that is used for an IAM entity (user or role). The policy defines the maximum permissions that the identity-based policies can grant to an entity, but does not grant permissions.

115
Q

You are a developer for a web application written in .NET which uses the AWS SDK. You need to implement an authentication mechanism that returns a JWT (JSON Web Token).

Which AWS service will help you with token handling and management?


Cognito User Pools

Cognito Identity Pools

Cognito Sync

API Gateway

A

“Cognito User Pools”

After successful authentication, Amazon Cognito returns user pool tokens to your app. You can use the tokens to grant your users access to your own server-side resources, or to the Amazon API Gateway.

Amazon Cognito user pools implement ID, access, and refresh tokens as defined by the OpenID Connect (OIDC) open standard.

The ID token is a JSON Web Token (JWT) that contains claims about the identity of the authenticated user such as name, email, and phone_number. You can use this identity information inside your application. The ID token can also be used to authenticate users against your resource servers or server applications.

116
Q

You are creating a CloudFormation template to deploy your CMS application running on an EC2 instance within your AWS account. Since the application will be deployed across multiple Regions, you need to create a map of all the possible values for the base AMI.

How will you invoke the !FindInMap function to fulfill this use case?


!FindInMap [ MapName, TopLevelKey, SecondLevelKey ]

!FindInMap [ MapName ]

!FindInMap [ MapName, TopLevelKey, SecondLevelKey, ThirdLevelKey ]

!FindInMap [ MapName, TopLevelKey ]

A


!FindInMap [ MapName, TopLevelKey, SecondLevelKey ]

For example, given the mapping:

Mappings:
  RegionMap:
    us-east-1:
      HVM64: "ami-0ff8a91507f77f867"

the base AMI is looked up with: !FindInMap [ RegionMap, !Ref "AWS::Region", HVM64 ]

117
Q

Amazon Simple Queue Service (SQS) has a set of APIs for various actions supported by the service.

As a developer associate, which of the following would you identify as correct regarding the CreateQueue API? (Select two)


The length of time, in seconds, for which the delivery of all messages in the queue is delayed is configured using MessageRetentionPeriod attribute


You can’t change the queue type after you create it

The dead-letter queue of a FIFO queue must also be a FIFO queue. Whereas, the dead-letter queue of a standard queue can be a standard queue or a FIFO queue

The visibility timeout value for the queue is in seconds, which defaults to 30 seconds

Queue tags are case insensitive. A new tag with a key identical to that of an existing tag overwrites the existing tag

A

You can’t change the queue type after you create it

The visibility timeout value for the queue is in seconds, which defaults to 30 seconds
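A minimal sketch showing both facts: the queue type is fixed by the FifoQueue attribute at creation time, and VisibilityTimeout defaults to 30 seconds unless overridden:

  import boto3

  sqs = boto3.client("sqs")

  # FIFO vs. standard is decided here and cannot be changed later
  sqs.create_queue(
      QueueName="orders.fifo",
      Attributes={
          "FifoQueue": "true",
          "VisibilityTimeout": "30",  # the default, shown explicitly
      },
  )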

118
Q

An e-commerce business has its applications built on a fleet of Amazon EC2 instances, spread across various Regions and AZs. The technical team has suggested using Elastic Load Balancers for a better architectural design.

What characteristics of an Elastic Load Balancer make it a winning choice? (Select two)


Deploy EC2 instances across multiple AWS Regions


Build a highly available system

Separate public traffic from private traffic

The Load Balancer communicates with the underlying EC2 instances using their public IPs

Improve vertical scalability of the system

A

Separate public traffic from private traffic - The nodes of an internet-facing load balancer have public IP addresses. Load balancers route requests to your targets using private IP addresses. Therefore, your targets do not need public IP addresses to receive requests from users over the internet.

Build a highly available system - Elastic Load Balancing provides fault tolerance for your applications by automatically balancing traffic across targets – Amazon EC2 instances, containers, IP addresses, and Lambda functions – in multiple Availability Zones while ensuring only healthy targets receive traffic.

119
Q

A university has created a student portal that is accessible through a smartphone app and a web application. The smartphone app is available on both Android and iOS, and the web application works on most major browsers. Students will be able to study in groups online and create forum questions. All changes made via smartphone devices should be available even when offline and should synchronize with other devices.

Which of the following AWS services will meet these requirements?


Cognito User Pools

Cognito Identity Pools
​
Cognito Sync
​
BeanStalk
A

Cognito Sync

Amazon Cognito Sync is an AWS service and client library that enables cross-device syncing of application-related user data. You can use it to synchronize user profile data across mobile devices and the web without requiring your own backend. The client libraries cache data locally so your app can read and write data regardless of device connectivity status.

120
Q

The development team at a retail company is gearing up for the upcoming Thanksgiving sale and wants to make sure that the application’s serverless backend running via Lambda functions does not hit latency bottlenecks as a result of the traffic spike.

As a Developer Associate, which of the following solutions would you recommend to address this use-case?


Configure Application Auto Scaling to manage Lambda provisioned concurrency on a schedule

Add an Application Load Balancer in front of the Lambda functions

Configure Application Auto Scaling to manage Lambda reserved concurrency on a schedule

No need to make any special provisions as Lambda is automatically scalable because of its serverless nature

A

Configure Application Auto Scaling to manage Lambda provisioned concurrency on a schedule

Due to a spike in traffic, when Lambda functions scale, this causes the portion of requests that are served by new instances to have higher latency than the rest. To enable your function to scale without fluctuations in latency, use provisioned concurrency. By allocating provisioned concurrency before an increase in invocations, you can ensure that all requests are served by initialized instances with very low latency
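A hedged sketch of scheduling provisioned concurrency with Application Auto Scaling; the function name, alias, capacities, and cron expression are placeholders:

  import boto3

  autoscaling = boto3.client("application-autoscaling")

  resource_id = "function:checkout:live"  # format function:NAME:ALIAS (placeholder)

  autoscaling.register_scalable_target(
      ServiceNamespace="lambda",
      ResourceId=resource_id,
      ScalableDimension="lambda:function:ProvisionedConcurrency",
      MinCapacity=10,
      MaxCapacity=500,
  )

  # Scale up ahead of the expected sale-day traffic spike
  autoscaling.put_scheduled_action(
      ServiceNamespace="lambda",
      ScheduledActionName="sale-scale-up",
      ResourceId=resource_id,
      ScalableDimension="lambda:function:ProvisionedConcurrency",
      Schedule="cron(0 8 * 11 ? *)",  # placeholder schedule
      ScalableTargetAction={"MinCapacity": 500, "MaxCapacity": 500},
  )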

121
Q

You have launched several AWS Lambda functions written in Java. A new requirement states that over 1 MB of data should be passed to the functions and should be encrypted and decrypted at runtime.

Which of the following methods is suitable to address the given use-case?


Use Envelope Encryption and store as environment variable

Use KMS direct encryption and store as file

Use KMS Encryption and store as environment variable

Use Envelope Encryption and reference the data as file within the code

A

Use Envelope Encryption and reference the data as file within the code

While AWS KMS does support sending data up to 4 KB to be encrypted directly, envelope encryption can offer significant performance benefits. When you encrypt data directly with AWS KMS it must be transferred over the network. Envelope encryption reduces the network load since only the request and delivery of the much smaller data key go over the network. The data key is used locally in your application or encrypting AWS service, avoiding the need to send the entire block of data to AWS KMS and suffer network latency.

AWS Lambda environment variables can have a maximum size of 4 KB. Additionally, the direct ‘Encrypt’ API of KMS also has an upper limit of 4 KB for the data payload. To encrypt 1 MB, you need to use the Encryption SDK and package the encrypted file with the Lambda function.
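A hedged sketch using the AWS Encryption SDK for Python, which performs the envelope encryption (it requests a data key from the KMS CMK and encrypts the large payload locally); the key ARN and file name are placeholders:

  import aws_encryption_sdk
  from aws_encryption_sdk import CommitmentPolicy

  client = aws_encryption_sdk.EncryptionSDKClient(
      commitment_policy=CommitmentPolicy.REQUIRE_ENCRYPT_REQUIRE_DECRYPT
  )
  key_provider = aws_encryption_sdk.StrictAwsKmsMasterKeyProvider(
      key_ids=["arn:aws:kms:us-east-1:123456789012:key/placeholder"]
  )

  with open("payload.bin", "rb") as f:  # the >1 MB file packaged with the function
      plaintext = f.read()

  # Only the small data-key request goes to KMS; the payload never leaves the host
  ciphertext, _header = client.encrypt(source=plaintext, key_provider=key_provider)
  decrypted, _header = client.decrypt(source=ciphertext, key_provider=key_provider)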

122
Q

A company has a cloud system in AWS with components that send and receive messages using SQS queues. While reviewing the system you see that it processes a lot of information and would like to be aware of any limits of the system.

Which of the following represents the maximum number of messages that can be stored in an SQS queue?


10000000

10000

100000

no limit

A

“no limit”: There are no message limits for storing in SQS, but ‘in-flight messages’ do have limits. Make sure to delete messages after you have processed them. There can be a maximum of approximately 120,000 inflight messages (received from a queue by a consumer, but not yet deleted from the queue).

123
Q

A company uses AWS CodeDeploy to deploy applications from GitHub to EC2 instances running Amazon Linux. The deployment process uses a file called appspec.yml for specifying deployment hooks. A final lifecycle event should be specified to verify the deployment success.

Which of the following hook events should be used to verify the success of the deployment?


AllowTraffic

ApplicationStart

ValidateService

AfterInstall

A

ValidateService: ValidateService is the last deployment lifecycle event. It is used to verify the deployment was completed successfully.

Incorrect options:
AllowTraffic - During this deployment lifecycle event, internet traffic is allowed to access instances after a deployment. This event is reserved for the AWS CodeDeploy agent and cannot be used to run scripts

124
Q

A diagnostic lab stores its data on DynamoDB. The lab wants to back up a particular DynamoDB table’s data to Amazon S3, so it can download the S3 backup locally for some operational use.

Which of the following options is NOT feasible?


Use AWS Glue to copy your table to Amazon S3 and download locally

Use Hive with Amazon EMR to export your data to an S3 bucket and download locally

Use AWS Data Pipeline to export your table to an S3 bucket in the account of your choice and download locally

Use the DynamoDB on-demand backup capability to write to Amazon S3 and download locally

A

Use the DynamoDB on-demand backup capability to write to Amazon S3 and download locally - This option is not feasible for the given use-case. DynamoDB has two built-in backup methods (On-demand, Point-in-time recovery) that write to Amazon S3, but you will not have access to the S3 buckets that are used for these backups.

125
Q

A developer is looking at establishing access control for an API that connects to a Lambda function downstream.

Which of the following represents a mechanism that CANNOT be used for authenticating with the API Gateway?

​
Cognito User Pools
​
Lambda Authorizer
​
AWS Security Token Service (STS)
​
Standard AWS IAM roles and policies
A

AWS Security Token Service (STS) - AWS Security Token Service (AWS STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). However, it is not supported by API Gateway.

Incorrect options:

Standard AWS IAM roles and policies - Standard AWS IAM roles and policies offer flexible and robust access controls that can be applied to an entire API or individual methods. IAM roles and policies can be used for controlling who can create and manage your APIs, as well as who can invoke them.

126
Q

A company runs its flagship application on a fleet of Amazon EC2 instances. After misplacing a couple of private keys from the SSH key pairs, they have decided to re-use their SSH key pairs for the different instances across AWS Regions.

As a Developer Associate, which of the following would you recommend to address this use-case?


It is not possible to reuse SSH key pairs across AWS Regions

Generate a public SSH key from a private SSH key. Then, import the key into each of your AWS Regions

Store the public and private SSH key pair in AWS Trusted Advisor and access it across AWS Regions

Encrypt the private SSH key and store it in the S3 bucket to be accessed from any AWS Region

A

Generate a public SSH key from a private SSH key. Then, import the key into each of your AWS Regions

Here is the correct way of reusing SSH keys in your AWS Regions:

Generate a public SSH key (.pub) file from the private SSH key (.pem) file.

Set the AWS Region you wish to import to.

Import the public SSH key into the new Region.
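A hedged sketch of step 3, importing the public key into another Region (the key name, file path, and Region are placeholders):

  import boto3

  ec2 = boto3.client("ec2", region_name="eu-west-1")  # the target Region

  with open("my-key.pub", "rb") as f:  # generated with: ssh-keygen -y -f my-key.pem
      public_key = f.read()

  ec2.import_key_pair(KeyName="my-key", PublicKeyMaterial=public_key)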

127
Q

A new recruit is trying to configure what Amazon EC2 should do when it interrupts a Spot Instance.

Which of the below CANNOT be configured as an interruption behavior?

​
Stop the Spot Instance
​
Hibernate the Spot Instance
​
Terminate the Spot Instance
​
Reboot the Spot Instance
A

​Reboot the Spot Instance
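The three valid values appear directly in the API; a minimal sketch (the AMI and instance type are placeholders), and note that 'reboot' is not an accepted value:

  import boto3

  ec2 = boto3.client("ec2")

  ec2.request_spot_instances(
      InstanceCount=1,
      InstanceInterruptionBehavior="stop",  # or "terminate" / "hibernate"
      LaunchSpecification={
          "ImageId": "ami-0ff8a91507f77f867",  # placeholder AMI
          "InstanceType": "t3.micro",
      },
  )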

128
Q

A data analytics company is processing real-time Internet-of-Things (IoT) data via Kinesis Producer Library (KPL) and sending the data to a Kinesis Data Streams driven application. The application has halted data processing because of a ProvisionedThroughputExceeded exception.

Which of the following actions would help in addressing this issue? (Select two)


Use Amazon SQS instead of Kinesis Data Streams

Use Amazon Kinesis Agent instead of Kinesis Producer Library (KPL) for sending data to Kinesis Data Streams

Increase the number of shards within your data streams to provide enough capacity

Configure the data producer to retry with an exponential backoff

Use Kinesis enhanced fan-out for Kinesis Data Streams

A

Increase the number of shards within your data streams to provide enough capacity

Configure the data producer to retry with an exponential backoff

Use Kinesis enhanced fan-out for Kinesis Data Streams - You should use enhanced fan-out if you have, or expect to have, multiple consumers retrieving data from a stream in parallel. Therefore, using enhanced fan-out will not help address the ProvisionedThroughputExceeded exception as the constraint is the capacity limit of the Kinesis Data Stream.
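One correct action is an API call and the other is a producer-side pattern; a sketch of both, with the stream name and shard count as placeholders:

  import time
  import boto3

  kinesis = boto3.client("kinesis")

  # Action 1: add capacity (each shard accepts 1 MB/s or 1,000 records/s of writes)
  kinesis.update_shard_count(
      StreamName="iot-stream",  # placeholder
      TargetShardCount=8,
      ScalingType="UNIFORM_SCALING",
  )

  # Action 2: retry with exponential backoff on throughput errors
  def put_with_backoff(data: bytes, key: str, retries: int = 5) -> None:
      for attempt in range(retries):
          try:
              kinesis.put_record(StreamName="iot-stream", Data=data, PartitionKey=key)
              return
          except kinesis.exceptions.ProvisionedThroughputExceededException:
              time.sleep(2 ** attempt * 0.1)  # 0.1s, 0.2s, 0.4s, ...
      raise RuntimeError("gave up after retries")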

129
Q

A Developer has been entrusted with the job of securing certain S3 buckets that are shared by a large team of users. The last time a bucket policy was changed, the bucket was erroneously made available to everyone, including users outside the organization.

Which feature/service will help the developer identify similar security issues with minimum effort?


S3 Analytics

S3 Object Lock

IAM Access Analyzer

Access Advisor feature on IAM console

A

IAM Access Analyzer - AWS IAM Access Analyzer helps you identify the resources in your organization and accounts, such as Amazon S3 buckets or IAM roles, that are shared with an external entity. This lets you identify unintended access to your resources and data, which is a security risk.

You can set the scope for the analyzer to an organization or an AWS account. This is your zone of trust. The analyzer scans all of the supported resources within your zone of trust. When Access Analyzer finds a policy that allows access to a resource from outside of your zone of trust, it generates an active finding.

130
Q

Which of the following best describes how KMS Encryption works?


KMS stores the CMK, and receives data from the clients, which it encrypts and sends back

KMS sends the CMK to the client, which performs the encryption and then deletes the CMK

KMS generates a new CMK for each Encrypt call and encrypts the data with it

KMS receives CMK from the client at every Encrypt call, and encrypts the data with that

A

KMS stores the CMK, and receives data from the clients, which it encrypts and sends back

A customer master key (CMK) is a logical representation of a master key. The CMK includes metadata, such as the key ID, creation date, description, and key state. The CMK also contains the key material used to encrypt and decrypt data. You can generate CMKs in KMS, in an AWS CloudHSM cluster, or import them from your key management infrastructure.

AWS KMS supports symmetric and asymmetric CMKs. A symmetric CMK represents a 256-bit key that is used for encryption and decryption. An asymmetric CMK represents an RSA key pair that is used for encryption and decryption or signing and verification (but not both), or an elliptic curve (ECC) key pair that is used for signing and verification.
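A minimal sketch of that call pattern: the data travels to KMS, which encrypts it under the stored CMK (the key alias is a placeholder, and direct payloads are capped at 4 KB):

  import boto3

  kms = boto3.client("kms")

  resp = kms.encrypt(
      KeyId="alias/my-app-key",  # placeholder alias for the CMK
      Plaintext=b"secret config value",  # must be <= 4 KB
  )
  ciphertext = resp["CiphertextBlob"]

  # Decrypt: KMS identifies the CMK from the ciphertext metadata
  plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]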

131
Q

A company is setting up a Lambda function that will process events from a DynamoDB stream. The Lambda function has been created and a stream has been enabled. What else needs to be done for this solution to work?

An event-source mapping must be created on the Lambda side to associate the DynamoDB stream with the Lambda function

An alarm should be created in CloudWatch that sends a notification to Lambda when a new entry is added to the DynamoDB stream

An event-source mapping must be created on the DynamoDB side to associate the DynamoDB stream with the Lambda function

Update the CloudFormation template to map the DynamoDB stream to the Lambda function

A

Explanation
An event source mapping is an AWS Lambda resource that reads from an event source and invokes a Lambda function. You can use event source mappings to process items from a stream or queue in services that don’t invoke Lambda functions directly. Lambda provides event source mappings for the following services.

Services That Lambda Reads Events From

Amazon Kinesis

Amazon DynamoDB

Amazon Simple Queue Service
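A hedged sketch of creating the mapping on the Lambda side (the stream ARN and function name are placeholders):

  import boto3

  lambda_client = boto3.client("lambda")

  lambda_client.create_event_source_mapping(
      EventSourceArn="arn:aws:dynamodb:us-east-1:123456789012:table/Orders/stream/2020-01-01T00:00:00.000",  # placeholder
      FunctionName="process-orders",  # placeholder
      StartingPosition="LATEST",  # required for stream sources
      BatchSize=100,
  )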

132
Q

A Developer is creating a web application that will be used by employees working from home. The company uses a SAML directory on-premises for storing user information. The Developer must integrate with the SAML directory and authorize each employee to access only their own data when using the application.

Which approach should the Developer take?

Create a unique IAM role for each employee and have each employee assume the role to access the application so they can access their personal data only.

Use an Amazon Cognito identity pool, federate with the SAML provider, and use a trust policy with an IAM condition key to limit employee access.

Use Amazon Cognito user pools, federate with the SAML provider, and use user pool groups with an IAM policy.

Create the application within an Amazon VPC and use a VPC endpoint with a trust policy to grant access to the employees.

A

CORRECT: “Use an Amazon Cognito identity pool, federate with the SAML provider, and use a trust policy with an IAM condition key to limit employee access” is the correct answer.

133
Q

A Developer is setting up a code update to Amazon ECS using AWS CodeDeploy. The Developer needs to complete the code update quickly. Which of the following deployment types should the Developer use?

In-place

Canary

Linear

Blue/green

A

Explanation
CodeDeploy provides two deployment type options – in-place and blue/green. Note that AWS Lambda and Amazon ECS deployments cannot use an in-place deployment type.

134
Q

A company is creating an application that will require users to access AWS services and allow them to reset their own passwords. Which of the following would allow the company to manage users and authorization while allowing users to reset their own passwords?


Amazon Cognito identity pools and AWS IAM

Amazon Cognito user pools and identity pools

Amazon Cognito identity pools and AWS STS

Amazon Cognito user pools and AWS KMS

A

CORRECT: “Amazon Cognito user pools and identity pools” is the correct answer.

Explanation
There are two key requirements in this scenario. Firstly the company wants to manage user accounts using a system that allows users to reset their own passwords. The company also wants to authorize users to access AWS services.

The first requirement is provided by an Amazon Cognito User Pool. With a Cognito user pool you can add sign-up and sign-in to mobile and web apps and it also offers a user directory so user accounts can be created directly within the user pool. Users also have the ability to reset their passwords.

To access AWS services you need a Cognito Identity Pool. An identity pool can be used with a user pool and enables a user to obtain temporary limited-privilege credentials to access AWS services.

135
Q

A development team is migrating data from various file shares to AWS from on-premises. The data will be migrated into a single Amazon S3 bucket. What is the SIMPLEST method to ensure the data is encrypted at rest in the S3 bucket?


Ensure all requests use the x-amz-server-side​-encryption​-customer-key header

Ensure all requests use the x-amz-server-side-encryption header

Use SSL to transmit the data over the Internet

Enable default encryption when creating the bucket

A

CORRECT: “Enable default encryption when creating the bucket” is the correct answer.

Explanation
Amazon S3 default encryption provides a way to set the default encryption behavior for an S3 bucket. You can set default encryption on a bucket so that all new objects are encrypted when they are stored in the bucket. The objects are encrypted using server-side encryption with either Amazon S3-managed keys (SSE-S3) or customer master keys (CMKs) stored in AWS Key Management Service (AWS KMS).
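A hedged sketch of enabling SSE-S3 default encryption on the bucket (the bucket name is a placeholder):

  import boto3

  s3 = boto3.client("s3")

  s3.put_bucket_encryption(
      Bucket="my-migration-bucket",  # placeholder
      ServerSideEncryptionConfiguration={
          "Rules": [{
              "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}  # SSE-S3
          }]
      },
  )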

136
Q

A company uses an Amazon S3 bucket to store a large number of sensitive files relating to eCommerce transactions. The company has a policy that states that all data written to the S3 bucket must be encrypted.

How can a Developer ensure compliance with this policy?


Create a bucket policy that denies the S3 PutObject request with the attribute x-amz-acl having values public-read, public-read-write, or authenticated-read

​Create an S3 bucket policy that denies any S3 Put request that does not include the x-amz-server-side-encryption

Create an Amazon CloudWatch alarm that notifies an administrator if unencrypted objects are uploaded to the S3 bucket

Enable Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3) on the Amazon S3 bucket

A

Therefore, we need to create an S3 bucket policy that denies any S3 Put request that does not include the x-amz-server-side-encryption header. There are two possible values for the x-amz-server-side-encryption header: AES256, which tells S3 to use S3-managed keys, and aws:kms, which tells S3 to use AWS KMS–managed keys.

INCORRECT: “Enable Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3) on the Amazon S3 bucket” is incorrect as this will enable default encryption but will not enforce encryption on the S3 bucket. You do still need to enable default encryption on the bucket, but this alone will not enforce encryption.
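A hedged sketch of such a bucket policy, applied with boto3 (the bucket name is a placeholder); the Null condition denies any PutObject that omits the header:

  import json
  import boto3

  s3 = boto3.client("s3")

  policy = {
      "Version": "2012-10-17",
      "Statement": [{
          "Sid": "DenyUnencryptedPuts",
          "Effect": "Deny",
          "Principal": "*",
          "Action": "s3:PutObject",
          "Resource": "arn:aws:s3:::my-transactions-bucket/*",
          "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
      }],
  }

  s3.put_bucket_policy(Bucket="my-transactions-bucket", Policy=json.dumps(policy))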

137
Q

A developer is preparing the resources for creating a multicontainer Docker environment on AWS Elastic Beanstalk. How can the developer define the Docker containers?

Define the containers in the Dockerrun.aws.json file in YAML format and save at the root of the source directory

Define the containers in the Dockerrun.aws.json file in JSON format and save at the root of the source directory

Create a buildspec.yml file and save it at the root of the source directory

Create a Docker.config file and save it in the .ebextensions folder at the root of the source directory

A

Explanation
You can launch a cluster of multicontainer instances in a single-instance or autoscaling Elastic Beanstalk environment using the Elastic Beanstalk console. The single container and multicontainer Docker platforms for Elastic Beanstalk support the use of Docker images stored in a public or private online image repository.

You specify images by name in the Dockerrun.aws.json file and save it in the root of your source directory.

CORRECT: “Define the containers in the Dockerrun.aws.json file in JSON format and save at the root of the source directory” is the correct answer.

138
Q

A mobile application is being developed that will use AWS Lambda, Amazon API Gateway and Amazon DynamoDB. A developer would like to securely authenticate the users of the mobile application and then grant them access to the API.

What is the BEST way to achieve this?


Create a Lambda authorizer in API Gateway

Create an IAM authorizer in API Gateway

Create a COGNITO_USER_POOLS authorizer in API Gateway

Create a COGNITO_IDENTITY_POOLS authorizer in API Gateway

A

CORRECT: “Create a COGNITO_USER_POOLS authorizer in API Gateway” is the correct answer.

A user pool is a user directory in Amazon Cognito. With a user pool, your users can sign into your web or mobile app through Amazon Cognito. Your users can also sign in through social identity providers like Google, Facebook, Amazon, or Apple, and through SAML identity providers. Whether your users sign in directly or through a third party, all members of the user pool have a directory profile that you can access through a Software Development Kit (SDK).
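A hedged sketch of attaching the user pool authorizer to a REST API (the API ID and user pool ARN are placeholders):

  import boto3

  apigateway = boto3.client("apigateway")

  apigateway.create_authorizer(
      restApiId="a1b2c3d4e5",  # placeholder API ID
      name="cognito-authorizer",
      type="COGNITO_USER_POOLS",
      providerARNs=["arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_ABC123"],  # placeholder
      identitySource="method.request.header.Authorization",
  )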

139
Q

A mobile application has hundreds of users. Each user may use multiple devices to access the application. The Developer wants to assign unique identifiers to these users regardless of the device they use.

Which of the following methods should be used to obtain unique identifiers?


Assign IAM users and roles to the users. Use the unique IAM resource ID as the unique identifier

Implement developer-authenticated identities by using Amazon Cognito, and get credentials for these identities

Create a user table in Amazon DynamoDB as key-value pairs of users and their devices. Use these keys as unique identifiers

Use IAM-generated access key IDs for the users as the unique identifier, but do not store secret keys

A

CORRECT: “Implement developer-authenticated identities by using Amazon Cognito, and get credentials for these identities” is the correct answer.

Amazon Cognito supports developer authenticated identities, in addition to web identity federation through Facebook (Identity Pools), Google (Identity Pools), Login with Amazon (Identity Pools), and Sign in with Apple (identity Pools).

With developer authenticated identities, you can register and authenticate users via your own existing authentication process, while still using Amazon Cognito to synchronize user data and access AWS resources.

Using developer authenticated identities involves interaction between the end user device, your backend for authentication, and Amazon Cognito.

Therefore, the Developer can implement developer-authenticated identities by using Amazon Cognito, and get credentials for these identities.
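After your backend authenticates a user, it exchanges your own user identifier for a Cognito identity and token; a hedged sketch (the pool ID and developer provider name are placeholders):

  import boto3

  cognito_identity = boto3.client("cognito-identity")

  resp = cognito_identity.get_open_id_token_for_developer_identity(
      IdentityPoolId="us-east-1:11111111-2222-3333-4444-555555555555",  # placeholder
      Logins={"login.mycompany.myapp": "user-8675309"},  # your provider name -> your user ID
  )
  # resp["IdentityId"] is the stable, device-independent unique identifier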

140
Q

A company is using Amazon CloudFront to provide low-latency access to a web application to its global users. The organization must encrypt all traffic between users and CloudFront, and all traffic between CloudFront and the web application.

How can these requirements be met? (Select TWO.)


Set the Origin Protocol Policy to “HTTPS Only”

Set the Origin’s HTTP Port to 443

Use AWS KMS to encrypt traffic between CloudFront and the web application

Enable the CloudFront option Restrict Viewer Access

Set the Viewer Protocol Policy to “HTTPS Only” or “Redirect HTTP to HTTPS”

A

CORRECT: “Set the Origin Protocol Policy to “HTTPS Only”” is a correct answer.

CORRECT: “Set the Viewer Protocol Policy to “HTTPS Only” or “Redirect HTTP to HTTPS”” is also a correct answer.

141
Q

An application is running on a fleet of EC2 instances behind an Elastic Load Balancer (ELB). The EC2 instances store session data in a shared Amazon S3 bucket. Security policy mandates that data must be encrypted in transit.

How can the Developer ensure that all data that is sent to the S3 bucket is encrypted in transit?

Configure HTTP to HTTPS redirection on the Elastic Load Balancer

Create an S3 bucket policy that denies any S3 Put request that does not include the x-amz-server-side-encryption

Create an S3 bucket policy that denies traffic where SecureTransport is false

Create an S3 bucket policy that denies traffic where SecureTransport is true

A

CORRECT: “Create an S3 bucket policy that denies traffic where SecureTransport is false” is the correct answer.

INCORRECT: “Create an S3 bucket policy that denies any S3 Put request that does not include the x-amz-server-side-encryption” is incorrect. This will ensure that the data is encrypted at rest, but not in-transit.

142
Q

A Development team is deploying an AWS Lambda function that will be used by a production application. The function code will be updated regularly, and new versions will be published. The development team does not want to modify application code to point to each new version.

How can the Development team setup a static ARN that will point to the latest published version?


Setup an Alias that will point to the latest version

Publish a mutable version and point it to the $LATEST version

Setup a Route 53 Alias record that points to the published version

Use an unqualified ARN

A

CORRECT: “Setup an Alias that will point to the latest version” is the correct answer.

INCORRECT: “Publish a mutable version and point it to the $LATEST version” is incorrect as all published versions are immutable (cannot be modified) and you cannot modify a published version to point to the $LATEST version.
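A hedged sketch of the pattern (the function name is a placeholder): the alias ARN stays static while the version it points to is updated after each publish.

  import boto3

  lambda_client = boto3.client("lambda")

  # One-time: create the alias against the current published version
  lambda_client.create_alias(FunctionName="orders", Name="prod", FunctionVersion="1")

  # After publishing version 2, repoint the alias; callers keep invoking
  # arn:aws:lambda:...:function:orders:prod unchanged
  lambda_client.update_alias(FunctionName="orders", Name="prod", FunctionVersion="2")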

143
Q

A monitoring application that keeps track of a large eCommerce website uses Amazon Kinesis for data ingestion. During periods of peak data rates, the producers are not making best use of the available shards.
What step will allow the producers to better utilize the available shards and increase write throughput to the Kinesis data stream?


Install the Kinesis Producer Library (KPL) for ingesting data into the stream

Increase the shard count of the stream using UpdateShardCount

Create an SQS queue and decouple the producers from the Kinesis data stream

Ingest multiple records into the stream in a single call using BatchWriteItem

A

CORRECT: “Install the Kinesis Producer Library (KPL) for ingesting data into the stream” is the correct answer.

An Amazon Kinesis Data Streams producer is an application that puts user data records into a Kinesis data stream (also called data ingestion). The Kinesis Producer Library (KPL) simplifies producer application development, allowing developers to achieve high write throughput to a Kinesis data stream.

The KPL is an easy-to-use, highly configurable library that helps you write to a Kinesis data stream. It acts as an intermediary between your producer application code and the Kinesis Data Streams API actions. The KPL performs the following primary tasks:

  • Writes to one or more Kinesis data streams with an automatic and configurable retry mechanism
  • Collects records and uses PutRecords to write multiple records to multiple shards per request
  • Aggregates user records to increase payload size and improve throughput
  • Integrates seamlessly with the Kinesis Client Library (KCL) to de-aggregate batched records on the consumer
  • Submits Amazon CloudWatch metrics on your behalf to provide visibility into producer performance
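The KPL itself is a native library with a Java interface, but the batching idea it automates can be sketched with plain boto3 PutRecords (the stream name is a placeholder):

  import boto3

  kinesis = boto3.client("kinesis")

  # One PutRecords call carries up to 500 records, spread across shards by key
  records = [
      {"Data": f"event-{i}".encode(), "PartitionKey": str(i)}
      for i in range(500)
  ]
  resp = kinesis.put_records(StreamName="clickstream", Records=records)
  print(resp["FailedRecordCount"])  # any failed records should be re-sent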
144
Q

The development team at a company creates serverless solutions using AWS Lambda. Functions are invoked by clients via Amazon API Gateway, which anyone can access. The team lead would like to control access using a 3rd party authorization mechanism.

As a Developer Associate, which of the following options would you recommend for the given use-case?

​
Lambda Authorizer
​
API Gateway User Pools
​
IAM permissions with sigv4

Cognito User Pools

A

“Lambda Authorizer”

An Amazon API Gateway Lambda authorizer (formerly known as a custom authorizer) is a Lambda function that you provide to control access to your API. A Lambda authorizer uses bearer token authentication strategies, such as OAuth or SAML. Before creating an API Gateway Lambda authorizer, you must first create the AWS Lambda function that implements the logic to authorize and, if necessary, to authenticate the caller.

145
Q

A development team lead is responsible for managing access for her IAM principals. At the start of the cycle, she granted excess privileges to users to keep them motivated to try new things. She now wants to ensure that the team has only the minimum permissions required to finish their work.

Which of the following will help her identify unused IAM roles and remove them without disrupting any service?


Access Advisor feature on IAM console

IAM Access Analyzer
​
AWS Trusted Advisor
​
Amazon Inspector
A

Access Advisor feature on IAM console

To help identify the unused roles, IAM reports the last-used timestamp that represents when a role was last used to make an AWS request. Your security team can use this information to identify, analyze, and then confidently remove unused roles. This helps improve the security posture of your AWS environments. Additionally, by removing unused roles, you can simplify your monitoring and auditing efforts by focusing only on roles that are in use.
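The last-used timestamp is also visible programmatically; a small sketch (the role name is a placeholder):

  import boto3

  iam = boto3.client("iam")

  role = iam.get_role(RoleName="experiment-role")["Role"]
  print(role.get("RoleLastUsed", {}))  # e.g. {"LastUsedDate": ..., "Region": "us-east-1"}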

146
Q

A business has its test environment built on Amazon EC2 configured with a General Purpose SSD (gp2) volume.

At which gp2 volume size will their test environment hit the max IOPS?


2.7 TiB

16 TiB

5.3 TiB

10.6 TiB

A

5.3 TiB - General Purpose SSD (gp2) volumes offer cost-effective storage that is ideal for a broad range of workloads. These volumes deliver single-digit millisecond latencies and the ability to burst to 3,000 IOPS for extended periods of time. Between a minimum of 100 IOPS (at 33.33 GiB and below) and a maximum of 16,000 IOPS (at 5,334 GiB and above), baseline performance scales linearly at 3 IOPS per GiB of volume size.
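The arithmetic: since baseline performance scales at 3 IOPS per GiB, the cap of 16,000 IOPS is reached at 16,000 ÷ 3 ≈ 5,334 GiB, which the answer options round to 5.3 TiB.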

147
Q

A development team has configured their Amazon EC2 instances for Auto Scaling. A Developer during routine checks has realized that only basic monitoring is active, as opposed to detailed monitoring.

Which of the following represents the best root-cause behind the issue?


The default configuration for Auto Scaling was not set

AWS CLI was used to create the launch configuration

SDK was used to create the launch configuration

AWS Management Console might have been used to create the launch configuration

A

AWS Management Console might have been used to create the launch configuration - By default, basic monitoring is enabled when you create a launch template or when you use the AWS Management Console to create a launch configuration. This could be the reason behind only the basic monitoring taking place.

148
Q

The development team at a multi-national retail company wants to support trusted third-party authenticated users from the supplier organizations to create and update records in specific DynamoDB tables in the company’s AWS account.

As a Developer Associate, which of the following solutions would you suggest for the given use-case?


Create a new IAM group in the company’s AWS account for each of the third-party authenticated users from the supplier organizations. The users can then use the IAM group credentials to access DynamoDB

Create a new IAM user in the company’s AWS account for each of the third-party authenticated users from the supplier organizations. The users can then use the IAM user credentials to access DynamoDB

Use Cognito Identity pools to enable trusted third-party authenticated users to access DynamoDB

Use Cognito User pools to enable trusted third-party authenticated users to access DynamoDB

A

Use Cognito Identity pools to enable trusted third-party authenticated users to access DynamoDB

Amazon Cognito identity pools (federated identities) enable you to create unique identities for your users and federate them with identity providers. With an identity pool, you can obtain temporary, limited-privilege AWS credentials to access other AWS services. Amazon Cognito identity pools support the following identity providers:

Public providers: Login with Amazon (Identity Pools), Facebook (Identity Pools), Google (Identity Pools), Sign in with Apple (Identity Pools).

Amazon Cognito User Pools

Open ID Connect Providers (Identity Pools)

SAML Identity Providers (Identity Pools)

Developer Authenticated Identities (Identity Pools)

https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html

149
Q

A team is checking the viability of using AWS Step Functions for creating a banking workflow for loan approvals. The web application will also have human approval as one of the steps in the workflow.

As a developer associate, which of the following would you identify as the key characteristics for AWS Step Function? (Select two)


You should use Express Workflows for workloads with high event rates and short duration

Standard Workflows on AWS Step Functions are suitable for long-running, durable, and auditable workflows that can also support any human approval steps

Express Workflows have a maximum duration of five minutes and Standard workflows have a maximum duration of 180 days or 6 months

Standard Workflows on AWS Step Functions are suitable for long-running, durable, and auditable workflows that do not support any human approval steps

A

Standard Workflows on AWS Step Functions are suitable for long-running, durable, and auditable workflows that can also support any human approval steps

You should use Express Workflows for workloads with high event rates and short duration

Express Workflows have a maximum duration of five minutes and Standard workflows have a maximum duration of one year.

150
Q

You have been asked by your Team Lead to enable detailed monitoring of the Amazon EC2 instances your team uses. As a Developer working with the AWS CLI, which of the below commands will you run?


aws ec2 run-instances --image-id ami-09092360 --monitoring Enabled=true

aws ec2 run-instances --image-id ami-09092360 --monitoring State=enabled

aws ec2 monitor-instances --instance-ids i-1234567890abcdef0

aws ec2 monitor-instances --instance-id i-1234567890abcdef0

A

Correct option:

aws ec2 monitor-instances --instance-ids i-1234567890abcdef0 - This enables detailed monitoring for a running instance.

151
Q

A multi-national enterprise uses AWS Organizations to manage its users across different divisions. Even though CloudTrail is enabled on the member accounts, managers have noticed that access issues to CloudTrail logs across different divisions and AWS Regions is becoming a bottleneck in troubleshooting issues. They have decided to use the organization trail to keep things simple.

What are the important points to remember when configuring an organization trail? (Select two)

Member accounts will be able to see the Organization trail, but cannot modify or delete it

Member accounts do not have access to organization trail, neither do they have access to the Amazon S3 bucket that logs the files

There is nothing called Organization Trail. The master account can, however, enable CloudTrail logging, to keep track of all activities across AWS accounts

By default, CloudTrail event log files are not encrypted


By default, CloudTrail tracks only bucket-level actions. To track object-level actions, you need to enable Amazon S3 data events

A


By default, CloudTrail tracks only bucket-level actions. To track object-level actions, you need to enable Amazon S3 data events

Member accounts will be able to see the organization trail, but cannot modify or delete it -
Organization trails must be created in the master account, and when specified as applying to an organization, are automatically applied to all member accounts in the organization

If you have created an organization in AWS Organizations, you can also create a trail that will log all events for all AWS accounts in that organization. This is referred to as an organization trail.

152
Q

An organization is moving its on-premises resources to the cloud. Source code will be moved to AWS CodeCommit, and AWS CodeBuild will be used for compiling the source code using Apache Maven as a build tool. The organization wants the build environment to allow for scaling and running builds in parallel.

Which of the following options should the organization choose for their requirement?


Run CodeBuild in an Auto Scaling group

Choose a high-performance instance type for your CodeBuild instances

CodeBuild scales automatically, the organization does not have to do anything for scaling or for parallel builds

Enable CodeBuild Auto Scaling

A

CodeBuild scales automatically, the organization does not have to do anything for scaling or for parallel builds - AWS CodeBuild is a fully managed build service in the cloud. CodeBuild compiles your source code, runs unit tests, and produces artifacts that are ready to deploy. CodeBuild eliminates the need to provision, manage, and scale your own build servers. It provides prepackaged build environments for popular programming languages and build tools such as Apache Maven, Gradle, and more. You can also customize build environments in CodeBuild to use your own build tools. CodeBuild scales automatically to meet peak build requests.

153
Q

A development team uses shared Amazon S3 buckets to upload files. Due to this shared access, objects in the S3 buckets have different owners, making it difficult to manage the objects.

As a developer associate, which of the following would you suggest to automatically make the S3 bucket owner, also the owner of all objects in the bucket, irrespective of the AWS account used for uploading the objects?


Use S3 CORS to make the S3 bucket owner, the owner of all objects in the bucket

Use S3 Access Analyzer to identify the owners of all objects and change the ownership to the bucket owner

Use Bucket Access Control Lists (ACLs) to control access on S3 bucket and then define its owner

Use S3 Object Ownership to default bucket owner to be the owner of all objects in the bucket

A

Use S3 Object Ownership to default bucket owner to be the owner of all objects in the bucket

S3 Object Ownership is an Amazon S3 bucket setting that you can use to control ownership of new objects that are uploaded to your buckets. By default, when other AWS accounts upload objects to your bucket, the objects remain owned by the uploading account. With S3 Object Ownership, any new objects that are written by other accounts with the bucket-owner-full-control canned access control list (ACL) automatically become owned by the bucket owner, who then has full control of the objects.
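A hedged sketch of enabling the setting on an existing bucket (the bucket name is a placeholder):

  import boto3

  s3 = boto3.client("s3")

  s3.put_bucket_ownership_controls(
      Bucket="shared-uploads",  # placeholder
      OwnershipControls={
          "Rules": [{"ObjectOwnership": "BucketOwnerPreferred"}]
      },
  )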

154
Q

A new member of your team is working on creating a Dead Letter Queue (DLQ) for AWS Lambda functions.

As a Developer Associate, can you help him identify the use cases in which AWS Lambda will add a message to a DLQ? (Select two)


The Lambda function invocation is asynchronous

The event has been processed successfully

The Lambda function invocation failed only once but succeeded thereafter

The event fails all processing attempts

The Lambda function invocation is synchronous

A

The Lambda function invocation is asynchronous - When an asynchronous invocation event exceeds the maximum age or fails all retry attempts, Lambda discards it, or sends it to a dead-letter queue if you have configured one.

The event fails all processing attempts - A dead-letter queue acts the same as an on-failure destination in that it is used when an event fails all processing attempts or expires without being processed.
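A hedged sketch of attaching an SQS dead-letter queue to a function (the function name and queue ARN are placeholders):

  import boto3

  lambda_client = boto3.client("lambda")

  lambda_client.update_function_configuration(
      FunctionName="async-worker",  # placeholder
      DeadLetterConfig={"TargetArn": "arn:aws:sqs:us-east-1:123456789012:worker-dlq"},
  )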

155
Q

Your web application architecture consists of multiple Amazon EC2 instances running behind an Elastic Load Balancer with an Auto Scaling group having the desired capacity of 5 EC2 instances. You would like to integrate AWS CodeDeploy for automating application deployment. The deployment should re-route traffic from your application’s original environment to the new environment.

Which of the following options will meet your deployment criteria?


Opt for In-place deployment

Opt for Rolling deployment

Opt for Immutable deployment

Opt for Blue/Green deployment

A

Opt for Blue/Green deployment

Incorrect options:

Opt for Rolling deployment - This deployment type is present for AWS Elastic Beanstalk and not for EC2 instances directly.

Opt for Immutable deployment - This deployment type is present for AWS Elastic Beanstalk and not for EC2 instances directly.

156
Q

A junior developer working on ECS instances terminated a container instance in Amazon Elastic Container Service (Amazon ECS) as per instructions from the team lead. But the container instance continues to appear as a resource in the ECS cluster.

As a Developer Associate, which of the following solutions would you recommend to fix this behavior?


A custom software on the container instance could have failed and resulted in the container hanging in an unhealthy state till restarted again

You terminated the container instance while it was in RUNNING state, which led to this synchronization issue

The container instance has been terminated with AWS CLI, whereas, for ECS instances, Amazon ECS CLI should be used to avoid any synchronization issues

You terminated the container instance while it was in STOPPED state, which led to this synchronization issue

A

You terminated the container instance while it was in STOPPED state, which led to this synchronization issue -

If you terminate a container instance while it is in the STOPPED state, that container instance isn’t automatically removed from the cluster. You will need to deregister your container instance in the STOPPED state by using the Amazon ECS console or AWS Command Line Interface. Once deregistered, the container instance will no longer appear as a resource in your Amazon ECS cluster.
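A hedged sketch of the deregistration call (the cluster name and container instance ARN are placeholders):

  import boto3

  ecs = boto3.client("ecs")

  ecs.deregister_container_instance(
      cluster="MainCluster",
      containerInstance="arn:aws:ecs:us-east-1:123456789012:container-instance/MainCluster/abcdef123456",  # placeholder
      force=True,  # deregister even if tasks are still associated
  )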

157
Q

A large firm stores its static data assets on Amazon S3 buckets. Each service line of the firm has its own AWS account. For a business use case, the Finance department needs to give access to their S3 bucket’s data to the Human Resources department.

Which of the below options is NOT feasible for cross-account access of S3 bucket objects?


Use IAM roles and resource-based policies to delegate access across accounts within different partitions via programmatic access only

Use Access Control List (ACL) and IAM policies for programmatic-only access to S3 bucket objects

Use Resource-based policies and AWS Identity and Access Management (IAM) policies for programmatic-only access to S3 bucket objects

Use Cross-account IAM roles for programmatic and console access to S3 bucket objects

A

Use IAM roles and resource-based policies to delegate access across accounts within different partitions via programmatic access only -

This statement is incorrect and hence the right choice for this question. IAM roles and resource-based policies delegate access across accounts only within a single partition. For example, assume that you have an account in US West (N. California) in the standard aws partition. You also have an account in China (Beijing) in the aws-cn partition. You can’t use an Amazon S3 resource-based policy in your account in China (Beijing) to allow access for users in your standard AWS account.

158
Q

You are working for a shipping company that is automating the creation of ECS clusters with an Auto Scaling Group using an AWS CloudFormation template that accepts cluster name as its parameters. Initially, you launch the template with input value ‘MainCluster’, which deployed five instances across two availability zones. The second time, you launch the template with an input value ‘SecondCluster’. However, the instances created in the second run were also launched in ‘MainCluster’ even after specifying a different cluster name.

What is the root cause of this issue?


The EC2 instance is missing IAM permissions to join the other clusters

The ECS agent Docker image must be re-built to connect to the other clusters

The security groups on the EC2 instance are pointing to the wrong ECS cluster

The cluster name Parameter has not been updated in the file /etc/ecs/ecs.config during bootstrap

A

The cluster name Parameter has not been updated in the file /etc/ecs/ecs.config during bootstrap - In the ecs.config file you have to set the parameter ECS_CLUSTER=your_cluster_name to register the container instance with a cluster named ‘your_cluster_name’.

159
Q

You have migrated an on-premises SQL Server database to an Amazon Relational Database Service (RDS) database attached to a VPC inside a private subnet. The related Java application, previously hosted on-premises, has been moved to an AWS Lambda function.

Which of the following should you implement to connect AWS Lambda function to its RDS instance?

Configure Lambda to connect to VPC with private subnet and Security Group needed to access RDS

Configure lambda to connect to the public subnet that will give internet access and use Security Group to access RDS inside the private subnet

Use Lambda layers to connect to the internet and RDS separately

Use Environment variables to pass in the RDS connection string

A

Configure Lambda to connect to VPC with private subnet and Security Group needed to access RDS -

You can configure a Lambda function to connect to private subnets in a virtual private cloud (VPC) in your account. Use Amazon Virtual Private Cloud (Amazon VPC) to create a private network for resources such as databases, cache instances, or internal services. Connect your Lambda function to the VPC to access private resources during execution. When you connect a function to a VPC, Lambda creates an elastic network interface for each combination of security group and subnet in your function’s VPC configuration. This is the right way to give Lambda access to RDS.
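A minimal boto3 sketch of attaching an existing function to the VPC (Python; the function name, subnet ID, and security group ID are placeholders):

    import boto3

    lam = boto3.client("lambda")

    # Place the function in the private subnet and give it a security group
    # that the RDS instance's security group allows on the database port
    lam.update_function_configuration(
        FunctionName="my-java-app",  # placeholder
        VpcConfig={
            "SubnetIds": ["subnet-0abc1234567890def"],
            "SecurityGroupIds": ["sg-0abc1234567890def"],
        },
    )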

160
Q

A development team has deployed a REST API in Amazon API Gateway to two different stages - a test stage and a prod stage. The test stage is used as a test build and the prod stage as a stable build. After the updates have passed the test, the team wishes to promote the test stage to the prod stage.

Which of the following represents the optimal solution for this use-case?


API performance is optimized in a different way for prod environments. Hence, promoting test to prod is not correct. The promotion should be done by redeploying the API to the prod stage

Deploy the API without choosing a stage. This way, the working deployment will be updated in all stages

Delete the existing prod stage. Create a new stage with the same name (prod) and deploy the tested version on this stage

Update stage variable value from the stage name of test to that of prod

A

Update stage variable value from the stage name of test to that of prod

After creating your API, you must deploy it to make it callable by your users. To deploy an API, you create an API deployment and associate it with a stage. A stage is a logical reference to a lifecycle state of your API (for example, dev, prod, beta, v2). API stages are identified by the API ID and stage name. They’re included in the URL that you use to invoke the API. Each stage is a named reference to a deployment of the API and is made available for client applications to call.
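A minimal boto3 sketch of updating a stage variable on the prod stage (Python; the API ID and the variable name lambdaAlias are placeholders):

    import boto3

    apigw = boto3.client("apigateway")

    # Repoint the prod stage's variable from the test value to the prod value
    apigw.update_stage(
        restApiId="a1b2c3d4e5",  # placeholder API ID
        stageName="prod",
        patchOperations=[
            {"op": "replace", "path": "/variables/lambdaAlias", "value": "prod"}
        ],
    )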

161
Q

You have a workflow process that pulls code from AWS CodeCommit and deploys to EC2 instances associated with tag group ProdBuilders. You would like to configure the instances to archive no more than two application revisions to conserve disk space.

Which of the following will allow you to implement this?

​
Have a load balancer in front of your instances
​
Integrate with AWS CodePipeline
​
CodeDeploy Agent
​
AWS CloudWatch Log Agent
A

Correct option:

“CodeDeploy Agent”

The CodeDeploy agent is a software package that, when installed and configured on an instance, makes it possible for that instance to be used in CodeDeploy deployments. The CodeDeploy agent archives revisions and log files on instances. The CodeDeploy agent cleans up these artifacts to conserve disk space. You can use the :max_revisions: option in the agent configuration file to specify the number of application revisions to archive, by entering any positive integer (for example, :max_revisions: 2 keeps only the two most recent revisions). CodeDeploy also archives the log files for those revisions. All others are deleted, except for the log file of the last successful deployment.

162
Q

A company follows collaborative development practices. The engineering manager wants to isolate the development effort by setting up simulations of API components owned by various development teams.

Which API integration type is best suited for this requirement?

AWS_PROXY
​
MOCK
​
HTTP
​
HTTP_PROXY
A

MOCK

This type of integration lets API Gateway return a response without sending the request further to the backend. This is useful for API testing because it can be used to test the integration setup without incurring charges for using the backend and to enable collaborative development of an API.

In collaborative development, a team can isolate their development effort by setting up simulations of API components owned by other teams by using the MOCK integrations. It is also used to return CORS-related headers to ensure that the API method permits CORS access. In fact, the API Gateway console integrates the OPTIONS method to support CORS with a mock integration.
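A minimal boto3 sketch of configuring a MOCK integration on a method (Python; the API, resource, and method identifiers are placeholders):

    import boto3

    apigw = boto3.client("apigateway")

    # A MOCK integration returns a response without calling any backend
    apigw.put_integration(
        restApiId="a1b2c3d4e5",  # placeholder
        resourceId="xyz789",     # placeholder
        httpMethod="GET",
        type="MOCK",
        requestTemplates={"application/json": '{"statusCode": 200}'},
    )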

163
Q

A financial services company wants to ensure that the customer data is always kept encrypted on Amazon S3 but wants a fully managed solution to create, rotate and remove the encryption keys.

As a Developer Associate, which of the following would you recommend to address the given use-case?


Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS)

Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)

Server-Side Encryption with Secrets Manager

Server-Side Encryption with Customer-Provided Keys (SSE-C)

A

Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS)

You have the following options for protecting data at rest in Amazon S3:

Server-Side Encryption – Request Amazon S3 to encrypt your object before saving it on disks in its data centers and then decrypt it when you download the objects.

Client-Side Encryption – Encrypt data client-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.

When you use server-side encryption with AWS KMS (SSE-KMS), you can use the default AWS managed CMK, or you can specify a customer-managed CMK that you have already created.

Creating your own customer-managed CMK gives you more flexibility and control over the CMK. For example, you can create, rotate, and disable customer-managed CMKs. You can also define access controls and audit the customer-managed CMKs that you use to protect your data.
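A minimal boto3 sketch of uploading an object with SSE-KMS using a customer-managed CMK (Python; the bucket, key, and CMK ARN are placeholders):

    import boto3

    s3 = boto3.client("s3")

    s3.put_object(
        Bucket="customer-data-bucket",  # placeholder
        Key="records/customer-1.json",
        Body=b'{"name": "example"}',
        ServerSideEncryption="aws:kms",
        # Customer-managed CMK: you control creation, rotation, and disabling
        SSEKMSKeyId="arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    )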

164
Q

A company has a workload that requires 14,000 consistent IOPS for data that must be durable and secure. The compliance standards of the company state that the data should be secure at every stage of its lifecycle on all of the EBS volumes they use.

Which of the following statements are true regarding data security on EBS?

EBS volumes support in-flight encryption but does not support encryption at rest

EBS volumes do not support in-flight encryption but do support encryption at rest using KMS

EBS volumes support both in-flight encryption and encryption at rest using KMS

EBS volumes don’t support any encryption

A

EBS volumes support both in-flight encryption and encryption at rest using KMS

165
Q

A leading financial services company offers data aggregation services for Wall Street trading firms. The company bills its clients based on per unit of clickstream data provided to the clients. As the company operates in a regulated industry, it needs to have the same ordered clickstream data available for auditing within a window of 7 days.

As a Developer Associate, which of the following AWS services do you think provides the ability to run the billing process and auditing process on the given clickstream data in the same order?


AWS Kinesis Data Streams

Amazon SQS

AWS Kinesis Data Firehose

​AWS Kinesis Data Analytics

A

AWS Kinesis Data Streams

Kinesis Data Streams stores records for 24 hours by default. You can raise this limit to up to 7 days by enabling extended data retention, or up to 365 days by enabling long-term data retention. Because each shard preserves record ordering and multiple consumers can read the same records, the billing process and the auditing process can both consume the clickstream data in the same order.
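A minimal boto3 sketch of extending the retention window to 7 days (Python; the stream name is a placeholder):

    import boto3

    kinesis = boto3.client("kinesis")

    # Extended data retention: keep records for 7 days (168 hours)
    kinesis.increase_stream_retention_period(
        StreamName="clickstream",  # placeholder
        RetentionPeriodHours=168,
    )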

166
Q

To meet compliance guidelines, a company needs to ensure replication of any data stored in its S3 buckets.

Which of the following characteristics are correct while configuring an S3 bucket for replication? (Select two)


Object tags cannot be replicated across AWS Regions using Cross-Region Replication

Once replication is enabled on a bucket, all old and new objects will be replicated

Same-Region Replication (SRR) and Cross-Region Replication (CRR) can be configured at the S3 bucket level, a shared prefix level, or an object level using S3 object tags

S3 lifecycle actions are not replicated with S3 replication

Replicated objects do not retain metadata

A

Same-Region Replication (SRR) and Cross-Region Replication (CRR) can be configured at the S3 bucket level, a shared prefix level, or an object level using S3 object tags -
Amazon S3 Replication (CRR and SRR) is configured at the S3 bucket level, a shared prefix level, or an object level using S3 object tags. You add a replication configuration on your source bucket by specifying a destination bucket in the same or different AWS region for replication.

S3 lifecycle actions are not replicated with S3 replication -
With S3 Replication (CRR and SRR), you can establish replication rules to make copies of your objects into another storage class, in the same or a different region. Lifecycle actions are not replicated, and if you want the same lifecycle configuration applied to both source and destination buckets, enable the same lifecycle configuration on both.
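A minimal boto3 sketch of a bucket-level replication rule (Python; the bucket names and IAM role ARN are placeholders, and versioning must already be enabled on both buckets):

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_replication(
        Bucket="finance-source-bucket",  # placeholder
        ReplicationConfiguration={
            "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
            "Rules": [
                {
                    "ID": "ReplicateEverything",
                    "Status": "Enabled",
                    "Priority": 1,
                    "Filter": {"Prefix": ""},  # empty prefix = whole bucket
                    "DeleteMarkerReplication": {"Status": "Disabled"},
                    "Destination": {"Bucket": "arn:aws:s3:::finance-replica-bucket"},
                }
            ],
        },
    )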

167
Q

You are a developer working with the AWS CLI to create Lambda functions that contain environment variables. Your functions will require over 50 environment variables consisting of sensitive information of database table names.

What is the total set size/number of environment variables you can create for AWS Lambda?


The total size of all environment variables shouldn’t exceed 8 KB. There is no limit on the number of variables

The total size of all environment variables shouldn’t exceed 8 KB. The maximum number of variables that can be created is 50

The total size of all environment variables shouldn’t exceed 4 KB. There is no limit on the number of variables

The total size of all environment variables shouldn’t exceed 4 KB. The maximum number of variables that can be created is 35

A

The total size of all environment variables shouldn’t exceed 4 KB. There is no limit on the number of variables

168
Q

An IT company has migrated to a serverless application stack on the AWS Cloud with the compute layer being implemented via Lambda functions. The engineering managers would like to actively troubleshoot any failures in the Lambda functions.

As a Developer Associate, which of the following solutions would you suggest for this use-case?


Use CloudWatch Events to identify and notify any failures in the Lambda code

Use CodeCommit to identify and notify any failures in the Lambda code

The developers should insert logging statements in the Lambda function code which are then available via CloudWatch logs

Use CodeDeploy to identify and notify any failures in the Lambda code

A

“The developers should insert logging statements in the Lambda function code which are then available via CloudWatch logs”

When you invoke a Lambda function, two types of error can occur. Invocation errors occur when the invocation request is rejected before your function receives it. Function errors occur when your function’s code or runtime returns an error. Depending on the type of error, the type of invocation, and the client or service that invokes the function, the retry behavior, and the strategy for managing errors varies.
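A minimal sketch of such logging statements in a Python Lambda handler (the names below are illustrative; everything logged here lands in the function's CloudWatch log group):

    import json
    import logging

    logger = logging.getLogger()
    logger.setLevel(logging.INFO)

    def process(event):
        # placeholder business logic
        return {"ok": True}

    def handler(event, context):
        logger.info("received event: %s", json.dumps(event))
        try:
            result = process(event)
            logger.info("result: %s", result)
            return result
        except Exception:
            # Log the stack trace before re-raising so the failure is traceable
            logger.exception("processing failed")
            raise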

169
Q

You work for a travel company that books accommodation for customers. The company has decided to release a new feature that will allow customers to book accommodation in real-time through their API. As their developer, you have been asked to deploy this new feature. How will you test this new API feature with minimal impact to customers?

A. Create a stage and inform your pilot customers to change their endpoint.
B. Create a stage and inform your pilot customers to change their endpoint and attach a usage plan.
C. Create a stage and enable canary release.
D. Create a stage and promote a canary release.

A

Option C is CORRECT as enabling a canary release allows the developer to route a percentage of traffic to the new API version, so only a subset of customers is exposed to it at any given time. A canary release is part of a stage in the API Gateway console.

Option D is incorrect because promoting a canary release makes it the base stage deployment, leaving no delta between the canary and the base stage. If this option is selected, all customers will be affected.
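A minimal boto3 sketch of deploying with canary settings (Python; the API ID is a placeholder):

    import boto3

    apigw = boto3.client("apigateway")

    # Send 10% of prod traffic to the new deployment as a canary
    apigw.create_deployment(
        restApiId="a1b2c3d4e5",  # placeholder
        stageName="prod",
        canarySettings={"percentTraffic": 10.0},
    )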

170
Q

You are a lead developer for an application that uses WebSockets through API Gateway to push payloads between the clients and server. Your API has a proxy integration with Lambda. When the client connects for the first time, it receives a preflight error message. Which steps will you take to resolve this issue?

A. Enable CORS using the API Gateway console.
B. Setup the OPTIONS method and set up the required OPTIONS response headers in API Gateway.
C. Make changes to your backend to return “Access-Control-Allow-Headers” and “Access-Control-Allow-Origin” headers.
D. A and B.
E. A, B and C.

A

Option E is CORRECT. When using a proxy integration with Lambda, it is necessary to add the “Access-Control-Allow-Headers” and “Access-Control-Allow-Origin” headers to the response in the Lambda function itself, as a proxy integration does not return an integration response. All three steps are needed when enabling CORS for an API with a Lambda proxy integration.
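A minimal sketch of a Lambda proxy handler returning the CORS headers itself (Python; with a proxy integration, API Gateway passes this response through unchanged):

    def handler(event, context):
        # With a Lambda proxy integration, the function must supply CORS headers
        return {
            "statusCode": 200,
            "headers": {
                "Access-Control-Allow-Origin": "*",
                "Access-Control-Allow-Headers": "Content-Type,Authorization",
            },
            "body": '{"message": "connected"}',
        }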

171
Q

You are an API developer for a large corporation and have been asked to investigate a public API’s latency problem in API Gateway. Upon investigation, you realize that all clients making requests to the API are invalidating the cache using the Cache-Control header, which has caused this latency. How will you resolve the latency problem with the least interruption to any services?

A. Flush entire cache.
B. Disable API caching.
C. Attach an InvalidateCache policy to the resource.
D. Check Require Authorization box.

A

Option D is CORRECT. Checking the Require Authorization box ensures that only clients authorized via IAM for the execute-api:InvalidateCache action can invalidate cache entries; Cache-Control headers from unauthorized clients are then handled according to the configured strategy instead of bypassing the cache, which resolves the latency problem with no interruption to the service.

https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-caching.html

172
Q

You are a developer at a company that has built a serverless application that allows users to get NBA stats. The application consists of three different levels of subscription Free, Premium and Enterprise, implemented using stages in API Gateway. The Free level allows developers to access stats up to 5 games per player, and premium and enterprise get full access to the database. Your manager has asked you to limit the free level to 100 concurrent requests at any given point in time. How can this be achieved?

A. Under usage plan for the stage change Burst to 50 and Rate to 50.
B. Under usage plan for the stage change Burst to 100 and Rate to 100.
C. Under usage plan for the stage change Burst to 50 and Rate to 100.
D. All of the above.

A

Option B is CORRECT as setting Burst to 100 and Rate to 100 allows 100 concurrent requests to be made and processed at any given point in time.

Option A is incorrect as setting Burst to 50 and Rate to 50 means only 50 concurrent requests can be made and processed at any given point in time.

Option C is incorrect as setting Burst to 50 and Rate to 100 allows only 50 concurrent requests to be made, even though 100 per second can be processed. Requests beyond the burst receive a 429 (Too Many Requests) error and must be retried after an interval.

173
Q

You are an API developer for a large manufacturing company. You have developed an API resource that adds new products to the distributor’s inventory using a POST HTTP request. It includes an Origin header and accepts application/x-www-form-encoded as request content type. Which response header will allow access to this resource from another origin?

A. Access-Control-Request-Origin
B. Access-Control-Request-Method
C. Access-Control-Request-Headers

A

Option A is CORRECT as the POST request satisfies the conditions for a simple cross-origin request. Allowing the Access-Control-Request-Origin header therefore makes the resource accessible from other origins.

174
Q

A developer is configuring an Application Load Balancer (ALB) to direct traffic to the application’s EC2 instances and Lambda functions.

Which of the following characteristics of the ALB can be identified as correct? (Select two)


An ALB has three possible target types: Hostname, IP and Lambda

You cannot specify publicly routable IP addresses to an ALB

If you specify targets using an instance ID, traffic is routed to instances using any private IP address from one or more network interfaces

An ALB has three possible target types: Instance, IP and Lambda

A

An ALB has three possible target types: Instance, IP and Lambda

You cannot specify publicly routable IP addresses to an ALB

When the target type is IP, you can specify IP addresses from specific CIDR blocks only. You can’t specify publicly routable IP addresses.

175
Q

A company wants to add geospatial capabilities to the cache layer, along with query capabilities and an ability to horizontally scale. The company uses Amazon RDS as the database tier.

Which solution is optimal for this use-case?


Leverage the capabilities offered by ElastiCache for Redis with cluster mode enabled


Use CloudFront caching to cater to demands of increasing workloads

Migrate to Amazon DynamoDB to utilize the automatically integrated DynamoDB Accelerator (DAX) along with query capability features

Leverage the capabilities offered by ElastiCache for Redis with cluster mode disabled

A

Leverage the capabilities offered by ElastiCache for Redis with cluster mode enabled

You can leverage ElastiCache for Redis with cluster mode enabled to enhance reliability and availability with little change to your existing workload. Cluster Mode comes with the primary benefit of horizontal scaling up and down of your Redis cluster, with almost zero impact on the performance of the cluster.

Enabling Cluster Mode provides a number of additional benefits in scaling your cluster. In short, it allows you to scale in or out the number of shards (horizontal scaling) versus scaling up or down the node type (vertical scaling). This means that Cluster Mode can scale to very large amounts of storage (potentially 100s of terabytes) across up to 90 shards, whereas a single node can only store as much data in memory as the instance type has capacity for.

Cluster Mode also allows for more flexibility when designing new workloads with unknown storage requirements or heavy write activity. In a read-heavy workload, one can scale a single shard by adding read replicas, up to five, but a write-heavy workload can benefit from additional write endpoints when cluster mode is enabled.

176
Q

A development team is considering Amazon ElastiCache for Redis as its in-memory caching solution for its relational database.

Which of the following options are correct while configuring ElastiCache? (Select two)


If you have no replicas and a node fails, you experience no loss of data when using Redis with cluster mode enabled

All the nodes in a Redis cluster must reside in the same region

While using Redis with cluster mode enabled, you cannot manually promote any of the replica nodes to primary

While using Redis with cluster mode enabled, asynchronous replication mechanisms are used to keep the read replicas synchronized with the primary. If cluster mode is disabled, the replication mechanism is done synchronously

A

All the nodes in a Redis cluster must reside in the same region

While using Redis with cluster mode enabled, you cannot manually promote any of the replica nodes to primary

While using Redis with cluster mode enabled, there are some limitations:

  • You cannot manually promote any of the replica nodes to primary.
  • Multi-AZ is required.
  • You can only change the structure of a cluster, the node type, and the number of nodes by restoring from a backup.

177
Q

You have uploaded a zip file to AWS Lambda that contains code files written in Node.js. When your function is executed, you receive the following output: ‘Error: Memory Size: 10,240 MB Max Memory Used’.

Which of the following explains the problem?


Your Lambda function ran out of RAM

The uncompressed zip file exceeds AWS Lambda limits

You have uploaded a zip file larger than 50 MB to AWS Lambda

Your zip file is corrupt

A

Your Lambda function ran out of RAM

The maximum amount of memory available to the Lambda function at runtime is 10,240 MB. Your Lambda function was deployed with 10,240 MB of RAM, but it seems your code requested or used more than that, so the Lambda function failed.

178
Q

A cybersecurity company is publishing critical log data to a log group in Amazon CloudWatch Logs, which was created 3 months ago. The company must encrypt the log data using an AWS KMS customer master key (CMK), so any future data can be encrypted to meet the company’s security guidelines.

How can the company address this use-case?


Enable the encrypt feature on the log group via the CloudWatch Logs console

Use the AWS CLI create-log-group command and specify the KMS key ARN

Use the AWS CLI associate-kms-key command and specify the KMS key ARN

Use the AWS CLI describe-log-groups command and specify the KMS key ARN

A

Use the AWS CLI associate-kms-key command and specify the KMS key ARN

Log group data is always encrypted in CloudWatch Logs. You can optionally use AWS Key Management Service (AWS KMS) for this encryption. If you do, the encryption is done using an AWS KMS customer master key (CMK). Encryption using AWS KMS is enabled at the log group level, by associating a CMK with a log group, either when you create the log group or after it exists.

After you associate a CMK with a log group, all newly ingested data for the log group is encrypted using the CMK. This data is stored in an encrypted format throughout its retention period. CloudWatch Logs decrypts this data whenever it is requested. CloudWatch Logs must have permissions for the CMK whenever encrypted data is requested.

To associate the CMK with an existing log group, you can use the associate-kms-key command.
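A minimal boto3 sketch of the same call (Python; the log group name and key ARN are placeholders):

    import boto3

    logs = boto3.client("logs")

    # All newly ingested data in this existing log group is encrypted with the CMK
    logs.associate_kms_key(
        logGroupName="/critical/audit-logs",  # placeholder
        kmsKeyId="arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    )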

179
Q

A developer is creating access credentials for an Amazon EC2 instance that hosts the web application using AWS SDK for Java.

If the default credentials provider chain is used on the instance, which parameter will be checked first for the required credentials?


The system environment variables: AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are checked first

Parameters aws.accessKeyId and aws.secretKey will be checked in the Java system properties

Credentials delivered through the Amazon EC2 container service

Instance profile credentials, which exist within the instance metadata associated with the IAM role for the EC2 instance

A

The default credentials provider chain checks for credentials in the following order:

1. In the Java system properties: aws.accessKeyId and aws.secretKey.

2. In system environment variables: AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.

3. In the default credentials file (the location of this file varies by platform).

4. Credentials delivered through the Amazon EC2 container service, if the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI environment variable is set and the security manager has permission to access the variable.

5. In the instance profile credentials, which exist within the instance metadata associated with the IAM role for the EC2 instance.

6. Web Identity Token credentials from the environment or container.

180
Q

Your company leverages Amazon CloudFront to provide content via the internet to customers with low latency. Aside from latency, security is another concern and you are looking for help in enforcing end-to-end connections using HTTPS so that content is protected.

Which of the following options is available for HTTPS in AWS CloudFront?

Between clients and CloudFront as well as between CloudFront and backend

Neither between clients and CloudFront nor between CloudFront and backend

Between clients and CloudFront only

Between CloudFront and backend only

A

Between clients and CloudFront as well as between CloudFront and backend

For web distributions, you can configure CloudFront to require that viewers use HTTPS to request your objects, so connections are encrypted when CloudFront communicates with viewers.

You also can configure CloudFront to use HTTPS to get objects from your origin, so connections are encrypted when CloudFront communicates with your origin.

181
Q

You work as a developer doing contract work for the government on AWS GovCloud. Your applications use Amazon Simple Queue Service (SQS) as their message queue service. Due to recent hacking attempts, security measures have become stricter and require you to store data in encrypted queues.

Which of the following steps can you take to meet your requirements without making changes to the existing code?


Use Client side encryption

Use the SSL endpoint

Use Secrets Manager

Enable SQS KMS encryption

A

Enable SQS KMS encryption

Server-side encryption (SSE) lets you transmit sensitive data in encrypted queues. SSE protects the contents of messages in queues using keys managed in AWS Key Management Service (AWS KMS).

AWS KMS combines secure, highly available hardware and software to provide a key management system scaled for the cloud. When you use Amazon SQS with AWS KMS, the data keys that encrypt your message data are also encrypted and stored with the data they protect.

You can choose to have SQS encrypt messages stored in both Standard and FIFO queues using an encryption key provided by AWS Key Management Service (KMS).
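A minimal boto3 sketch of enabling KMS encryption on an existing queue, which requires no application code changes (Python; the queue URL is a placeholder):

    import boto3

    sqs = boto3.client("sqs")

    # Producers and consumers keep working unchanged; SQS encrypts at rest with KMS
    sqs.set_queue_attributes(
        QueueUrl="https://sqs.us-gov-west-1.amazonaws.com/111122223333/secure-queue",  # placeholder
        Attributes={"KmsMasterKeyId": "alias/aws/sqs"},  # AWS managed CMK for SQS
    )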

182
Q

The development team at a company wants to insert vendor records into an Amazon DynamoDB table as soon as the vendor uploads a new file into an Amazon S3 bucket.

As a Developer Associate, which set of steps would you recommend to achieve this?

Set up an event with Amazon CloudWatch Events that will monitor the S3 bucket and then insert the records into DynamoDB

Develop a Lambda function that will poll the S3 bucket and then insert the records into DynamoDB

Write a cron job that will execute a Lambda function at a scheduled time and insert the records into DynamoDB

Create an S3 event to invoke a Lambda function that inserts records into DynamoDB

A

Create an S3 event to invoke a Lambda function that inserts records into DynamoDB

The Amazon S3 notification feature enables you to receive notifications when certain events happen in your bucket. To enable notifications, you must first add a notification configuration that identifies the events you want Amazon S3 to publish and the destinations where you want Amazon S3 to send the notifications. You store this configuration in the notification subresource that is associated with a bucket.
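A minimal boto3 sketch of wiring the bucket notification to the function (Python; the names and ARNs are placeholders, and the Lambda function must already grant S3 permission to invoke it):

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_notification_configuration(
        Bucket="vendor-uploads",  # placeholder
        NotificationConfiguration={
            "LambdaFunctionConfigurations": [
                {
                    "LambdaFunctionArn": "arn:aws:lambda:us-east-1:111122223333:function:insert-vendor-records",
                    "Events": ["s3:ObjectCreated:*"],  # fire on every new upload
                }
            ]
        },
    )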

183
Q

A .NET developer team works with many ASP.NET web applications that use EC2 instances to host them on IIS. The deployment process needs to be configured so that multiple versions of the application can run in AWS Elastic Beanstalk: one version for development and testing, and another version for load testing.

Which of the following methods do you recommend?


Define a dev environment with a single instance and a ‘load test’ environment that has settings close to production environment

Use only one Beanstalk environment and perform configuration changes using an Ansible script

You cannot have multiple development environments in Elastic Beanstalk, just one development and one production environment

Create an Application Load Balancer to route based on hostname so you can pass on parameters to the development Elastic Beanstalk environment. Create a file in .ebextensions/ to know how to handle the traffic coming from the ALB

A

Define a dev environment with a single instance and a ‘load test’ environment that has settings close to production environment

AWS Elastic Beanstalk makes it easy to create new environments for your application. You can create and manage separate environments for development, testing, and production use, and you can deploy any version of your application to any environment. Environments can be long-running or temporary. When you terminate an environment, you can save its configuration to recreate it later.

It is common practice to have many environments for the same application. You can deploy multiple environments when you need to run multiple versions of an application. So for the given use-case, you can set up ‘dev’ and ‘load test’ environment.

184
Q

You are working on a project that has over 100 dependencies. Every time your AWS CodeBuild runs a build step it has to resolve Java dependencies from external Ivy repositories which take a long time. Your manager wants to speed this process up in AWS CodeBuild.

Which of the following will help you do this with minimal effort?

Ship all the dependencies as part of the source code

Cache dependencies on S3

Use Instance Store type of EC2 instances to facilitate internal dependency cache

Reduce the number of dependencies

A

Cache dependencies on S3

AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and scale your build servers.

Downloading dependencies is a critical phase in the build process. These dependent files can range in size from a few KBs to multiple MBs. Because most of the dependent files do not change frequently between builds, you can noticeably reduce your build time by caching dependencies in S3.
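A minimal boto3 sketch of pointing an existing build project at an S3 cache location (Python; the project name and bucket are placeholders):

    import boto3

    cb = boto3.client("codebuild")

    # Dependencies resolved in one build are reused by later builds from S3
    cb.update_project(
        name="ivy-java-build",  # placeholder
        cache={"type": "S3", "location": "my-codebuild-cache-bucket/ivy-cache"},
    )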

185
Q

A development team had enabled and configured CloudTrail for all the Amazon S3 buckets used in a project. The project manager owns all the S3 buckets used in the project. However, the manager noticed that he did not receive any object-level API access logs when the data was read by another AWS account.

What could be the reason for this behavior/error?


The bucket owner also needs to be object owner to get the object access logs

CloudTrail needs to be configured on both the AWS accounts for receiving the access logs in cross-account access

The meta-data of the bucket is in an invalid state and needs to be corrected by the bucket owner from AWS console to fix the issue

CloudTrail always delivers object-level API access logs to the requester and not to object owner

A

The bucket owner also needs to be object owner to get the object access logs

If the bucket owner is also the object owner, the bucket owner gets the object access logs. Otherwise, the bucket owner must be granted permission, through the object ACL, for the same object API in order to receive the same object-access API logs.

186
Q

An organization recently began using AWS CodeCommit for its source control service. A compliance security team visiting the organization was auditing the software development process and noticed developers issuing many git push commands from their development machines. The compliance team requires that encryption be used for this activity.

How can the organization ensure source code is encrypted in transit and at rest?


Use AWS Lambda as a hook to encrypt the pushed code

Enable KMS encryption

Use a git command line hook to encrypt the code client side

Repositories are automatically encrypted at rest

A

Repositories are automatically encrypted at rest

Data in AWS CodeCommit repositories is encrypted in transit and at rest automatically: git push and other connections use HTTPS or SSH, and the repository contents are encrypted at rest with AWS KMS, so no additional configuration is needed.

187
Q

You are getting ready for an event to show off your Alexa skill written in JavaScript. As you are testing your voice activation commands you find that some intents are not invoking as they should and you are struggling to figure out what is happening. You included the following code console.log(JSON.stringify(this.event)) in hopes of getting more details about the request to your Alexa skill.

You would like the logs stored in an Amazon Simple Storage Service (S3) bucket named MyAlexaLog. How do you achieve this?


Use CloudWatch integration feature with Lambda

Use CloudWatch integration feature with Glue

Use CloudWatch integration feature with S3

Use CloudWatch integration feature with Kinesis

A

Use CloudWatch integration feature with S3

You can export log data from your CloudWatch log groups to an Amazon S3 bucket and use this data in custom processing and analysis, or to load onto other systems.
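A minimal boto3 sketch of exporting a log group to the bucket (Python; the log group name is a placeholder, the bucket name is lowercased as S3 requires, and the bucket policy must allow CloudWatch Logs to write):

    import boto3
    import time

    logs = boto3.client("logs")

    logs.create_export_task(
        logGroupName="/aws/lambda/my-alexa-skill",  # placeholder
        fromTime=0,                                 # start of epoch, in milliseconds
        to=int(time.time() * 1000),                 # now, in milliseconds
        destination="myalexalog",                   # target S3 bucket
        destinationPrefix="alexa-logs",
    )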

188
Q

Your Lambda function must use the Node.js drivers to connect to your RDS PostgreSQL database in your VPC.

How do you bundle your Lambda function to add the dependencies?


Zip the function as-is with a package.json file so that AWS Lambda can resolve the dependencies for you

Zip the function and the dependencies separately and upload them in AWS Lambda as two parts

Upload the code through the AWS console and upload the dependencies as a zip

Put the function and the dependencies in one folder and zip them together

A

Put the function and the dependencies in one folder and zip them together

A Lambda zip deployment package must contain your function code together with all of the dependencies it needs (for example, the node_modules folder containing the database drivers), so you put the function and the dependencies in one folder and zip them together.

189
Q

You would like your Elastic Beanstalk environment to expose an HTTPS endpoint and an HTTP endpoint. The HTTPS endpoint should be used to get in-flight encryption between your clients and your web servers, while the HTTP endpoint should only be used to redirect traffic to HTTPS and support URLs starting with http://.

What must be done to configure this setup? (Select three)


Configure your EC2 instances to redirect HTTPS traffic to HTTP

​Open up port 80 & port 443

Configure your EC2 instances to redirect HTTP traffic to HTTPS

Assign an SSL certificate to the Load Balancer

Only open up port 443

Only open up port 80

A

​Open up port 80 & port 443

Configure your EC2 instances to redirect HTTP traffic to HTTPS

Assign an SSL certificate to the Load Balancer

190
Q

You would like to paginate the results of an S3 List to show 100 results per page to your users and minimize the number of API calls that you will use.

Which CLI options should you use? (Select two)

  • --starting-token
  • --page-size
  • --next-token
  • --limit
  • --max-items
A
  • --starting-token
  • --max-items

--max-items limits how many results are returned in the output (100 per page here), and --starting-token resumes the listing from where the previous page stopped, letting the CLI fetch each page in as few API calls as possible.
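A minimal boto3 sketch of the same pagination controls (Python; the bucket name is a placeholder):

    import boto3

    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")

    # Return at most 100 keys; the result carries a NextToken that can be
    # passed back as StartingToken to resume on the next page
    result = paginator.paginate(
        Bucket="my-bucket",  # placeholder
        PaginationConfig={"MaxItems": 100},
    ).build_full_result()

    for obj in result.get("Contents", []):
        print(obj["Key"])
    next_token = result.get("NextToken")  # use as StartingToken for page 2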

191
Q

You have created a test environment in Elastic Beanstalk and as part of that environment, you have created an RDS database.

How can you make sure the database can be explored after the environment is destroyed?

Make a selective delete in Elastic Beanstalk

Change the Elastic Beanstalk environment variables

Make a snapshot of the database before it gets deleted

Convert the Elastic Beanstalk environment to a worker environment

A

Make a snapshot of the database before it gets deleted

192
Q

You would like to run the X-Ray daemon for your Docker containers deployed using AWS Fargate.

What do you need to do to ensure the setup will work? (Select two)


Provide the correct IAM instance role to the EC2 instance


Deploy the X-Ray daemon agent as a process on your EC2 instance

Deploy the X-Ray daemon agent as a daemon set on ECS

​Provide the correct IAM task role to the X-Ray container

Deploy the X-Ray daemon agent as a sidecar container

A

​Provide the correct IAM task role to the X-Ray container

Deploy the X-Ray daemon agent as a sidecar container
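A minimal boto3 sketch of a Fargate task definition with the X-Ray daemon as a sidecar (Python; the names, images, and role ARNs are placeholders; the task role needs permission to upload trace segments):

    import boto3

    ecs = boto3.client("ecs")

    ecs.register_task_definition(
        family="app-with-xray",  # placeholder
        requiresCompatibilities=["FARGATE"],
        networkMode="awsvpc",
        cpu="512",
        memory="1024",
        taskRoleArn="arn:aws:iam::111122223333:role/xray-task-role",      # placeholder
        executionRoleArn="arn:aws:iam::111122223333:role/ecs-exec-role",  # placeholder
        containerDefinitions=[
            {"name": "app", "image": "myrepo/myapp:latest", "essential": True},
            {
                # X-Ray daemon running as a sidecar next to the app container
                "name": "xray-daemon",
                "image": "amazon/aws-xray-daemon",
                "essential": True,
                "portMappings": [{"containerPort": 2000, "protocol": "udp"}],
            },
        ],
    )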

193
Q

Your organization has set up a full CI/CD pipeline leveraging CodePipeline and the deployment is done on Elastic Beanstalk. This pipeline has worked for over a year now but you are approaching the limits of Elastic Beanstalk in terms of how many versions can be stored in the service.

How can you remove older versions that are not used by Elastic Beanstalk so that new versions can be created for your applications?


Define a Lambda function

Use a Lifecycle Policy

Use Worker Environments

Setup an .ebextensions file

A

Use a Lifecycle Policy

Each time you upload a new version of your application with the Elastic Beanstalk console or the EB CLI, Elastic Beanstalk creates an application version. If you don’t delete versions that you no longer use, you will eventually reach the application version limit and be unable to create new versions of that application.

You can avoid hitting the limit by applying an application version lifecycle policy to your applications. A lifecycle policy tells Elastic Beanstalk to delete old application versions or to delete application versions when the total number of versions for an application exceeds a specified number.

194
Q

A company has recently launched a new gaming application that the users are adopting rapidly. The company uses RDS MySQL as the database. The development team wants an urgent solution to this issue where the rapidly increasing workload might exceed the available database storage.

As a developer associate, which of the following solutions would you recommend so that it requires minimum development effort to address this requirement?


Migrate RDS MySQL database to Aurora which offers storage auto-scaling

Migrate RDS MySQL database to DynamoDB which automatically allocates storage space when required

Create read replica for RDS MySQL

Enable storage auto-scaling for RDS MySQL

A

Enable storage auto-scaling for RDS MySQL

If your workload is unpredictable, you can enable storage autoscaling for an Amazon RDS DB instance. With storage autoscaling enabled, when Amazon RDS detects that you are running out of free database space it automatically scales up your storage. Amazon RDS starts a storage modification for an autoscaling-enabled DB instance when these factors apply:

Free available space is less than 10 percent of the allocated storage.

The low-storage condition lasts at least five minutes.

At least six hours have passed since the last storage modification.

The maximum storage threshold is the limit that you set for autoscaling the DB instance. You can’t set the maximum storage threshold for autoscaling-enabled instances to a value greater than the maximum allocated storage.
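A minimal boto3 sketch of enabling storage autoscaling on the existing instance (Python; the instance identifier and ceiling are placeholders):

    import boto3

    rds = boto3.client("rds")

    # Setting a maximum storage threshold turns storage autoscaling on
    rds.modify_db_instance(
        DBInstanceIdentifier="gaming-mysql",  # placeholder
        MaxAllocatedStorage=1000,             # autoscaling ceiling, in GiB
        ApplyImmediately=True,
    )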

195
Q

Your company wants to move away from manually managing Lambda in the AWS console and wants to upload and update them using AWS CloudFormation.

How do you declare an AWS Lambda function in CloudFormation? (Select two)


Write the AWS Lambda code inline in CloudFormation in the AWS::Lambda::Function block as long as there are no third-party dependencies

Upload all the code as a zip to S3 and refer the object in AWS::Lambda::Function block

Upload all the code to CodeCommit and refer to the CodeCommit Repository in AWS::Lambda::Function block

Upload all the code as a folder to S3 and refer the folder in AWS::Lambda::Function block

Write the AWS Lambda code inline in CloudFormation in the AWS::Lambda::Function block and reference the dependencies as a zip file stored in S3

A

Write the AWS Lambda code inline in CloudFormation in the AWS::Lambda::Function block as long as there are no third-party dependencies

Upload all the code as a zip to S3 and refer the object in AWS::Lambda::Function block

196
Q

As part of your video processing application, you are looking to perform a set of repetitive and scheduled tasks asynchronously. Your application is deployed on Elastic Beanstalk.

Which Elastic Beanstalk environment should you set up for performing the repetitive tasks?


Setup a Web Server environment and a .ebextensions file

Setup a Web Server environment and a cron.yaml file

Setup a Worker environment and a cron.yaml file

Setup a Worker environment and a .ebextensions file

A

Setup a Worker environment and a cron.yaml file

A worker environment pulls tasks from an Amazon SQS queue, and a cron.yaml file placed in the source bundle defines the periodic tasks: Elastic Beanstalk schedules them and POSTs a message to your application at each configured interval, which suits repetitive, scheduled, asynchronous work.

197
Q

You are using AWS SQS FIFO queues to get the ordering of messages on a per user_id basis.

As a developer, which message parameter should you set the value of user_id to guarantee the ordering?


MessageOrderId

MessageGroupId

MessageDeduplicationId

MessageHash

A

MessageGroupId

The message group ID is the tag that specifies that a message belongs to a specific message group. Messages that belong to the same message group are always processed one by one, in a strict order relative to the message group (however, messages that belong to different message groups might be processed out of order).
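A minimal boto3 sketch of sending to a FIFO queue with the user_id as the message group (Python; the queue URL and IDs are placeholders):

    import boto3

    sqs = boto3.client("sqs")

    sqs.send_message(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/orders.fifo",  # placeholder
        MessageBody='{"action": "purchase", "symbol": "AMZN"}',
        MessageGroupId="user-42",           # ordering is guaranteed per user_id
        MessageDeduplicationId="evt-0001",  # required unless content-based dedup is enabled
    )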

198
Q

A security company is requiring all developers to perform server-side encryption with customer-provided encryption keys when performing operations in AWS S3. Developers should write software with C# using the AWS SDK and implement the requirement in the PUT, GET, Head, and Copy operations.

Which of the following encryption methods meets this requirement?

​
SSE-C
Client-Side Encryption​
SSE-KMS​
SSE-S3
A

SSE-C

For the given use-case, the company wants to manage the encryption keys via its custom application and let S3 manage the encryption, therefore you must use Server-Side Encryption with Customer-Provided Keys (SSE-C).
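A minimal sketch of an SSE-C PUT, shown in Python for brevity since the C# SDK call is analogous (the bucket, key, and encryption key are placeholders; the same SSE-C parameters apply to GET, HEAD, and COPY):

    import boto3
    import os

    s3 = boto3.client("s3")

    customer_key = os.urandom(32)  # placeholder 256-bit key that you manage yourself

    s3.put_object(
        Bucket="secure-bucket",  # placeholder
        Key="trades/report.csv",
        Body=b"col1,col2\n1,2\n",
        SSECustomerAlgorithm="AES256",
        SSECustomerKey=customer_key,  # S3 uses the key to encrypt, then discards it
    )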

199
Q

You would like to deploy a Lambda function globally so that requests are filtered at the AWS edge locations.

Which Lambda deployment mode do you need?

​
Deploy Lambda in a Global VPC​
Deploy Lambda in S3​
Use a Lambda@Edge​
Use a Global DynamoDB table as a Lambda source
A

Use a Lambda@Edge

Lambda@Edge is a feature of Amazon CloudFront that lets you run code closer to users of your application, which improves performance and reduces latency. With Lambda@Edge, you don’t have to provision or manage infrastructure in multiple locations around the world. You pay only for the compute time you consume - there is no charge when your code is not running.

200
Q

You would like to retrieve a subset of your dataset stored in S3 with the CSV format. You would like to retrieve a month of data and only 3 columns out of the 10.

You need to minimize compute and network costs for this, what should you use?

​
S3 Analytics
S3 Inventory
S3 Select
S3 Access Logs
A

S3 Select

S3 Select enables applications to retrieve only a subset of data from an object by using simple SQL expressions. By using S3 Select to retrieve only the data needed by your application, you can achieve drastic performance increases; in many cases you can get as much as a 400% improvement.
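A minimal boto3 sketch of selecting 3 columns from a CSV object (Python; the bucket, key, and column names are placeholders):

    import boto3

    s3 = boto3.client("s3")

    resp = s3.select_object_content(
        Bucket="my-dataset",  # placeholder
        Key="clicks/2023-01.csv",
        ExpressionType="SQL",
        # Only the 3 needed columns leave S3, cutting compute and network costs
        Expression="SELECT s.user_id, s.stock, s.price FROM S3Object s",
        InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
        OutputSerialization={"CSV": {}},
    )

    for event in resp["Payload"]:
        if "Records" in event:
            print(event["Records"]["Payload"].decode())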

201
Q

Your company is shifting towards Elastic Container Service (ECS) to deploy applications. The process should be automated using the AWS CLI to create a service where at least ten instances of a task definition are kept running under the default cluster.

Which of the following commands should be executed?


aws ecs create-service --service-name ecs-simple-service --task-definition ecs-demo --desired-count 10

docker-compose create ecs-simple-service

aws ecs run-task --cluster default --task-definition ecs-demo

aws ecr create-service --service-name ecs-simple-service --task-definition ecs-demo --desired-count 10

A

aws ecs create-service --service-name ecs-simple-service --task-definition ecs-demo --desired-count 10

202
Q

Which environment variable can be used by AWS X-Ray SDK to ensure that the daemon is correctly discovered on ECS?

​
AWS_XRAY_TRACING_NAME
AWS_XRAY_DEBUG_MODE​
AWS_XRAY_DAEMON_ADDRESS
AWS_XRAY_CONTEXT_MISSING
A

AWS_XRAY_DAEMON_ADDRESS

Set this environment variable to the daemon’s host and port (for example, xray-daemon:2000) so the X-Ray SDK sends trace data there instead of the default 127.0.0.1:2000.

203
Q

As an AWS Certified Developer Associate, you have been hired to consult with a company that uses the NoSQL database for mobile applications. The developers are using DynamoDB to perform operations such as GetItem but are limited in knowledge. They would like to be more efficient with retrieving some attributes rather than all.

Which of the following recommendations would you provide?

​
Use a Scan​
Use the --query parameter​
Specify a ProjectionExpression​
Use a FilterExpression
A

Specify a ProjectionExpression: A projection expression is a string that identifies the attributes you want. To retrieve a single attribute, specify its name. For multiple attributes, the names must be comma-separated.
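A minimal boto3 sketch of a GetItem that retrieves only selected attributes (Python; the table and attribute names are placeholders):

    import boto3

    ddb = boto3.client("dynamodb")

    resp = ddb.get_item(
        TableName="MobileUsers",  # placeholder
        Key={"user_id": {"S": "42"}},
        # Comma-separated attribute names; only these are returned
        ProjectionExpression="user_id, last_login, #st",
        ExpressionAttributeNames={"#st": "status"},  # 'status' is a reserved word
    )
    print(resp.get("Item"))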

204
Q

An organization with high data volume workloads has successfully moved to DynamoDB after having many issues with traditional database systems. However, a few months into production, DynamoDB tables are consistently recording high latency.

As a Developer Associate, which of the following would you suggest to reduce the latency? (Select two)


Use DynamoDB Accelerator (DAX) for businesses with heavy write-only workloads

​Use eventually consistent reads in place of strongly consistent reads whenever possible

Increase the request timeout settings, so the client gets enough time to complete the requests, thereby reducing retries on the system

Consider using Global tables if your application is accessed by globally distributed users

Reduce connection pooling, which keeps the connections alive even when user requests are not present, thereby, blocking the services

A

Consider using Global tables if your application is accessed by globally distributed users

​Use eventually consistent reads in place of strongly consistent reads whenever possible

205
Q

A multi-national company maintains separate AWS accounts for different verticals in their organization. The project manager of a team wants to migrate the Elastic Beanstalk environment from Team A’s AWS account into Team B’s AWS account. As a Developer, you have been roped in to help him in this process.

Which of the following will you suggest?


Create an export configuration from the Elastic Beanstalk console from Team A’s account. This configuration has to be shared with the IAM Role of Team B’s account. The import option of Team B’s account will show the saved configuration, that can be used to create a new Beanstalk application


Create a saved configuration in Team A’s account and configure it to Export. Now, log into Team B’s account and choose the Import option. Here, you need to specify the name of the saved configuration and allow the system to create the new application. This takes a little time based on the Regions the two accounts belong to

Create a saved configuration in Team A’s account and download it to your local machine. Make the account-specific parameter changes and upload to the S3 bucket in Team B’s account. From Elastic Beanstalk console, create an application from ‘Saved Configurations’

It is not possible to migrate Elastic Beanstalk environment from one AWS account to the other

A

Create a saved configuration in Team A’s account and download it to your local machine. Make the account-specific parameter changes and upload to the S3 bucket in Team B’s account. From Elastic Beanstalk console, create an application from ‘Saved Configurations’

206
Q

The development team at an IT company has configured an Application Load Balancer (ALB) with a Lambda function A as the target but the Lambda function A is not able to process any request from the ALB. Upon investigation, the team finds that there is another Lambda function B in the AWS account that is exceeding the concurrency limits.

How can the development team address this issue?


Use a Cloudfront Distribution instead of an Application Load Balancer (ALB) for Lambda function A

Set up reserved concurrency for the Lambda function B so that it throttles if it goes above a certain concurrency limit

Use an API Gateway instead of an Application Load Balancer (ALB) for Lambda function A

Set up provisioned concurrency for the Lambda function B so that it throttles if it goes above a certain concurrency limit

A

Set up reserved concurrency for the Lambda function B so that it throttles if it goes above a certain concurrency limit

To ensure that a function can always reach a certain level of concurrency, you can configure the function with reserved concurrency. When a function has reserved concurrency, no other function can use that concurrency.

Incorrect options:

Set up provisioned concurrency for the Lambda function B so that it throttles if it goes above a certain concurrency limit - You should use provisioned concurrency to enable your function to scale without fluctuations in latency. By allocating provisioned concurrency before an increase in invocations, you can ensure that all requests are served by initialized instances with very low latency. Provisioned concurrency is not used to limit the maximum concurrency for a given Lambda function, so this option is incorrect.
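A minimal boto3 sketch of capping function B with reserved concurrency (Python; the function name and limit are placeholders):

    import boto3

    lam = boto3.client("lambda")

    # Function B can never exceed this many concurrent executions, and the rest
    # of the account's concurrency pool stays available for other functions
    lam.put_function_concurrency(
        FunctionName="function-B",         # placeholder
        ReservedConcurrentExecutions=100,  # placeholder limit
    )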

207
Q

Your team-mate has configured an Amazon S3 event notification for an S3 bucket that holds sensitive audit data of a firm. As the Team Lead, you are receiving the SNS notifications for every event in this bucket. After validating the event data, you realized that few events are missing.

What could be the reason for this behavior and how to avoid this in the future?


If two writes are made to a single non-versioned object at the same time, it is possible that only a single event notification will be sent

Someone could have created a new notification configuration and that has overridden your existing configuration

Versioning is enabled on the S3 bucket and event notifications are getting fired for only one version

Your notification action is writing to the same bucket that triggers the notification

A

If two writes are made to a single non-versioned object at the same time, it is possible that only a single event notification will be sent

208
Q

As an AWS Certified Developer Associate, you are writing a CloudFormation template in YAML. The template consists of an EC2 instance creation and one RDS resource. Once your resources are created you would like to output the connection endpoint for the RDS database.

Which intrinsic function returns the value needed?

!GetAtt​
!Sub​
!FindInMap​
!Ref

A

!GetAtt

!GetAtt - The Fn::GetAtt intrinsic function returns the value of an attribute from a resource in the template. For example, !GetAtt myELB.DNSName (YAML) or "Fn::GetAtt" : [ "myELB", "DNSName" ] (JSON) returns a string containing the DNS name of the load balancer with the logical name myELB. For this scenario, the RDS connection endpoint can be returned with !GetAtt MyDBInstance.Endpoint.Address, where MyDBInstance is the logical name of the RDS resource.

Incorrect options:

!Sub - The intrinsic function Fn::Sub substitutes variables in an input string with values that you specify. In your templates, you can use this function to construct commands or outputs that include values that aren’t available until you create or update a stack.

!Ref - The intrinsic function Ref returns the value of the specified parameter or resource.

!FindInMap - The intrinsic function Fn::FindInMap returns the value corresponding to keys in a two-level map that is declared in the Mappings section. For example, you can use this in the Mappings section that contains a single map, RegionMap, that associates AMIs with AWS regions.

209
Q

A photo-sharing application manages its EC2 server fleet running behind an Application Load Balancer and the traffic is fronted by a CloudFront distribution. The development team wants to decouple the user authentication process for the application so that the application servers can just focus on the business logic.

As a Developer Associate, which of the following solutions would you recommend to address this use-case with minimal development effort?


Use Cognito Authentication via Cognito User Pools for your CloudFront distribution

Use Cognito Authentication via Cognito Identity Pools for your CloudFront distribution

Use Cognito Authentication via Cognito User Pools for your Application Load Balancer

Use Cognito Authentication via Cognito Identity Pools for your Application Load Balancer

A

Use Cognito Authentication via Cognito User Pools for your Application Load Balancer

Application Load Balancer can be used to securely authenticate users for accessing your applications. This enables you to offload the work of authenticating users to your load balancer so that your applications can focus on their business logic.

210
Q

A developer in your company has configured a build using AWS CodeBuild. The build fails and the developer needs to quickly troubleshoot the issue to see which commands or settings located in the BuildSpec file are causing an issue.

Which approach will help them accomplish this?


SSH into the CodeBuild Docker container
Enable detailed monitoring
Run AWS CodeBuild locally using CodeBuild Agent​
Freeze the CodeBuild during its next execution

A

Run AWS CodeBuild locally using CodeBuild Agent​

With the Local Build support for AWS CodeBuild, you just specify the location of your source code, choose your build settings, and CodeBuild runs build scripts for compiling, testing, and packaging your code. You can use the AWS CodeBuild agent to test and debug builds on a local machine.

By building an application on a local machine you can:

Test the integrity and contents of a buildspec file locally.

Test and build an application locally before committing.

Identify and fix errors quickly from your local development environment

211
Q

The development team at a retail organization wants to allow a Lambda function in its AWS Account A to access a DynamoDB table in another AWS Account B.

As a Developer Associate, which of the following solutions would you recommend for the given use-case?


Create a clone of the Lambda function in AWS Account B so that it can access the DynamoDB table in the same account


Create an IAM role in Account B with access to DynamoDB. Modify the trust policy of the execution role in Account A to allow the execution role of Lambda to assume the IAM role in Account B. Update the Lambda function code to add the AssumeRole API call


Create an IAM role in Account B with access to DynamoDB. Modify the trust policy of the role in Account B to allow the execution role of Lambda to assume this role. Update the Lambda function code to add the AssumeRole API call


Add a resource policy to the DynamoDB table in AWS Account B to give access to the Lambda function in Account A

A

Create an IAM role in Account B with access to DynamoDB. Modify the trust policy of the role in Account B to allow the execution role of Lambda to assume this role. Update the Lambda function code to add the AssumeRole API call
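A minimal boto3 sketch of the AssumeRole call inside the Lambda function in Account A (Python; the role ARN and table name are placeholders, and the role in Account B must trust the Lambda execution role):

    import boto3

    sts = boto3.client("sts")

    # The Lambda execution role in Account A assumes the DynamoDB role in Account B
    creds = sts.assume_role(
        RoleArn="arn:aws:iam::222222222222:role/dynamodb-access",  # placeholder (Account B)
        RoleSessionName="lambda-cross-account",
    )["Credentials"]

    ddb = boto3.client(
        "dynamodb",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

    # Reads now hit the table in Account B
    item = ddb.get_item(TableName="Orders", Key={"pk": {"S": "1"}})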

212
Q

An e-commerce application writes log files into Amazon S3. The application also reads these log files in parallel on a near real-time basis. The development team wants to address any data discrepancies that might arise when the application overwrites an existing log file and then tries to read that specific log file.

Which of the following options BEST describes the capabilities of Amazon S3 relevant to this scenario?


A process replaces an existing object and immediately tries to read it. Until the change is fully propagated, Amazon S3 might return the new data

A process replaces an existing object and immediately tries to read it. Amazon S3 always returns the latest version of the object

A process replaces an existing object and immediately tries to read it. Until the change is fully propagated, Amazon S3 might return the previous data

A process replaces an existing object and immediately tries to read it. Until the change is fully propagated, Amazon S3 does not return any data

A

A process replaces an existing object and immediately tries to read it. Amazon S3 always returns the latest version of the object

213
Q

Which of the following statements are correct regarding Amazon EBS encryption? (Select two)

A snapshot of an encrypted volume can be encrypted or unencrypted

Encryption by default is a Region-specific setting. If you enable it for a Region, you cannot disable it for individual volumes or snapshots in that Region

You can encrypt an existing unencrypted volume or snapshot by using AWS Key Management Service (KMS) AWS SDKs

A volume restored from an encrypted snapshot, or a copy of an encrypted snapshot is always encrypted

Encryption by default is an AZ specific setting. If you enable it for an AZ, you cannot disable it for individual volumes or snapshots in that AZ

A

A volume restored from an encrypted snapshot, or a copy of an encrypted snapshot is always encrypted

Encryption by default is a Region-specific setting. If you enable it for a Region, you cannot disable it for individual volumes or snapshots in that Region

214
Q

A development team has noticed that one of the EC2 instances has been wrongly configured with the ‘DeleteOnTermination’ attribute set to True for its root EBS volume.

As a developer associate, can you suggest a way to disable this flag while the instance is still running?

Set the DeleteOnTermination attribute to False using the command line

Set the DisableApiTermination attribute of the instance using the API

Update the attribute using AWS management console. Select the EC2 instance and then uncheck the Delete On Termination check box for the root EBS volume

The attribute cannot be updated when the instance is running. Stop the instance from Amazon EC2 console and then update the flag

A

Set the DeleteOnTermination attribute to False using the command line

Incorrect options:

Update the attribute using AWS management console. Select the EC2 instance and then uncheck the Delete On Termination check box for the root EBS volume - You can set the DeleteOnTermination attribute to False when you launch a new instance. It is not possible to update this attribute of a running instance from the AWS console.
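
As an illustration, the same change can be made with boto3; the instance ID and root device name below are placeholders:

    import boto3

    ec2 = boto3.client('ec2')
    # Turn off DeleteOnTermination on the root volume of a running instance.
    ec2.modify_instance_attribute(
        InstanceId='i-0123456789abcdef0',
        BlockDeviceMappings=[{
            'DeviceName': '/dev/xvda',  # root device name may differ per AMI
            'Ebs': {'DeleteOnTermination': False}
        }]
    )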

215
Q

An IT company has a HealthCare application with data security requirements such that the encryption key must be stored in a custom application running on-premises. The company wants to offload the data storage as well as the encryption process to Amazon S3 but continue to use the existing encryption keys.

Which of the following S3 encryption options allows the company to leverage Amazon S3 for storing data with given constraints?


Client-Side Encryption, where data encryption is done on the client side before sending the data to Amazon S3

Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS)

Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)

Server-Side Encryption with Customer-Provided Keys (SSE-C)

A

Server-Side Encryption with Customer-Provided Keys (SSE-C)

Server-Side Encryption – Request Amazon S3 to encrypt your object before saving it on disks in its data centers and then decrypt it when you download the objects.

Client-Side Encryption – Encrypt data client-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.

For the given use-case, the company wants to manage the encryption keys via its custom application and let S3 manage the encryption, therefore you must use Server-Side Encryption with Customer-Provided Keys (SSE-C).
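
A minimal SSE-C upload sketch in Python with boto3, assuming a hypothetical bucket and a stand-in for the key supplied by the on-premises key application:

    import os
    import boto3

    customer_key = os.urandom(32)  # stand-in for the on-premises 256-bit key
    s3 = boto3.client('s3')
    s3.put_object(
        Bucket='healthcare-data-bucket',
        Key='records/patient-001.json',
        Body=b'{"example": true}',
        SSECustomerAlgorithm='AES256',
        SSECustomerKey=customer_key,  # boto3 base64-encodes the key and adds the MD5 header
    )
    # The same key and algorithm must be supplied again on every GET request.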

216
Q

A retail company manages its IT infrastructure on AWS Cloud via Elastic Beanstalk. The development team at the company is planning to deploy the next version with MINIMUM application downtime and the ability to rollback quickly in case deployment goes wrong.

As a Developer Associate, which of the following options would you recommend to the development team?


Deploy the new application version using ‘Rolling’ deployment policy

Deploy the new version to a separate environment via Blue/Green Deployment, and then swap Route 53 records of the two environments to redirect traffic to the new version

Deploy the new application version using ‘Rolling with additional batch’ deployment policy

Deploy the new application version using ‘All at once’ deployment policy

A

Deploy the new version to a separate environment via Blue/Green Deployment, and then swap Route 53 records of the two environments to redirect traffic to the new version

Incorrect:
Deploy the new application version using ‘Rolling’ deployment policy - This policy avoids downtime and minimizes reduced availability, at a cost of a longer deployment time. However, the rollback process is a manual redeployment, so it is not as quick as a Blue/Green deployment.

217
Q

An investment firm wants to continuously generate time-series analytics of the stocks being purchased by its customers. The firm wants to build a live leaderboard with real-time analytics for these in-demand stocks.

Which of the following represents a fully managed solution to address this use-case?


Use Kinesis Firehose to ingest data and Amazon Athena to generate leaderboard scores and time-series analytics

Use Kinesis Firehose to ingest data and Kinesis Data Analytics to generate leaderboard scores and time-series analytics

Use Kinesis Data Streams to ingest data and the Amazon Kinesis Client Library in the application logic to generate leaderboard scores and time-series analytics

Use Kinesis Data Streams to ingest data and Kinesis Data Analytics to generate leaderboard scores and time-series analytics

A

Use Kinesis Firehose to ingest data and Kinesis Data Analytics to generate leaderboard scores and time-series analytics

Amazon Kinesis Data Firehose is the easiest way to reliably load streaming data into data lakes, data stores, and analytics services. It can capture, transform, and deliver streaming data to Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, generic HTTP endpoints, and service providers like Datadog, New Relic, MongoDB, and Splunk. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration. It can also batch, compress, transform, and encrypt your data streams before loading, minimizing the amount of storage used and increasing security.

Amazon Kinesis Data Analytics is the easiest way to transform and analyze streaming data in real-time with Apache Flink. Apache Flink is an open source framework and engine for processing data streams. Amazon Kinesis Data Analytics reduces the complexity of building, managing, and integrating Apache Flink applications with other AWS services.

218
Q

A company has deployed a new serverless Single Page Application (SPA) on AWS. The application ran smoothly in the first few weeks until it got featured on a popular television show. As it gained popularity, the number of users getting a 503 error also increased. The developer found out that this is due to the throttling of the Lambda function.

What can the developer do to troubleshoot this issue? (Select THREE.)

​
Configure reserved concurrency
​
Increase Lambda function timeout
​
Request a service quota increase
​
Deploy the Lambda function in VPC

Use exponential backoff in the application.

Use a compiled language like GoLang to improve the function’s performance

A
  • Use exponential backoff in the application
  • Configure reserved concurrency
  • Request a service quota increase
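
A minimal exponential backoff sketch in Python; the retryable call and the exception type are placeholders:

    import random
    import time

    def call_with_backoff(func, max_attempts=5):
        # Retry a throttled call, doubling the wait each attempt plus jitter.
        for attempt in range(max_attempts):
            try:
                return func()
            except Exception:  # e.g. a 503 or throttling error from the API
                if attempt == max_attempts - 1:
                    raise
                time.sleep((2 ** attempt) + random.random())
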
219
Q

A full-stack developer has developed an application written in Node.js to host an upcoming mobile game tournament. The developer has decided to deploy the application using AWS Elastic Beanstalk because of its ease-of-use. Upon experimenting, he learned that he could configure the webserver environment with several resources.

Which of the following services can the developer configure with Elastic Beanstalk? (Select THREE.)

​Amazon Athena
Amazon CloudWatch
AWS Lambda
Application Load Balancer
Amazon EC2 Instance
​Amazon CloudFront
A

Amazon CloudWatch
Application Load Balancer
Amazon EC2 Instance

220
Q

A transcoding media service is being developed on the Amazon Cloud. Photos uploaded to Amazon S3 will trigger a Lambda function. The Lambda function will start a Step Functions state machine that coordinates a series of processes to perform the image analysis tasks. The input of each function should be preserved in the result to conform to the application’s logic flow.

What should the developer do?


Declare an InputPath field filter on the Amazon States Language specification.

Declare a Parameters field filter on the Amazon States Language specification.

Declare a ResultPath field filter on the Amazon States Language specification.

Declare an OutputPath field filter on the Amazon States Language specification.

A

Declare a ResultPath field filter on the Amazon States Language specification.

In the Amazon States Language, these fields filter and control the flow of JSON from state to state:

  • InputPath
  • OutputPath
  • ResultPath
  • Parameters

Out of these field filters, the ResultPath field filter is the only one that can combine a state’s input with its result and pass both to the state output. Hence, the correct answer is: Declare a ResultPath field filter on the Amazon States Language specification.

The option that says: Declare an OutputPath field filter on the Amazon States Language specification is incorrect because it operates only on the output level. It is used to filter out unwanted information and pass only the portion of the JSON that you care about to the next state.

221
Q

A Lambda function is sending data to an Aurora MySQL DB Instance in your VPC. However, you are getting a MySQL: ERROR 1040: Too many connections error whenever there is a surge in incoming traffic. Upon investigation, you found that your function is always creating a new database connection whenever it is invoked.

Which of the following is the MOST suitable and scalable solution to overcome this problem?

​Increase the value of the max_connections parameter of the Aurora MySQL DB Instance.

Use unreserved account concurrency.

Use the execution context in your function and add logic in your code to check if a connection exists before creating one.

Increase the allocated memory of your function.

A

Use the execution context in your function and add logic in your code to check if a connection exists before creating one.

After a Lambda function is executed, AWS Lambda maintains the execution context for some time in anticipation of another Lambda function invocation. In effect, the service freezes the execution context after a Lambda function completes, and thaws the context for reuse, if AWS Lambda chooses to reuse the context when the Lambda function is invoked again.
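
A minimal sketch of execution context reuse in Python, assuming pymysql as the MySQL client and placeholder connection settings:

    import pymysql  # assumed MySQL client library

    # Created once per execution context; warm invocations reuse it.
    connection = None

    def get_connection():
        global connection
        if connection is None or not connection.open:
            connection = pymysql.connect(host='DB_HOST', user='DB_USER',
                                         password='DB_PASS', database='DB_NAME')
        return connection

    def lambda_handler(event, context):
        conn = get_connection()
        with conn.cursor() as cursor:
            cursor.execute('SELECT 1')
            return cursor.fetchone()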

222
Q

A recently deployed Lambda function has an intermittent issue in processing customer data. You enabled the active tracing option in order to detect, analyze, and optimize performance issues of your function using the X-Ray service.

Which of the following environment variables are used by AWS Lambda to facilitate communication with X-Ray? (Select TWO.)

​
AWS_XRAY_DEBUG_MODE
AUTO_INSTRUMENT
_X_AMZN_TRACE_ID
AWS_XRAY_TRACING_NAME
​AWS_XRAY_CONTEXT_MISSING
A

​AWS_XRAY_CONTEXT_MISSING
_X_AMZN_TRACE_ID

AWS Lambda uses environment variables to facilitate communication with the X-Ray daemon and configure the X-Ray SDK.

_X_AMZN_TRACE_ID: Contains the tracing header, which includes the sampling decision, trace ID, and parent segment ID. If Lambda receives a tracing header when your function is invoked, that header will be used to populate the _X_AMZN_TRACE_ID environment variable. If a tracing header was not received, Lambda will generate one for you.

AWS_XRAY_CONTEXT_MISSING: The X-Ray SDK uses this variable to determine its behavior in the event that your function tries to record X-Ray data, but a tracing header is not available. Lambda sets this value to LOG_ERROR by default.

AWS_XRAY_DAEMON_ADDRESS: This environment variable exposes the X-Ray daemon’s address in the following format: IP_ADDRESS:PORT. You can use the X-Ray daemon’s address to send trace data to the X-Ray daemon directly, without using the X-Ray SDK.
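
A quick way to confirm these values from inside a function is to read them from the environment, as in this Python sketch:

    import os

    def lambda_handler(event, context):
        return {
            'trace_header': os.environ.get('_X_AMZN_TRACE_ID'),
            'context_missing': os.environ.get('AWS_XRAY_CONTEXT_MISSING'),
            'daemon_address': os.environ.get('AWS_XRAY_DAEMON_ADDRESS'),
        }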

223
Q

A website is hosted in an Auto Scaling group of EC2 instances behind an Application Load Balancer. It also uses CloudFront with a default domain name to distribute its static assets and dynamic content. However, the website has a poor search ranking because it doesn’t use HTTPS/SSL on its site.

Which are the possible solutions that the developer can implement in order to set up HTTPS communication between the viewers and CloudFront? (Select TWO.)


Set the Viewer Protocol Policy to use HTTPS Only.

Use a self-signed certificate in the ALB.

Configure the ALB to use its default SSL/TLS certificate.

Use a self-signed SSL/TLS certificate in the ALB which is stored in a private S3 bucket.

Set the Viewer Protocol Policy to use Redirect HTTP to HTTPS.

A

Set the Viewer Protocol Policy to use HTTPS Only.
Set the Viewer Protocol Policy to use Redirect HTTP to HTTPS.

Configuring the ALB to use its default SSL/TLS certificate is incorrect because there is no default SSL certificate in ELB, unlike what we have in CloudFront.

224
Q

You developed a shell script which uses the AWS CLI to create a new Lambda function. However, you received an InvalidParameterValueException after running the script.

What is the MOST likely cause of this issue?

The AWS Lambda service encountered an internal error.

You provided an IAM role in the CreateFunction API which AWS Lambda is unable to assume.

You have exceeded your maximum total code size per account.

The resource already exists.

A

You provided an IAM role in the CreateFunction API which AWS Lambda is unable to assume.

225
Q

An internal web application is hosted in a custom VPC with multiple private subnets only. Every EC2 instance that will be provisioned on this VPC will require access to an S3 bucket to pull configuration files as well as to push application logs.

Which of the following options is the most suitable solution to use in this scenario?


Create an IAM Role and attach it to each EC2 instance.

Create a VPC endpoint for S3.

Store the IAM user and password in the application code to access the S3 bucket.

Use the AWS SDK for your application and issue the aws configure CLI command to store your access keys, which will be referred to by the SDK.

A

Create a VPC endpoint for S3.

An internal web application

226
Q

The users of a social media website must be authenticated using social identity providers such as Twitter, Facebook, and Google. Users can log in to the site, which will allow them to upload their selfies, memes, and other media files to an S3 bucket. As an additional feature, you should also enable guest user access to certain sections of the website.

Which of the following should you do to accomplish this task?


Create an Identity Pool in Amazon Cognito and enable access to unauthenticated identities.

Create a User Pool in Amazon Cognito and enable access to unauthenticated identities.

Create a custom identity broker which integrates with the AWS Security Token Service and supports unauthenticated access.

Integrate AWS Single Sign-On with your website.

A

Create an Identity Pool in Amazon Cognito and enable access to unauthenticated identities.

(you should also enable guest user access)

227
Q

You want to update a Lambda function on your production environment and ensure that when you publish the updated version, you still have a quick way to roll back to the older version in case you encountered a problem. To prevent any sudden user interruptions, you want to gradually increase the traffic going to the new version.

Which of the following implementations is the BEST option to use?

Use stage variables in your Lambda function.

Use Route 53 weighted routing to two Lambda functions.

Use Traffic Shifting with Lambda Aliases.

Use ELB to route traffic to both Lambda functions.

A

Use Traffic Shifting with Lambda Aliases.
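
A minimal traffic-shifting sketch with boto3, assuming hypothetical function, alias, and version names; here 10% of traffic is shifted to the new version:

    import boto3

    client = boto3.client('lambda')
    # Keep 90% of traffic on version 1 and send 10% to version 2.
    client.update_alias(
        FunctionName='my-function',
        Name='live',
        FunctionVersion='1',
        RoutingConfig={'AdditionalVersionWeights': {'2': 0.1}},
    )

Rolling back is then just a matter of removing the additional weight so that the alias points entirely at the old version again.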

228
Q

An API Gateway API with a Lambda proxy integration takes a long time to complete its processing. There were also occurrences where some requests timed out. You want to monitor the responsiveness of your API calls as well as the underlying Lambda function.

Which of the following CloudWatch metrics should you use to troubleshoot this issue? (Select TWO.)
​
CacheHitCount
IntegrationLatency
CacheMissCount
Count
Latency
A

Latency
IntegrationLatency

  • Monitor the IntegrationLatency metrics to measure the responsiveness of the backend.
  • Monitor the Latency metrics to measure the overall responsiveness of your API calls.
  • Monitor the CacheHitCount and CacheMissCount metrics to optimize cache capacities to achieve a desired performance.

229
Q

You are building a distributed system using KMS where you need to encrypt data at a later time. An API must be called that returns only the encrypted copy of the data key, which you will use for encryption. After an hour, you will decrypt the data key by calling the Decrypt API and then use the returned plaintext data key to finally encrypt the data.

Which is the MOST suitable KMS API that the system should use to securely implement the requirements described above?

​GenerateDataKeyWithoutPlaintext
GenerateDataKey
GenerateRandom
Encrypt

A

​GenerateDataKeyWithoutPlaintext

GenerateDataKeyWithoutPlaintext is identical to GenerateDataKey except that it returns only the encrypted copy of the data key.

This operation is useful for systems that need to encrypt data at some point, but not immediately. When you need to encrypt the data, you call the Decrypt operation on the encrypted copy of the key.
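
A minimal sketch of this two-step flow in Python with boto3; the CMK alias is a placeholder:

    import boto3

    kms = boto3.client('kms')

    # Step 1: receive only the encrypted copy of the data key.
    encrypted_key = kms.generate_data_key_without_plaintext(
        KeyId='alias/my-key',
        KeySpec='AES_256',
    )['CiphertextBlob']

    # Step 2 (an hour later): decrypt it to obtain the plaintext data key,
    # which is then used to encrypt the data locally.
    plaintext_key = kms.decrypt(CiphertextBlob=encrypted_key)['Plaintext']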

230
Q

Your customers require access to the REST APIs of your web application which is hosted on EC2 instances behind a load balancer in your VPC. To accommodate this request, your web services should be integrated with API Gateway that has a custom data mapping. You need to specify how the incoming request data is mapped to the integration request and how the resulting integration response data is mapped to the method response.

Which of the following integration types is the MOST suitable one to use in API Gateway to meet this requirement?

​AWS
HTTP_PROXY
AWS_PROXY
HTTP

A

HTTP

HTTP_PROXY is incorrect because this type is only used for HTTP proxy integration where you don’t need to do data mapping for your request and response data.

231
Q

A company has a global multi-player game with a multi-master DynamoDB database topology which stores data in multiple AWS regions. You were assigned to develop a real-time data analytics application which will track and store the recent changes on all the tables from various regions. Only the new data of the recently updated items needs to be tracked by your application.

Which of the following is the MOST suitable way to configure the data analytics application to detect and retrieve the updated database entries automatically?

Enable DynamoDB Streams and set the value of StreamViewType to NEW_IMAGE. Create a trigger in AWS Lambda to capture stream data and forward it to your application.

Enable DynamoDB Streams and set the value of StreamViewType to NEW_IMAGE. Use Kinesis Adapter in the application to consume streams from DynamoDB.

Enable DynamoDB Streams and set the value of StreamViewType to NEW_AND_OLD_IMAGES. Use Kinesis Adapter in the application to consume streams from DynamoDB.

Enable DynamoDB Streams and set the value of StreamViewType to NEW_AND_OLD_IMAGES. Create a trigger in AWS Lambda to capture stream data and forward it to your application.

A

Enable DynamoDB Streams and set the value of StreamViewType to NEW_IMAGE. Use Kinesis Adapter in the application to consume streams from DynamoDB.

DynamoDB Streams provides a time-ordered sequence of item level changes in any DynamoDB table. The changes are de-duplicated and stored for 24 hours. Applications can access this log and view the data items as they appeared before and after they were modified, in near real time.

Using the Kinesis Adapter is the recommended way to consume Streams from DynamoDB. The DynamoDB Streams API is intentionally similar to that of Kinesis Streams, a service for real-time processing of streaming data at massive scale.

The option that says: Enable DynamoDB Streams and set the value of StreamViewType to NEW_IMAGE. Create a trigger in AWS Lambda to capture stream data and forward it to your application is incorrect because just like what is mentioned above, it is better to use Kinesis instead of Lambda for the real-time data analytics application.
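
Enabling the stream with the required view type can be done with a call like this boto3 sketch; the table name is a placeholder:

    import boto3

    dynamodb = boto3.client('dynamodb')
    # Capture only the new image of items as they are modified.
    dynamodb.update_table(
        TableName='GameState',
        StreamSpecification={
            'StreamEnabled': True,
            'StreamViewType': 'NEW_IMAGE',
        },
    )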

232
Q

An application in your development account is running in an AWS Elastic Beanstalk environment which has an attached Amazon RDS database. You noticed that if you terminate the environment, it also brings down the database which hinders you from performing seamless updates with blue-green deployments. This also poses a critical security risk if the company decides to deploy the application in production.

In this scenario, how can you decouple your database instance from your environment without having any data loss?

​​
Use the blue / green deployment strategy to decouple the Amazon RDS instance from your Elastic Beanstalk environment. Create an RDS DB snapshot of the database and enable deletion protection. Create a new Elastic Beanstalk environment with the necessary information to connect to the Amazon RDS instance and delete the old environment.


Use the blue / green deployment strategy to decouple the Amazon RDS instance from your Elastic Beanstalk environment. Create an RDS DB snapshot of the database and enable deletion protection. Create a new Elastic Beanstalk environment with the necessary information to connect to the Amazon RDS instance. Before terminating the old Elastic Beanstalk environment, remove its security group rule first before proceeding.

A

Use the blue / green deployment strategy to decouple the Amazon RDS instance from your Elastic Beanstalk environment. Create an RDS DB snapshot of the database and enable deletion protection. Create a new Elastic Beanstalk environment with the necessary information to connect to the Amazon RDS instance. Before terminating the old Elastic Beanstalk environment, remove its security group rule first before proceeding.

The other option is incorrect because although the deployment strategy being used there is valid, it does not remove the existing security group rule first, which hinders the deletion of the old environment.

233
Q

A data analytics company has installed sensors to track the number of people that go to the mall. The data sets are collected in real-time by an Amazon Kinesis Data Stream, which has a consumer that is configured to process data every other day and store the results to S3. Your team noticed that the S3 bucket is only receiving half of the data that is being sent to the Kinesis stream. After checking, you verified that the sensors are properly sending the data to Amazon Kinesis in real-time without any issues.

Which of the following is the MOST likely root cause of this issue?


The Amazon Kinesis Data Stream automatically deletes duplicate data.

The Amazon Kinesis Data Stream has too many open shards.

By default, the data records are only accessible for 24 hours from the time they are added to a Kinesis stream.

The sensors are having intermittent connection issues.

A

By default, the data records are only accessible for 24 hours from the time they are added to a Kinesis stream.

In this scenario, the consumer of the data stream is configured to process the data every other day. Since the default data retention of Kinesis data stream is only 24 hours, the data from the day before is already lost prior to the scheduled processing. Hence, the root cause of the problem in this scenario is because by default, the data records are only accessible for 24 hours from the time they are added to a Kinesis stream.

234
Q

A company has a central data repository in Amazon S3 that needs to be accessed by developers belonging to different AWS accounts. The required IAM role has been created with the appropriate S3 permissions.

Given that the developers mostly interact with S3 via APIs, which API should the developers call to use the IAM role?

​
AssumeRoleWithSAML​
AssumeRole​
GetSessionToken
AssumeRoleWithWebIdentity
A

AssumeRole​

developers belonging to different AWS accounts

235
Q

Your team is developing a new feature on your application which is already hosted in Elastic Beanstalk. After several weeks, the new version of the application is ready to be deployed and you were instructed to handle the deployment.

What is the correct way to deploy the new version to Elastic Beanstalk via the CLI?


Package your application as a zip file and deploy it using the aws elasticbeanstalk update-application command.

Package your application as a tar file and deploy it using the eb deploy command.

Package your application as a tar file and deploy it using the aws elasticbeanstalk update-application command.

Package your application as a zip file and deploy it using the eb deploy command.

A

Package your application as a zip file and deploy it using the eb deploy command.

236
Q

You recently deployed an application to a newly created AWS account. The application uses two identical Lambda functions to process ad-hoc requests. The first function processes incoming requests efficiently, but the second one has a longer processing time even though both functions have exactly the same code. Based on your monitoring, the Throttles metric of the second function is greater than that of the first one in Amazon CloudWatch.

Which of the following are possible solutions that you can implement to fix this issue? (Select TWO.)


Decrease the concurrency execution limit of the first function.

Configure the second function to use an unreserved account concurrency.

Set the concurrency execution limit of the second function to 0.

Set the concurrency execution limit of both functions to 450.

Set the concurrency execution limit of both functions to 500.

A

​Set the concurrency execution limit of both functions to 450.

Decrease the concurrency execution limit of the first function.

Setting the concurrency execution limit of the second function to 0 is incorrect because this will throttle all future invocations of this function and will make the problem worse.

Setting the concurrency execution limit of both functions to 500 is incorrect because by default, a newly created AWS account has a concurrent execution limit of only 1000. Take note that AWS Lambda will keep the unreserved concurrency pool at a minimum of 100 concurrent executions so that functions that do not have specific limits set can still process requests. Hence, you can only allocate a concurrent execution limit of 900 for a single Lambda function or 450 for two functions.
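
A sketch of how the 450-unit reservation could be applied with boto3; the function names are placeholders:

    import boto3

    client = boto3.client('lambda')
    # Reserve 450 concurrent executions for each function (900 in total),
    # leaving the mandatory 100-unit unreserved pool on a 1000 account limit.
    for name in ('function-one', 'function-two'):
        client.put_function_concurrency(
            FunctionName=name,
            ReservedConcurrentExecutions=450,
        )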

237
Q

The source code of an application is hosted in CodeCommit which has a single master branch. A developer requires certain permissions in order to pull and push code to the repository using the git fetch, git clone, and git push commands. To improve security, the developer should be granted only the permissions required to perform these tasks.

Which of the following permissions should the developer have in order to properly access the repository? (Select TWO.)

codecommit:GitPush
codecommit:UpdateDefaultBranch
codecommit:GetBranch
codecommit:*
codecommit:GitPull

A

codecommit:GitPull
codecommit:GitPush

The codecommit:GitPull permission is required to pull information from a CodeCommit repository to a local repo and, conversely, the codecommit:GitPush permission is required to push information from a local repo to a CodeCommit repository. These are IAM policy permissions only, not API actions. Hence, these two options are the required permissions needed by the developer in this scenario.

238
Q

You are developing a serverless application in AWS which is composed of several Lambda functions and a DynamoDB database. The requirement is to process the requests asynchronously.

Which of the following is the MOST suitable way to accomplish this?

Use the InvokeAsync API to call the Lambda function and set the invocation type request parameter to Event.

Use the Invoke API to call the Lambda function and set the invocation type request parameter to RequestResponse.

Use the InvokeAsync API to call the Lambda function and set the invocation type request parameter to RequestResponse.

Use the Invoke API to call the Lambda function and set the invocation type request parameter to Event.

A

Use the Invoke API to call the Lambda function and set the invocation type request parameter to Event.
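
A minimal asynchronous invocation sketch with boto3; the function name and payload are placeholders:

    import json
    import boto3

    client = boto3.client('lambda')
    # InvocationType='Event' queues the request and returns immediately.
    response = client.invoke(
        FunctionName='process-request',
        InvocationType='Event',
        Payload=json.dumps({'orderId': 42}),
    )
    print(response['StatusCode'])  # 202 for asynchronous invocations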

239
Q

In order to quickly troubleshoot their systems, your manager instructed you to record the calls that your application makes to all AWS services and resources. You developed custom code that will send the segment documents directly to X-Ray by using the PutTraceSegments API.

What should you include in your segment document to meet the above requirement?

metadata
annotations​
tracing header​
subsegments

A

subsegments

For services that don’t send their own segments like Amazon DynamoDB, X-Ray uses subsegments to generate inferred segments and downstream nodes on the service map. This lets you see all of your downstream dependencies, even if they don’t support tracing, or are external.

Subsegments represent your application’s view of a downstream call as a client. If the downstream service is also instrumented, the segment that it sends replaces the inferred segment generated from the upstream client’s subsegment. The node on the service graph always uses information from the service’s segment, if it’s available, while the edge between the two nodes uses the upstream service’s subsegment.

240
Q

A developer is instrumenting an application which will be hosted in a large On-Demand EC2 instance in AWS. Which of the following are valid considerations in X-Ray that the developer should follow? (Select TWO.)


Set the annotations object with any additional custom data that you want to store in the segment.

Set the metadata object with key-value pairs that you want X-Ray to index for search.

Set the namespace subsegment field to aws for AWS SDK calls and remote for other downstream calls.

Set the metadata object with any additional custom data that you want to store in the segment.

Set the namespace subsegment field to remote for AWS SDK calls and aws for other downstream calls.

A

Set the namespace subsegment field to aws for AWS SDK calls and remote for other downstream calls.

Set the metadata object with any additional custom data that you want to store in the segment.

annotations - an object with key-value pairs that you want X-Ray to index for search.

metadata - an object with any additional data that you want to store in the segment.
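
A sketch of both fields using the Python X-Ray SDK (aws_xray_sdk), assuming an instrumented application so a segment is already open; names and values are hypothetical:

    from aws_xray_sdk.core import xray_recorder

    # 'remote' namespace marks a non-AWS downstream call; use 'aws' for AWS SDK calls.
    with xray_recorder.in_subsegment('call-downstream', namespace='remote') as subsegment:
        subsegment.put_annotation('customer_id', '12345')         # indexed for search
        subsegment.put_metadata('debug_info', {'payload': 512})   # stored, not indexed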

241
Q

A Docker application hosted on an ECS cluster has been encountering intermittent unavailability issues and timeouts. The lead DevOps engineer instructed you to instrument the application to detect where high latencies are occurring and to determine the specific services and paths impacting application performance.

Which of the following steps should you do to properly accomplish this task? (Select TWO.)


Configure the port mappings and network mode settings in the container agent to allow traffic on TCP port 2000.

Add the xray-daemon.config configuration file in your Docker image.

Create a Docker image that runs the X-Ray daemon, upload it to a Docker image repository, and then deploy it to your Amazon ECS cluster.

Manually install the X-Ray daemon to the instances via a user data script.

Configure the port mappings and network mode settings in your task definition file to allow traffic on UDP port 2000.

A

​Configure the port mappings and network mode settings in your task definition file to allow traffic on UDP port 2000.

Create a Docker image that runs the X-Ray daemon, upload it to a Docker image repository, and then deploy it to your Amazon ECS cluster.

Adding the xray-daemon.config configuration file in your Docker image is incorrect because this step is not suitable for ECS. The xray-daemon.config configuration file is primarily used in Elastic Beanstalk.

242
Q

Due to the popularity of serverless computing, your manager instructed you to share your technical expertise with the whole software development department of your company. You are planning to deploy a simple Node.js ‘Hello World’ Lambda function to AWS using CloudFormation.

Which of the following is the EASIEST way of deploying the function to AWS?


Upload the code in S3 as a ZIP file then specify the S3 path in the ZipFile parameter of the AWS::Lambda::Function resource in the CloudFormation template.

Include your function source inline in the ZipFile parameter of the AWS::Lambda::Function resource in the CloudFormation template.

Include your function source inline in the Code parameter of the AWS::Lambda::Function resource in the CloudFormation template.

Upload the code in S3 then specify the S3Key and S3Bucket parameters under the AWS::Lambda::Function resource in the CloudFormation template.

A

​Include your function source inline in the ZipFile parameter of the AWS::Lambda::Function resource in the CloudFormation template.
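
A minimal template sketch of the inline approach; the runtime, handler, and the referenced IAM role resource are assumptions:

    Resources:
      HelloWorldFunction:
        Type: AWS::Lambda::Function
        Properties:
          Runtime: nodejs18.x
          Handler: index.handler
          Role: !GetAtt HelloWorldRole.Arn  # assumes a role resource defined elsewhere
          Code:
            ZipFile: |
              exports.handler = async () => 'Hello World';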

243
Q

A developer has a set of EC2 instances which run the Amazon Kinesis Client Library to process a data stream in AWS. Custom metrics show that the instances are maxing out their CPU utilization and that there are insufficient Kinesis shards to handle the rate of data flowing through the stream.

Which of the following is the BEST course of action that the developer should take to solve this issue and prevent this situation from re-occurring in the future?


Increase both the instance size and the number of open shards.

Increase the instance size to a larger type.

Increase the number of instances up to the number of open shards.

Increase the number of shards.

A

Increase both the instance size and the number of open shards.

By increasing the instance size and number of shards in your Kinesis stream, the developer allows the instances to handle more record processors, which are running in parallel within the instance. It also allows the stream to properly accommodate the rate of data being sent in. The data capacity of your stream is a function of the number of shards that you specify for the stream. The total capacity of the stream is the sum of the capacities of its shards.

244
Q

Your manager assigned you the task of implementing server-side encryption with customer-provided encryption keys (SSE-C) on your S3 bucket, which will allow you to set your own encryption keys. Amazon S3 will manage both the encryption and decryption process using your key when you access your objects, which removes the burden of maintaining any code to perform data encryption and decryption.

To properly upload data to this bucket, which of the following headers must be included in your request?


x-amz-server-side-encryption, x-amz-server-side-encryption-customer-key and x-amz-server-side-encryption-customer-key-MD5 headers

x-amz-server-side-encryption and x-amz-server-side-encryption-aws-kms-key-id headers

x-amz-server-side-encryption-customer-key header only

x-amz-server-side-encryption-customer-algorithm, x-amz-server-side-encryption-customer-key and x-amz-server-side-encryption-customer-key-MD5 headers

A


x-amz-server-side-encryption-customer-algorithm,
x-amz-server-side-encryption-customer-key and
x-amz-server-side-encryption-customer-key-MD5 headers

245
Q

A developer is working on an application that will process encrypted files. The application will use AWS KMS to decrypt the files locally before it can proceed with the processing of the files.

Which of the following are valid and secure steps in decrypting data? (Select TWO.)


Use the Decrypt operation to decrypt the plaintext data key.

Use the plaintext data key to decrypt data locally, then erase the encrypted data key from memory.

Use the plaintext data key to decrypt data locally, then erase the plaintext data key from memory.

Use the Decrypt operation to decrypt the encrypted data key.

Use the encrypted data key to decrypt data locally, then erase the encrypted data key from memory.

A


Use the Decrypt operation to decrypt the encrypted data key.
Use the plaintext data key to decrypt data locally, then erase the plaintext data key from memory.

It is recommended that you use the following pattern to encrypt data locally in your application:

  1. Use the GenerateDataKey operation to get a data encryption key.
  2. Use the plaintext data key (returned in the Plaintext field of the response) to encrypt data locally, then erase the plaintext data key from memory.
  3. Store the encrypted data key (returned in the CiphertextBlob field of the response) alongside the locally encrypted data.

To decrypt data locally:

  1. Use the Decrypt operation to decrypt the encrypted data key. The operation returns a plaintext copy of the data key.
  2. Use the plaintext data key to decrypt data locally, then erase the plaintext data key from memory.
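
A minimal sketch of the whole pattern in Python with boto3, using the third-party cryptography library's Fernet cipher as a stand-in for local encryption; the CMK alias is a placeholder:

    import base64
    import boto3
    from cryptography.fernet import Fernet  # assumed cipher library

    kms = boto3.client('kms')

    # Encrypt locally: get a data key, use its plaintext, keep only the ciphertext copy.
    key = kms.generate_data_key(KeyId='alias/my-key', KeySpec='AES_256')
    ciphertext = Fernet(base64.urlsafe_b64encode(key['Plaintext'])).encrypt(b'secret data')
    encrypted_data_key = key['CiphertextBlob']  # store alongside the encrypted data
    del key  # erase the plaintext data key from memory

    # Decrypt locally: Decrypt the encrypted data key, then decrypt the data with it.
    plaintext_key = kms.decrypt(CiphertextBlob=encrypted_data_key)['Plaintext']
    data = Fernet(base64.urlsafe_b64encode(plaintext_key)).decrypt(ciphertext)
    del plaintext_key  # erase the plaintext data key again
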
246
Q

For application deployments, a company is using CloudFormation templates, which are regularly updated to map the latest AMI IDs. A developer was assigned to automate the process since the current setup takes a lot of time to execute on a regular basis.

Which of the following is the MOST suitable solution that the developer should implement to satisfy this requirement?


Set up your Systems Manager State Manager to store the latest AMI IDs and integrate it with your CloudFormation template. Call the update-stack API in CloudFormation whenever you decide to update the EC2 instances in your CloudFormation template.

​Integrate CloudFormation with AWS Service Catalog to fetch the latest AMI IDs and automatically use them for succeeding deployments.

Set up CloudFormation with Systems Manager Parameter Store to retrieve the latest AMI IDs for your template. Whenever you decide to update the EC2 instances, call the update-stack API in CloudFormation.

Integrate AWS Service Catalog with AWS Config to automatically fetch the latest AMI and use it for succeeding deployments.

A

Set up CloudFormation with Systems Manager Parameter Store to retrieve the latest AMI IDs for your template. Whenever you decide to update the EC2 instances, call the update-stack API in CloudFormation.

You can use the existing Parameters section of your CloudFormation template to define Systems Manager parameters, along with other parameters. Systems Manager parameters are a unique type that is different from existing parameters because they refer to actual values in the Parameter Store. The value for this type of parameter would be the Systems Manager (SSM) parameter key instead of a string or other value.
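
A minimal template sketch of an SSM parameter type; the public Amazon Linux 2 parameter path shown is a commonly used example, and the instance properties are placeholders:

    Parameters:
      LatestAmiId:
        Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>
        Default: /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2

    Resources:
      WebServer:
        Type: AWS::EC2::Instance
        Properties:
          InstanceType: t3.micro
          ImageId: !Ref LatestAmiId  # resolved from Parameter Store on each stack update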

247
Q

You are using an AWS Lambda function to process records in an Amazon Kinesis Data Streams stream which has 100 active shards. The Lambda function takes an average of 10 seconds to process the data and the stream is receiving 50 new items per second.

Which of the following statements are TRUE regarding this scenario?


There will be at most 100 Lambda function invocations running concurrently.

The Lambda function has 500 concurrent executions.

The Lambda function will throttle the incoming requests due to the excessive number of Kinesis shards.

The Kinesis shards must be merged to increase the data capacity of the stream as well as the concurrency execution of the Lambda function.

A

There will be at most 100 Lambda function invocations running concurrently.

For Lambda functions that process Kinesis or DynamoDB streams, the number of shards is the unit of concurrency. If your stream has 100 active shards, there will be at most 100 Lambda function invocations running concurrently. This is because Lambda processes each shard’s events in sequence.

Hence, the correct answer in this scenario is that: there will be at most 100 Lambda function invocations running concurrently.

The option that says: the Lambda function has 500 concurrent executions is incorrect because the number of concurrent executions for poll-based event sources is different from push-based event sources. This number of concurrent executions would have been correct if the Lambda function were integrated with a push-based event source such as API Gateway or Amazon S3 Events. Remember that the Kinesis and Lambda integration is using a poll-based event source, which means that the number of shards is the unit of concurrency for the function.

248
Q

You are developing a new batch job for the enterprise application suite in your company, which is hosted in an Auto Scaling group of EC2 instances behind an ELB. The application is using an S3 bucket configured with Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS). The batch job must upload files to the bucket using the default AWS KMS key to protect the data at rest.

What should you do to satisfy this requirement with the LEAST amount of configuration?


Include the x-amz-server-side-encryption-customer-algorithm, x-amz-server-side-encryption-customer-key, and x-amz-server-side-encryption-customer-key-MD5 headers with appropriate values in the upload request.

​Include the x-amz-server-side-encryption header with a value of AES256 in your upload request.

​Include the x-amz-server-side-encryption header with a value of aws:kms in your upload request.

Include the x-amz-server-side-encryption header with a value of aws:kms as well as the x-amz-server-side-encryption-aws-kms-key-id header containing the ID of the default AWS KMS key in your upload request.

A

Include the x-amz-server-side-encryption header with a value of aws:kms in your upload request.

Including the x-amz-server-side-encryption header with a value of aws:kms as well as the x-amz-server-side-encryption-aws-kms-key-id header containing the ID of the default AWS KMS key in your upload request is incorrect because although this is a valid option, you actually don’t need to add the x-amz-server-side-encryption-aws-kms-key-id header if you will be using the default AWS KMS key. Take note that the scenario explicitly mentioned to provide a solution with the LEAST amount of configuration.
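
A minimal upload sketch with boto3; the bucket and key are placeholders, and omitting the KMS key ID makes S3 fall back to the default aws/s3 key:

    import boto3

    s3 = boto3.client('s3')
    s3.put_object(
        Bucket='batch-job-output',
        Key='reports/run-001.csv',
        Body=b'col1,col2\n',
        ServerSideEncryption='aws:kms',  # no key ID, so the default KMS key is used
    )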

249
Q

A developer will be building a game data feed application which will continuously collect data about player-game interactions and feed the data into the gaming platform. The application uses the Kinesis Client Library to process the data stream from Amazon Kinesis Data Streams and stores the data to Amazon DynamoDB. It is required that the system has enough shards and EC2 instances to handle failover and adequately process the amount of data coming in and out of the stream.

Which of the following ratio of the number of Kinesis shards to EC2 worker instances should the developer implement to achieve the above requirement in the most cost-effective and highly available way?

​
4 shards : 8 instances
4 shards : 2 instances
1 shard : 6 instances
6 shards : 1 instance
A

4 shards : 2 instances

The Kinesis Client Library (KCL) ensures that for every shard there is a record processor running and processing that shard. It also tracks the shards in the stream using an Amazon DynamoDB table.

Since the question requires the system to smoothly process streaming data, a fair number of shards and instances is required. With 4 shards, the stream has more capacity for reading and writing data, and with 2 instances, each instance focuses on processing two shards. This also provides high availability in the event that one instance goes down. Therefore, the ratio of 4 shards : 2 instances is the correct answer.

250
Q

A developer is designing an application which will be hosted in ECS and uses an EC2 launch type. You need to group your container instances by certain attributes such as Availability Zone, instance type, or custom metadata. After you have defined a group of container instances, you will need to customize Amazon ECS to place tasks on container instances based on the group you specified.

Which of the following ECS features provides you with expressions that you can use to group container instances by a specific attribute?

​Task Placement Strategies
Task Placement Constraints
Task Groups
Cluster Query Language

A

Cluster Query Language

Hence, the correct ECS feature which provides you with expressions that you can use to group container instances by a specific attribute is Cluster Query Language.

Task Group is incorrect because this is just a set of related tasks. This does not provide expressions that enable you to group objects. All tasks with the same task group name are considered as a set when performing spread placement.

Task Placement Constraint is incorrect because it is just a rule that is considered during task placement. Although it uses cluster queries when you are placing tasks on container instances based on a specific expression, it does not provide the actual expressions which are used to group those container instances.

251
Q

A developer runs a shell script that uses the AWS CLI to upload a large file to an S3 bucket, with an AWS KMS key included in the request. An Access Denied error always shows up whenever the developer uploads a file with a size of 100 GB or more. However, when he tries to upload a smaller file with the KMS key, the upload succeeds.

Which of the following are possible reasons why this issue is happening? (Select TWO.)


The maximum size that can be encrypted in KMS is only 100 GB.

​The developer does not have the kms:Decrypt permission.

The developer’s IAM permission has an attached inline policy that restricts him from uploading a file to S3 with a size of 100 GB or more.

The AWS CLI S3 commands perform a multipart upload when the file is large.

The developer does not have the kms:Encrypt permission.

A

The developer does not have the kms:Decrypt permission.


The AWS CLI S3 commands perform a multipart upload when the file is large.

To perform a multipart upload with encryption using an AWS Key Management Service (AWS KMS) customer master key (CMK), the requester must have permission to the kms:Decrypt and kms:GenerateDataKey* actions on the key. These permissions are required because Amazon S3 must decrypt and read data from the encrypted file parts before it completes the multipart upload.

Hence, the correct answers in this scenario are:

  • The AWS CLI S3 commands perform a multipart upload when the file is large.
  • The developer does not have the kms:Decrypt permission.

252
Q

A Development team is building a fault-tolerant solution for a web application hosted on Amazon EC2. The session data is stored globally but it is cached in the instance’s memory for better performance. The solution aims to ensure that no user requests are lost during a session in case an EC2 instance is terminated or has failed a health check.

Which solution best fits the requirement with the least effort?


Use an Elastic Load Balancer and configure connection draining.

Use an Elastic Load Balancer and configure sticky sessions.

Create an SQS queue to store session data.

Use the DynamoDB Session Handler to save session data.

A

Use an Elastic Load Balancer and configure sticky sessions.

If an instance fails or becomes unhealthy, the load balancer stops routing requests to that instance and chooses a new healthy instance based on the existing load balancing algorithm. The load balancer treats the session as now “stuck” to the new healthy instance, and continues routing requests to that instance even if the failed instance comes back.

253
Q

An application has a feature that displays GIFs based on keyword inputs. The code streams random GIF links from an external API to your local machine. When run, the application’s process takes longer than expected. You suspect that the new function sendRequest() you added is the culprit.

Which of the following actions should you do to determine the latency of the function?


Use CloudTrail to record and store event logs for actions made by your function.

​Using CloudWatch, troubleshoot the issue by checking the logs.

​Using AWS X-Ray, disable sampling to efficiently trace all requests for calls.

Using AWS X-Ray, define an arbitrary subsegment inside the code to instrument the function.

A

​Using AWS X-Ray, define an arbitrary subsegment inside the code to instrument the function.

A subsegment can contain additional details about a call to an AWS service, an external HTTP API, or an SQL database. You can define arbitrary subsegments to instrument specific functions or lines of code in your application.
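
A sketch of instrumenting just the suspect function, assuming the application uses the Python X-Ray SDK; the function body and subsegment name are placeholders:

    from aws_xray_sdk.core import xray_recorder

    def send_request(url):
        # Wrap only the suspect code in its own subsegment so X-Ray
        # records how long this function takes.
        with xray_recorder.in_subsegment('sendRequest') as subsegment:
            subsegment.put_annotation('url', url)  # searchable key-value pair
            # ... perform the HTTP call to the external GIF API here ...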

254
Q

A developer is building an application that uses Amazon CloudFront to distribute thousands of images stored in an S3 bucket. The developer needs a fast and cost-efficient solution that will allow him to update the images immediately without waiting for the object’s expiration date.

Which solution meets the requirements?


Update the images by invalidating them from the edge caches.

Upload the new images in the S3 bucket and wait for the objects in the edge locations to expire to reflect the changes.

Disable the CloudFront distribution and re-enable it to update the images in all edge locations.

​Update the images by using versioned file names.

A

​Update the images by using versioned file names.

When you update existing files in a CloudFront distribution, AWS recommends that you include some sort of version identifier either in your file names or in your directory names to give yourself better control over your content. This identifier might be a date-time stamp, a sequential number, or some other method of distinguishing two versions of the same object.

255
Q

A developer wants to cut down the execution time of a scan operation on a DynamoDB table during periods of low demand without interfering with typical workloads. The operation consumes half of the strongly consistent read capacity units during regular operating hours.

How can the developer improve this scan operation?


Use a parallel scan operation.

​Use eventually consistent reads for the scan operation instead of strongly consistent reads.

Perform a rate-limited parallel scan operation.

Perform a rate-limited sequential scan operation.

A

Perform a rate-limited parallel scan operation.
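
A sketch of one worker in a rate-limited parallel scan with boto3; the Limit value and the sleep between pages are crude stand-ins for a real rate limiter:

    import time
    import boto3

    dynamodb = boto3.client('dynamodb')

    def scan_segment(table, segment, total_segments, page_size=100):
        # Each worker scans its own logical segment of the table.
        kwargs = {'TableName': table, 'Segment': segment,
                  'TotalSegments': total_segments, 'Limit': page_size}
        while True:
            page = dynamodb.scan(**kwargs)
            yield from page['Items']
            if 'LastEvaluatedKey' not in page:
                break
            kwargs['ExclusiveStartKey'] = page['LastEvaluatedKey']
            time.sleep(1)  # pause between pages to cap consumed read capacity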

256
Q

A developer is building a serverless Node.js application consisting of an API Gateway and AWS Lambda. The developer wants to log certain events tagged with the unique identifier of the Lambda function’s invocation request.

Which approach should the developer take?


Get the awsRequestId from the event object and log it to the console.

Get the awsRequestId from the context object and log it to the console.

​Get the awsRequestId from the context object and log it to a file.

Get the awsRequestId from the event object and log it to a file.

A


Get the awsRequestId from the context object and log it to a file.

The second argument is the context object. A context object is passed to your function by Lambda at runtime. This object provides methods and properties that provide information about the invocation, function, and runtime environment.

The request ID of all invocation requests is automatically logged in CloudWatch Logs, but you might want to get it from the Lambda context object if you have a need for custom logging, such as logging key events with an associated request identifier. In this case, you can access the request ID from context.awsRequestId and write it to a separate log file.

Hence, the correct answer is: Get the awsRequestId from the context object and log it to a file.
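
The Python equivalent of the pattern looks like this sketch (Node.js exposes the same value as context.awsRequestId); the log path is a placeholder:

    import logging

    # /tmp is the only writable filesystem location in Lambda.
    logging.basicConfig(filename='/tmp/app.log', level=logging.INFO)

    def lambda_handler(event, context):
        # The context object carries the invocation's unique request ID.
        logging.info('request_id=%s event=%s', context.aws_request_id, event)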

257
Q

A developer has an application that stores sensitive data to an Amazon DynamoDB table. AWS KMS must be used to encrypt the data before sending it to the table and to manage the encryption keys.

Which of the following features are supported when using AWS KMS? (Select TWO.)


Automatic key rotation for CMKs in custom key stores

Creation of symmetric and asymmetric keys

Import your own key material to an asymmetric CMK

Re-enabling disabled keys

Use AWS Certificate Manager as a custom key store

A

Re-enabling disabled keys
Creation of symmetric and asymmetric keys

The option that says: Import your own key material to an asymmetric CMK is incorrect because you can only import your own key material to symmetric CMKs and not for asymmetric types.

The option that says: Automatic key rotation for CMKs in custom key stores is incorrect because automatic key rotation is only supported in symmetric CMKs. Automatic key rotation is not available for asymmetric CMKs, CMKs in custom key stores, and CMKs with imported key material.

258
Q

A developer needs to view the percentage of used memory and the number of TCP connections of instances inside an Auto Scaling Group. To achieve this, the developer must send the metrics to Amazon CloudWatch.

Which approach provides the MOST secure way of authenticating a CloudWatch PUT request?


Modify the existing Auto Scaling launch configuration to use an IAM role with the cloudwatch:PutMetricData permission for the instances.

Create an IAM user with programmatic access. Attach a cloudwatch:PutMetricData permission and store the access key and secret key in the instance’s configuration file.

Create an IAM role with cloudwatch:PutMetricData permission for the new Auto Scaling launch configuration from which you launch instances.

Create an IAM user with programmatic access. Attach a cloudwatch:PutMetricData permission and update the Auto Scaling launch configuration to insert the access key and secret key into the instance user data.

A

​Create an IAM role with cloudwatch:PutMetricData permission for the new Auto Scaling launch configuration from which you launch instances.

The option that says: Modify the existing Auto Scaling launch configuration to use an IAM role with the cloudwatch:PutMetricData permission for the instances is incorrect because modifying an existing launch configuration is not possible.

259
Q

A developer uses Amazon ECS to orchestrate two Docker containers. He needs to configure ECS to allow the two containers to share log data.

Which configuration should the developer do?

​Specify the containers in a single pod specification and configure EFS as its volume type.

Use two task definitions for each container and mount an EFS volume between the tasks.

Use two pod specifications for each container and mount an EFS volume between the pods.

​Specify the containers in a single task definition and configure EFS as its volume type.

A

​Specify the containers in a single task definition and configure EFS as its volume type.

260
Q

A development team has migrated an existing Git repository to a CodeCommit repository. One of the developers was given an HTTPS clone URL of their new repository. The developer must be able to clone the repository using his access key credentials.

What must the developer do before he can proceed?


Generate an HTTPS Git credential for AWS CodeCommit. Configure the Git credential helper with the AWS credential profile.

Generate an RSA key pair to use with AWS CodeCommit using AWS KMS.

Configure the Git credential helper with the AWS credential profile.

Import an SSL/TLS certificate into the AWS Certificate Manager.

A

Configure the Git credential helper with the AWS credential profile.

The option that says: Generate an HTTPS Git credential for AWS CodeCommit. Configure the Git credential helper with the AWS credential profile is incorrect. Although this solution works, you still don’t have to create HTTPS GIT credentials since you’re already using the access key credentials to authenticate with CodeCommit.

261
Q

A San Francisco-based tech startup is building a cross-platform mobile app that can notify the user of upcoming astronomical events. Your mobile app authenticates with the Identity Provider (IdP) using the provider’s SDK and Amazon Cognito. Once the end-user is authenticated with the IdP, the OAuth or OpenID Connect token returned from the IdP is passed by your app to Amazon Cognito.

Which of the following is returned to the user to provide a set of temporary, limited-privilege AWS credentials?

​Cognito API
Cognito ID
Cognito Key Pair
Cognito SDK

A

Cognito ID

Cognito Key Pair is incorrect because this is not a unique Amazon Cognito identifier but a cryptographic key.

262
Q

A developer is writing a web application that will allow users to save and retrieve images in an Amazon S3 bucket. The users are required to register and log in to access the application.

Which combination of AWS Services should the Developer utilize for implementing the user authentication module of the application?


Amazon Cognito Identity Pools and IAM Role.

Amazon Cognito Identity Pools and User Pools.

Amazon Cognito User Pools and AWS Key Management Service (KMS)

Amazon User Pools and AWS Security Token Service (STS)

A

Amazon Cognito Identity Pools and User Pools.

A user pool is a user directory in Amazon Cognito. With a user pool, your users can sign in to your web or mobile app through Amazon Cognito. Your users can also sign in through social identity providers like Google, Facebook, Amazon, or Apple, and through SAML identity providers.

Amazon User Pools and AWS Security Token Service (STS) is incorrect. While it is true that you need AWS STS to allow users to access Amazon S3, it is already abstracted by the Amazon Cognito Identity Pools. That being said, you have to configure an Identity Pool to accept users federated with your Cognito User Pool.

263
Q

A developer uses AWS Serverless Application Model (SAM) in a local machine to create a serverless Python application. After defining the required dependencies in the requirements.txt file, the developer is now ready to test and deploy.

What are the steps to successfully deploy the application?


Build the SAM template in the local machine. Run the sam deploy command to package and deploy the SAM template from AWS CodeCommit.

​Run the sam init command. Build the SAM template in the local machine and call the sam deploy command to package and deploy the SAM template from an S3 bucket.

​Upload and build the SAM template in an EC2 instance. Run the sam deploy command to package and deploy the SAM template.

Build the SAM template in the local machine and call the sam deploy command to package and deploy the SAM template from an S3 bucket.

A

​Build the SAM template in the local machine and call the sam deploy command to package and deploy the SAM template from an S3 bucket.


sam init - Initializes a serverless application with an AWS SAM template. The template provides a folder structure for your Lambda functions and is connected to an event source such as APIs, S3 buckets, or DynamoDB tables. This application includes everything you need to get started and to eventually extend it into a production-scale application.
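
A rough sketch of the local build-and-deploy steps (stack and bucket names are hypothetical), run here via Python's subprocess:

```python
import subprocess

# Build the template and the dependencies declared in requirements.txt locally.
subprocess.run(["sam", "build"], check=True)

# Package the artifacts into an S3 bucket and deploy the resulting template.
subprocess.run(
    ["sam", "deploy",
     "--stack-name", "my-python-app",
     "--s3-bucket", "my-sam-artifacts-bucket",  # artifacts are uploaded here
     "--capabilities", "CAPABILITY_IAM"],
    check=True,
)
```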

264
Q

A Lambda function has multiple sub-functions that are chained together to process large data synchronously. When invoked, the function tends to exceed its maximum timeout limit. This has prompted the developer to break the Lambda function into manageable, coordinated states using Step Functions, enabling each sub-function to run as a separate process.

Which of the following type of states should the developer use to run processes?

​Pass State
Parallel State
​Task State
Wait State

A

Task State

Of all the state types, only the Task State and the Parallel State can be used to run processes in a state machine. In this scenario, the application logic inside the Lambda function processes data synchronously, so the Task State should be used. (A minimal state-machine sketch follows the list below.)

States can perform a variety of functions in your state machine:

Task State - Do some work in your state machine.

Choice State - Make a choice between branches of execution.

Fail or Succeed State - Stop execution with a failure or success.

Pass State - Simply pass its input to its output, or inject some fixed data, without performing work.

Wait State - Provide a delay for a certain amount of time, or until a specified time/date.

Parallel State - Begin parallel branches of execution.

Map State - Dynamically iterate steps.
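
A minimal sketch of the resulting state machine (all ARNs and names are placeholders): each sub-function becomes its own Task state, chained with "Next", so no single Lambda invocation runs long enough to hit the timeout.

```python
import json
import boto3

definition = {
    "StartAt": "ProcessChunk",
    "States": {
        "ProcessChunk": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process-chunk",
            "Next": "AggregateResults",
        },
        "AggregateResults": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:aggregate",
            "End": True,
        },
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="data-processing",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",
)
```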

265
Q

A developer needs to configure the environment name, solution stack, and environment links of his application environment, which will be hosted in Elastic Beanstalk. Which configuration file should the developer add to the source bundle to meet this requirement?

​
env.yaml
​env.config
Dockerrun.aws.json
cron.yaml
A

env.yaml

You can include a YAML-formatted environment manifest named env.yaml in the root of your application source bundle to configure the environment name, solution stack, and environment links to use when creating your environment.

266
Q

An online forum requires a new table in DynamoDB named Thread in which the partition key is ForumName and the sort key is Subject.

For reporting purposes, the application needs to find all of the threads that have been posted in a particular forum within the last three months. Which of the following is the MOST effective solution that you should implement?

Configure the application to Scan the entire Thread table and discard any posts that were not within the specified time frame.

Create a global secondary index and use the Query operation to utilize the LastPostDateTime attribute as the sort key.

Configure the application to Query the entire Thread table and discard any posts that were not within the specified time frame.

Add a local secondary index while creating the new Thread table. Use the Query operation to utilize the LastPostDateTime attribute as the sort key.

A

Add a local secondary index while creating the new Thread table. Use the Query operation to utilize the LastPostDateTime attribute as the sort key

Creating a global secondary index and using the Query operation to utilize the LastPostDateTime attribute as the sort key is incorrect because using a local secondary index is a more appropriate solution to be used in this scenario. Take note that in this scenario, it is still using the same partition key (ForumName), but with an alternate sort key (LastPostDateTime) which warrants the use of a local secondary index.
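
A minimal boto3 sketch of the table plus LSI and the resulting query (index name and values are placeholders; ISO-8601 timestamp strings sort chronologically, which is what makes the range condition work):

```python
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")

# Same partition key (ForumName), alternate sort key (LastPostDateTime).
table = dynamodb.create_table(
    TableName="Thread",
    KeySchema=[
        {"AttributeName": "ForumName", "KeyType": "HASH"},
        {"AttributeName": "Subject", "KeyType": "RANGE"},
    ],
    AttributeDefinitions=[
        {"AttributeName": "ForumName", "AttributeType": "S"},
        {"AttributeName": "Subject", "AttributeType": "S"},
        {"AttributeName": "LastPostDateTime", "AttributeType": "S"},
    ],
    LocalSecondaryIndexes=[
        {
            "IndexName": "LastPostIndex",
            "KeySchema": [
                {"AttributeName": "ForumName", "KeyType": "HASH"},
                {"AttributeName": "LastPostDateTime", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "KEYS_ONLY"},
        }
    ],
    BillingMode="PAY_PER_REQUEST",
)
table.wait_until_exists()

# All threads in one forum posted after a cutoff date.
resp = table.query(
    IndexName="LastPostIndex",
    KeyConditionExpression=Key("ForumName").eq("AWS")
    & Key("LastPostDateTime").gt("2024-01-01T00:00:00Z"),
)
```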

267
Q

A clickstream application is using Amazon Kinesis Data Streams for real-time processing. PutRecord API calls are used by the producer to send data to the stream. However, there are cases where the producer was intermittently restarted while processing, which resulted in the same data being sent twice to the stream. This inadvertently causes duplicate entries in the data stream, which affects the consumers' processing.

Which of the following should you implement to resolve this issue?

​
Merge shards of the data stream.
Add more shards.
Embed a primary key within the record.
​Split shards of the data stream.
A

Embed a primary key within the record.

Applications that need strict guarantees should embed a primary key within the record to remove duplicates later when processing.
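
A minimal producer sketch (stream and field names are placeholders). The crucial point is that the primary key is assigned when the event is created, so a retry after a producer restart re-sends the same key and consumers can discard the duplicate:

```python
import json
import boto3

kinesis = boto3.client("kinesis")

def put_click_event(event_id: str, payload: dict) -> None:
    # event_id must be generated once, at event-creation time (NOT here),
    # so that re-sending the record after a restart repeats the same id.
    record = {"id": event_id, **payload}
    kinesis.put_record(
        StreamName="clickstream",  # hypothetical stream name
        Data=json.dumps(record).encode("utf-8"),
        PartitionKey=event_id,
    )
```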

268
Q

You are planning to launch a Lambda function integrated with API Gateway. You are required to specify how the incoming request data is mapped to the integration request and how the resulting integration response data is mapped to the method response.

Which of the following options is the MOST appropriate integration type to use to meet this requirement?

​
Lambda proxy integration
​HTTP custom integration
​Lambda custom integration
​HTTP Proxy integration
A

​Lambda custom integration

Lambda proxy integration is incorrect because with this type of integration, you do not configure the integration request or the integration response; API Gateway passes the incoming request through to the Lambda function as-is.

269
Q

A developer is using AWS X-Ray to create a visualization scheme to monitor the requests that go through their enterprise web application. There are different services that communicate with the application and all these requests should be traced in X-Ray, including all the downstream calls made by the application to AWS resources.

Which of the following actions should the developer implement for this scenario?


Pass multiple trace segments as a parameter of PutTraceSegments API.

Install AWS X-Ray on the different services that communicate with the application including the AWS resources that the application calls.

Use X-Ray SDK to generate segment documents with subsegments and send them to the X-Ray daemon, which will buffer them and upload to the X-Ray API in batches.

Use AWS X-Ray SDK to upload a trace segment by executing PutTraceSegments API.

A

Use X-Ray SDK to generate segment documents with subsegments and send them to the X-Ray daemon, which will buffer them and upload to the X-Ray API in batches.
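
A minimal sketch using the Python X-Ray SDK (segment and subsegment names are placeholders; inside Lambda the segment is created for you, so begin_segment is only needed for self-hosted code):

```python
# pip install aws-xray-sdk
import boto3
from aws_xray_sdk.core import xray_recorder, patch_all

# Auto-instrument supported clients (boto3, requests, ...) so downstream AWS
# calls are recorded as subsegments automatically.
patch_all()

xray_recorder.begin_segment("enterprise-web-app")

xray_recorder.begin_subsegment("load-user-profile")
try:
    boto3.client("s3").list_buckets()  # traced as a downstream call
finally:
    xray_recorder.end_subsegment()

xray_recorder.end_segment()

# The SDK emits the segment documents over UDP to the local X-Ray daemon,
# which buffers them and uploads them to the X-Ray API in batches.
```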

270
Q

A company has an application hosted in an ECS cluster that makes heavy use of an RDS database. A developer needs to closely monitor how the different processes on a DB instance use the CPU, such as the percentage of the CPU bandwidth or the total memory consumed by each process, to ensure application performance.

Which of the following is the MOST suitable solution that the developer should implement?

​Develop a shell script that collects and publishes custom metrics to CloudWatch which tracks the real-time CPU Utilization of the RDS instance.

Track the CPU% and MEM% metrics which are readily available in the Amazon RDS console.

Use CloudWatch to track the CPU Utilization of your database.

Use Enhanced Monitoring in RDS.

A

Use Enhanced Monitoring in RDS.

Enhanced Monitoring provides metrics for each process or thread on the DB instance in real time, such as the percentage of the CPU bandwidth and the amount of memory used by each process. Standard CloudWatch metrics like CPUUtilization are gathered at the hypervisor level and cannot break usage down per process.
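
Enhanced Monitoring can be switched on with a boto3 call like the sketch below (the instance identifier and role ARN are placeholders; the role must trust the RDS monitoring service):

```python
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="app-db",
    MonitoringInterval=60,  # seconds; 0 disables Enhanced Monitoring
    MonitoringRoleArn="arn:aws:iam::123456789012:role/rds-monitoring-role",
    ApplyImmediately=True,
)
```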

271
Q

You are developing an online game where the app preferences and game state of the player must be synchronized across devices. It should also allow multiple users to synchronize and collaborate in real time on shared data.

Which of the following is the MOST appropriate solution that you should implement in this scenario?


Integrate Amazon Pinpoint to your mobile app.

Integrate Amazon Cognito Sync to your mobile app.

Integrate AWS Amplify to your mobile app.

Integrate AWS AppSync to your mobile app.

A

Integrate AWS AppSync to your mobile app.

AWS AppSync is quite similar to Amazon Cognito Sync, which is also a service for synchronizing application data across devices. Like Cognito Sync, it enables user data such as app preferences or game state to be synchronized. The key difference is that AppSync also extends these capabilities by allowing multiple users to synchronize and collaborate in real time on shared data.

272
Q

A startup has recently launched their new mobile game and is gaining a lot of new users every day. The founders plan to add a new feature that will enable syncing of user profile data across mobile devices to improve the user experience.

Which of the following services should they use to meet this requirement?

​
Cognito User Pools
AWS Amplify
Cognito Identity Pools
Cognito Sync
A

Cognito Sync

Amazon Cognito Sync is an AWS service and client library that enables cross-device syncing of application-related user data. You can use it to synchronize user profile data across mobile devices and the web without requiring your own backend.

273
Q

A developer uses AWS X-Ray to create a trace on an instrumented web application and gain insights on how to better optimize its performance. The segment documents being sent by the application contain annotations which the developer wants to utilize in order to identify and filter out specific data from the trace.

Which of the following should the developer do in order to satisfy this requirement with minimal configuration? (Select TWO.)


Fetch the trace IDs and annotations using the GetTraceSummaries API.

Configure Sampling Rules in the AWS X-Ray Console.

Use filter expressions via the X-Ray console.

Fetch the data using the BatchGetTraces API.

Send trace results to an S3 bucket then query the trace output using Amazon Athena.

A


Use filter expressions via the X-Ray console.
Fetch the trace IDs and annotations using the GetTraceSummaries API.
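
A minimal boto3 sketch of the GetTraceSummaries call (the annotation key and value are placeholders; only annotations, not metadata, can appear in a filter expression):

```python
import datetime
import boto3

xray = boto3.client("xray")

end = datetime.datetime.now(datetime.timezone.utc)
start = end - datetime.timedelta(hours=1)

summaries = xray.get_trace_summaries(
    StartTime=start,
    EndTime=end,
    FilterExpression='annotation.order_status = "failed"',
)
trace_ids = [s["Id"] for s in summaries["TraceSummaries"]]
```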

274
Q

A developer wants to perform additional processing on newly inserted items in Amazon DynamoDB using AWS Lambda. In order to implement this requirement, the developer will have to use DynamoDB Streams to automatically send the new items in the table to a Lambda function for processing.

Given the scenario, which steps should the developer perform to integrate the DynamoDB table with the Lambda function? (Select TWO.)


Create a trigger for a Kinesis Data Firehose delivery stream that uses a Lambda function for data processing.

Create an SNS topic to capture new records from DynamoDB.

Create an event source mapping in Lambda to send records from your stream to a Lambda function.

Select AWSLambdaBasicExecutionRole managed policy as the function’s execution role.

Select AWSLambdaDynamoDBExecutionRole managed policy as the function’s execution role.

A

Select AWSLambdaDynamoDBExecutionRole managed policy as the function’s execution role.

Create an event source mapping in Lambda to send records from your stream to a Lambda function.

The option that says: Select AWSLambdaBasicExecutionRole managed policy as the function's execution role is incorrect because it lacks the necessary permissions to read from the DynamoDB stream. This policy only grants the Lambda function permission to upload logs to CloudWatch.
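
A minimal boto3 sketch of the event source mapping step (ARNs and names are placeholders; the function's execution role is assumed to carry the AWSLambdaDynamoDBExecutionRole managed policy):

```python
import boto3

lambda_client = boto3.client("lambda")

# Lambda polls the DynamoDB stream and invokes the function with batches
# of change records.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:dynamodb:us-east-1:123456789012:table/Orders"
                   "/stream/2024-01-01T00:00:00.000",
    FunctionName="process-new-items",
    StartingPosition="LATEST",
    BatchSize=100,
)
```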

275
Q

You are a software developer for a multinational investment bank which has a hybrid cloud architecture with AWS. To improve the security of their applications, they decided to use AWS Key Management Service (KMS) to create and manage their encryption keys across a wide range of AWS services. You were given the responsibility to integrate AWS KMS with the financial applications of the company.

Which of the following are the recommended steps to locally encrypt data using AWS KMS that you should follow? (Select TWO.)


Use the GenerateDataKey operation to get a data encryption key then use the plaintext data key in the response to encrypt data locally.

Encrypt data locally using the Encrypt operation.

Use the GenerateDataKeyWithoutPlaintext operation to get a data encryption key then use the plaintext data key in the response to encrypt data locally.

Erase the encrypted data key from memory and store the plaintext data key alongside the locally encrypted data.

Erase the plaintext data key from memory and store the encrypted data key alongside the locally encrypted data.

A


Use the GenerateDataKey operation to get a data encryption key then use the plaintext data key in the response to encrypt data locally.

Erase the plaintext data key from memory and store the encrypted data key alongside the locally encrypted data.
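
A minimal envelope-encryption sketch (the key alias is hypothetical; AES-GCM from the cryptography package stands in for whatever local cipher is used):

```python
import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

kms = boto3.client("kms")

# 1) GenerateDataKey returns the key twice: Plaintext for local use,
#    CiphertextBlob for storage.
key = kms.generate_data_key(KeyId="alias/finance-app", KeySpec="AES_256")

# 2) Encrypt locally with the plaintext data key.
nonce = os.urandom(12)
ciphertext = AESGCM(key["Plaintext"]).encrypt(nonce, b"account ledger data", None)

# 3) Erase the plaintext key from memory and store ONLY the encrypted key
#    alongside the locally encrypted data; kms.decrypt() recovers it later.
del key["Plaintext"]
stored = {"encrypted_key": key["CiphertextBlob"], "nonce": nonce, "data": ciphertext}
```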

276
Q

A web application is currently using an on-premises Microsoft SQL Server 2017 Enterprise Edition database. Your manager instructed you to migrate the application to Elastic Beanstalk and the database to RDS. For additional security, you must configure your database to automatically encrypt data before it is written to storage, and automatically decrypt data when the data is read from storage.

Which of the following services will you use to achieve this?


Use Microsoft SQL Server Windows Authentication.
​Enable Transparent Data Encryption (TDE).
​Enable RDS Encryption.
​Use IAM DB Authentication.

A

​Enable Transparent Data Encryption (TDE).

Amazon RDS supports using Transparent Data Encryption (TDE) to encrypt stored data on your DB instances running Microsoft SQL Server.
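
TDE is enabled through a DB option group attached to the instance; a rough boto3 sketch (names are placeholders, and the engine version shown assumes SQL Server 2017 Enterprise Edition):

```python
import boto3

rds = boto3.client("rds")

rds.create_option_group(
    OptionGroupName="sqlserver-tde",
    EngineName="sqlserver-ee",
    MajorEngineVersion="14.00",  # SQL Server 2017
    OptionGroupDescription="TDE for SQL Server EE",
)
rds.modify_option_group(
    OptionGroupName="sqlserver-tde",
    OptionsToInclude=[{"OptionName": "TDE"}],
    ApplyImmediately=True,
)
rds.modify_db_instance(
    DBInstanceIdentifier="webapp-db",
    OptionGroupName="sqlserver-tde",
    ApplyImmediately=True,
)
```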

277
Q

A developer is planning to add a global secondary index in a DynamoDB table. This will allow the application to query a specific index that can span all of the data in the base table, across all partitions.

Which of the following should the developer consider when using this type of index? (Select TWO.)


Queries on this index support eventual consistency only.

Queries or scans on this index consume read capacity units from the base table.

For each partition key value, the total size of all indexed items must be 10 GB or less.

When you query this index, you can choose either eventual consistency or strong consistency.

Queries or scans on this index consume capacity units from the index, not from the base table.

A

Queries on this index support eventual consistency only.

Queries or scans on this index consume capacity units from the index, not from the base table.

The following options are incorrect because these are about local secondary indexes:

  • When you query this index, you can choose either eventual consistency or strong consistency.
  • Queries or scans on this index consume read capacity units from the base table
  • For each partition key value, the total size of all indexed items must be 10 GB or less.
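
A short query sketch against a hypothetical GSI (table, index, and key names are placeholders). Note that passing ConsistentRead=True together with a GSI's IndexName is rejected with a ValidationException, which is the practical consequence of the eventual-consistency rule above:

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Orders")

# GSI queries are always eventually consistent and consume capacity from the
# index itself, not from the base table.
resp = table.query(
    IndexName="CustomerIndex",
    KeyConditionExpression=Key("CustomerId").eq("C-1001"),
)
```
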
278
Q

A developer is managing an application hosted in EC2, which stores data in an S3 bucket. To comply with the new security policy, the developer must ensure that the data is encrypted at rest using an encryption key that is provided and managed by the company. The change should also provide AES-256 encryption to their data.

Which of the following actions could the developer take to achieve this? (Select TWO.)


Use SSL to encrypt the data while in transit to Amazon S3.

Implement Amazon S3 server-side encryption with customer-provided keys (SSE-C).

Implement Amazon S3 server-side encryption with AWS KMS-Managed Keys (SSE-KMS).

Encrypt the data on the client-side before sending to Amazon S3 using their own master key.

Implement Amazon S3 server-side encryption with Amazon S3-Managed Encryption Keys.

A

Implement Amazon S3 server-side encryption with customer-provided keys (SSE-C).

Encrypt the data on the client-side before sending to Amazon S3 using their own master key.

Implementing Amazon S3 server-side encryption with Amazon S3-Managed Encryption Keys is incorrect because the Amazon S3-Managed encryption does not comply with the policy mentioned in the given scenario since the keys are managed by AWS (through Amazon S3) and not by the company. The suitable server-side encryption that you should use here is SSE-C.
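
A minimal SSE-C sketch (bucket and object names are placeholders; in practice the 256-bit key comes from the company's own key store, and the same key must accompany every GET):

```python
import os
import boto3

s3 = boto3.client("s3")

customer_key = os.urandom(32)  # stand-in for the company-provided 256-bit key

# S3 encrypts the object at rest with AES-256 using this key, but never
# stores the key itself.
s3.put_object(
    Bucket="user-images-bucket",
    Key="avatars/alice.png",
    Body=b"...image bytes...",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,  # boto3 base64-encodes it and adds the MD5 header
)

obj = s3.get_object(
    Bucket="user-images-bucket",
    Key="avatars/alice.png",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)
```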

279
Q

You have created an SWF workflow to coordinate the tasks of your media processing cluster, which processes the videos, and a separate media publishing cluster, which publishes the processed videos. Since the media processing cluster converts a single video multiple times, you need to record how many times a video is converted before another action is executed.

Which of the following SWF options can be used to record such events?

Signals
​Markers
Timers
​Tags

A

​Markers

Using Timers is incorrect because it just enables you to notify your decider when a certain amount of time has elapsed and does not meet the requirement in this scenario.
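
A minimal sketch of a decider recording a marker (the marker name and details are placeholders):

```python
import boto3

swf = boto3.client("swf")

def record_conversion_count(task_token: str, count: int) -> None:
    # The marker lands in the workflow execution history as a MarkerRecorded
    # event, which later decision tasks can read before choosing an action.
    swf.respond_decision_task_completed(
        taskToken=task_token,
        decisions=[
            {
                "decisionType": "RecordMarker",
                "recordMarkerDecisionAttributes": {
                    "markerName": "conversion-count",
                    "details": str(count),
                },
            }
        ],
    )
```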

280
Q

The Customer and Payment service components of a microservices application have two separate DynamoDB tables. New items inserted into the Customer service table must be dynamically updated in the Payment service table.

How can the Payment service get near real-time updates?


Enable DynamoDB Streams to stream all the changes from the Customer service table and trigger a Lambda function to update the Payment service table.

Use a Kinesis data stream to stream all the changes from the Customer service database directly into the Payment service table.

Create a scheduled Amazon EventBridge rule that invokes a Lambda function every minute to update the changes from the Customer service table into the Payment service table.

Create a Kinesis Data Firehose delivery stream to stream all the changes from the Customer service table and trigger a Lambda function to update the Payment service table.

A

Enable DynamoDB Streams to stream all the changes from the Customer service table and trigger a Lambda function to update the Payment service table.

The option that says: Use a Kinesis data stream to stream all the changes from the Customer service database directly into the Payment service table is incorrect. Kinesis Data Stream is a valid service for streaming changes from a DynamoDB table. However, it cannot directly write stream records to a DynamoDB table. You need a processing layer that will poll records from the stream and update the other DynamoDB table.
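
For illustration, a minimal Lambda handler for this pattern (table and attribute names are hypothetical) that copies newly inserted Customer items into the Payment table:

```python
import boto3

payments = boto3.resource("dynamodb").Table("Payment")

def handler(event, context):
    # DynamoDB Streams delivers change records in DynamoDB JSON; only INSERT
    # events matter here.
    for record in event["Records"]:
        if record["eventName"] == "INSERT":
            new_image = record["dynamodb"]["NewImage"]
            payments.put_item(
                Item={
                    "CustomerId": new_image["CustomerId"]["S"],
                    "Status": "PENDING_SETUP",  # hypothetical initial state
                }
            )
```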

281
Q

An API Gateway API with a Lambda proxy integration takes a long time to complete its processing, and some requests have timed out. You want to monitor the responsiveness of your API calls as well as the underlying Lambda function.

Which of the following CloudWatch metrics should you use to troubleshoot this issue? (Select TWO.)

​IntegrationLatency
CacheMissCount
​Count
Latency
CacheHitCount
A

​IntegrationLatency
Latency

  • Monitor the IntegrationLatency metrics to measure the responsiveness of the backend.
  • Monitor the Latency metrics to measure the overall responsiveness of your API calls.
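
A short boto3 sketch for pulling both metrics (the API name is a placeholder). The gap between Latency and IntegrationLatency approximates the overhead added by API Gateway itself, while a large IntegrationLatency points at the Lambda backend:

```python
import datetime
import boto3

cw = boto3.client("cloudwatch")
end = datetime.datetime.now(datetime.timezone.utc)
start = end - datetime.timedelta(hours=1)

for metric in ("Latency", "IntegrationLatency"):
    stats = cw.get_metric_statistics(
        Namespace="AWS/ApiGateway",
        MetricName=metric,
        Dimensions=[{"Name": "ApiName", "Value": "orders-api"}],
        StartTime=start,
        EndTime=end,
        Period=300,  # seconds
        Statistics=["Average", "Maximum"],
    )
    print(metric, stats["Datapoints"])
```
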
282
Q

An augmented reality mobile game has a serverless backend composed of Lambda, DynamoDB, and API Gateway. Due to the surge of new players globally, there were a lot of delays in retrieving the player data. You are instructed by your manager to improve the game’s performance by reducing the database response times down to microseconds.

Which of the following is the BEST service to use that can satisfy this scenario?


DynamoDB Auto Scaling

ElastiCache

DynamoDB Session Handler

Amazon DynamoDB Accelerator (DAX)

A

Amazon DynamoDB Accelerator (DAX)

Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from milliseconds to microseconds – even at millions of requests per second. DAX does all the heavy lifting required to add in-memory acceleration to your DynamoDB tables, without requiring developers to manage cache invalidation, data population, or cluster management.

283
Q

You are building a distributed system using KMS where you need to encrypt data at a later time. An API must be called that returns only the encrypted copy of the data key which you will use for encryption. After an hour, you will decrypt the data key by calling the Decrypt API then using the returned plaintext data key to finally encrypt the data.

Which is the MOST suitable KMS API that the system should use to securely implement the requirements described above?

​
GenerateDataKeyWithoutPlaintext
Encrypt
GenerateRandom
GenerateDataKey
A

GenerateDataKeyWithoutPlaintext

GenerateDataKey is incorrect because this operation also returns a plaintext copy of the data key along with the copy of the encrypted data key under a customer master key (CMK) that you specified. Take note that the scenario explicitly mentioned that the API must return only the encrypted copy of the data key which will be used later for encryption. Although this API can be used in this scenario, it is not recommended since the actual encryption process of the data happens at a later time and not in real-time.
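
A minimal sketch mirroring the scenario (the key alias is hypothetical; AES-GCM again stands in for the local cipher):

```python
import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")

# Now: request ONLY the encrypted copy of the data key; no plaintext key is
# ever exposed on this host.
encrypted_key = kms.generate_data_key_without_plaintext(
    KeyId="alias/distributed-app",
    KeySpec="AES_256",
)["CiphertextBlob"]

# An hour later: decrypt the data key, then use the returned plaintext key
# to encrypt the data.
plaintext_key = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]
nonce = os.urandom(12)
ciphertext = AESGCM(plaintext_key).encrypt(nonce, b"payload", None)
```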

284
Q

Using the AWS Console, you are trying to scale DynamoDB past its pre-configured maximums. Which service limits can you increase by raising a ticket to AWS Support? (Select TWO.)

Provisioned throughput limits
Local Secondary Indexes
Item Sizes
Global Secondary Indexes per table

A

Global Secondary Indexes per table

Provisioned throughput limits