Incorrect From Test Flashcards
A Developer is creating a new web application that will be deployed using AWS Elastic Beanstalk from the AWS Management Console. The Developer is about to create a source bundle which will be uploaded using the console.
Which of the following are valid requirements for creating the source bundle? (Select TWO.)
Must consist of one or more ZIP files.
Must not exceed 512 MB.
Must not include a parent folder or top-level directory.
Must include the cron.yaml file.
Must include a parent folder or top-level directory.
Must not exceed 512 MB.
Must not include a parent folder or top-level directory.
- Consist of a single ZIP file or WAR file (you can include multiple WAR files inside your ZIP file)
- Not exceed 512 MB
- Not include a parent folder or top-level directory (subdirectories are fine)
An application uses AWS Lambda which makes remote calls to several downstream services. A developer wishes to add data to custom subsegments in AWS X-Ray that can be used with filter expressions. Which type of data should be used?
Annotations
Trace ID
Daemon
Metadata
Annotations
Annotations are key-value pairs with string, number, or Boolean values. Annotations are indexed for use with filter expressions. Use annotations to record data that you want to use to group traces in the console, or when calling the GetTraceSummaries API.
INCORRECT: “Metadata” is incorrect. Metadata are key-value pairs that can have values of any type, including objects and lists, but are not indexed for use with filter expressions. Use metadata to record additional data that you want stored in the trace but don’t need to use with search.
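The distinction can be sketched with a hand-built subsegment document (field names follow the X-Ray segment document format; the values and subsegment name are invented for illustration):

```python
import json

# A hand-built X-Ray subsegment document (illustrative values).
# "annotations" values must be scalar (string/number/Boolean) and are indexed
# for filter expressions; "metadata" values may be any JSON type and are not.
subsegment = {
    "name": "downstream-call",       # hypothetical subsegment name
    "annotations": {
        "customer_tier": "premium",  # filter expressions can match this
        "retry_count": 2,
    },
    "metadata": {
        "response_payload": {"items": [1, 2, 3]},  # objects allowed here only
    },
}

print(json.dumps(subsegment["annotations"]))
```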
A serverless application uses an AWS Lambda function to process Amazon S3 events. The Lambda function executes 20 times per second and takes 20 seconds to complete each execution.
How many concurrent executions will the Lambda function require?
5
40
20
400
400
To calculate the concurrency requirement for the Lambda function, multiply the number of executions per second (20) by the time in seconds it takes to complete each execution (20): 20 × 20 = 400 concurrent executions.
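The calculation above is an application of Little's Law and can be sketched as:

```python
# Lambda concurrency estimate: requests per second x average duration (seconds).
requests_per_second = 20
duration_seconds = 20

concurrent_executions = requests_per_second * duration_seconds
print(concurrent_executions)  # → 400
```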
A Development team uses a GitHub repository and would like to migrate their application code to AWS CodeCommit.
What needs to be created before they can migrate a cloned repository to CodeCommit over HTTPS?
A set of Git credentials generated with IAM
An Amazon EC2 IAM role with CodeCommit permissions
A public and private SSH key file
A GitHub secure authentication token
Git credentials are an IAM-generated user name and password pair you can use to communicate with CodeCommit repositories over HTTPS.
A developer is planning to use a Lambda function to process incoming requests from an Application Load Balancer (ALB). How can this be achieved?
Create an Auto Scaling Group (ASG) and register the Lambda function in the launch configuration
Setup an API in front of the ALB using API Gateway and use an integration request to map the request to the Lambda function
Configure an event-source mapping between the ALB and the Lambda function
Create a target group and register the Lambda function using the AWS CLI
Create a target group and register the Lambda function using the AWS CLI
You can register your Lambda functions as targets and configure a listener rule to forward requests to the target group for your Lambda function. When the load balancer forwards the request to a target group with a Lambda function as a target, it invokes your Lambda function and passes the content of the request to the Lambda function, in JSON format.
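A minimal handler sketch for this pattern (the event shape below follows the ALB target-group invocation format, trimmed to the fields used; the greeting logic is invented):

```python
import json

def lambda_handler(event, context):
    # The ALB passes the HTTP request as JSON; query parameters arrive as a dict.
    name = event.get("queryStringParameters", {}).get("name", "world")
    # The response must include statusCode and body so the ALB can translate
    # it back into an HTTP response for the client.
    return {
        "statusCode": 200,
        "statusDescription": "200 OK",
        "isBase64Encoded": False,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }

# Simulated ALB event (hand-built sample, trimmed to the fields used above)
sample_event = {"queryStringParameters": {"name": "partner"}}
print(lambda_handler(sample_event, None)["body"])
```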
A Development team wants to run their container workloads on Amazon ECS. Each application container needs to share data with another container to collect logs and metrics.
What should the Development team do to meet these requirements?
Create two task definitions. Make one to include the application container and the other to include the other container. Mount a shared volume between the two tasks
Create a single pod specification. Include both containers in the specification. Mount a persistent volume to both containers
Create one task definition. Specify both containers in the definition. Mount a shared volume between those two containers
Create two pod specifications. Make one to include the application container and the other to include the other container. Link the two pods together
Create one task definition. Specify both containers in the definition. Mount a shared volume between those two containers
To configure a Docker volume, in the task definition volumes section, define a data volume with name and DockerVolumeConfiguration values. In the containerDefinitions section, define multiple containers with mountPoints values that reference the name of the defined volume and the containerPath value to mount the volume at on the container.
The containers should both be specified in the same task definition. Therefore, the Development team should create one task definition, specify both containers in the definition and then mount a shared volume between those two containers
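The setup above can be sketched as a trimmed task-definition fragment (the names app, log-collector, and shared-data are invented for illustration):

```python
# A trimmed ECS task definition: one definition, two containers,
# one shared Docker volume mounted into both.
task_definition = {
    "family": "app-with-log-collector",
    "volumes": [
        {
            "name": "shared-data",
            "dockerVolumeConfiguration": {"scope": "task", "driver": "local"},
        }
    ],
    "containerDefinitions": [
        {
            "name": "app",
            "mountPoints": [
                {"sourceVolume": "shared-data", "containerPath": "/var/log/app"}
            ],
        },
        {
            "name": "log-collector",
            "mountPoints": [
                {"sourceVolume": "shared-data", "containerPath": "/input"}
            ],
        },
    ],
}

# Both containers reference the same named volume.
volumes_used = {mp["sourceVolume"]
                for c in task_definition["containerDefinitions"]
                for mp in c["mountPoints"]}
print(volumes_used)
```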
A Developer is setting up a code update to Amazon ECS using AWS CodeDeploy. The Developer needs to complete the code update quickly. Which of the following deployment types should the Developer use?
Linear
Canary
In-place
Blue/green
Blue/green
INCORRECT: “In-place” is incorrect as AWS Lambda and Amazon ECS deployments cannot use an in-place deployment type.
An application serves customers in several different geographical regions. Information about the location users connect from is written to logs stored in Amazon CloudWatch Logs. The company needs to publish an Amazon CloudWatch custom metric that tracks connections for each location.
Which approach will meet these requirements?
Configure a CloudWatch Events rule that creates a custom metric from the CloudWatch Logs group.
Stream data to an Amazon Elasticsearch cluster in near-real time and export a custom metric.
Create a CloudWatch metric filter to extract metrics from the log files with location as a dimension.
Create a CloudWatch Logs Insights query to extract the location information from the logs and to create a custom metric with location as a dimension.
Create a CloudWatch metric filter to extract metrics from the log files with location as a dimension.
When you create a metric from a log filter, you can also choose to assign dimensions and a unit to the metric. In this case, the company can assign a dimension that uses the location information.
INCORRECT: “Create a CloudWatch Logs Insights query to extract the location information from the logs and to create a custom metric with location as a dimension” is incorrect. You cannot create a custom metric through CloudWatch Logs Insights.
A developer is preparing to deploy a Docker container to Amazon ECS using CodeDeploy. The developer has defined the deployment actions in a file. What should the developer name the file?
appspec.yml
appspec.json
buildspec.yml
cron.yml
appspec.yml
The name of the AppSpec file for an EC2/On-Premises deployment must be appspec.yml. The name of the AppSpec file for an Amazon ECS or AWS Lambda deployment must be appspec.yaml.
INCORRECT: “buildspec.yml” is incorrect as this is the file name you should use for the file that defines the build instructions for AWS CodeBuild.
A company has created a set of APIs using Amazon API Gateway and exposed them to partner companies. The APIs have caching enabled for all stages. The partners require a method of invalidating the cache that they can build into their applications.
What can the partners use to invalidate the API cache?
They can use the query string parameter INVALIDATE_CACHE
They can pass the HTTP header Cache-Control: max-age=0
They must wait for the TTL to expire
They can invoke an AWS API endpoint which invalidates the cache
They can pass the HTTP header Cache-Control: max-age=0
A client of your API can invalidate an existing cache entry and reload it from the integration endpoint for individual requests. The client must send a request that contains the Cache-Control: max-age=0 header.
To grant permission for a client, attach a policy that allows the execute-api:InvalidateCache action to an IAM execution role for the user.
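A sketch of such a policy (the account ID, API ID, stage, and resource path in the ARN are placeholders):

```python
import json

# IAM policy allowing a client to invalidate API Gateway cache entries.
# The Resource ARN below uses placeholder values, not a real deployment.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["execute-api:InvalidateCache"],
            "Resource": [
                "arn:aws:execute-api:us-east-1:123456789012:api-id/stage/GET/resource-path"
            ],
        }
    ],
}
print(json.dumps(policy, indent=2))
```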
A serverless application uses an Amazon API Gateway and AWS Lambda. The application processes data submitted in a form by users of the application and certain data must be stored and available to subsequent function calls.
What is the BEST solution for storing this data?
Store the data in the /tmp directory
Store the data in an Amazon SQS queue
Store the data in an Amazon Kinesis Data Stream
Store the data in an Amazon DynamoDB table
Store the data in an Amazon DynamoDB table
Amazon DynamoDB is a good solution for this scenario as it is a low-latency NoSQL database that is often used for storing session state data. Amazon S3 would also be a good fit for this scenario but is not offered as an option.
An application component writes thousands of item-level changes to a DynamoDB table per day. The developer requires that a record is maintained of the items before they were modified. What MUST the developer do to retain this information? (Select TWO.)
Create a CloudWatch alarm that sends a notification when an item is modified
Set the StreamViewType to NEW_AND_OLD_IMAGES
Use an AWS Lambda function to extract the item records from the notification and write to an S3 bucket
Set the StreamViewType to OLD_IMAGE
Enable DynamoDB Streams for the table
Set the StreamViewType to OLD_IMAGE
KEYS_ONLY — Only the key attributes of the modified item.
NEW_IMAGE — The entire item, as it appears after it was modified.
OLD_IMAGE — The entire item, as it appeared before it was modified.
NEW_AND_OLD_IMAGES — Both the new and the old images of the item.
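With OLD_IMAGE enabled, each stream record carries the item as it looked before the change. A sketch of a consumer that extracts it (the event is a hand-built sample in the DynamoDB Streams record format; the table attributes are invented):

```python
def extract_old_images(event):
    """Return the pre-modification item from each stream record, if present."""
    return [
        record["dynamodb"]["OldImage"]
        for record in event.get("Records", [])
        if "OldImage" in record.get("dynamodb", {})
    ]

# Hand-built sample event (attribute values use DynamoDB's typed format).
sample_event = {
    "Records": [
        {
            "eventName": "MODIFY",
            "dynamodb": {
                "Keys": {"TxId": {"S": "tx-001"}},
                "OldImage": {"TxId": {"S": "tx-001"}, "Amount": {"N": "100"}},
            },
        }
    ]
}
print(extract_old_images(sample_event))
```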
A Developer is building an application that will store data relating to financial transactions in multiple DynamoDB tables. The Developer needs to ensure the transactions provide atomicity, consistency, isolation, and durability (ACID) and that changes are committed following an all-or-nothing paradigm.
What write API should be used for the DynamoDB table?
Strongly consistent
Eventually consistent
Transactional
Standard
Transactional
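The transactional write API is exposed as TransactWriteItems; a sketch of the request shape (table names, keys, and attributes are invented for illustration):

```python
# Shape of a TransactWriteItems request: every action in TransactItems
# commits, or none of them do. Names below are invented for illustration.
transact_request = {
    "TransactItems": [
        {
            "Put": {
                "TableName": "Transactions",
                "Item": {"TxId": {"S": "tx-001"}, "Amount": {"N": "100"}},
            }
        },
        {
            "Update": {
                "TableName": "Balances",
                "Key": {"AccountId": {"S": "acct-1"}},
                "UpdateExpression": "SET Balance = Balance - :amt",
                "ExpressionAttributeValues": {":amt": {"N": "100"}},
            }
        },
    ]
}
print(len(transact_request["TransactItems"]))
```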
A company is deploying an on-premises application server that will connect to several AWS services. What is the BEST way to provide the application server with permissions to authenticate to AWS services?
Create an IAM role with the necessary permissions and assign it to the application server
Create an IAM group with the necessary permissions and add the on-premise application server to the group
Create an IAM user and generate access keys. Create a credentials file on the application server
Create an IAM user and generate a key pair. Use the key pair in API calls to AWS services
Create an IAM user and generate access keys. Create a credentials file on the application server
IAM roles cannot be attached to on-premises servers (roles are assumed by AWS resources or through federation), so the best option is an IAM user with access keys stored in a credentials file on the server.
A Developer requires a multi-threaded in-memory cache to place in front of an Amazon RDS database. Which caching solution should the Developer choose?
Amazon DynamoDB DAX
Amazon RedShift
Amazon ElastiCache Memcached
Amazon ElastiCache Redis
Amazon ElastiCache Memcached
CORRECT: “Amazon ElastiCache Memcached” is the correct answer.
INCORRECT: “Amazon ElastiCache Redis” is incorrect as Redis is not multi-threaded.
A Developer is deploying an AWS Lambda update using AWS CodeDeploy. In the appspec.yaml file, which of the following is a valid structure for the order of hooks that should be specified?
BeforeInstall > AfterInstall > ApplicationStart > ValidateService
BeforeBlockTraffic > AfterBlockTraffic > BeforeAllowTraffic > AfterAllowTraffic
BeforeInstall > AfterInstall > AfterAllowTestTraffic > BeforeAllowTraffic > AfterAllowTraffic
BeforeAllowTraffic > AfterAllowTraffic
BeforeAllowTraffic > AfterAllowTraffic
A Developer is building a three-tier web application that must be able to handle a minimum of 10,000 requests per minute. The requirements state that the web tier should be completely stateless while the application maintains session state data for users.
How can the session state data be maintained externally, whilst keeping latency at the LOWEST possible value?
Implement a shared Amazon EFS file system solution across the underlying Amazon EC2 instances, then implement session handling at the application level to leverage the EFS file system for session data storage
Create an Amazon RedShift instance, then implement session handling at the application level to leverage a database inside the RedShift database instance for session data storage
Create an Amazon ElastiCache Redis cluster, then implement session handling at the application level to leverage the cluster for session data storage
Create an Amazon DynamoDB table, then implement session handling at the application level to leverage the table for session data storage
CORRECT: “Create an Amazon ElastiCache Redis cluster, then implement session handling at the application level to leverage the cluster for session data storage” is the correct answer.
INCORRECT: “Create an Amazon DynamoDB table, then implement session handling at the application level to leverage the table for session data storage” is incorrect as though this is a good solution for storing session state data, the latency will not be as low as with ElastiCache.
A company has a large Amazon DynamoDB table which they scan periodically so they can analyze several attributes. The scans are consuming a lot of provisioned throughput. What technique can a Developer use to minimize the impact of the scan on the table’s provisioned throughput?
Set a smaller page size for the scan
Use parallel scans
Define a range key on the table
Prewarm the table by updating all items
Set a smaller page size for the scan
Because a Scan operation reads an entire page (by default, 1 MB), you can reduce the impact of the scan operation by setting a smaller page size. The Scan operation provides a Limit parameter that you can use to set the page size for your request. Each Query or Scan request that has a smaller page size uses fewer read operations and creates a “pause” between each request.
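The arithmetic behind this: an eventually consistent read consumes 0.5 RCU per 4 KB read. A sketch of the per-page cost at different Limit-driven page sizes:

```python
import math

def scan_page_rcus(page_size_kb, eventually_consistent=True):
    """RCUs consumed by reading one scan page of the given size."""
    read_units = math.ceil(page_size_kb / 4)  # one read unit per 4 KB
    return read_units / 2 if eventually_consistent else read_units

# A full 1 MB default page versus a smaller page set via the Limit parameter.
print(scan_page_rcus(1024))  # full 1 MB page consumes 128 RCUs
print(scan_page_rcus(40))    # a 40 KB page consumes only 5 RCUs
```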
A company has implemented AWS CodePipeline to automate its release pipelines. The Development team is writing an AWS Lambda function that will send notifications for state changes of each of the actions in the stages.
Which steps must be taken to associate the Lambda function with the event source?
Create an event trigger and specify the Lambda function from the CodePipeline console
Create a trigger that invokes the Lambda function from the Lambda console by selecting CodePipeline as the event source
Create an Amazon CloudWatch Events rule that uses CodePipeline as an event source
Create an Amazon CloudWatch alarm that monitors status changes in CodePipeline and triggers the Lambda function
Create an Amazon CloudWatch Events rule that uses CodePipeline as an event source
Amazon CloudWatch Events help you to respond to state changes in your AWS resources. When your resources change state, they automatically send events into an event stream. You can create rules that match selected events in the stream and route them to your AWS Lambda function to take action.
AWS CodePipeline can be configured as an event source in CloudWatch Events and can then send notifications using a service such as Amazon SNS.
A Developer is creating a DynamoDB table for storing transaction logs. The table has 10 write capacity units (WCUs). The Developer needs to configure the read capacity units (RCUs) for the table in order to MAXIMIZE the number of requests allowed per second. Which of the following configurations should the Developer use?
Strongly consistent reads of 5 RCUs reading items that are 4 KB in size
Eventually consistent reads of 15 RCUs reading items that are 1 KB in size
Strongly consistent reads of 15 RCUs reading items that are 1 KB in size
Eventually consistent reads of 5 RCUs reading items that are 4 KB in size
Eventually consistent reads of 15 RCUs reading items that are 1 KB in size
· Eventually consistent, 15 RCUs, 1 KB item = 30 items read per second.
· Strongly consistent, 15 RCUs, 1 KB item = 15 items read per second.
· Eventually consistent, 5 RCUs, 4 KB item = 10 items read per second.
· Strongly consistent, 5 RCUs, 4 KB item = 5 items read per second.
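All four figures above come from the same formula; a sketch:

```python
import math

def items_per_second(rcus, item_size_kb, strongly_consistent):
    """Max items read per second: one RCU buys one 4 KB strongly
    consistent read, or two 4 KB eventually consistent reads."""
    units_per_item = math.ceil(item_size_kb / 4)
    reads_per_rcu = 1 if strongly_consistent else 2
    return rcus * reads_per_rcu // units_per_item

print(items_per_second(15, 1, False))  # 30
print(items_per_second(15, 1, True))   # 15
print(items_per_second(5, 4, False))   # 10
print(items_per_second(5, 4, True))    # 5
```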
A company manages multiple AWS accounts across multiple regions. The operations team require a single operational dashboard that displays some key performance metrics from these accounts and regions. What is the SIMPLEST solution?
Create an AWS Lambda function that collects metrics from each account and region and pushes the metrics to the account where the dashboard has been created
Create an Amazon CloudWatch dashboard in one account and region and import the data from the other accounts and regions
Create an Amazon CloudTrail trail that applies to all regions and deliver the logs to a single Amazon S3 bucket. Create a dashboard using the data in the bucket
Create an Amazon CloudWatch cross-account cross-region dashboard
Create an Amazon CloudWatch cross-account cross-region dashboard
A developer needs to use the attribute of an Amazon S3 object that uniquely identifies the object in a bucket. Which of the following represents an Object Key?
Development/Projects.xls
Project=Blue
s3://dctlabs/Development/Projects.xls
arn:aws:s3:::dctlabs
Development/Projects.xls
A company maintains a REST API service using Amazon API Gateway with native API key validation. The company recently launched a new registration page, which allows users to sign up for the service. The registration page creates a new API key using CreateApiKey and sends the new key to the user. When the user attempts to call the API using this key, the user receives a 403 Forbidden error. Existing users are unaffected and can still call the API.
What code updates will grant these new users access to the API?
The createDeployment method must be called so the API can be redeployed to include the newly created API key
The importApiKeys method must be called to import all newly created API keys into the current stage of the API
The createUsagePlanKey method must be called to associate the newly created API key with the correct usage plan
The updateAuthorizer method must be called to update the API’s authorizer to include the newly created API key
The createUsagePlanKey method must be called to associate the newly created API key with the correct usage plan
A usage plan specifies who can access one or more deployed API stages and methods—and also how much and how fast they can access them. The plan uses API keys to identify API clients and meters access to the associated API stages for each key. It also lets you configure throttling limits and quota limits that are enforced on individual client API keys.
CORRECT: “The createUsagePlanKey method must be called to associate the newly created API key with the correct usage plan” is the correct answer.
A Developer is creating a serverless application that will process sensitive data. The AWS Lambda function must encrypt all data that is written to /tmp storage at rest.
How should the Developer encrypt this data?
Configure Lambda to use an AWS KMS customer managed customer master key (CMK). Use the CMK to generate a data key and encrypt all data prior to writing to /tmp storage.
Attach the Lambda function to a VPC and encrypt Amazon EBS volumes at rest using the AWS managed CMK. Mount the EBS volume to /tmp.
Enable default encryption on an Amazon S3 bucket using an AWS KMS customer managed customer master key (CMK). Mount the S3 bucket to /tmp.
Enable secure connections over HTTPS for the AWS Lambda API endpoints using Transport Layer Security (TLS).
CORRECT: “Configure Lambda to use an AWS KMS customer managed customer master key (CMK). Use the CMK to generate a data key and encrypt all data prior to writing to /tmp storage” is the correct answer.