Question 4 Flashcards

(10 cards)

1
Q

A company has created a fitness tracking mobile app that uses a serverless REST API. The app consists of an Amazon API Gateway API with a Regional endpoint, AWS Lambda functions, and an Amazon Aurora MySQL database cluster. The company recently secured a deal with a sports company to promote the new app, which resulted in a significant increase in the number of requests received.
Unfortunately, the increase in traffic resulted in sporadic database memory errors and performance degradation. The traffic included significant numbers of HTTP requests querying the same data in short bursts of traffic during weekends and holidays.
The company needs to improve its ability to support the additional usage while minimizing the increase in costs associated with the solution.
Which strategy meets these requirements?
• Implement an Amazon ElastiCache for Redis cache to store the results of the database calls. Modify the Lambda functions to use the cache.
• Create usage plans in API Gateway and distribute API keys to clients. Configure metered access to the production stage.
• Convert the API Gateway Regional endpoint to an edge-optimized endpoint. Enable caching in the production stage.
• Modify the instance type of the Aurora database cluster to use an instance with more memory.

A

• Convert the API Gateway Regional endpoint to an edge-optimized endpoint. Enable caching in the production stage.
(Correct)

Explanation
An edge-optimized API endpoint is best for geographically distributed clients because API requests are routed to the nearest CloudFront point of presence (POP). A mobile app with a globally distributed user base is a good use case for this type of endpoint. A Regional endpoint is best suited to traffic that originates within the Region.
You can enable API caching in Amazon API Gateway to cache your endpoint’s responses. With caching, you can reduce the number of calls made to your endpoint and also improve the latency of requests to your API.
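As an illustration of how these two changes might be applied, here is a minimal boto3 sketch; the REST API ID, stage name, and cache size are placeholder values rather than anything stated in the question.

```python
import boto3

apigw = boto3.client("apigateway")

# Convert the Regional endpoint to an edge-optimized endpoint
# (the API ID is a placeholder).
apigw.update_rest_api(
    restApiId="a1b2c3d4e5",
    patchOperations=[
        {
            "op": "replace",
            "path": "/endpointConfiguration/types/REGIONAL",
            "value": "EDGE",
        }
    ],
)

# Enable caching on the production stage and set an example cache size (GB).
apigw.update_stage(
    restApiId="a1b2c3d4e5",
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},
    ],
)
```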

CORRECT: "Convert the API Gateway Regional endpoint to an edge-optimized endpoint. Enable caching in the production stage" is the correct answer.
INCORRECT: "Create usage plans in API Gateway and distribute API keys to clients. Configure metered access to the production stage" is incorrect. This does not support the additional usage; it limits additional usage.
INCORRECT: "Implement an Amazon ElastiCache for Redis cache to store the results of the database calls. Modify the Lambda functions to use the cache" is incorrect. This would increase the costs associated with the solution, as the ElastiCache cluster could be expensive.
INCORRECT: "Modify the instance type of the Aurora database cluster to use an instance with more memory" is incorrect. This would mean the database cluster costs more at all times, not just when the traffic increases.
References:
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-caching.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-networking-content-delivery/

2
Q

A Solutions Architect must enable AWS CloudHSM M of N access control, also known as quorum authentication, to allow security officers to make administrative changes to a hardware security module (HSM). The new security policy states that at least two of the four security officers must authorize any administrative changes to CloudHSM. This is the first time this configuration has been set up. Which steps must be taken to enable quorum authentication? (Select TWO.)
• Use AWS IAM to create a policy that requires a minimum of three crypto officers (COs) to configure the minimum number of approvals required to perform HSM user management operations.
• Using the cloudhsm_mgmt_util command line tool, enable encrypted communication, login as a CO, and register a key for signing with the registerMofnPubKey command.
• Using the cloudhsm_mgmt_util command line tool, enable encrypted communication, login as a CO, and set the Quorum minimum value to two using the setMValue command.
• Edit the cloudhsm_client.cfg document to import a key and register the key for signing.
• Using the cloudhsm_mgmt_util command line tool, enable encrypted communication, login as a CO, and get a Quorum token with the getToken command.

A

• Using the cloudhsm_mgmt_util command line tool, enable encrypted communication, login as a CO, and register a key for signing with the registerMofnPubKey command.
(Correct)
• Using the cloudhsm_mgmt_util command line tool, enable encrypted communication, login as a CO, and set the Quorum minimum value to two using the setMValue command.
(Correct)

Explanation
The first-time setup for M of N authentication involves creating and registering a key for signing and setting the minimum value on the HSM. This involves the following high-level steps:
• To use quorum authentication, each CO must create an asymmetric key for signing (a signing key). This is done outside of the HSM, and the public part of the key is then registered on the HSM.
• A CO must log in to the HSM and then set the quorum minimum value, also known as the m value. This is the minimum number of CO approvals that are required to perform HSM user management operations. Any CO on the HSM can set the quorum minimum value, including COs that have not registered a key for signing.
CORRECT: "Using the cloudhsm_mgmt_util command line tool, enable encrypted communication, login as a CO, and register a key for signing with the registerMofnPubKey command" is a correct answer.
CORRECT: "Using the cloudhsm_mgmt_util command line tool, enable encrypted communication, login as a CO, and set the Quorum minimum value to two using the setMValue command" is a correct answer.
INCORRECT: "Using the cloudhsm_mgmt_util command line tool, enable encrypted communication, login as a CO, and get a Quorum token with the getToken command" is incorrect. The getToken command is used by a CO to get a token after quorum authentication has been set up successfully.
INCORRECT: "Use AWS IAM to create a policy that requires a minimum of three crypto officers (COs) to configure the minimum number of approvals required to perform HSM user management operations" is incorrect. IAM is not used to configure the quorum minimum value.
INCORRECT: "Edit the cloudhsm_client.cfg document to import a key and register the key for signing" is incorrect. This document is used for specifying client-side synchronization for keys and is not related to setting up quorum authentication.
References:
https://docs.aws.amazon.com/cloudhsm/latest/userguide/quorum-authentication-crypto-officers.html#quorum-crypto-officers-use-token
https://docs.aws.amazon.com/cloudhsm/latest/userguide/quorum-authentication-crypto-officers-first-time-setup.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-security-identity-compliance/

3
Q

A Solutions Architect has been tasked with migrating an application to AWS. The application includes a desktop client application and web application. The web application has an uptime SLA of 99.95%. The Solutions Architect must re-architect the application to meet or exceed this SLA.
The application contains a MySQL database running on a single virtual machine. The web application uses multiple virtual machines with a load balancer. Remote users complain about slow load times while using this latency-sensitive application.
The Solutions Architect must minimize changes to the application whilst improving the user experience, minimizing costs, and ensuring the availability requirements are met. Which solution best meets these requirements?
• Migrate the database to a MySQL database in Amazon EC2. Host the web application on automatically scaled Amazon ECS containers behind an Application Load Balancer. Allocate an Amazon WorkSpaces WorkSpace for each end user to improve the user experience.
• Migrate the database to an Amazon EMR cluster with at least two nodes. Deploy the web application on automatically scaled Amazon ECS containers behind an Application Load Balancer. Use Amazon CloudFront to improve the user experience.
• Migrate the database to an Amazon RDS Aurora MySQL configuration. Host the web application on an Auto Scaling configuration of Amazon EC2 instances behind an Application Load Balancer. Use Amazon AppStream 2.0 to improve the user experience.
• Migrate the database to an Amazon RDS MySQL Multi-AZ configuration. Host the web application on automatically scaled AWS Fargate containers behind a Network Load Balancer. Use Amazon ElastiCache to improve the user experience.

A

• Migrate the database to an Amazon RDS Aurora MySQL configuration. Host the web application on an Auto Scaling configuration of Amazon EC2 instances behind an Application Load Balancer. Use Amazon AppStream 2.0 to improve the user experience.
(Correct)

Explanation
The uptime SLA for Amazon RDS is 99.95%, so the managed database option meets the availability requirement without adding a Multi-AZ configuration, which would increase the solution cost. For the compute layer, either containers or EC2 instances could work; using EC2 instances in an Auto Scaling group keeps changes to the application to a minimum. To solve the user experience issues, Amazon AppStream 2.0 should be used.
Amazon AppStream 2.0 is a fully managed, non-persistent application and desktop streaming service. You centrally manage your desktop applications on AppStream 2.0 and securely deliver them to any computer. Each end user has a fluid and responsive experience because applications run on virtual machines optimized for specific use cases, and each streaming session automatically adjusts to network conditions.
CORRECT: "Migrate the database to an Amazon RDS Aurora MySQL configuration. Host the web application on an Auto Scaling configuration of Amazon EC2 instances behind an Application Load Balancer. Use Amazon AppStream 2.0 to improve the user experience" is the correct answer.
INCORRECT: "Migrate the database to a MySQL database in Amazon EC2. Host the web application on automatically scaled Amazon ECS containers behind an Application Load Balancer. Allocate an Amazon WorkSpaces WorkSpace for each end user to improve the user experience" is incorrect. A managed RDS database would be better for the database, and AppStream 2.0 is a better fit than WorkSpaces for delivering the desktop application.
INCORRECT: "Migrate the database to an Amazon EMR cluster with at least two nodes. Deploy the web application on automatically scaled Amazon ECS containers behind an Application Load Balancer. Use Amazon CloudFront to improve the user experience" is incorrect. EMR is a hosted Hadoop framework for running analytics on big data and is not suitable for this workload. CloudFront is not well suited to optimizing performance for desktop applications.
INCORRECT: "Migrate the database to an Amazon RDS MySQL Multi-AZ configuration. Host the web application on automatically scaled AWS Fargate containers behind a Network Load Balancer. Use Amazon ElastiCache to improve the user experience" is incorrect. AppStream 2.0 is a better fit for the desktop application, and ElastiCache will only improve database query performance. Also, Multi-AZ for RDS is not necessary for a 99.95% SLA.
References:
https://aws.amazon.com/rds/sla/
https://aws.amazon.com/appstream2/
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-front-end-web-mobile/
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-database/

4
Q

An application runs on an Amazon EC2 instance with an attached Amazon EBS Provisioned IOPS (PIOPS) volume. The volume is 200 GB in size and has 3,000 IOPS provisioned. The application requires low latency and random access to the data. A Solutions Architect has been asked to consider options for lowering the cost of the storage without impacting performance and durability.
What should the Solutions Architect recommend?
• Change the PIOPS volume for a 1-TB Throughput Optimized HDD (st1) volume.
• Create an Amazon EFS file system with the throughput mode set to Provisioned. Mount the EFS file system to the EC2 operating system.
• Create an Amazon EFS file system with the performance mode set to Max I/O. Mount the EFS file system to the EC2 operating system.
• Change the PIOPS volume for a 1-TB EBS General Purpose SSD (gp2) volume.

A

• Change the PIOPS volume for a 1-TB EBS General Purpose SSD (gp2) volume.
(Correct)

Explanation
The most cost-effective solution is to use an Amazon EBS General Purpose SSD (gp2) volume. The volume should be configured at 1 TB because gp2 volumes provide a baseline of 3 IOPS per GB; at 1 TB this yields at least 3,000 IOPS, matching the provisioned performance of the existing volume.
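As an illustration, an existing PIOPS volume can be modified in place using Elastic Volumes; this boto3 sketch assumes a placeholder volume ID.

```python
import boto3

ec2 = boto3.client("ec2")

# Modify the io1 volume to a 1-TiB gp2 volume; at 3 IOPS per GiB this
# gives a baseline of roughly 3,072 IOPS (the volume ID is a placeholder).
ec2.modify_volume(
    VolumeId="vol-0123456789abcdef0",
    VolumeType="gp2",
    Size=1024,
)
```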
CORRECT: "Change the PIOPS volume for a 1-TB EBS General Purpose SSD (gp2) volume" is the correct answer.
INCORRECT: "Change the PIOPS volume for a 1-TB Throughput Optimized HDD (st1) volume" is incorrect. This volume type supports a maximum of 500 IOPS per volume.
INCORRECT: "Create an Amazon EFS file system with the performance mode set to Max I/O. Mount the EFS file system to the EC2 operating system" is incorrect. EFS will be much more expensive than using a gp2 volume.
INCORRECT: "Create an Amazon EFS file system with the throughput mode set to Provisioned. Mount the EFS file system to the EC2 operating system" is incorrect. EFS will be much more expensive than using a gp2 volume.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-storage/

5
Q

A developer is attempting to access an Amazon S3 bucket in a member account in AWS Organizations. The developer is logged in to the account with user credentials and has received an access denied error with no bucket listed. The developer should have read-only access to all buckets in the account.
A Solutions Architect has reviewed the permissions and found that the developer’s IAM user has been granted read-only access to all S3 buckets in the account.
Which additional steps should the Solutions Architect take to troubleshoot the issue? (Select TWO.)
• Check the ACLs for all S3 buckets.
• Check the SCPs set at the organizational units (OUs).
• Check for the permissions boundaries set for the IAM user.
• Check the bucket policies for all S3 buckets.
• Check if an appropriate IAM role is attached to the IAM user.

A

• Check the SCPs set at the organizational units (OUs).
(Correct)
• Check for the permissions boundaries set for the IAM user.
(Correct)

Explanation
A service control policy (SCP) may have been implemented that limits the API actions that are available for Amazon S3. This will apply to all users in the account regardless of the permissions they have assigned to their user account.
Another potential cause of the issue is that the permissions boundary for the user limits the S3 API actions available to the user. A permissions boundary is an advanced feature for using a managed policy to set the maximum permissions that an identity-based policy can grant to an IAM entity. An entity’s permissions boundary allows it to perform only the actions that are allowed by both its identity-based policies and its permissions boundaries.
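To illustrate where to look, here is a boto3 sketch that inspects both potential causes; the user name and OU ID are placeholders, and the Organizations call must be run from the management account (or a delegated administrator).

```python
import boto3

iam = boto3.client("iam")
orgs = boto3.client("organizations")

# Check whether a permissions boundary is attached to the IAM user.
user = iam.get_user(UserName="developer")["User"]
print(user.get("PermissionsBoundary", "No permissions boundary set"))

# List the SCPs attached to the OU that contains the member account.
scps = orgs.list_policies_for_target(
    TargetId="ou-examplerootid111-exampleouid111",
    Filter="SERVICE_CONTROL_POLICY",
)
for policy in scps["Policies"]:
    print(policy["Name"], policy["Id"])
```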

CORRECT: "Check the SCPs set at the organizational units (OUs)" is a correct answer.
CORRECT: "Check for the permissions boundaries set for the IAM user" is also a correct answer.
INCORRECT: "Check if an appropriate IAM role is attached to the IAM user" is incorrect. The question states that the user is logged in with user credentials and so is not assuming a role.
INCORRECT: "Check the bucket policies for all S3 buckets" is incorrect. The error does not list access denied to any specific bucket, so it is more likely that the user has not been granted the API action to list the buckets.
INCORRECT: "Check the ACLs for all S3 buckets" is incorrect. With a bucket ACL the grantee is an AWS account or one of the predefined groups, not an individual IAM user, and bucket ACLs cannot grant the ability to list the buckets in an account. The user has been unable to list any buckets in this case, so an ACL is unlikely to be the cause.
References:
https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-security-identity-compliance/
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-management-governance/

6
Q

A company runs an application that generates user activity reports and stores them in an Amazon S3 bucket. Users are able to download the reports using the application which generates a signed URL. A user recently reported that the reports of other users can be accessed directly from the S3 bucket. A Solutions Architect reviewed the bucket permissions and discovered that public access is currently enabled.
How can the documents be protected from unauthorized access without modifying the application workflow?
• Use the Block Public Access feature in Amazon S3 to set the IgnorePublicAcls option to TRUE on the bucket.
• Modify the settings on the S3 bucket to enable default encryption for all objects.
• Configure server access logging and monitor the log files to check for unauthorized access.
• Use the Block Public Access feature in Amazon S3 to set the BlockPublicPolicy option to TRUE on the bucket.

A

• Use the Block Public Access feature in Amazon S3 to set the IgnorePublicAcls option to TRUE on the bucket.
(Correct)
Explanation
The S3 bucket is allowing public access and this must be immediately disabled. Setting the IgnorePublicAcls option to TRUE causes Amazon S3 to ignore all public ACLs on a bucket and any objects that it contains.
The other settings you can configure with the Block Public Access feature are:
- BlockPublicAcls – PUT bucket ACL and PUT object requests are blocked if they grant public access.
- BlockPublicPolicy – Rejects requests to PUT a bucket policy if it grants public access.
- RestrictPublicBuckets – Restricts access to principals in the bucket owner's AWS account.
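A minimal boto3 sketch of applying this setting follows; the bucket name is a placeholder. Note that PutPublicAccessBlock replaces the bucket's entire public access block configuration, so the other options are set explicitly here.

```python
import boto3

s3 = boto3.client("s3")

# Ignore all public ACLs on the bucket and the objects it contains.
# The other three options are left disabled, matching the answer, which
# sets only IgnorePublicAcls to TRUE.
s3.put_public_access_block(
    Bucket="example-reports-bucket",
    PublicAccessBlockConfiguration={
        "IgnorePublicAcls": True,
        "BlockPublicAcls": False,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)
```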
CORRECT: "Use the Block Public Access feature in Amazon S3 to set the IgnorePublicAcls option to TRUE on the bucket" is the correct answer.
INCORRECT: "Use the Block Public Access feature in Amazon S3 to set the BlockPublicPolicy option to TRUE on the bucket" is incorrect. This option only rejects requests to PUT a bucket policy that grants public access, which is not relevant to the workflow in this scenario.
INCORRECT: "Configure server access logging and monitor the log files to check for unauthorized access" is incorrect. This will only identify unauthorized access; it does not block it.
INCORRECT: "Modify the settings on the S3 bucket to enable default encryption for all objects" is incorrect. Encryption will not prevent public access; it just encrypts the data at rest in the S3 bucket.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/access-control-block-public-access.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-storage/

7
Q

A company provides a service that allows users to upload high-resolution product images using an app on their phones for a price matching service. The service currently uses Amazon S3 in the us-west-1 Region. The company has expanded to Europe and users in European countries are experiencing significant delays when uploading images.
Which combination of changes can a Solutions Architect make to improve the upload times for the images? (Select TWO.)
• Configure the client application to use byte-range fetches.
• Redeploy the application to use Amazon S3 multipart upload.
• Create an Amazon CloudFront distribution with the S3 bucket as an origin.
• Modify the Amazon S3 bucket to use Intelligent Tiering.
• Configure the S3 bucket to use S3 Transfer Acceleration.

A

• Redeploy the application to use Amazon S3 multipart upload.
(Correct)
• Configure the S3 bucket to use S3 Transfer Acceleration.
(Correct)

Explanation
Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between a client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront’s globally distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path.
Transfer Acceleration is a good solution for the following use cases:
- You have customers that upload to a centralized bucket from all over the world.
- You transfer gigabytes to terabytes of data on a regular basis across continents.
- You are unable to utilize all of your available bandwidth over the Internet when uploading to Amazon S3.
Multipart upload transfers parts of the file in parallel and can speed up performance. This should definitely be built into the application code. Multipart upload also handles the failure of any parts gracefully, allowing for those parts to be retransmitted.
Transfer Acceleration in combination with multipart upload will offer significant speed improvements when uploading data.
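A boto3 sketch of both changes is shown below; the bucket name, file name, and the multipart threshold and concurrency values are illustrative placeholders.

```python
import boto3
from boto3.s3.transfer import TransferConfig
from botocore.config import Config

# Enable Transfer Acceleration on the bucket (bucket name is a placeholder).
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket="example-product-images",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Upload through the accelerate endpoint, using multipart upload for any
# file larger than 8 MB with up to 4 parts in flight at once.
s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3.upload_file(
    "product-photo.jpg",
    "example-product-images",
    "uploads/product-photo.jpg",
    Config=TransferConfig(multipart_threshold=8 * 1024 * 1024, max_concurrency=4),
)
```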
CORRECT: "Configure the S3 bucket to use S3 Transfer Acceleration" is a correct answer.
CORRECT: "Redeploy the application to use Amazon S3 multipart upload" is also a correct answer.
INCORRECT: "Create an Amazon CloudFront distribution with the S3 bucket as an origin" is incorrect. CloudFront can offer performance improvements for downloading data, but to improve upload transfer times, Transfer Acceleration should be used.
INCORRECT: "Configure the client application to use byte-range fetches" is incorrect. This is a technique that is used when reading (not writing) data to fetch only the parts of the file that are required.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/optimizing-performance.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html

8
Q

A company has several IoT-enabled devices and sells them to customers around the globe. Every 5 minutes, each IoT device sends a data file that includes the device status and other information to an Amazon S3 bucket. Every midnight, a Python cron job runs on an Amazon EC2 instance to read and process each data file in the S3 bucket and load the values into a designated Amazon RDS database. The cron job takes about 10 minutes to process a day's worth of data. After each data file is processed, it is eventually deleted from the S3 bucket. The company wants to expedite the process and access the processed data in the Amazon RDS database as soon as possible.
Which of the following actions would you implement to achieve this requirement with the LEAST amount of effort?
• Convert the Python script cron job to an AWS Lambda function. Configure AWS CloudTrail to log data events of the Amazon S3 bucket. Set up a CloudWatch Events rule to trigger the Lambda function whenever an upload event on the S3 bucket occurs.
• Convert the Python script cron job to an AWS Lambda function. Configure the Amazon S3 bucket event notifications to trigger the Lambda function whenever an object is uploaded to the bucket.
• Convert the Python script cron job to an AWS Lambda function. Create an AWS CloudWatch Events rule scheduled at 1-minute intervals and trigger the Lambda function. Create parallel CloudWatch rules that trigger the same Lambda function to further reduce the processing time.
• Increase the Amazon EC2 instance size and spawn more instances to speed up the processing of the data files. Set the Python script cron job schedule to a 1-minute interval to further improve the access time.

A

• Convert the Python script cron job to an AWS Lambda function. Configure the Amazon S3 bucket event notifications to trigger the Lambda function whenever an object is uploaded to the bucket.
(Correct)

Explanation
The Amazon S3 notification feature enables you to receive notifications when certain events happen in your bucket. To enable notifications, you must first add a notification configuration that identifies the events you want Amazon S3 to publish and the destinations where you want Amazon S3 to send the notifications. You store this configuration in the notification subresource that is associated with a bucket. Amazon S3 event notifications are designed to be delivered at least once. Typically, event notifications are delivered in seconds but can sometimes take a minute or longer.

Currently, Amazon S3 can publish notifications for the following events:
- New object created events – Amazon S3 supports multiple APIs to create objects. You can request notification when only a specific API is used (for example, s3:ObjectCreated:Put), or you can use a wildcard (for example, s3:ObjectCreated:*) to request notification when an object is created regardless of the API used.
- Object removal events – Amazon S3 supports deletes of versioned and unversioned objects. For information about object versioning, see Object Versioning and Using versioning.
- Restore object events – Amazon S3 supports the restoration of objects archived to the S3 Glacier storage classes. You request to be notified of object restoration completion by using s3:ObjectRestore:Completed. You use s3:ObjectRestore:Post to request notification of the initiation of a restore.
- Reduced Redundancy Storage (RRS) object lost events – Amazon S3 sends a notification message when it detects that an object of the RRS storage class has been lost.
- Replication events – Amazon S3 sends event notifications for replication configurations that have S3 Replication Time Control (S3 RTC) enabled. It sends these notifications when an object fails replication, when an object exceeds the 15-minute threshold, when an object is replicated after the 15-minute threshold, and when an object is no longer tracked by replication metrics. It publishes a second event when that object replicates to the destination Region.
Enabling notifications is a bucket-level operation; that is, you store notification configuration information in the notification subresource associated with a bucket. After creating or changing the bucket notification configuration, typically you need to wait 5 minutes for the changes to take effect. Amazon S3 supports the following destinations where it can publish events - Amazon Simple Notification Service (Amazon SNS) topic, Amazon Simple Queue Service (Amazon SQS) queue, and AWS Lambda.
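A boto3 sketch of wiring the notification to the Lambda function is shown below; the bucket name and function ARN are placeholders, and the Lambda function's resource-based policy must already allow s3.amazonaws.com to invoke it.

```python
import boto3

s3 = boto3.client("s3")

# Invoke the processing Lambda function whenever any object is created
# in the bucket (bucket name and function ARN are placeholders).
s3.put_bucket_notification_configuration(
    Bucket="example-iot-device-data",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:process-device-file",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
```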
Therefore, the correct answer is: "Convert the Python script cron job to an AWS Lambda function. Configure the Amazon S3 bucket event notifications to trigger the Lambda function whenever an object is uploaded to the bucket" because this provides the best processing and access time. Each data file will be processed almost immediately after it is uploaded to the S3 bucket.
The option that says: "Convert the Python script cron job to an AWS Lambda function. Configure AWS CloudTrail to log data events of the Amazon S3 bucket. Set up a CloudWatch Events rule to trigger the Lambda function whenever an upload event on the S3 bucket occurs" is incorrect. Although this is possible, you do not have to use CloudTrail and CloudWatch Events to satisfy the given requirement, and this solution entails many more steps. You can simply use the Amazon S3 event notification feature to trigger the Lambda function directly.
The option that says: "Increase the Amazon EC2 instance size and spawn more instances to speed up the processing of the data files. Set the Python script cron job schedule to a 1-minute interval to further improve the access time" is incorrect. This solution is unreliable since the Amazon EC2 instances can process the same data file at the same time, and because of the limitations of cron, the minimum interval for processing is 1 minute.
The option that says: "Convert the Python script cron job to an AWS Lambda function. Create an AWS CloudWatch Events rule scheduled at 1-minute intervals and trigger the Lambda function. Create parallel CloudWatch rules that trigger the same Lambda function to further reduce the processing time" is incorrect. A scheduled CloudWatch Events rule supports a minimum interval of 1 minute, whereas Amazon S3 event notifications result in near real-time processing of the data files.

References:

https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html#with-s3-example-configure-event-source
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html

Check out this Amazon S3 Cheat Sheet:
https://tutorialsdojo.com/amazon-s3/

9
Q

A mobile app has become extremely popular with global usage increasing to millions of users. The app allows users to capture and upload funny images of animals and add captions. The current application runs on Amazon EC2 instances with Amazon EFS storage behind an Application Load Balancer. The data access patterns are unpredictable and during peak periods the application has experienced performance issues.
Which changes should a Solutions Architect make to the application architecture to control costs and improve performance?
• Use an Amazon S3 bucket for static images and use the Intelligent Tiering storage class. Use an Amazon CloudFront distribution in front of the S3 bucket and the ALB.
• Create an Amazon CloudFront distribution and place the ALB behind the distribution. Store static content in Amazon S3 in an Infrequent Access storage class.
• Use an Amazon S3 bucket for static images and use the Intelligent Tiering storage class. Use an Amazon CloudFront distribution in front of the S3 bucket and AWS Lambda for processing the images.
• Place AWS Global Accelerator in front of the ALB. Migrate the static content to Amazon FSx for Windows File Server. Use an AWS Lambda function to reduce image size during the migration process.

A

• Use an Amazon S3 bucket for static images and use the Intelligent Tiering storage class. Use an Amazon CloudFront distribution in front of the S3 bucket and AWS Lambda for processing the images.
(Correct)
Explanation
The best option for reducing costs and improving performance would be to move to a fully serverless solution. Amazon S3 can store the image files and CloudFront can be used to improve performance for the global user base. AWS Lambda is ideal for processing the images. The solution will scale seamlessly and handle peak loads and is also low cost.
CORRECT: "Use an Amazon S3 bucket for static images and use the Intelligent Tiering storage class. Use an Amazon CloudFront distribution in front of the S3 bucket and AWS Lambda for processing the images" is the correct answer.
INCORRECT: "Use an Amazon S3 bucket for static images and use the Intelligent Tiering storage class. Use an Amazon CloudFront distribution in front of the S3 bucket and the ALB" is incorrect. This is a good solution but not quite as cost-effective as using Lambda in place of the ALB and EC2 instances.
INCORRECT: "Create an Amazon CloudFront distribution and place the ALB behind the distribution. Store static content in Amazon S3 in an Infrequent Access storage class" is incorrect. The Infrequent Access storage class incurs retrieval fees, so with unpredictable access patterns the data costs could be higher than with other storage classes.
INCORRECT: "Place AWS Global Accelerator in front of the ALB. Migrate the static content to Amazon FSx for Windows File Server. Use an AWS Lambda function to reduce image size during the migration process" is incorrect. Global Accelerator is best suited to use cases where you need to leverage the AWS global network to improve performance to application endpoints across multiple Regions. In this case it represents a more costly solution.
References:
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serve-static-website/
https://aws.amazon.com/premiumsupport/knowledge-center/lambda-configure-s3-event-notification/
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-networking-content-delivery/

10
Q

A company has a requirement to store documents that will be accessed by a serverless application. The documents will be accessed frequently for the first 3 months, and rarely after that. The documents must be retained for 7 years.
What is the MOST cost-effective solution to meet these requirements?
• Store the documents in Amazon EFS. Create a cron job to move the documents that are older than 3 months to Amazon S3 Glacier. Create an AWS Lambda function to delete the documents in S3 Glacier that are older than 7 years.
• Store the documents in a secured Amazon S3 bucket with a lifecycle policy to move the documents that are older than 3 months to Amazon S3 Glacier, then expire the documents from Amazon S3 Glacier that are more than 7 years old.
• Store the documents in a secured Amazon S3 bucket with a lifecycle policy to move the documents that are older than 3 months to Amazon S3 Glacier. Create an AWS Lambda function to delete the documents in S3 Glacier that are older than 7 years.
• Store the documents in an encrypted EBS volume and create a cron job to delete the documents after 7 years.

A

• Store the documents in a secured Amazon S3 bucket with a lifecycle policy to move the documents that are older than 3 months to Amazon S3 Glacier, then expire the documents from Amazon S3 Glacier that are more than 7 years old.
(Correct)
Explanation
An S3 Lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. Actions are to either transition objects to another storage class or expire (delete) the objects.
In this case the lifecycle policy can be created to move the objects to S3 Glacier (lower cost archival) when they are no longer frequently accessed, and then expire the objects when they no longer need to be retained.
Amazon S3 supports a waterfall model for transitions between storage classes.
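A boto3 sketch of such a lifecycle configuration follows; the bucket name is a placeholder, and 90 days and 2,555 days (roughly 7 years) are used as the transition and expiration points.

```python
import boto3

s3 = boto3.client("s3")

# Transition objects to S3 Glacier after about 3 months and delete them
# after about 7 years (bucket name is a placeholder).
s3.put_bucket_lifecycle_configuration(
    Bucket="example-documents-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 2555},
            }
        ]
    },
)
```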

CORRECT: "Store the documents in a secured Amazon S3 bucket with a lifecycle policy to move the documents that are older than 3 months to Amazon S3 Glacier, then expire the documents from Amazon S3 Glacier that are more than 7 years old" is the correct answer.
INCORRECT: "Store the documents in an encrypted EBS volume and create a cron job to delete the documents after 7 years" is incorrect. Amazon EBS volumes must be mounted to EC2 instances, and this is not a cost-effective solution.
INCORRECT: "Store the documents in Amazon EFS. Create a cron job to move the documents that are older than 3 months to Amazon S3 Glacier. Create an AWS Lambda function to delete the documents in S3 Glacier that are older than 7 years" is incorrect. Amazon EFS file systems must be mounted to EC2 instances, and this is not a cost-effective solution.
INCORRECT: "Store the documents in a secured Amazon S3 bucket with a lifecycle policy to move the documents that are older than 3 months to Amazon S3 Glacier. Create an AWS Lambda function to delete the documents in S3 Glacier that are older than 7 years" is incorrect. It is not necessary to use a Lambda function to delete the objects; a lifecycle policy can expire them instead and is more cost-effective.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
Save time with our exam-specific cheat sheets:

https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-storage/
