Set 2 Kindle SAA-C03 Practice Test Flashcards

1
Q

A group of business analysts perform read-only SQL queries on an Amazon RDS database. The queries have become quite numerous and the database has experienced some performance degradation. The queries must be run against the latest data. A Solutions Architect must solve the performance problems with minimal changes to the existing web application. What should the Solutions Architect recommend?

A. Export the data to Amazon S3 and instruct the business analysts to run their queries using Amazon Athena.
B. Load the data into an Amazon Redshift cluster and instruct the business analysts to run their queries against the cluster.
C. Load the data into Amazon ElastiCache and instruct the business analysts to run their queries against the ElastiCache endpoint.
D. Create a read replica of the primary database and instruct the business analysts to direct queries to the replica.

A

D. Create a read replica of the primary database and instruct the business analysts to direct queries to the replica.

Explanation:
The performance issues can be easily resolved by offloading the SQL queries the business analysts are performing to a read replica. This ensures that the data being queried is up to date and the existing web application does not require any modifications.
CORRECT: “Create a read replica of the primary database and instruct the business analysts to direct queries to the replica” is the correct answer.
INCORRECT: “Export the data to Amazon S3 and instruct the business analysts to run their queries using Amazon Athena” is incorrect. The queries must run against the latest data, so this method would require constant exporting of the data.
INCORRECT: “Load the data into an Amazon Redshift cluster and instruct the business analysts to run their queries against the cluster” is incorrect. This solution also requires exporting and loading the data, which means it will become out of date over time.
INCORRECT: “Load the data into Amazon ElastiCache and instruct the business analysts to run their queries against the ElastiCache endpoint” is incorrect. It is much easier to create a read replica, and ElastiCache requires updates to the application code so it should be avoided in this case.

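As a rough sketch of how this could be set up with boto3 (the instance identifiers below are hypothetical):

    import boto3

    rds = boto3.client("rds")

    # Create a read replica of the primary database for the analysts' queries
    response = rds.create_db_instance_read_replica(
        DBInstanceIdentifier="analytics-replica",
        SourceDBInstanceIdentifier="production-db",
    )

    # The analysts then point their SQL clients at the replica's endpoint
    print(response["DBInstance"]["DBInstanceIdentifier"])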

2
Q

A company is planning to upload a large quantity of sensitive data to Amazon S3. The company’s security department require that the data is encrypted before it is uploaded. Which option meets these requirements?

A. Use server-side encryption with customer-provided encryption keys.
B. Use client-side encryption with a master key stored in AWS KMS.
C. Use client-side encryption with Amazon S3 managed encryption keys.
D. Use server-side encryption with keys stored in KMS.

A

B. Use client-side encryption with a master key stored in AWS KMS.

Explanation:
The requirement is that the objects must be encrypted before they are uploaded. The only option presented that meets this requirement is to use client-side encryption. You then have two options for the keys you use to perform the encryption: use a customer master key (CMK) stored in AWS Key Management Service (AWS KMS), or use a master key that you store within your application. In this case the correct answer is to use an AWS KMS key. Note that you cannot use client-side encryption with keys managed by Amazon S3.
CORRECT: “Use client-side encryption with a master key stored in AWS KMS” is the correct answer.
INCORRECT: “Use client-side encryption with Amazon S3 managed encryption keys” is incorrect. You cannot use S3 managed keys with client-side encryption.
INCORRECT: “Use server-side encryption with customer-provided encryption keys” is incorrect. With this option the encryption takes place after the data is uploaded to S3.
INCORRECT: “Use server-side encryption with keys stored in KMS” is incorrect. With this option the encryption takes place after the data is uploaded to S3.

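One possible client-side approach (a sketch, not the only tooling option) uses the AWS Encryption SDK for Python with a KMS key so that only ciphertext ever leaves the client; the key ARN and bucket name are placeholders:

    import boto3
    import aws_encryption_sdk
    from aws_encryption_sdk import CommitmentPolicy

    KMS_KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE"  # placeholder

    client = aws_encryption_sdk.EncryptionSDKClient(
        commitment_policy=CommitmentPolicy.REQUIRE_ENCRYPT_REQUIRE_DECRYPT
    )
    key_provider = aws_encryption_sdk.StrictAwsKmsMasterKeyProvider(
        key_ids=[KMS_KEY_ARN]
    )

    # Encrypt locally before the upload ever happens
    ciphertext, _header = client.encrypt(
        source=b"sensitive data", key_provider=key_provider
    )

    boto3.client("s3").put_object(
        Bucket="example-bucket", Key="data.enc", Body=ciphertext
    )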

3
Q

An application running on Amazon ECS processes data and then writes objects to an Amazon S3 bucket. The application requires permissions to make the S3 API calls. How can a Solutions Architect ensure the application has the required permissions?

A. Update the S3 policy in IAM to allow read/write access from Amazon ECS, and then relaunch the container.
B. Create a set of Access Keys with read/write permissions to the bucket and update the task credential ID.
C. Create an IAM role that has read/write permissions to the bucket and update the task definition to specify the role as the taskRoleArn.
D. Attach an IAM policy with read/write permissions to the bucket to an IAM group and add the container instances to the group.

A

C. Create an IAM role that has read/write permissions to the bucket and update the task definition to specify the role as the taskRoleArn.

Explanation:
With IAM roles for Amazon ECS tasks, you can specify an IAM role that can be used by the containers in a task. Applications must sign their AWS API requests with AWS credentials, and this feature provides a strategy for managing credentials for your applications to use, similar to the way that Amazon EC2 instance profiles provide credentials to EC2 instances. You define the IAM role to use in your task definitions, or you can use a taskRoleArn override when running a task manually with the RunTask API operation. Note that there are instance roles and task roles that you can assign in ECS when using the EC2 launch type. The task role is better when you need to assign permissions for just that specific task.
CORRECT: “Create an IAM role that has read/write permissions to the bucket and update the task definition to specify the role as the taskRoleArn” is the correct answer.
INCORRECT: “Update the S3 policy in IAM to allow read/write access from Amazon ECS, and then relaunch the container” is incorrect. Policies must be assigned to tasks using IAM roles, which is not mentioned here.
INCORRECT: “Create a set of Access Keys with read/write permissions to the bucket and update the task credential ID” is incorrect. You cannot update a task credential ID with access keys; roles should be used instead.
INCORRECT: “Attach an IAM policy with read/write permissions to the bucket to an IAM group and add the container instances to the group” is incorrect. You cannot add container instances to an IAM group.

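A minimal sketch of registering a task definition that specifies a task role (the family, image, and role ARN are hypothetical):

    import boto3

    ecs = boto3.client("ecs")

    # The taskRoleArn grants the containers in this task S3 read/write access
    ecs.register_task_definition(
        family="s3-processor",
        taskRoleArn="arn:aws:iam::111122223333:role/s3-readwrite-task-role",
        networkMode="awsvpc",
        containerDefinitions=[
            {
                "name": "app",
                "image": "example/app:latest",
                "essential": True,
                "memory": 512,
            }
        ],
    )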

4
Q

An application upgrade caused some issues with stability. The application owner enabled logging and has generated a 5 GB log file in an Amazon S3 bucket. The log file must be securely shared with the application vendor to troubleshoot the issues. What is the MOST secure way to share the log file?

A. Create access keys using an administrative account and share the access key ID and secret access key with the vendor.
B. Enable default encryption for the bucket and public access. Provide the S3 URL of the file to the vendor.
C. Create an IAM user for the vendor to provide access to the S3 bucket and the application. Enforce multi-factor authentication.
D. Generate a presigned URL and ask the vendor to download the log file before the URL expires.

A

D. Generate a presigned URL and ask the vendor to download the log file before the URL expires.

Explanation:
A presigned URL gives you access to the object identified in the URL. When you create a presigned URL, you must provide your security credentials and then specify a bucket name, an object key, an HTTP method (for example, GET for downloading objects), and an expiration date and time. The presigned URL is valid only for the specified duration; that is, the action must be started before the expiration date and time. This is the most secure way to provide the vendor with time-limited access to the log file in the S3 bucket.
CORRECT: “Generate a presigned URL and ask the vendor to download the log file before the URL expires” is the correct answer.
INCORRECT: “Create an IAM user for the vendor to provide access to the S3 bucket and the application. Enforce multi-factor authentication” is incorrect. This is less secure as you have to create an account to access AWS and then ensure you lock down the account appropriately.
INCORRECT: “Create access keys using an administrative account and share the access key ID and secret access key with the vendor” is incorrect. This is extremely insecure as the access keys would provide administrative permissions to AWS and should never be shared.
INCORRECT: “Enable default encryption for the bucket and public access. Provide the S3 URL of the file to the vendor” is incorrect. Encryption does not help here as the bucket would be public and anyone could access it.

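Generating such a URL is a one-liner with boto3; a sketch with placeholder bucket and key names:

    import boto3

    s3 = boto3.client("s3")

    # URL allows a GET on this one object and expires after 7 days (in seconds)
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "app-logs-bucket", "Key": "logs/app-upgrade.log"},
        ExpiresIn=7 * 24 * 3600,
    )
    print(url)  # share this URL with the vendor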

5
Q

A company has a file share on a Microsoft Windows Server in an on-premises data center. The server uses a local network attached storage (NAS) device to store several terabytes of files. The management team require a reduction in the data center footprint and to minimize storage costs by moving on-premises storage to AWS. What should a Solutions Architect do to meet these requirements?

A. Create an Amazon EFS volume and use an IPSec VPN.
B. Configure an AWS Storage Gateway file gateway.
C. Create an Amazon S3 bucket and an S3 gateway endpoint.
D. Configure an AWS Storage Gateway as a volume gateway.

A

B. Configure an AWS Storage Gateway file gateway.

Explanation:
An AWS Storage Gateway file gateway provides your applications a file interface to seamlessly store files as objects in Amazon S3, and to access them using industry-standard file protocols. This removes the files from the on-premises NAS device and provides a method of directly mounting the file share for on-premises servers and clients.
CORRECT: “Configure an AWS Storage Gateway file gateway” is the correct answer.
INCORRECT: “Configure an AWS Storage Gateway as a volume gateway” is incorrect. A volume gateway uses block-based protocols. In this case we are replacing a NAS device, which uses file-level protocols, so the best option is a file gateway.
INCORRECT: “Create an Amazon EFS volume and use an IPSec VPN” is incorrect. EFS can be mounted over a VPN but it would have more latency than using a storage gateway.
INCORRECT: “Create an Amazon S3 bucket and an S3 gateway endpoint” is incorrect. S3 is an object-level storage system so it is not suitable for this use case. A gateway endpoint is a method of accessing S3 using private addresses from your VPC, not from your data center.

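As an illustrative sketch, once a file gateway appliance has been deployed and activated, an SMB share backed by an S3 bucket can be created via the API; all ARNs below are placeholders:

    import boto3
    import uuid

    sgw = boto3.client("storagegateway")

    # Create an SMB file share that stores files as objects in S3
    sgw.create_smb_file_share(
        ClientToken=str(uuid.uuid4()),
        GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-EXAMPLE",
        Role="arn:aws:iam::111122223333:role/StorageGatewayS3Access",
        LocationARN="arn:aws:s3:::example-file-share-bucket",
        DefaultStorageClass="S3_STANDARD",
    )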

6
Q

A company uses a Microsoft Windows file share for storing documents and media files. Users access the share using Microsoft Windows clients and are authenticated using the company’s Active Directory. The chief information officer wants to move the data to AWS as they are approaching capacity limits. The existing user authentication and access management system should be used. How can a Solutions Architect meet these requirements?

A. Move the documents and media files to an Amazon FSx for Windows File Server file system.
B. Move the documents and media files to an Amazon Elastic File System and use POSIX permissions.
C. Move the documents and media files to an Amazon FSx for Lustre file system.
D. Move the documents and media files to an Amazon Simple Storage Service bucket and apply bucket ACLs.

A

A. Move the documents and media files to an Amazon FSx for Windows File Server file system.

Explanation:
Amazon FSx for Windows File Server makes it easy for you to launch and scale reliable, performant, and secure shared file storage for your applications and end users. With Amazon FSx, you can launch highly durable and available file systems that can span multiple Availability Zones (AZs) and can be accessed from up to thousands of compute instances using the industry-standard Server Message Block (SMB) protocol. It provides a rich set of administrative and security features, and it integrates with Microsoft Active Directory (AD). To serve a wide spectrum of workloads, Amazon FSx provides high levels of file system throughput and IOPS and consistent sub-millisecond latencies. You can also mount FSx file systems from on-premises using a VPN or Direct Connect connection.
CORRECT: “Move the documents and media files to an Amazon FSx for Windows File Server file system” is the correct answer.
INCORRECT: “Move the documents and media files to an Amazon FSx for Lustre file system” is incorrect. FSx for Lustre is not suitable for migrating a Microsoft Windows file server implementation.
INCORRECT: “Move the documents and media files to an Amazon Elastic File System and use POSIX permissions” is incorrect. EFS can be used from on-premises over a VPN or DX connection, but POSIX permissions are very different from Microsoft Windows permissions and would mean a different authentication and access management solution is required.
INCORRECT: “Move the documents and media files to an Amazon Simple Storage Service bucket and apply bucket ACLs” is incorrect. S3 with bucket ACLs would mean changing to an object-based storage system and a completely different authentication and access management solution.

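A sketch of creating an AD-joined file system with boto3, assuming an AWS Managed Microsoft AD directory (a self-managed AD would instead use SelfManagedActiveDirectoryConfiguration); the IDs and sizing values are placeholders:

    import boto3

    fsx = boto3.client("fsx")

    # Create a file system joined to the company's directory
    fsx.create_file_system(
        FileSystemType="WINDOWS",
        StorageCapacity=1024,  # GiB
        SubnetIds=["subnet-0123456789abcdef0"],
        WindowsConfiguration={
            "ActiveDirectoryId": "d-1234567890",  # existing directory
            "ThroughputCapacity": 32,             # MB/s
        },
    )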

7
Q

A company requires a solution for replicating data to AWS for disaster recovery. Currently, the company uses scripts to copy data from various sources to a Microsoft Windows file server in the on-premises data center. The company also requires that a small amount of recent files are accessible to administrators with low latency. What should a Solutions Architect recommend to meet these requirements?

A. Update the script to copy data to an AWS Storage Gateway for File Gateway virtual appliance instead of the on-premises file server.
B. Update the script to copy data to an Amazon EBS volume instead of the on-premises file server.
C. Update the script to copy data to an Amazon EFS volume instead of the on-premises file server.
D. Update the script to copy data to an Amazon S3 Glacier archive instead of the on-premises file server.

A

A. Update the script to copy data to an AWS Storage Gateway for File Gateway virtual appliance instead of the on-premises file server.

Explanation:
The best solution here is to use an AWS Storage Gateway file gateway virtual appliance in the on-premises data center. This can be accessed using the same protocols as the existing Microsoft Windows file server (SMB/CIFS), so the script simply needs to be updated to point to the gateway. The file gateway then stores data on Amazon S3 and keeps a local cache of recent data that can be accessed at low latency. The file gateway provides an excellent method of enabling file protocol access to low-cost S3 object storage.
CORRECT: “Update the script to copy data to an AWS Storage Gateway for File Gateway virtual appliance instead of the on-premises file server” is the correct answer.
INCORRECT: “Update the script to copy data to an Amazon EBS volume instead of the on-premises file server” is incorrect. This would also need an attached EC2 instance running Windows to be able to mount it using the same protocols, and it would not offer any local low-latency access.
INCORRECT: “Update the script to copy data to an Amazon EFS volume instead of the on-premises file server” is incorrect. This solution would not provide a local cache.
INCORRECT: “Update the script to copy data to an Amazon S3 Glacier archive instead of the on-premises file server” is incorrect. This would not provide any immediate, low-latency access.

8
Q

A company runs an application in an Amazon VPC that requires access to an Amazon Elastic Container Service (Amazon ECS) cluster that hosts an application in another VPC. The company’s security team requires that all traffic must not traverse the internet. Which solution meets this requirement?

A. Create a Network Load Balancer and AWS PrivateLink endpoint for Amazon ECS in the VPC that hosts the ECS cluster.
B. Configure a gateway endpoint for Amazon ECS. Update the route table to include an entry pointing to the ECS cluster.
C. Configure an Amazon Route 53 private hosted zone for each VPC. Use private records to resolve internal IP addresses in each VPC.
D. Create a Network Load Balancer in one VPC and an AWS PrivateLink endpoint for Amazon ECS in another VPC.

A

D. Create a Network Load Balancer in one VPC and an AWS PrivateLink endpoint for Amazon ECS in another VPC.

Explanation:
The correct solution is to use AWS PrivateLink in a service provider model. In this configuration a Network Load Balancer is implemented in the service provider VPC (the one with the ECS cluster in this example), and a PrivateLink endpoint is created in the consumer VPC (the one with the company’s application).
CORRECT: “Create a Network Load Balancer in one VPC and an AWS PrivateLink endpoint for Amazon ECS in another VPC” is the correct answer.
INCORRECT: “Create a Network Load Balancer and AWS PrivateLink endpoint for Amazon ECS in the VPC that hosts the ECS cluster” is incorrect. The endpoint should be in the consumer VPC, not the service provider VPC.
INCORRECT: “Configure a gateway endpoint for Amazon ECS. Update the route table to include an entry pointing to the ECS cluster” is incorrect. You cannot use a gateway endpoint to connect to a private service; gateway endpoints are only available for S3 and DynamoDB.
INCORRECT: “Configure an Amazon Route 53 private hosted zone for each VPC. Use private records to resolve internal IP addresses in each VPC” is incorrect. Private hosted zones only provide name resolution; without a method of private communication between the VPCs, such as VPC peering, the traffic cannot flow.

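A sketch of the two PrivateLink halves with boto3 (all resource IDs are placeholders): the provider VPC exposes the NLB as an endpoint service, and the consumer VPC creates an interface endpoint to it.

    import boto3

    ec2 = boto3.client("ec2")

    # Provider side: publish the NLB fronting the ECS service
    svc = ec2.create_vpc_endpoint_service_configuration(
        NetworkLoadBalancerArns=[
            "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/ecs-nlb/abc123"
        ],
        AcceptanceRequired=False,
    )
    service_name = svc["ServiceConfiguration"]["ServiceName"]

    # Consumer side: interface endpoint in the application VPC
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0abc1234",
        ServiceName=service_name,
        SubnetIds=["subnet-0123456789abcdef0"],
    )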

9
Q

An application stores transactional data in an Amazon S3 bucket. The data is analyzed for the first week and then must remain immediately available for occasional analysis. What is the MOST cost-effective storage solution that meets the requirements?

A. Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 7 days.
B. Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 7 days.
C. Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.
D. Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days.

A

C. Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.

Explanation:
The transition should be to Standard-IA rather than One Zone-IA. Though One Zone-IA would be cheaper, it also offers lower availability, and the question states the objects “must remain immediately available”, so availability is a consideration. Also, although there is no minimum storage duration in S3 Standard, lifecycle rules cannot transition objects to Standard-IA until they are 30 days old. Therefore, the best solution is to transition after 30 days.
CORRECT: “Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days” is the correct answer.
INCORRECT: “Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days” is incorrect, as explained above.
INCORRECT: “Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 7 days” is incorrect, as explained above.
INCORRECT: “Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 7 days” is incorrect, as explained above.

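A sketch of the corresponding lifecycle rule with boto3 (the bucket name is a placeholder):

    import boto3

    s3 = boto3.client("s3")

    # Transition all objects to Standard-IA 30 days after creation
    s3.put_bucket_lifecycle_configuration(
        Bucket="transaction-data-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "to-standard-ia-after-30-days",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},  # apply to every object
                    "Transitions": [
                        {"Days": 30, "StorageClass": "STANDARD_IA"}
                    ],
                }
            ]
        },
    )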

10
Q

A highly sensitive application runs on Amazon EC2 instances using EBS volumes. The application stores data temporarily on Amazon EBS volumes during processing before saving results to an Amazon RDS database. The company’s security team mandate that the sensitive data must be encrypted at rest. Which solution should a Solutions Architect recommend to meet this requirement?

A. Configure encryption for the Amazon EBS volumes and Amazon RDS database with AWS KMS keys.
B. Use AWS Certificate Manager to generate certificates that can be used to encrypt the connections between the EC2 instances and RDS.
C. Use Amazon Data Lifecycle Manager to encrypt all data as it is stored to the EBS volumes and RDS database.
D. Configure SSL/TLS encryption using AWS KMS customer master keys (CMKs) to encrypt database volumes.

A

A. Configure encryption for the Amazon EBS volumes and Amazon RDS database with AWS KMS keys.

Explanation:
As the data is stored both on the EBS volumes (temporarily) and in the RDS database, both the EBS and RDS volumes must be encrypted at rest. This can be achieved by enabling encryption at volume creation time, and AWS KMS keys can be used to encrypt the data. This solution meets all requirements.
CORRECT: “Configure encryption for the Amazon EBS volumes and Amazon RDS database with AWS KMS keys” is the correct answer.
INCORRECT: “Use AWS Certificate Manager to generate certificates that can be used to encrypt the connections between the EC2 instances and RDS” is incorrect. This would encrypt the data in transit but not at rest.
INCORRECT: “Use Amazon Data Lifecycle Manager to encrypt all data as it is stored to the EBS volumes and RDS database” is incorrect. DLM is used for automating the process of taking and managing snapshots for EBS volumes.
INCORRECT: “Configure SSL/TLS encryption using AWS KMS customer master keys (CMKs) to encrypt database volumes” is incorrect. You cannot configure SSL/TLS encryption using KMS CMKs or use SSL/TLS to encrypt data at rest.

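A sketch of enabling at-rest encryption at creation time for both resources (the IDs, sizes, and KMS key alias are placeholders):

    import boto3

    kms_key = "alias/app-data-key"  # placeholder KMS key alias

    # Encrypted EBS volume for temporary processing data
    boto3.client("ec2").create_volume(
        AvailabilityZone="us-east-1a",
        Size=100,  # GiB
        VolumeType="gp3",
        Encrypted=True,
        KmsKeyId=kms_key,
    )

    # Encrypted RDS instance for the results
    boto3.client("rds").create_db_instance(
        DBInstanceIdentifier="results-db",
        Engine="mysql",
        DBInstanceClass="db.m6g.large",
        AllocatedStorage=100,
        MasterUsername="admin",
        ManageMasterUserPassword=True,  # let RDS manage the secret
        StorageEncrypted=True,
        KmsKeyId=kms_key,
    )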

11
Q

A company runs an eCommerce application that uses an Amazon Aurora database. The database performs well except for short periods when monthly sales reports are run. A Solutions Architect has reviewed metrics in Amazon CloudWatch and found that the ReadIOPS and CPUUtilization metrics are spiking during the periods when the sales reports are run. What is the MOST cost-effective solution to solve this performance issue?

A. Create an Amazon Redshift data warehouse and run the reporting there.
B. Modify the Aurora database to use an instance class with more CPU.
C. Create an Aurora Replica and use the replica endpoint for reporting.
D. Enable storage Auto Scaling for the Amazon Aurora database.

A

C. Create an Aurora Replica and use the replica endpoint for reporting.

Explanation:
The simplest and most cost-effective option is to use an Aurora Replica. The replica can serve read operations, which means the reporting application can run its reports against the replica endpoint without causing any performance impact on the production database.
CORRECT: “Create an Aurora Replica and use the replica endpoint for reporting” is the correct answer.
INCORRECT: “Enable storage Auto Scaling for the Amazon Aurora database” is incorrect. Aurora storage scales automatically; there is no storage Auto Scaling feature for Aurora.
INCORRECT: “Create an Amazon Redshift data warehouse and run the reporting there” is incorrect. This would be less cost-effective and require more work in copying the data to the data warehouse.
INCORRECT: “Modify the Aurora database to use an instance class with more CPU” is incorrect. This may not resolve the read performance issues and could be more expensive depending on instance sizes.

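A sketch of adding a reader to an existing Aurora cluster and locating the reader endpoint (identifiers are placeholders):

    import boto3

    rds = boto3.client("rds")

    # Adding a reader instance to the cluster creates an Aurora Replica
    rds.create_db_instance(
        DBInstanceIdentifier="ecommerce-reporting-replica",
        DBClusterIdentifier="ecommerce-cluster",
        Engine="aurora-mysql",
        DBInstanceClass="db.r6g.large",
    )

    # Point the reporting tool at the cluster's reader endpoint
    cluster = rds.describe_db_clusters(
        DBClusterIdentifier="ecommerce-cluster"
    )["DBClusters"][0]
    print(cluster["ReaderEndpoint"])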

12
Q

A company runs an application on Amazon EC2 instances which requires access to sensitive data in an Amazon S3 bucket. All traffic between the EC2 instances and the S3 bucket must not traverse the internet and must use private IP addresses. Additionally, the bucket must only allow access from services in the VPC. Which combination of actions should a Solutions Architect take to meet these requirements? (Select TWO.)

A. Create a VPC endpoint for Amazon S3.
B. Apply a bucket policy to restrict access to the S3 endpoint.
C. Enable default encryption on the bucket.
D. Create a peering connection to the S3 bucket VPC.
E. Apply an IAM policy to a VPC peering connection.

A

A. Create a VPC endpoint for Amazon S3.
B. Apply a bucket policy to restrict access to the S3 endpoint.

Explanation:
Private access to public services such as Amazon S3 can be achieved by creating a VPC endpoint in the VPC. For S3 this would be a gateway endpoint. The bucket policy can then be configured to restrict access to the S3 endpoint only, which will ensure that only services originating from the VPC are granted access.
CORRECT: “Create a VPC endpoint for Amazon S3” is a correct answer.
CORRECT: “Apply a bucket policy to restrict access to the S3 endpoint” is also a correct answer.
INCORRECT: “Enable default encryption on the bucket” is incorrect. This will encrypt data at rest but does not restrict access.
INCORRECT: “Create a peering connection to the S3 bucket VPC” is incorrect. You cannot create a peering connection to S3 as it is a public service and does not run in a VPC.
INCORRECT: “Apply an IAM policy to a VPC peering connection” is incorrect. You cannot apply an IAM policy to a peering connection.

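A sketch of both steps with boto3 (the VPC, route table, and bucket names are placeholders):

    import boto3, json

    ec2 = boto3.client("ec2")
    s3 = boto3.client("s3")

    # Step 1: gateway endpoint for S3 in the application VPC
    endpoint = ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0abc1234",
        ServiceName="com.amazonaws.us-east-1.s3",
        RouteTableIds=["rtb-0123456789abcdef0"],
    )
    vpce_id = endpoint["VpcEndpoint"]["VpcEndpointId"]

    # Step 2: bucket policy that denies any access not via the endpoint
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": ["arn:aws:s3:::sensitive-bucket",
                         "arn:aws:s3:::sensitive-bucket/*"],
            "Condition": {"StringNotEquals": {"aws:sourceVpce": vpce_id}},
        }],
    }
    s3.put_bucket_policy(Bucket="sensitive-bucket", Policy=json.dumps(policy))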

13
Q

A company wants to migrate a legacy web application from an on-premises data center to AWS. The web application consists of a web tier, an application tier, and a MySQL database. The company does not want to manage instances or clusters. Which combination of services should a solutions architect include in the overall architecture? (Select TWO.)

A. Amazon DynamoDB
B. Amazon RDS for MySQL
C. Amazon EC2 Spot Instances
D. Amazon Kinesis Data Streams
E. AWS Fargate

A

B. Amazon RDS for MySQL
E. AWS Fargate

Explanation:
Amazon RDS is a managed service, so you do not need to manage the instances. It is an ideal backend for the application and you can run a MySQL database on RDS without any refactoring. The application components can run in Docker containers with AWS Fargate, a serverless service for running containers on AWS.
CORRECT: “AWS Fargate” is a correct answer.
CORRECT: “Amazon RDS for MySQL” is also a correct answer.
INCORRECT: “Amazon DynamoDB” is incorrect. This is a NoSQL database and would be incompatible with the relational MySQL database.
INCORRECT: “Amazon EC2 Spot Instances” is incorrect. This would require managing instances.
INCORRECT: “Amazon Kinesis Data Streams” is incorrect. This is a service for streaming data.

14
Q

A web application is being deployed on an Amazon ECS cluster using the Fargate launch type. The application is expected to receive a large volume of traffic initially. The company wishes to ensure that performance is good for the launch and that costs reduce as demand decreases. What should a solutions architect recommend?

A. Use Amazon EC2 Auto Scaling to scale out on a schedule and back in once the load decreases.
B. Use an AWS Lambda function to scale Amazon ECS based on metric breaches that trigger an Amazon CloudWatch alarm.
C. Use Amazon ECS Service Auto Scaling with target tracking policies to scale when an Amazon CloudWatch alarm is breached.
D. Use Amazon EC2 Auto Scaling with simple scaling policies to scale when an Amazon CloudWatch alarm is breached.

A

C. Use Amazon ECS Service Auto Scaling with target tracking policies to scale when an Amazon CloudWatch alarm is breached.

Explanation:
Amazon ECS uses the AWS Application Auto Scaling service to scale tasks. This is configured through Amazon ECS using Amazon ECS Service Auto Scaling. A target tracking scaling policy increases or decreases the number of tasks that your service runs based on a target value for a specific metric; for example, tasks can be scaled out when the average CPU utilization reported by CloudWatch breaches 80%.
CORRECT: “Use Amazon ECS Service Auto Scaling with target tracking policies to scale when an Amazon CloudWatch alarm is breached” is the correct answer.
INCORRECT: “Use Amazon EC2 Auto Scaling with simple scaling policies to scale when an Amazon CloudWatch alarm is breached” is incorrect. EC2 Auto Scaling scales EC2 instances, not ECS tasks, and there are no instances to manage with the Fargate launch type.
INCORRECT: “Use Amazon EC2 Auto Scaling to scale out on a schedule and back in once the load decreases” is incorrect, for the same reason.
INCORRECT: “Use an AWS Lambda function to scale Amazon ECS based on metric breaches that trigger an Amazon CloudWatch alarm” is incorrect. ECS Service Auto Scaling already provides this capability without custom code.

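A sketch of configuring target tracking for an ECS service via Application Auto Scaling (the cluster and service names are placeholders):

    import boto3

    aas = boto3.client("application-autoscaling")
    resource_id = "service/prod-cluster/web-service"  # placeholder names

    # Register the ECS service's desired task count as a scalable target
    aas.register_scalable_target(
        ServiceNamespace="ecs",
        ResourceId=resource_id,
        ScalableDimension="ecs:service:DesiredCount",
        MinCapacity=2,
        MaxCapacity=50,
    )

    # Scale tasks to hold average CPU utilization around 80%
    aas.put_scaling_policy(
        PolicyName="cpu-target-tracking",
        ServiceNamespace="ecs",
        ResourceId=resource_id,
        ScalableDimension="ecs:service:DesiredCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 80.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
            },
        },
    )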

15
Q

A company runs several NFS file servers in an on-premises data center. The NFS servers must run periodic backups to Amazon S3 using automatic synchronization for small volumes of data. Which solution meets these requirements and is MOST cost-effective?

A. Set up AWS Glue to extract the data from the NFS shares and load it into Amazon S3.
B. Set up an AWS DataSync agent on the on-premises servers and sync the data to Amazon S3.
C. Set up an SFTP sync using AWS Transfer for SFTP to sync data from on premises to Amazon S3.
D. Set up an AWS Direct Connect connection between the on-premises data center and AWS and copy the data to Amazon S3.

A

B. Set up an AWS DataSync agent on the on-premises servers and sync the data to Amazon S3.

Explanation:
AWS DataSync is an online data transfer service that simplifies, automates, and accelerates copying large amounts of data between on-premises systems and AWS storage services, as well as between AWS storage services. DataSync can copy data between Network File System (NFS) shares, Server Message Block (SMB) shares, self-managed object storage, AWS Snowcone, Amazon Simple Storage Service (Amazon S3) buckets, Amazon Elastic File System (Amazon EFS) file systems, and Amazon FSx for Windows File Server file systems. This is the most cost-effective solution from the answer options available.
CORRECT: “Set up an AWS DataSync agent on the on-premises servers and sync the data to Amazon S3” is the correct answer.
INCORRECT: “Set up an SFTP sync using AWS Transfer for SFTP to sync data from on premises to Amazon S3” is incorrect. This solution does not provide the scheduled synchronization features of AWS DataSync and is more expensive.
INCORRECT: “Set up AWS Glue to extract the data from the NFS shares and load it into Amazon S3” is incorrect. AWS Glue is an ETL service and cannot be used for copying data to Amazon S3 from NFS shares.
INCORRECT: “Set up an AWS Direct Connect connection between the on-premises data center and AWS and copy the data to Amazon S3” is incorrect. An AWS Direct Connect connection is an expensive option and no solution is provided for automatic synchronization.

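A sketch of wiring up a scheduled DataSync task once an agent is deployed (all ARNs, hostnames, and the schedule are placeholders):

    import boto3

    ds = boto3.client("datasync")

    # Source: the on-premises NFS export, reached through the DataSync agent
    src = ds.create_location_nfs(
        ServerHostname="nfs1.example.internal",
        Subdirectory="/exports/backups",
        OnPremConfig={"AgentArns": [
            "arn:aws:datasync:us-east-1:111122223333:agent/agent-EXAMPLE"
        ]},
    )

    # Destination: the S3 bucket, written via an IAM role DataSync assumes
    dst = ds.create_location_s3(
        S3BucketArn="arn:aws:s3:::nfs-backup-bucket",
        S3Config={"BucketAccessRoleArn":
                  "arn:aws:iam::111122223333:role/DataSyncS3Access"},
    )

    # Nightly sync at 02:00 UTC
    ds.create_task(
        SourceLocationArn=src["LocationArn"],
        DestinationLocationArn=dst["LocationArn"],
        Schedule={"ScheduleExpression": "cron(0 2 * * ? *)"},
    )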

16
Q

An organization plans to deploy a high performance computing (HPC) workload on AWS using Linux. The HPC workload will use many Amazon EC2 instances and will generate a large quantity of small output files that must be stored in persistent storage for future use. A Solutions Architect must design a solution that will enable the EC2 instances to access data using native file system interfaces and to store output files in cost-effective long-term storage. Which combination of AWS services meets these requirements?

A. Amazon FSx for Lustre with Amazon S3.
B. Amazon FSx for Windows File Server with Amazon S3.
C. Amazon EBS volumes with Amazon S3 Glacier.
D. AWS DataSync with Amazon S3 Intelligent tiering.

A

A. Amazon FSx for Lustre with Amazon S3.

Explanation:
Amazon FSx for Lustre is ideal for high performance computing (HPC) workloads running on Linux instances. FSx for Lustre provides a native file system interface and works as any file system does with your Linux operating system. When linked to an Amazon S3 bucket, FSx for Lustre transparently presents objects as files, allowing you to run your workload without managing data transfer from S3. This solution meets all requirements as it enables Linux workloads to use native file system interfaces and to use S3 for long-term, cost-effective storage of output files.
CORRECT: “Amazon FSx for Lustre with Amazon S3” is the correct answer.
INCORRECT: “Amazon FSx for Windows File Server with Amazon S3” is incorrect. This service should be used with Windows instances and does not integrate with S3.
INCORRECT: “Amazon EBS volumes with Amazon S3 Glacier” is incorrect. EBS volumes do not provide a shared, high-performance storage solution using file system interfaces.
INCORRECT: “AWS DataSync with Amazon S3 Intelligent tiering” is incorrect. AWS DataSync is used for migrating/synchronizing data.

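A sketch of creating an S3-linked FSx for Lustre file system (the bucket, subnet, and capacity values are placeholders):

    import boto3

    fsx = boto3.client("fsx")

    # Scratch file system that imports from and exports results back to S3
    fsx.create_file_system(
        FileSystemType="LUSTRE",
        StorageCapacity=1200,  # GiB
        SubnetIds=["subnet-0123456789abcdef0"],
        LustreConfiguration={
            "DeploymentType": "SCRATCH_2",
            "ImportPath": "s3://hpc-results-bucket",
            "ExportPath": "s3://hpc-results-bucket/output",
        },
    )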
17
Q

An application has been deployed on Amazon EC2 instances behind an Application Load Balancer (ALB). A Solutions Architect must improve the security posture of the application and minimize the impact of a DDoS attack on resources. Which of the following solutions is MOST effective?

A. Configure an AWS WAF ACL with rate-based rules. Enable the WAF ACL on the Application Load Balancer.
B. Create a custom AWS Lambda function that monitors for suspicious traffic and modifies a network ACL when a potential DDoS attack is identified.
C. Enable VPC Flow Logs and store them in Amazon S3. Use Amazon Athena to parse the logs and identify and block potential DDoS attacks.
D. Enable access logs on the Application Load Balancer and configure Amazon CloudWatch to monitor the access logs and trigger a Lambda function when potential attacks are identified. Configure the Lambda function to modify the ALB’s security group and block the attack.

A

A. Configure an AWS WAF ACL with rate-based rules. Enable the WAF ACL on the Application Load Balancer.

Explanation:
A rate-based rule tracks the rate of requests for each originating IP address, and triggers the rule action on IPs with rates that go over a limit. You set the limit as the number of requests per 5-minute time span. You can use this type of rule to put a temporary block on requests from an IP address that is sending excessive requests. By default, AWS WAF aggregates requests based on the IP address from the web request origin, but you can configure the rule to use an IP address from an HTTP header, like X-Forwarded-For, instead.
CORRECT: “Configure an AWS WAF ACL with rate-based rules. Enable the WAF ACL on the Application Load Balancer” is the correct answer.
INCORRECT: “Create a custom AWS Lambda function that monitors for suspicious traffic and modifies a network ACL when a potential DDoS attack is identified” is incorrect. There is no description here of how the Lambda function would monitor the traffic.
INCORRECT: “Enable VPC Flow Logs and store them in Amazon S3. Use Amazon Athena to parse the logs and identify and block potential DDoS attacks” is incorrect. Amazon Athena is not able to block DDoS attacks; another service would be needed.
INCORRECT: “Enable access logs on the Application Load Balancer and configure Amazon CloudWatch to monitor the access logs and trigger a Lambda function when potential attacks are identified. Configure the Lambda function to modify the ALB’s security group and block the attack” is incorrect. Access logs are exported to S3, not to CloudWatch. Also, it would not be possible to block an attack from a specific IP using a security group (while still allowing any other source access) as security groups do not support deny rules.

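A sketch of a regional web ACL with a rate-based block rule, associated with the ALB (the names, limit, and ARNs are placeholders):

    import boto3

    waf = boto3.client("wafv2")

    # Block any single IP exceeding 2,000 requests per 5 minutes
    acl = waf.create_web_acl(
        Name="alb-rate-limit-acl",
        Scope="REGIONAL",  # ALBs use regional scope
        DefaultAction={"Allow": {}},
        Rules=[{
            "Name": "rate-limit",
            "Priority": 0,
            "Statement": {
                "RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}
            },
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "RateLimit",
            },
        }],
        VisibilityConfig={
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "AlbRateLimitAcl",
        },
    )

    waf.associate_web_acl(
        WebACLArn=acl["Summary"]["ARN"],
        ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web-alb/abc123",
    )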
18
Q

An automotive company plans to implement IoT sensors in manufacturing equipment that will send data to AWS in real time. The solution must receive events in an ordered manner from each asset and ensure that the data is saved for future processing. Which solution would be MOST efficient?

A. Use Amazon Kinesis Data Streams for real-time events with a partition for each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon S3.
B. Use Amazon Kinesis Data Streams for real-time events with a shard for each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon EBS.
C. Use an Amazon SQS FIFO queue for real-time events with one queue for each equipment asset. Trigger an AWS Lambda function for the SQS queue to save data to Amazon EFS.
D. Use an Amazon SQS standard queue for real-time events with one queue for each equipment asset. Trigger an AWS Lambda function from the SQS queue to save data to Amazon S3.

A

A. Use Amazon Kinesis Data Streams for real-time events with a partition for each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon S3.

Explanation:
Amazon Kinesis Data Streams is the ideal service for receiving streaming data. The Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the same record processor, making it easier to build multiple applications reading from the same Amazon Kinesis data stream. Therefore, a separate partition key (rather than a shard) should be used for each equipment asset, which preserves ordering per asset. Amazon Kinesis Data Firehose can then receive the streaming data from the data stream and load it into Amazon S3 for future processing.
CORRECT: “Use Amazon Kinesis Data Streams for real-time events with a partition for each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon S3” is the correct answer.
INCORRECT: “Use Amazon Kinesis Data Streams for real-time events with a shard for each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon EBS” is incorrect. A partition key should be used rather than a shard, as explained above, and Firehose cannot deliver data to Amazon EBS.
INCORRECT: “Use an Amazon SQS FIFO queue for real-time events with one queue for each equipment asset. Trigger an AWS Lambda function for the SQS queue to save data to Amazon EFS” is incorrect. Amazon SQS is not suited to real-time use cases.
INCORRECT: “Use an Amazon SQS standard queue for real-time events with one queue for each equipment asset. Trigger an AWS Lambda function from the SQS queue to save data to Amazon S3” is incorrect. Amazon SQS is not suited to real-time use cases.

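A sketch of the producer side: each sensor reading is written with the asset ID as the partition key, so a given asset's events stay in order (the stream and field names are placeholders).

    import boto3, json

    kinesis = boto3.client("kinesis")

    def send_reading(asset_id: str, reading: dict) -> None:
        # Same PartitionKey -> same shard -> ordered per asset
        kinesis.put_record(
            StreamName="equipment-telemetry",
            PartitionKey=asset_id,
            Data=json.dumps(reading).encode("utf-8"),
        )

    send_reading("press-042", {"temperature_c": 71.3, "vibration_mm_s": 2.4})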
19
Q

An IoT sensor is being rolled out to thousands of a company’s existing customers. The sensors will stream high volumes of data each second to a central location. A solution must be designed to ingest and store the data for analytics. The solution must provide near-real time performance and millisecond responsiveness. Which solution should a Solutions Architect recommend?

A. Ingest the data into an Amazon SQS queue. Process the data using an AWS Lambda function and then store the data in Amazon RedShift.
B. Ingest the data into an Amazon Kinesis Data Stream. Process the data with an AWS Lambda function and then store the data in Amazon DynamoDB.
C. Ingest the data into an Amazon SQS queue. Process the data using an AWS Lambda function and then store the data in Amazon DynamoDB.
D. Ingest the data into an Amazon Kinesis Data Stream. Process the data with an AWS Lambda function and then store the data in Amazon RedShift.

A

B. Ingest the data into an Amazon Kinesis Data Stream. Process the data with an AWS Lambda function and then store the data in Amazon DynamoDB.

Explanation:
A Kinesis data stream is a set of shards. Each shard contains a sequence of data records. A consumer is an application that processes the data from a Kinesis data stream. You can map a Lambda function to a shared-throughput consumer (standard iterator), or to a dedicated-throughput consumer with enhanced fan-out. Amazon DynamoDB is the best database for this use case as it supports near-real time performance and millisecond responsiveness.

CORRECT: “Ingest the data into an Amazon Kinesis Data Stream. Process the data with an AWS Lambda function and then store the data in Amazon DynamoDB” is the correct answer.
INCORRECT: “Ingest the data into an Amazon Kinesis Data Stream. Process the data with an AWS Lambda function and then store the data in Amazon RedShift” is incorrect. Amazon RedShift cannot provide millisecond responsiveness.
INCORRECT: “Ingest the data into an Amazon SQS queue. Process the data using an AWS Lambda function and then store the data in Amazon RedShift” is incorrect. Amazon SQS does not provide near real-time performance and RedShift does not provide millisecond responsiveness.
INCORRECT: “Ingest the data into an Amazon SQS queue. Process the data using an AWS Lambda function and then store the data in Amazon DynamoDB” is incorrect. Amazon SQS does not provide near real-time performance.

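A sketch of the consumer side: a Lambda handler mapped to the Kinesis stream decodes each record and writes it to DynamoDB (the table and attribute names are placeholders).

    import base64
    import json
    from decimal import Decimal

    import boto3

    table = boto3.resource("dynamodb").Table("SensorReadings")

    def handler(event, context):
        # Kinesis event source mappings deliver record data base64-encoded
        for record in event["Records"]:
            payload = json.loads(
                base64.b64decode(record["kinesis"]["data"]),
                parse_float=Decimal,  # DynamoDB does not accept Python floats
            )
            table.put_item(Item={
                "sensor_id": record["kinesis"]["partitionKey"],
                "sequence": record["kinesis"]["sequenceNumber"],
                **payload,
            })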
20
Q

A company runs a number of core enterprise applications in an on-premises data center. The data center is connected to an Amazon VPC using AWS Direct Connect. The company will be creating additional AWS accounts and these accounts will also need to be quickly and cost-effectively connected to the on-premises data center in order to access the core applications. What deployment changes should a Solutions Architect implement to meet these requirements with the LEAST operational overhead?

A. Create a Direct Connect connection in each new account. Route the network traffic to the on-premises servers.
B. Configure VPC endpoints in the Direct Connect VPC for all required services. Route the network traffic to the on-premises servers.
C. Create a VPN connection between each new account and the Direct Connect VPC. Route the network traffic to the on-premises servers.
D. Configure AWS Transit Gateway between the accounts. Assign Direct Connect to the transit gateway and route network traffic to the on-premises servers.

A

D. Configure AWS Transit Gateway between the accounts. Assign Direct Connect to the transit gateway and route network traffic to the on-premises servers.

Explanation:
AWS Transit Gateway connects VPCs and on-premises networks through a central hub. With AWS Transit Gateway, you can quickly add Amazon VPCs, AWS accounts, VPN capacity, or AWS Direct Connect gateways to meet unexpected demand, without having to wrestle with complex connections or massive routing tables. This is the least operationally complex solution and is also cost-effective.
CORRECT: “Configure AWS Transit Gateway between the accounts. Assign Direct Connect to the transit gateway and route network traffic to the on-premises servers” is the correct answer.
INCORRECT: “Create a VPN connection between each new account and the Direct Connect VPC. Route the network traffic to the on-premises servers” is incorrect. You cannot connect VPCs to each other using AWS managed VPNs; you would need to configure software VPNs and then complex routing configurations. This is not the best solution.
INCORRECT: “Create a Direct Connect connection in each new account. Route the network traffic to the on-premises servers” is incorrect. This is an expensive solution as you would need multiple Direct Connect links.
INCORRECT: “Configure VPC endpoints in the Direct Connect VPC for all required services. Route the network traffic to the on-premises servers” is incorrect. You cannot create VPC endpoints for all services, and this would be a complex solution for those you can.

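A sketch of the central pieces (all IDs are placeholders): a transit gateway with a VPC attachment, associated with an existing Direct Connect gateway. Cross-account attachments would additionally require sharing the transit gateway via AWS RAM.

    import boto3

    ec2 = boto3.client("ec2")
    dx = boto3.client("directconnect")

    # Central hub for all VPCs and the on-premises network
    tgw = ec2.create_transit_gateway(Description="hub for core apps")
    tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

    # Attach a VPC (repeated per account once the TGW is shared via RAM)
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId="vpc-0abc1234",
        SubnetIds=["subnet-0123456789abcdef0"],
    )

    # Associate an existing Direct Connect gateway with the transit gateway
    dx.create_direct_connect_gateway_association(
        directConnectGatewayId="dxgw-example",
        gatewayId=tgw_id,
        addAllowedPrefixesToDirectConnectGateway=[{"cidr": "10.0.0.0/16"}],
    )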
21
Q

A solutions architect has been tasked with designing a highly resilient hybrid cloud architecture connecting an on-premises data center and AWS. The network should include AWS Direct Connect (DX). Which DX configuration offers the HIGHEST resiliency?

A. Configure a DX connection with an encrypted VPN on top of it.
B. Configure multiple public VIFs on top of a DX connection.
C. Configure multiple private VIFs on top of a DX connection.
D. Configure DX connections at multiple DX locations.

A

D. Configure DX connections at multiple DX locations.

Explanation:
The most resilient solution is to configure DX connections at multiple DX locations. This ensures that any issue impacting a single DX location does not affect the availability of the network connectivity to AWS. Take note of the following AWS recommendations for resiliency: AWS recommends connecting from multiple data centers for physical location redundancy. When designing remote connections, consider using redundant hardware and telecommunications providers. Additionally, it is a best practice to use dynamically routed, active/active connections for automatic load balancing and failover across redundant network connections. Provision sufficient network capacity to ensure that the failure of one network connection does not overwhelm and degrade redundant connections.
CORRECT: “Configure DX connections at multiple DX locations” is the correct answer.
INCORRECT: “Configure a DX connection with an encrypted VPN on top of it” is incorrect. A VPN that is separate from the DX connection can be a good backup, but a VPN on top of the DX connection does not help. Also, encryption provides security, not resilience.
INCORRECT: “Configure multiple public VIFs on top of a DX connection” is incorrect. Virtual interfaces do not add resiliency, as resiliency must be designed into the underlying connection.
INCORRECT: “Configure multiple private VIFs on top of a DX connection” is incorrect, for the same reason.

22
Q

A website is running on Amazon EC2 instances and access is restricted to a limited set of IP ranges. A solutions architect is planning to migrate static content from the website to an Amazon S3 bucket configured as an origin for an Amazon CloudFront distribution. Access to the static content must be restricted to the same set of IP addresses. Which combination of steps will meet these requirements? (Select TWO.)

A. Create an origin access identity (OAI) and associate it with the distribution. Change the permissions in the bucket policy so that only the OAI can read the objects.
B. Create an origin access identity (OAI) and associate it with the distribution. Generate presigned URLs that limit access to the OAI.
C. Create an AWS WAF web ACL that includes the same IP restrictions that exist in the EC2 security group. Associate this new web ACL with the Amazon S3 bucket.
D. Create an AWS WAF web ACL that includes the same IP restrictions that exist in the EC2 security group. Associate this new web ACL with the CloudFront distribution.
E. Attach the existing security group that contains the IP restrictions to the Amazon CloudFront distribution.

A

A. Create an origin access identity (OAI) and associate it with the distribution. Change the permissions in the bucket policy so that only the OAI can read the objects.

D. Create an AWS WAF web ACL that includes the same IP restrictions that exist in the EC2 security group. Associate this new web ACL with the CloudFront distribution.

Explanation:
To prevent users from circumventing the controls implemented on CloudFront (using WAF or presigned URLs / signed cookies) you can use an origin access identity (OAI). An OAI is a special CloudFront user that you associate with a distribution. The next step is to change the permissions either on your Amazon S3 bucket or on the files in your bucket so that only the origin access identity has read permission (or read and download permission). This can be implemented through a bucket policy. To control access at the CloudFront layer the AWS Web Application Firewall (WAF) can be used. With WAF you must create a web ACL that includes the IP restrictions required and then associate the web ACL with the CloudFront distribution.
CORRECT: “Create an origin access identity (OAI) and associate it with the distribution. Change the permissions in the bucket policy so that only the OAI can read the objects” is a correct answer.
CORRECT: “Create an AWS WAF web ACL that includes the same IP restrictions that exist in the EC2 security group. Associate this new web ACL with the CloudFront distribution” is also a correct answer.
INCORRECT: “Create an origin access identity (OAI) and associate it with the distribution. Generate presigned URLs that limit access to the OAI” is incorrect. Presigned URLs can be used to protect access to CloudFront but they cannot be used to limit access to an OAI.
INCORRECT: “Create an AWS WAF web ACL that includes the same IP restrictions that exist in the EC2 security group. Associate this new web ACL with the Amazon S3 bucket” is incorrect. The web ACL should be associated with CloudFront, not S3.
INCORRECT: “Attach the existing security group that contains the IP restrictions to the Amazon CloudFront distribution” is incorrect. Security groups cannot be attached to a CloudFront distribution.

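A sketch of the bucket-policy half, granting read access only to a hypothetical OAI (the OAI ID and bucket name are placeholders):

    import boto3, json

    # Policy grants s3:GetObject only to the distribution's OAI
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::static-content-bucket/*",
        }],
    }

    boto3.client("s3").put_bucket_policy(
        Bucket="static-content-bucket", Policy=json.dumps(policy)
    )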
23
Q

A company is storing a large quantity of small files in an Amazon S3 bucket. An application running on an Amazon EC2 instance needs permissions to access and process the files in the S3 bucket. Which action will MOST securely grant the EC2 instance access to the S3 bucket?

A. Create a bucket ACL on the S3 bucket and configure the EC2 instance ID as a grantee.
B. Create an IAM role with least privilege permissions and attach it to the EC2 instance profile.
C. Create an IAM user for the application with specific permissions to the S3 bucket.
D. Generate access keys and store the credentials on the EC2 instance for use in making API calls.

A

B. Create an IAM role with least privilege permissions and attach it to the EC2 instance profile.

Explanation:
IAM roles should be used in place of storing credentials on Amazon EC2 instances. This is the most secure way to provide permissions to EC2 as no credentials are stored and short-lived credentials are obtained using AWS STS. Additionally, the policy attached to the role should provide least privilege permissions.
CORRECT: “Create an IAM role with least privilege permissions and attach it to the EC2 instance profile” is the correct answer.
INCORRECT: “Generate access keys and store the credentials on the EC2 instance for use in making API calls” is incorrect. This is not best practice; IAM roles are preferred.
INCORRECT: “Create an IAM user for the application with specific permissions to the S3 bucket” is incorrect. Instances should use IAM roles for delegation, not user accounts.
INCORRECT: “Create a bucket ACL on the S3 bucket and configure the EC2 instance ID as a grantee” is incorrect. You cannot configure an EC2 instance ID on a bucket ACL, and bucket ACLs cannot be used to restrict access in this scenario.

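A sketch of creating the role, a least-privilege policy, and the instance profile (the role, policy, and bucket names are placeholders):

    import boto3, json

    iam = boto3.client("iam")

    # Trust policy allowing EC2 to assume the role
    trust = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }
    iam.create_role(RoleName="app-s3-role",
                    AssumeRolePolicyDocument=json.dumps(trust))

    # Least privilege: only the operations and bucket the app needs
    iam.put_role_policy(
        RoleName="app-s3-role",
        PolicyName="s3-read-process",
        PolicyDocument=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": ["arn:aws:s3:::small-files-bucket",
                             "arn:aws:s3:::small-files-bucket/*"],
            }],
        }),
    )

    iam.create_instance_profile(InstanceProfileName="app-s3-profile")
    iam.add_role_to_instance_profile(InstanceProfileName="app-s3-profile",
                                     RoleName="app-s3-role")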
24
Q

A company requires a solution to allow customers to customize images that are stored in an online catalog. The image customization parameters will be sent in requests to Amazon API Gateway. The customized image will then be generated on-demand and can be accessed online. The solutions architect requires a highly available solution. Which solution will be MOST cost-effective?

A. Use Amazon EC2 instances to manipulate the original images into the requested customization. Store the original and manipulated images in Amazon S3. Configure an Elastic Load Balancer in front of the EC2 instances
B. Use AWS Lambda to manipulate the original images to the requested customization. Store the original and manipulated images in Amazon S3. Configure an Amazon CloudFront distribution with the S3 bucket as the origin
C. Use AWS Lambda to manipulate the original images to the requested customization. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Elastic Load Balancer in front of the Amazon EC2 instances
D. Use Amazon EC2 instances to manipulate the original images into the requested customization. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Amazon CloudFront distribution with the S3 bucket as the origin

A

B. Use AWS Lambda to manipulate the original images to the requested customization. Store the original and manipulated images in Amazon S3. Configure an Amazon CloudFront distribution with the S3 bucket as the origin

Explanation:
All solutions presented are highly available. The key requirement that must be satisfied is that the solution should be cost-effective and you must choose the most cost-effective option. Therefore, it’s best to eliminate services such as Amazon EC2 and ELB as these require ongoing costs even when they’re not used. Instead, a fully serverless solution should be used. AWS Lambda, Amazon S3 and CloudFront are the best services to use for these requirements. CORRECT: “Use AWS Lambda to manipulate the original images to the requested customization. Store the original and manipulated images in Amazon S3. Configure an Amazon CloudFront distribution with the S3 bucket as the origin” is the correct answer. INCORRECT: “Use Amazon EC2 instances to manipulate the original images into the requested customization. Store the original and manipulated images in Amazon S3. Configure an Elastic Load Balancer in front of the EC2 instances” is incorrect. This is not the most cost-effective option as the ELB and EC2 instances will incur costs even when not used. INCORRECT: “Use AWS Lambda to manipulate the original images to the requested customization. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Elastic Load Balancer in front of the Amazon EC2 instances” is incorrect. This is not the most cost-effective option as the ELB will incur costs even when not used. Also, Amazon DynamoDB will incur RCU/WCU charges when running and is not the best choice for storing images. INCORRECT: “Use Amazon EC2 instances to manipulate the original images into the requested customization. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Amazon CloudFront distribution with the S3 bucket as the origin” is incorrect. This is not the most cost-effective option as the EC2 instances will incur costs even when not used.
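As a sketch of the serverless flow (bucket names, the event shape, and the image-processing step are hypothetical), a Lambda function invoked by API Gateway might look like this:

import boto3

s3 = boto3.client('s3')

def handler(event, context):
    # Customization parameters arrive via API Gateway (hypothetical shape)
    params = event['queryStringParameters']
    key = params['image']

    # Fetch the original image from S3
    original = s3.get_object(Bucket='example-images-bucket', Key=key)['Body'].read()

    # Placeholder: real image manipulation (resize, crop, etc.) would go here
    customized = original

    # Store the result; CloudFront serves it with the S3 bucket as the origin
    s3.put_object(Bucket='example-images-bucket', Key='customized/' + key, Body=customized)
    return {'statusCode': 200, 'body': '/customized/' + key}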

25
Q

A solutions architect is finalizing the architecture for a distributed database that will run across multiple Amazon EC2 instances. Data will be replicated across all instances so the loss of an instance will not cause loss of data. The database requires block storage with low latency and throughput that supports up to several million transactions per second per server. Which storage solution should the solutions architect use?

A. Amazon EBS
B. Amazon EC2 instance store
C. Amazon EFS
D. Amazon S3

A

B. Amazon EC2 instance store

Explanation:
An instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers.
Some instance types use NVMe or SATA-based solid state drives (SSD) to deliver high random I/O performance. This is a good option when you need storage with very low latency, but you don’t need the data to persist when the instance terminates or you can take advantage of fault-tolerant architectures. In this scenario the data is replicated and fault tolerant so the best option to provide the level of performance required is to use instance store volumes. CORRECT: “Amazon EC2 instance store” is the correct answer. INCORRECT: “Amazon EBS” is incorrect. The Elastic Block Store (EBS) is a block storage device but as the data is distributed and fault tolerant a better option for performance would be to use instance stores. INCORRECT: “Amazon EFS” is incorrect as EFS is not a block device; it is a filesystem that is accessed using the NFS protocol. INCORRECT: “Amazon S3” is incorrect as S3 is an object-based storage system, not a block-based storage system.

26
Q

A website runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The website’s DNS records are hosted in Amazon Route 53 with the domain name pointing to the ALB. A solution is required for displaying a static error page if the website becomes unavailable. Which configuration should a solutions architect use to meet these requirements with the LEAST operational overhead?

A. Create a Route 53 alias record for an Amazon CloudFront distribution and specify the ALB as the origin. Create custom error pages for the distribution
B. Create a Route 53 active-passive failover configuration. Create a static website using an Amazon S3 bucket that hosts a static error page. Configure the static website as the passive record for failover
C. Create a Route 53 weighted routing policy. Create a static website using an Amazon S3 bucket that hosts a static error page. Configure the record for the S3 static website with a weighting of zero. When an issue occurs increase the weighting
D. Set up a Route 53 active-active configuration with the ALB and an Amazon EC2 instance hosting a static error page as endpoints. Route 53 will only send requests to the instance if the health checks fail for the ALB

A

A. Create a Route 53 alias record for an Amazon CloudFront distribution and specify the ALB as the origin. Create custom error pages for the distribution

Explanation:
Using Amazon CloudFront as the front-end provides the option to specify a custom message instead of the default message. To specify the specific file that you want to return and the errors for which the file should be returned, you update your CloudFront distribution to specify those values. The CloudFront distribution can use the ALB as the origin, which will cause the website content to be cached on the CloudFront edge caches. This solution represents the most operationally efficient choice as no action is required in the event of an issue, other than troubleshooting the root cause. CORRECT: “Create a Route 53 alias record for an Amazon CloudFront distribution and specify the ALB as the origin. Create custom error pages for the distribution” is the correct answer. INCORRECT: “Create a Route 53 active-passive failover configuration. Create a static website using an Amazon S3 bucket that hosts a static error page. Configure the static website as the passive record for failover” is incorrect. This option does not represent the lowest operational overhead as manual intervention would be required to cause a fail-back to the main website. INCORRECT: “Create a Route 53 weighted routing policy. Create a static website using an Amazon S3 bucket that hosts a static error page. Configure the record for the S3 static website with a weighting of zero. When an issue occurs increase the weighting” is incorrect. This option requires manual intervention and there would be a delay from the issue arising before an administrative action could make the changes. INCORRECT: “Set up a Route 53 active-active configuration with the ALB and an Amazon EC2 instance hosting a static error page as endpoints. Route 53 will only send requests to the instance if the health checks fail for the ALB” is incorrect. With an active-active configuration traffic would be split between the website and the error page.
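For reference, custom error responses are part of the CloudFront distribution config; a hypothetical fragment (passed to update_distribution along with the rest of the config and the current ETag) might look like this:

# Fragment of a CloudFront DistributionConfig (values are hypothetical)
custom_error_responses = {
    'Quantity': 1,
    'Items': [{
        'ErrorCode': 503,                          # returned when the ALB origin is unavailable
        'ResponsePagePath': '/errors/sorry.html',  # static error page stored at the origin
        'ResponseCode': '503',
        'ErrorCachingMinTTL': 60,
    }],
}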

27
Q

A company is deploying a new web application that will run on Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. The application requires a shared storage solution that offers strong consistency as the content will be regularly updated. Which solution requires the LEAST amount of effort?

A. Create an Amazon S3 bucket to store the web content and use Amazon CloudFront to deliver the content
B. Create an Amazon Elastic File System (Amazon EFS) file system and mount it on the individual Amazon EC2 instances
C. Create a shared Amazon Elastic Block Store (Amazon EBS) volume and mount it on the individual Amazon EC2 instances
D. Create a volume gateway using AWS Storage Gateway to host the data and mount it to the Auto Scaling group

A

B. Create an Amazon Elastic File System (Amazon EFS) file system and mount it on the individual Amazon EC2 instances

Explanation:
Amazon EFS is a fully-managed service that makes it easy to set up, scale, and cost-optimize file storage in the Amazon Cloud. EFS file systems are accessible to Amazon EC2 instances via a file system interface (using standard operating system file I/O APIs) and support full file system access semantics (such as strong consistency and file locking). EFS is a good solution for when you need to attach a shared filesystem to multiple EC2 instances across multiple Availability Zones. CORRECT: “Create an Amazon Elastic File System (Amazon EFS) file system and mount it on the individual Amazon EC2 instances” is the correct answer. INCORRECT: “Create an Amazon S3 bucket to store the web content and use Amazon CloudFront to deliver the content” is incorrect as this may require more effort in terms of reprogramming the application to use the S3 API. INCORRECT: “Create a shared Amazon Elastic Block Store (Amazon EBS) volume and mount it on the individual Amazon EC2 instances” is incorrect. Please note that you can multi-attach an EBS volume to multiple EC2 instances but the instances must be in the same AZ. INCORRECT: “Create a volume gateway using AWS Storage Gateway to host the data and mount it to the Auto Scaling group” is incorrect as a storage gateway is used on-premises.
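A brief boto3 sketch of provisioning the shared file system follows; subnet and security group IDs are hypothetical.

import boto3

efs = boto3.client('efs')

fs = efs.create_file_system(CreationToken='web-content', PerformanceMode='generalPurpose')

# One mount target per Availability Zone so instances in each AZ can mount the file system
for subnet in ['subnet-az1-example', 'subnet-az2-example']:
    efs.create_mount_target(
        FileSystemId=fs['FileSystemId'],
        SubnetId=subnet,
        SecurityGroups=['sg-0example'],
    )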

28
Q

A web application has recently been launched on AWS. The architecture includes two tiers: a web layer and a database layer. It has been identified that the web server layer may be vulnerable to cross-site scripting (XSS) attacks.

What should a solutions architect do to remediate the vulnerability?

A. Create a Classic Load Balancer. Put the web layer behind the load balancer and enable AWS WAF
B. Create a Network Load Balancer. Put the web layer behind the load balancer and enable AWS WAF
C. Create an Application Load Balancer. Put the web layer behind the load balancer and enable AWS WAF
D. Create an Application Load Balancer. Put the web layer behind the load balancer and use AWS Shield Standard

A

C. Create an Application Load Balancer. Put the web layer behind the load balancer and enable AWS WAF

Explanation:
The AWS Web Application Firewall (WAF) is available on the Application Load Balancer (ALB). You can use AWS WAF directly on Application Load Balancers (both internal and external) in a VPC, to protect your websites and web services. Attackers sometimes insert scripts into web requests in an effort to exploit vulnerabilities in web applications. You can create one or more cross-site scripting match conditions to identify the parts of web requests, such as the URI or the query string, that you want AWS WAF to inspect for possible malicious scripts. CORRECT: “Create an Application Load Balancer. Put the web layer behind the load balancer and enable AWS WAF” is the correct answer. INCORRECT: “Create a Classic Load Balancer. Put the web layer behind the load balancer and enable AWS WAF” is incorrect as you cannot use AWS WAF with a classic load balancer. INCORRECT: “Create a Network Load Balancer. Put the web layer behind the load balancer and enable AWS WAF” is incorrect as you cannot use AWS WAF with a network load balancer. INCORRECT: “Create an Application Load Balancer. Put the web layer behind the load balancer and use AWS Shield Standard” is incorrect as you cannot use AWS Shield to protect against XSS attacks. Shield is used to protect against DDoS attacks.

29
Q

A multi-tier application runs with eight front-end web servers in an Amazon EC2 Auto Scaling group in a single Availability Zone behind an Application Load Balancer. A solutions architect needs to modify the infrastructure to be highly available without modifying the application. Which architecture should the solutions architect choose that provides high availability?

A. Create an Auto Scaling group that uses four instances across each of two Regions
B. Modify the Auto Scaling group to use four instances across each of two Availability Zones
C. Create an Auto Scaling template that can be used to quickly create more instances in another Region
D. Create an Auto Scaling group that uses four instances across each of two subnets

A

B. Modify the Auto Scaling group to use four instances across each of two Availability Zones

Explanation:
High availability can be enabled for this architecture quite simply by modifying the existing Auto Scaling group to use multiple Availability Zones. The ASG will automatically balance the load so you don’t actually need to specify the instances per AZ. CORRECT: “Modify the Auto Scaling group to use four instances across each of two Availability Zones” is the correct answer. INCORRECT: “Create an Auto Scaling group that uses four instances across each of two Regions” is incorrect as EC2 Auto Scaling does not support multiple regions. INCORRECT: “Create an Auto Scaling template that can be used to quickly create more instances in another Region” is incorrect as EC2 Auto Scaling does not support multiple regions. INCORRECT: “Create an Auto Scaling group that uses four instances across each of two subnets” is incorrect as the subnets could be in the same AZ.
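A minimal boto3 sketch of the change (the group name and subnet IDs are hypothetical; the two subnets are assumed to be in different Availability Zones):

import boto3

autoscaling = boto3.client('autoscaling')

# Spread the existing eight instances across two AZs by adding a second subnet
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName='web-asg',
    VPCZoneIdentifier='subnet-az1-example,subnet-az2-example',
    MinSize=8,
    DesiredCapacity=8,
)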

30
Q

A company’s web application is using multiple Amazon EC2 Linux instances and storing data on Amazon EBS volumes. The company is looking for a solution to increase the resiliency of the application in case of a failure. What should a solutions architect do to meet these requirements?

A. Launch the application on EC2 instances in each Availability Zone. Attach EBS volumes to each EC2 instance
B. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Mount an instance store on each EC2 instance
C. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data on Amazon EFS and mount a target on each instance
D. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data using Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)

A

C. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data on Amazon EFS and mount a target on each instance

Explanation:
To increase the resiliency of the application the solutions architect can use Auto Scaling groups to launch and terminate instances across multiple availability zones based on demand. An application load balancer (ALB) can be used to direct traffic to the web application running on the EC2 instances. Lastly, the Amazon Elastic File System (EFS) can assist with increasing the resilience of the application by providing a shared file system that can be mounted by multiple EC2 instances from multiple availability zones. CORRECT: “Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data on Amazon EFS and mount a target on each instance” is the correct answer. INCORRECT: “Launch the application on EC2 instances in each Availability Zone. Attach EBS volumes to each EC2 instance” is incorrect as the EBS volumes are single points of failure which are not shared with other instances. INCORRECT: “Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Mount an instance store on each EC2 instance” is incorrect as instance stores are ephemeral data stores which means data is lost when powered down. Also, instance stores cannot be shared between instances. INCORRECT: “Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data using Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)” is incorrect as there are data retrieval charges associated with this S3 tier. It is not a suitable storage tier for application files.

31
Q

A website runs on a Microsoft Windows server in an on-premises data center. The web server is being migrated to Amazon EC2 Windows instances in multiple Availability Zones on AWS. The web server currently uses data stored in an on-premises network-attached storage (NAS) device. Which replacement to the NAS file share is MOST resilient and durable?

A. Migrate the file share to Amazon EBS
B. Migrate the file share to AWS Storage Gateway
C. Migrate the file share to Amazon FSx for Windows File Server
D. Migrate the file share to Amazon Elastic File System (Amazon EFS)

A

C. Migrate the file share to Amazon FSx for Windows File Server

Explanation:
Amazon FSx for Windows File Server provides fully managed, highly reliable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol. It is built on Windows Server, delivering a wide range of administrative features such as user quotas, end-user file restore, and Microsoft Active Directory (AD) integration. It offers single-AZ and multi-AZ deployment options, fully managed backups, and encryption of data at rest and in transit. This is the only solution presented that provides resilient storage for Windows instances. CORRECT: “Migrate the file share to Amazon FSx for Windows File Server” is the correct answer. INCORRECT: “Migrate the file share to Amazon Elastic File System (Amazon EFS)” is incorrect as you cannot use Windows instances with Amazon EFS. INCORRECT: “Migrate the file share to Amazon EBS” is incorrect as this is not a shared storage solution for multi-AZ deployments. INCORRECT: “Migrate the file share to AWS Storage Gateway” is incorrect as with Storage Gateway replicated files end up on Amazon S3. The replacement storage solution should be a file share, not an object-based storage system

32
Q

A company is planning a migration for a high performance computing (HPC) application and associated data from an on-premises data center to the AWS Cloud. The company uses tiered storage on premises with hot high-performance parallel storage to support the application during periodic runs of the application, and more economical cold storage to hold the data when the application is not actively running. Which combination of solutions should a solutions architect recommend to support the storage needs of the application? (Select TWO.)

A. Amazon S3 for cold data storage
B. Amazon EFS for cold data storage
C. Amazon S3 for high-performance parallel storage
D. Amazon FSx for Lustre for high-performance parallel storage
E. Amazon FSx for Windows for high-performance parallel storage

A

A. Amazon S3 for cold data storage
D. Amazon FSx for Lustre for high-performance parallel storage

Explanation:
Amazon FSx for Lustre provides a high-performance file system optimized for fast processing of workloads such as machine learning, high performance computing (HPC), video processing, financial modeling, and electronic design automation (EDA). These workloads commonly require data to be presented via a fast and scalable file system interface, and typically have data sets stored on long-term data stores like Amazon S3.
Amazon FSx works natively with Amazon S3, making it easy to access your S3 data to run data processing workloads. Your S3 objects are presented as files in your file system, and you can write your results back to S3. This lets you run data processing workloads on FSx for Lustre and store your long-term data on S3 or on-premises data stores. Therefore, the best combination for this scenario is to use S3 for cold data and FSx for Lustre for the parallel HPC job. CORRECT: “Amazon S3 for cold data storage” is a correct answer. CORRECT: “Amazon FSx for Lustre for high-performance parallel storage” is also a correct answer. INCORRECT: “Amazon EFS for cold data storage” is incorrect as FSx works natively with S3 which is also more economical. INCORRECT: “Amazon S3 for high-performance parallel storage” is incorrect as S3 is not suitable for running high-performance computing jobs. INCORRECT: “Amazon FSx for Windows for high-performance parallel storage” is incorrect as FSx for Lustre should be used for HPC use cases and use cases that require storing data on S3.
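A sketch of provisioning an S3-linked Lustre file system with boto3 (the bucket, subnet, deployment type, and sizing are hypothetical choices):

import boto3

fsx = boto3.client('fsx')

fsx.create_file_system(
    FileSystemType='LUSTRE',
    StorageCapacity=1200,                 # GiB; hypothetical sizing
    SubnetIds=['subnet-example'],
    LustreConfiguration={
        'DeploymentType': 'SCRATCH_2',
        'ImportPath': 's3://example-hpc-cold-data',          # lazy-load cold data from S3
        'ExportPath': 's3://example-hpc-cold-data/results',  # write results back to S3
    },
)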

33
Q

A web application that allows users to upload and share documents is running on a single Amazon EC2 instance with an Amazon EBS volume. To increase availability the architecture has been updated to use an Auto Scaling group of several instances across Availability Zones behind an Application Load Balancer. After the change users can only see a subset of the documents. What is the BEST method for a solutions architect to modify the solution so users can see all documents?

A. Run a script to synchronize the data between Amazon EBS volumes
B. Use Sticky Sessions with the ALB to ensure users are directed to the same EC2 instance in a session
C. Copy the data from all EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS
D. Configure the Application Load Balancer to send the request to all servers. Return each document from the correct server

A

C. Copy the data from all EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS

Explanation:
The problem that is being described is that the users are uploading the documents to an individual EC2 instance with a local EBS volume. Therefore, as EBS volumes cannot be shared across AZs, the data is stored separately and the ALB will be distributing incoming connections to different instances / data sets. The simple resolution is to implement a shared storage layer for the documents so that they can be stored in one place and seen by any user who connects no matter which instance they connect to.

CORRECT: “Copy the data from all EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS” is the correct answer. INCORRECT: “Run a script to synchronize the data between Amazon EBS volumes” is incorrect. This is a complex and messy approach. A better solution is to use a shared storage layer. INCORRECT: “Use Sticky Sessions with the ALB to ensure users are directed to the same EC2 instance in a session” is incorrect as this will just “stick” a user to the same instance. They won’t see documents uploaded to other instances / EBS volumes. INCORRECT: “Configure the Application Load Balancer to send the request to all servers. Return each document from the correct server” is incorrect as there is no mechanism here for selecting a specific document. The requirement also requests that all documents are visible.

34
Q

A company runs an internal browser-based application. The application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. The Auto Scaling group scales up to 20 instances during work hours, but scales down to 2 instances overnight. Staff are complaining that the application is very slow when the day begins, although it runs well by midmorning. How should the scaling be changed to address the staff complaints and keep costs to a minimum?

A. Implement a scheduled action that sets the desired capacity to 20 shortly before the office opens
B. Implement a step scaling action triggered at a lower CPU threshold, and decrease the cooldown period
C. Implement a target tracking action triggered at a lower CPU threshold, and decrease the cooldown period
D. Implement a scheduled action that sets the minimum and maximum capacity to 20 shortly before the office opens

A

C. Implement a target tracking action triggered at a lower CPU threshold, and decrease the cooldown period

Explanation:
Though this sounds like a good use case for scheduled actions, both answers using scheduled actions will have 20 instances running regardless of actual demand. A better option to be more cost effective is to use a target tracking action that triggers at a lower CPU threshold. With this solution the scaling will occur before the CPU utilization gets to a point where performance is affected. This will result in resolving the performance issues whilst minimizing costs. Using a reduced cooldown period will also more quickly terminate unneeded instances, further reducing costs.

CORRECT: “Implement a target tracking action triggered at a lower CPU threshold, and decrease the cooldown period” is the correct answer. INCORRECT: “Implement a scheduled action that sets the desired capacity to 20 shortly before the office opens” is incorrect as this is not the most cost-effective option. Note you can choose min, max, or desired for a scheduled action. INCORRECT: “Implement a scheduled action that sets the minimum and maximum capacity to 20 shortly before the office opens” is incorrect as this is not the most cost-effective option. Note you can choose min, max, or desired for a scheduled action. INCORRECT: “Implement a step scaling action triggered at a lower CPU threshold, and decrease the cooldown period” is incorrect as AWS recommend you use target tracking in place of step scaling for most use cases.
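A boto3 sketch of such a policy follows; the group name, target value, and warmup are hypothetical tuning choices.

import boto3

autoscaling = boto3.client('autoscaling')

autoscaling.put_scaling_policy(
    AutoScalingGroupName='web-asg',
    PolicyName='cpu-target-tracking',
    PolicyType='TargetTrackingScaling',
    TargetTrackingConfiguration={
        'PredefinedMetricSpecification': {'PredefinedMetricType': 'ASGAverageCPUUtilization'},
        'TargetValue': 40.0,          # a lower target scales out earlier in the morning ramp-up
    },
    EstimatedInstanceWarmup=120,      # a shorter warmup lets scaling react more quickly
)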

35
Q

An application uses Amazon EC2 instances and an Amazon RDS MySQL database. The database is not currently encrypted. A solutions architect needs to apply encryption to the database for all new and existing data. How should this be accomplished?

A. Create an Amazon ElastiCache cluster and encrypt data using the cache nodes
B. Enable encryption for the database using the API. Take a full snapshot of the database. Delete old snapshots
C. Take a snapshot of the RDS instance. Create an encrypted copy of the snapshot. Restore the RDS instance from the encrypted snapshot
D. Create an RDS read replica with encryption at rest enabled. Promote the read replica to master and switch the application over to the new master. Delete the old RDS instance

A

C. Take a snapshot of the RDS instance. Create an encrypted copy of the snapshot. Restore the RDS instance from the encrypted snapshot

Explanation:
There are some limitations for encrypted Amazon RDS DB instances: you can’t modify an existing unencrypted Amazon RDS DB instance to make the instance encrypted, and you can’t create an encrypted read replica from an unencrypted instance. However, you can use the Amazon RDS snapshot feature to encrypt an unencrypted snapshot that’s taken from the RDS database that you want to encrypt. Restore a new RDS DB instance from the encrypted snapshot to deploy a new encrypted DB instance. Finally, switch your connections to the new DB instance. CORRECT: “Take a snapshot of the RDS instance. Create an encrypted copy of the snapshot. Restore the RDS instance from the encrypted snapshot” is the correct answer. INCORRECT: “Create an Amazon ElastiCache cluster and encrypt data using the cache nodes” is incorrect as you cannot encrypt an RDS database using an ElastiCache cache node. INCORRECT: “Enable encryption for the database using the API. Take a full snapshot of the database. Delete old snapshots” is incorrect as you cannot enable encryption for an existing database. INCORRECT: “Create an RDS read replica with encryption at rest enabled. Promote the read replica to master and switch the application over to the new master. Delete the old RDS instance” is incorrect as you cannot create an encrypted read replica from an unencrypted database instance.
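The snapshot, copy, restore sequence as a boto3 sketch (identifiers and the KMS key are hypothetical; in practice you would wait for each snapshot to become available between steps):

import boto3

rds = boto3.client('rds')

# 1. Snapshot the unencrypted instance
rds.create_db_snapshot(DBInstanceIdentifier='mydb', DBSnapshotIdentifier='mydb-snap')

# 2. Copy the snapshot with a KMS key to produce an encrypted copy
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier='mydb-snap',
    TargetDBSnapshotIdentifier='mydb-snap-encrypted',
    KmsKeyId='alias/aws/rds',
)

# 3. Restore a new, encrypted instance from the encrypted snapshot
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier='mydb-encrypted',
    DBSnapshotIdentifier='mydb-snap-encrypted',
)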

36
Q

A company has 500 TB of data in an on-premises file share that needs to be moved to Amazon S3 Glacier. The migration must not saturate the company’s low-bandwidth internet connection and the migration must be completed within a few weeks. What is the MOST cost-effective solution?

A. Create an AWS Direct Connect connection and migrate the data straight into Amazon Glacier
B. Order 7 AWS Snowball appliances and select an S3 Glacier vault as the destination. Create a bucket policy to enforce a VPC endpoint
C. Use AWS Global Accelerator to accelerate upload and optimize usage of the available bandwidth
D. Order 7 AWS Snowball appliances and select an Amazon S3 bucket as the destination. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier

A

D. Order 7 AWS Snowball appliances and select an Amazon S3 bucket as the destination. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier

Explanation:
As the company’s internet link is low-bandwidth uploading directly to Amazon S3 (ready for transition to Glacier) would saturate the link. The best alternative is to use AWS Snowball appliances. The Snowball Edge appliance can hold up to 80 TB of data so 7 devices would be required to migrate 500 TB of data. Snowball moves data into AWS using a hardware device and the data is then copied into an Amazon S3 bucket of your choice. From there, lifecycle policies can transition the S3 objects to Amazon S3 Glacier. CORRECT: “Order 7 AWS Snowball appliances and select an Amazon S3 bucket as the destination. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier” is the correct answer. INCORRECT: “Order 7 AWS Snowball appliances and select an S3 Glacier vault as the destination. Create a bucket policy to enforce a VPC endpoint” is incorrect as you cannot set a Glacier vault as the destination, it must be an S3 bucket. You also can’t enforce a VPC endpoint using a bucket policy. INCORRECT: “Create an AWS Direct Connect connection and migrate the data straight into Amazon Glacier” is incorrect as this is not the most cost-effective option and takes time to set up. INCORRECT: “Use AWS Global Accelerator to accelerate upload and optimize usage of the available bandwidth” is incorrect as this service is not used for accelerating or optimizing the upload of data from on-premises networks.

37
Q

A company has refactored a legacy application to run as two microservices using Amazon ECS. The application processes data in two parts and the second part of the process takes longer than the first.

How can a solutions architect integrate the microservices and allow them to scale independently?

A. Implement code in microservice 1 to send data to an Amazon S3 bucket. Use S3 event notifications to invoke microservice 2
B. Implement code in microservice 1 to publish data to an Amazon SNS topic. Implement code in microservice 2 to subscribe to this topic
C. Implement code in microservice 1 to send data to Amazon Kinesis Data Firehose. Implement code in microservice 2 to read from Kinesis Data Firehose
D. Implement code in microservice 1 to send data to an Amazon SQS queue. Implement code in microservice 2 to process messages from the queue

A

D. Implement code in microservice 1 to send data to an Amazon SQS queue. Implement code in microservice 2 to process messages from the queue

Explanation:
This is a good use case for Amazon SQS. The microservices must be decoupled so they can scale independently. An Amazon SQS queue will enable microservice 1 to add messages to the queue. Microservice 2 can then pick up the messages and process them. This ensures that if there’s a spike in traffic on the frontend, messages do not get lost due to the backend process not being ready to process them. CORRECT: “Implement code in microservice 1 to send data to an Amazon SQS queue. Implement code in microservice 2 to process messages from the queue” is the correct answer. INCORRECT: “Implement code in microservice 1 to send data to an Amazon S3 bucket. Use S3 event notifications to invoke microservice 2” is incorrect as a message queue would be preferable to an S3 bucket. INCORRECT: “Implement code in microservice 1 to publish data to an Amazon SNS topic. Implement code in microservice 2 to subscribe to this topic” is incorrect as notifications to topics are pushed to subscribers. In this case we want the second microservice to pick up the messages when ready (pull them). INCORRECT: “Implement code in microservice 1 to send data to Amazon Kinesis Data Firehose. Implement code in microservice 2 to read from Kinesis Data Firehose” is incorrect as this is not how Firehose works. Firehose sends data directly to destinations; it is not a message queue.
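A minimal producer/consumer sketch with boto3 (the queue URL and message shape are hypothetical):

import json
import boto3

sqs = boto3.client('sqs')
QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/example-queue'  # hypothetical

# Microservice 1: publish work for the slower second stage
sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps({'job_id': '42'}))

# Microservice 2: poll and process at its own pace, then delete
resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
for msg in resp.get('Messages', []):
    job = json.loads(msg['Body'])
    # ... process part two of the job here ...
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg['ReceiptHandle'])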

38
Q

A solutions architect is designing a two-tier web application. The application consists of a public-facing web tier hosted on Amazon EC2 in public subnets. The database tier consists of Microsoft SQL Server running on Amazon EC2 in a private subnet. Security is a high priority for the company.

How should security groups be configured in this situation? (Select TWO.)

A. Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0
B. Configure the security group for the web tier to allow outbound traffic on port 443 from 0.0.0.0/0
C. Configure the security group for the database tier to allow inbound traffic on port 1433 from the security group for the web tier
D. Configure the security group for the database tier to allow outbound traffic on ports 443 and 1433 to the security group for the web tier
E. Configure the security group for the database tier to allow inbound traffic on ports 443 and 1433 from the security group for the web tier

A

A. Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0

C. Configure the security group for the database tier to allow inbound traffic on port 1433 from the security group for the web tier

Explanation:
In this scenario an inbound rule is required to allow traffic from any internet client to the web front end on SSL/TLS port 443. The source should therefore be set to 0.0.0.0/0 to allow any inbound traffic.

To secure the connection from the web frontend to the database tier, an outbound rule should be created from the public EC2 security group with a destination of the private EC2 security group. The port should be set to 1433 for Microsoft SQL Server. The private EC2 security group will also need to allow inbound traffic on 1433 from the public EC2 security group.

CORRECT: “Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0” is a correct answer. CORRECT: “Configure the security group for the database tier to allow inbound traffic on port 1433 from the security group for the web tier” is also a correct answer. INCORRECT: “Configure the security group for the web tier to allow outbound traffic on port 443 from 0.0.0.0/0” is incorrect as this is configured backwards. INCORRECT: “Configure the security group for the database tier to allow outbound traffic on ports 443 and 1433 to the security group for the web tier” is incorrect as the SQL Server database instance does not need to send outbound traffic on either of these ports. INCORRECT: “Configure the security group for the database tier to allow inbound traffic on ports 443 and 1433 from the security group for the web tier” is incorrect as the database tier does not need to allow inbound traffic on port 443.
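The two rules as a boto3 sketch (the security group IDs are hypothetical):

import boto3

ec2 = boto3.client('ec2')

# Web tier: allow HTTPS from anywhere
ec2.authorize_security_group_ingress(
    GroupId='sg-web-example',
    IpPermissions=[{'IpProtocol': 'tcp', 'FromPort': 443, 'ToPort': 443,
                    'IpRanges': [{'CidrIp': '0.0.0.0/0'}]}],
)

# Database tier: allow SQL Server (1433) only from the web tier's security group
ec2.authorize_security_group_ingress(
    GroupId='sg-db-example',
    IpPermissions=[{'IpProtocol': 'tcp', 'FromPort': 1433, 'ToPort': 1433,
                    'UserIdGroupPairs': [{'GroupId': 'sg-web-example'}]}],
)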

39
Q

A solutions architect has created a new AWS account and must secure AWS account root user access.

Which combination of actions will accomplish this? (Select TWO.)

A. Ensure the root user uses a strong password
B. Enable multi-factor authentication to the root user
C. Store root user access keys in an encrypted Amazon S3 bucket
D. Add the root user to a group containing administrative permissions
E. Delete the root user account

A

A. Ensure the root user uses a strong password
B. Enable multi-factor authentication to the root user

Explanation:
There are several security best practices for securing the root user account:

Lock away root user access keys OR delete them if possible
Use a strong password
Enable multi-factor authentication (MFA)

The root user automatically has full privileges to the account and these privileges cannot be restricted so it is extremely important to follow best practice advice about securing the root user account.

CORRECT: “Ensure the root user uses a strong password” is the correct answer.
CORRECT: “Enable multi-factor authentication to the root user” is the correct answer.
INCORRECT: “Store root user access keys in an encrypted Amazon S3 bucket” is incorrect as the best practice is to lock away or delete the root user access keys. An S3 bucket is not a suitable location for storing them, even if encrypted.
INCORRECT: “Add the root user to a group containing administrative permissions” is incorrect as the root user already has full privileges; group membership neither restricts nor secures it.
INCORRECT: “Delete the root user account” is incorrect as the root user account cannot be deleted.

40
Q

A company allows its developers to attach existing IAM policies to existing IAM roles to enable faster experimentation and agility. However, the security operations team is concerned that the developers could attach the existing administrator policy, which would allow the developers to circumvent any other security policies. How should a solutions architect address this issue?

A. Create an Amazon SNS topic to send an alert every time a developer creates a new policy
B. Use service control policies to disable IAM activity across all accounts in the organizational unit
C. Prevent the developers from attaching any policies and assign all IAM duties to the security operations team
D. Set an IAM permissions boundary on the developer IAM role that explicitly denies attaching the administrator policy

A

D. Set an IAM permissions boundary on the developer IAM role that explicitly denies attaching the administrator policy

Explanation:
The permissions boundary for an IAM entity (user or role) sets the maximum permissions that the entity can have. This can change the effective permissions for that user or role. The effective permissions for an entity are the permissions that are granted by all the policies that affect the user or role. Within an account, the permissions for an entity can be affected by identity-based policies, resource-based policies, permissions boundaries, Organizations SCPs, or session policies. Therefore, the solutions architect can set an IAM permissions boundary on the developer IAM role that explicitly denies attaching the administrator policy. CORRECT: “Set an IAM permissions boundary on the developer IAM role that explicitly denies attaching the administrator policy” is the correct answer. INCORRECT: “Create an Amazon SNS topic to send an alert every time a developer creates a new policy” is incorrect as this would mean investigating every incident, which is not an efficient solution. INCORRECT: “Use service control policies to disable IAM activity across all accounts in the organizational unit” is incorrect as this would prevent the developers from being able to work with IAM completely. INCORRECT: “Prevent the developers from attaching any policies and assign all IAM duties to the security operations team” is incorrect as this is not necessary. The requirement is to allow developers to work with policies; the solution needs to find a secure way of achieving this.
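One way to express such a boundary (the policy content and names are hypothetical) is a managed policy that allows normal work but denies attaching the administrator policy, applied with put_role_permissions_boundary:

import json
import boto3

iam = boto3.client('iam')

boundary = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "*", "Resource": "*"},
        {
            # Explicitly deny attaching the AdministratorAccess managed policy
            "Effect": "Deny",
            "Action": ["iam:AttachRolePolicy", "iam:AttachUserPolicy", "iam:AttachGroupPolicy"],
            "Resource": "*",
            "Condition": {"ArnEquals": {"iam:PolicyARN": "arn:aws:iam::aws:policy/AdministratorAccess"}},
        },
    ],
}

resp = iam.create_policy(PolicyName='developer-boundary', PolicyDocument=json.dumps(boundary))
iam.put_role_permissions_boundary(
    RoleName='developer-role',
    PermissionsBoundary=resp['Policy']['Arn'],
)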

41
Q

A solutions architect is optimizing a website for real-time streaming and on-demand videos. The website’s users are located around the world and the solutions architect needs to optimize the performance for both the real-time and on-demand streaming. Which service should the solutions architect choose?

A. Amazon CloudFront
B. AWS Global Accelerator
C. Amazon Route 53
D. Amazon S3 Transfer Acceleration

A

A. Amazon CloudFront

Explanation:
Amazon CloudFront can be used to stream video to users across the globe using a wide variety of protocols that are layered on top of HTTP. This can include both on-demand video as well as real time streaming video. CORRECT: “Amazon CloudFront” is the correct answer. INCORRECT: “AWS Global Accelerator” is incorrect as this would be an expensive way of getting the content closer to users compared to using CloudFront. As this is a use case for CloudFront and there are so many edge locations it is the better option. INCORRECT: “Amazon Route 53” is incorrect as you still need a solution for getting the content closer to users. INCORRECT: “Amazon S3 Transfer Acceleration” is incorrect as this is used to accelerate uploads of data to Amazon S3 buckets.

42
Q

Objects uploaded to Amazon S3 are initially accessed frequently for a period of 30 days. Then, objects are infrequently accessed for up to 90 days. After that, the objects are no longer needed. How should lifecycle management be configured?

A. Transition to STANDARD_IA after 30 days. After 90 days transition to GLACIER
B. Transition to STANDARD_IA after 30 days. After 90 days transition to ONEZONE_IA
C. Transition to ONEZONE_IA after 30 days. After 90 days expire the objects
D. Transition to REDUCED_REDUNDANCY after 30 days. After 90 days expire the objects

A

C. Transition to ONEZONE_IA after 30 days. After 90 days expire the objects

Explanation:
In this scenario we need to keep the objects in the STANDARD storage class for 30 days as the objects are being frequently accessed. We can configure a lifecycle action that then transitions the objects to INTELLIGENT_TIERING, STANDARD_IA, or ONEZONE_IA. After that we don’t need the objects so they can be expired. All other options do not meet the stated requirements or are not supported lifecycle transitions. For example:

You cannot transition to REDUCED_REDUNDANCY from any storage class.
Transitioning from STANDARD_IA to ONEZONE_IA is possible but we do not want to keep the objects so it incurs unnecessary costs.
Transitioning to GLACIER is possible but again incurs unnecessary costs.
CORRECT: “Transition to ONEZONE_IA after 30 days. After 90 days expire the objects” is the correct answer. INCORRECT: “Transition to STANDARD_IA after 30 days. After 90 days transition to GLACIER” is incorrect. INCORRECT: “Transition to STANDARD_IA after 30 days. After 90 days transition to ONEZONE_IA” is incorrect. INCORRECT: “Transition to REDUCED_REDUNDANCY after 30 days. After 90 days expire the objects” is incorrect.
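The chosen lifecycle as a boto3 sketch (the bucket name is hypothetical):

import boto3

s3 = boto3.client('s3')

s3.put_bucket_lifecycle_configuration(
    Bucket='example-bucket',
    LifecycleConfiguration={'Rules': [{
        'ID': 'tier-then-expire',
        'Status': 'Enabled',
        'Filter': {'Prefix': ''},                                    # apply to all objects
        'Transitions': [{'Days': 30, 'StorageClass': 'ONEZONE_IA'}],
        'Expiration': {'Days': 90},                                  # delete when no longer needed
    }]},
)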

43
Q

A company has acquired another business and needs to migrate their 50TB of data into AWS within 1 month. They also require a secure, reliable and private connection to the AWS cloud. How are these requirements best accomplished?

A. Provision an AWS Direct Connect connection and migrate the data over the link
B. Migrate data using AWS Snowball. Provision an AWS VPN initially and order a Direct Connect link
C. Launch a Virtual Private Gateway (VPG) and migrate the data over the AWS VPN
D. Provision an AWS VPN CloudHub connection and migrate the data over redundant links

A

B. Migrate data using AWS Snowball. Provision an AWS VPN initially and order a Direct Connect link

Explanation:
AWS Direct Connect provides a secure, reliable and private connection. However, lead times are often longer than 1 month so it cannot be used to migrate data within the timeframes. Therefore, it is better to use AWS Snowball to move the data and order a Direct Connect connection to satisfy the other requirement later on. In the meantime the organization can use an AWS VPN for secure, private access to their VPC. CORRECT: “Migrate data using AWS Snowball. Provision an AWS VPN initially and order a Direct Connect link” is the correct answer. INCORRECT: “Provision an AWS Direct Connect connection and migrate the data over the link” is incorrect due to the lead time for installation. INCORRECT: “Launch a Virtual Private Gateway (VPG) and migrate the data over the AWS VPN” is incorrect. A VPG is the AWS side of an AWS VPN. A VPN traverses the public Internet and is not considered reliable as you can never guarantee the latency. INCORRECT: “Provision an AWS VPN CloudHub connection and migrate the data over redundant links” is incorrect. AWS VPN CloudHub is a service for connecting multiple sites into your VPC over VPN connections. It is not used for aggregating links and the limitations of Internet bandwidth from the company where the data is stored will still be an issue. It also uses the public Internet so is not a private or reliable connection.

44
Q

An application on Amazon Elastic Container Service (ECS) performs data processing in two parts. The second part takes much longer to complete. How can an Architect decouple the data processing from the backend application component?

A. Process both parts using the same ECS task. Create an Amazon Kinesis Firehose stream
B. Process each part using a separate ECS task. Create an Amazon SNS topic and send a notification when the processing completes
C. Create an Amazon DynamoDB table and save the output of the first part to the table
D. Process each part using a separate ECS task. Create an Amazon SQS queue

A

D. Process each part using a separate ECS task. Create an Amazon SQS queue

Explanation:
Processing each part using a separate ECS task may not be essential but means you can separate the processing of the data. Amazon Simple Queue Service (SQS) is used for decoupling applications. It is a message queue on which you place messages for processing by application components. In this case you can process each data processing part in separate ECS tasks and have them write to an Amazon SQS queue. That way the backend can pick up the messages from the queue when they’re ready and there is no delay due to the second part not being complete. CORRECT: “Process each part using a separate ECS task. Create an Amazon SQS queue” is the correct answer. INCORRECT: “Process both parts using the same ECS task. Create an Amazon Kinesis Firehose stream” is incorrect. Amazon Kinesis Firehose is used for streaming data. This is not an example of streaming data. In this case SQS is better as a message can be placed on a queue to indicate that the job is complete and ready to be picked up by the backend application component. INCORRECT: “Process each part using a separate ECS task. Create an Amazon SNS topic and send a notification when the processing completes” is incorrect. Amazon Simple Notification Service (SNS) can be used for sending notifications. It is useful when you need to notify multiple AWS services. In this case an Amazon SQS queue is a better solution as there is no mention of multiple AWS services and this is an ideal use case for SQS. INCORRECT: “Create an Amazon DynamoDB table and save the output of the first part to the table” is incorrect. Amazon DynamoDB is unlikely to be a good solution for this requirement. There is a limit on the maximum amount of data that you can store in an entry in a DynamoDB table.

45
Q

An application is running on Amazon EC2 behind an Elastic Load Balancer (ELB). Content is being published using Amazon CloudFront and you need to restrict the ability for users to circumvent CloudFront and access the content directly through the ELB.

How can you configure this solution?

A. Create an Origin Access Identity (OAI) and associate it with the distribution
B. Use signed URLs or signed cookies to limit access to the content
C. Use a Network ACL to restrict access to the ELB
D. Create a VPC Security Group for the ELB and use AWS Lambda to automatically update the CloudFront internal service IP addresses when they change

A

D. Create a VPC Security Group for the ELB and use AWS Lambda to automatically update the CloudFront internal service IP addresses when they change

Explanation:
The only way to get this working is by using a VPC Security Group for the ELB that is configured to allow only the internal service IP ranges associated with CloudFront. As these are updated from time to time, you can use AWS Lambda to automatically update the addresses. This is done with a Lambda function that is triggered by the SNS notification AWS publishes whenever its published IP address ranges change. CORRECT: “Create a VPC Security Group for the ELB and use AWS Lambda to automatically update the CloudFront internal service IP addresses when they change” is the correct answer. INCORRECT: “Create an Origin Access Identity (OAI) and associate it with the distribution” is incorrect. You can use an OAI to restrict access to content in Amazon S3 but not on EC2 or ELB. INCORRECT: “Use signed URLs or signed cookies to limit access to the content” is incorrect.
Signed cookies and URLs are used to limit access to files but this does not stop people from circumventing CloudFront and accessing the ELB directly. INCORRECT: “Use a Network ACL to restrict access to the ELB” is incorrect. A Network ACL can be used to restrict access to an ELB but it is recommended to use security groups and this solution is incomplete as it does not account for the fact that the internal service IP ranges change over time.
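A simplified sketch of such a Lambda function follows; the security group ID is hypothetical, and a production version would also diff against existing rules and revoke stale ranges rather than only authorizing new ones.

import json
import urllib.request
import boto3

ec2 = boto3.client('ec2')
SECURITY_GROUP_ID = 'sg-elb-example'  # hypothetical

def handler(event, context):
    # Triggered by the SNS notification AWS publishes when ip-ranges.json changes
    with urllib.request.urlopen('https://ip-ranges.amazonaws.com/ip-ranges.json') as resp:
        data = json.load(resp)
    cloudfront_ranges = [p['ip_prefix'] for p in data['prefixes'] if p['service'] == 'CLOUDFRONT']
    ec2.authorize_security_group_ingress(
        GroupId=SECURITY_GROUP_ID,
        IpPermissions=[{'IpProtocol': 'tcp', 'FromPort': 443, 'ToPort': 443,
                        'IpRanges': [{'CidrIp': c} for c in cloudfront_ranges]}],
    )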

46
Q

A company has divested a single business unit and needs to move the AWS account owned by the business unit to another AWS Organization. How can this be achieved?

A. Create a new account in the destination AWS Organization and migrate resources
B. Create a new account in the destination AWS Organization and share the original resources using AWS Resource Access Manager
C. Migrate the account using AWS CloudFormation
D. Migrate the account using the AWS Organizations console

A

D. Migrate the account using the AWS Organizations console

Explanation:
Accounts can be migrated between organizations. To do this you must have root or IAM access to both the member and master accounts. Resources will remain under the control of the migrated account. CORRECT: “Migrate the account using the AWS Organizations console” is the correct answer. INCORRECT: “Create a new account in the destination AWS Organization and migrate resources” is incorrect. You do not need to create a new account in the destination AWS Organization as you can just migrate the existing account. INCORRECT: “Create a new account in the destination AWS Organization and share the original resources using AWS Resource Access Manager” is incorrect. You do not need to create a new account in the destination AWS Organization as you can just migrate the existing account. INCORRECT: “Migrate the account using AWS CloudFormation” is incorrect. You do not need to use AWS CloudFormation. The Organizations API or AWS CLI can be used when there are many accounts to migrate, and CloudFormation could add further automation, but neither is necessary for this scenario.
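Under the hood the move is an invite/accept flow rather than a single call; a boto3 sketch follows (the account ID is hypothetical, and each step must be run with the credentials of the account indicated in the comment):

import boto3

orgs = boto3.client('organizations')

# 1. Run as the management account of the OLD organization
orgs.remove_account_from_organization(AccountId='111122223333')

# 2. Run as the management account of the DESTINATION organization
invite = orgs.invite_account_to_organization(Target={'Id': '111122223333', 'Type': 'ACCOUNT'})

# 3. Run as the member account being moved: accept the invitation handshake
orgs.accept_handshake(HandshakeId=invite['Handshake']['Id'])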

47
Q

An Amazon RDS PostgreSQL database is configured as Multi-AZ. A solutions architect needs to scale read performance and the solution must be configured for high availability. What is the most cost-effective solution?

A. Create a read replica as a Multi-AZ DB instance
B. Deploy a read replica in a different AZ to the master DB instance
C. Deploy a read replica using Amazon ElastiCache
D. Deploy a read replica in the same AZ as the master DB instance

A

A. Create a read replica as a Multi-AZ DB instance

Explanation:
You can create a read replica as a Multi-AZ DB instance. Amazon RDS creates a standby of your replica in another Availability Zone for failover support for the replica. Creating your read replica as a Multi-AZ DB instance is independent of whether the source database is a Multi-AZ DB instance. CORRECT: “Create a read replica as a Multi-AZ DB instance” is the correct answer. INCORRECT: “Deploy a read replica in a different AZ to the master DB instance” is incorrect as the replica itself is not highly available; if its AZ fails the read scaling is lost. INCORRECT: “Deploy a read replica using Amazon ElastiCache” is incorrect as ElastiCache is an in-memory caching service, not an RDS read replica. INCORRECT: “Deploy a read replica in the same AZ as the master DB instance” is incorrect as this configuration is not highly available.
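A one-call boto3 sketch (instance identifiers are hypothetical):

import boto3

rds = boto3.client('rds')

# The replica scales reads; MultiAZ=True gives the replica its own standby for HA
rds.create_db_instance_read_replica(
    DBInstanceIdentifier='mydb-replica',
    SourceDBInstanceIdentifier='mydb',
    MultiAZ=True,
)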

48
Q

A High Performance Computing (HPC) application will be migrated to AWS. The application requires low network latency and high throughput between nodes and will be deployed in a single AZ. How should the application be deployed for best inter-node performance?

A. In a partition placement group
B. In a cluster placement group
C. In a spread placement group
D. Behind a Network Load Balancer (NLB)

A

B. In a cluster placement group

Explanation:
A cluster placement group provides low latency and high throughput for instances deployed in a single AZ. It is the best way to provide the performance required for this application. CORRECT: “In a cluster placement group” is the correct answer. INCORRECT: “In a partition placement group” is incorrect. A partition placement group is used for grouping instances into logical segments. It provides control and visibility into instance placement but is not the best option for performance. INCORRECT: “In a spread placement group” is incorrect. A spread placement group is used to spread instances across underlying hardware. It is not the best option for performance. INCORRECT: “Behind a Network Load Balancer (NLB)” is incorrect. A network load balancer is used for distributing incoming connections; it does not assist with inter-node performance.
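A boto3 sketch of launching into a cluster placement group (the AMI, instance type, and count are hypothetical):

import boto3

ec2 = boto3.client('ec2')

ec2.create_placement_group(GroupName='hpc-cluster', Strategy='cluster')

# Launch all nodes into the group so they share low-latency, high-throughput networking
ec2.run_instances(
    ImageId='ami-0example',
    InstanceType='c5n.18xlarge',
    MinCount=8,
    MaxCount=8,
    Placement={'GroupName': 'hpc-cluster'},
)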

49
Q

A web application is deployed in multiple regions behind an ELB Application Load Balancer. You need deterministic routing to the closest region and automatic failover. Traffic should traverse the AWS global network for consistent performance.

How can this be achieved?

A. Configure AWS Global Accelerator and configure the ALBs as targets
B. Place an EC2 Proxy in front of the ALB and configure automatic failover
C. Create a Route 53 Alias record for each ALB and configure a latency-based routing policy
D. Use a CloudFront distribution with multiple custom origins in each region and configure for high availability

A

A. Configure AWS Global Accelerator and configure the ALBs as targets

Explanation:
AWS Global Accelerator is a service that improves the availability and performance of applications with local or global users. You can configure the ALB as a target and Global Accelerator will automatically route users to the closest point of presence. Failover is automatic and does not rely on any client side cache changes as the IP addresses for Global Accelerator are static anycast addresses. Global Accelerator also uses the AWS global network which ensures consistent performance. CORRECT: “Configure AWS Global Accelerator and configure the ALBs as targets” is the correct answer. INCORRECT: “Place an EC2 Proxy in front of the ALB and configure automatic failover” is incorrect. Placing an EC2 proxy in front of the ALB does not meet the requirements. This solution does not ensure deterministic routing to the closest region and failover happens within a region, which does not protect against regional failure. Also, this introduces a potential bottleneck and lack of redundancy. INCORRECT: “Create a Route 53 Alias record for each ALB and configure a latency-based routing policy” is incorrect. A Route 53 Alias record for each ALB with latency-based routing does provide routing based on latency and failover. However, the traffic will not traverse the AWS global network. INCORRECT: “Use a CloudFront distribution with multiple custom origins in each region and configure for high availability” is incorrect. You can use CloudFront with multiple custom origins and configure for HA. However, the traffic will not traverse the AWS global network.
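A boto3 sketch of fronting the regional ALBs with Global Accelerator (the ALB ARNs are hypothetical; the Global Accelerator API is served from us-west-2):

import boto3

ga = boto3.client('globalaccelerator', region_name='us-west-2')

acc = ga.create_accelerator(Name='web-app', IpAddressType='IPV4', Enabled=True)
listener = ga.create_listener(
    AcceleratorArn=acc['Accelerator']['AcceleratorArn'],
    Protocol='TCP',
    PortRanges=[{'FromPort': 443, 'ToPort': 443}],
)

# One endpoint group per region, each pointing at that region's ALB
albs = {
    'us-east-1': 'arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/example1/1111111111111111',
    'eu-west-1': 'arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/app/example2/2222222222222222',
}
for region, alb_arn in albs.items():
    ga.create_endpoint_group(
        ListenerArn=listener['Listener']['ListenerArn'],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{'EndpointId': alb_arn, 'Weight': 128}],
    )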

50
Q

A company’s Amazon EC2 instances were terminated or stopped, resulting in a loss of important data that was stored on attached EC2 instance stores.

They want to avoid this happening in the future and need a solution that can scale as data volumes increase with the LEAST amount of management and configuration.

Which storage is most appropriate?

A. Amazon EFS
B. Amazon S3
C. Amazon EBS
D. Amazon RDS

A

A. Amazon EFS

Explanation:
Amazon EFS is a fully managed service that requires no changes to your existing applications and tools, providing access through a standard file system interface for seamless integration. It is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files. This is an easy solution to implement and the option that requires the least management and configuration. An instance store provides temporary block-level storage for an EC2 instance; if you terminate the instance you lose all your data. The alternative is Amazon Elastic Block Store (EBS), which also provides block-level storage but with persistent data. However, EBS is not a fully managed solution and doesn’t grow automatically as your data requirements increase - you would need to increase the volume size and then extend your filesystem. CORRECT: “Amazon EFS” is the correct answer. INCORRECT: “Amazon S3” is incorrect as S3 is an object storage service accessed via an API, which would require changes to the application. INCORRECT: “Amazon EBS” is incorrect as EBS volumes must be resized manually as data volumes increase. INCORRECT: “Amazon RDS” is incorrect as RDS is a managed database service, not general-purpose file storage.

51
Q

An application launched on Amazon EC2 instances needs to publish personally identifiable information (PII) about customers using Amazon SNS. The application is launched in private subnets within an Amazon VPC. Which is the MOST secure way to allow the application to access service endpoints in the same region?

A. Use an Internet Gateway
B. Use AWS PrivateLink
C. Use a proxy instance
D. Use a NAT gateway

A

B. Use AWS PrivateLink

Explanation:
To publish messages to Amazon SNS topics from an Amazon VPC, create an interface VPC endpoint. Then, you can publish messages to SNS topics while keeping the traffic within the network that you manage with the VPC. This is the most secure option as traffic does not need to traverse the Internet. CORRECT: “Use AWS PrivateLink” is the correct answer. INCORRECT: “Use an Internet Gateway” is incorrect. Internet Gateways are used by instances in public subnets to access the Internet and this is less secure than a VPC endpoint. INCORRECT: “Use a proxy instance” is incorrect. A proxy instance will also use the public Internet and so is less secure than a VPC endpoint. INCORRECT: “Use a NAT gateway” is incorrect. A NAT Gateway is used by instances in private subnets to access the Internet and this is less secure than a VPC endpoint.
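
As a rough sketch, creating the interface endpoint for SNS with boto3 might look like this (all IDs are hypothetical):

    import boto3

    ec2 = boto3.client('ec2', region_name='us-east-1')

    # Interface endpoint (AWS PrivateLink) so instances in private subnets can
    # reach SNS without traversing the Internet.
    ec2.create_vpc_endpoint(
        VpcEndpointType='Interface',
        VpcId='vpc-0123456789abcdef0',
        ServiceName='com.amazonaws.us-east-1.sns',
        SubnetIds=['subnet-0123456789abcdef0'],
        SecurityGroupIds=['sg-0123456789abcdef0'],
        PrivateDnsEnabled=True,  # the app keeps using the public SNS hostname
    )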

52
Q

A Solutions Architect is designing a web application that runs on Amazon EC2 instances behind an Elastic Load Balancer. All data in transit must be encrypted.

Which solution options meet the encryption requirement? (Select TWO.)

A. Use a Network Load Balancer (NLB) with a TCP listener, then terminate SSL on EC2 instances
B. Use an Application Load Balancer (ALB) with an HTTPS listener, then install SSL certificates on the ALB and EC2 instances
C. Use an Application Load Balancer (ALB) in passthrough mode, then terminate SSL on EC2 instances
D. Use a Network Load Balancer (NLB) with an HTTPS listener, then install SSL certificates on the NLB and EC2 instances
E. Use an Application Load Balancer (ALB) with a TCP listener, then terminate SSL on EC2 instances

A

A. Use a Network Load Balancer (NLB) with a TCP listener, then terminate SSL on EC2 instances
B. Use an Application Load Balancer (ALB) with an HTTPS listener, then install SSL certificates on the ALB and EC2 instances

Explanation:
You can pass encrypted traffic through an NLB using a TCP listener and terminate SSL on the EC2 instances, so this is a valid answer. You can use an HTTPS listener with an ALB and install certificates on both the ALB and EC2 instances. This does not use passthrough; instead, it terminates the first SSL connection on the ALB and then re-encrypts the traffic for the connection to the EC2 instances. CORRECT: “Use a Network Load Balancer (NLB) with a TCP listener, then terminate SSL on EC2 instances” is the correct answer. CORRECT: “Use an Application Load Balancer (ALB) with an HTTPS listener, then install SSL certificates on the ALB and EC2 instances” is the correct answer. INCORRECT: “Use an Application Load Balancer (ALB) in passthrough mode, then terminate SSL on EC2 instances” is incorrect. You cannot use passthrough mode with an ALB and terminate SSL on the EC2 instances. INCORRECT: “Use a Network Load Balancer (NLB) with an HTTPS listener, then install SSL certificates on the NLB and EC2 instances” is incorrect. You cannot use an HTTPS listener with an NLB. INCORRECT: “Use an Application Load Balancer (ALB) with a TCP listener, then terminate SSL on EC2 instances” is incorrect. You cannot use a TCP listener with an ALB.
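
The two valid listener configurations could be created with boto3 roughly as follows (the load balancer, certificate, and target group ARNs are hypothetical):

    import boto3

    elbv2 = boto3.client('elbv2', region_name='us-east-1')

    alb_arn = 'arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/web/1111'
    nlb_arn = 'arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/web/2222'
    cert_arn = 'arn:aws:acm:us-east-1:123456789012:certificate/abcd-1234'
    https_tg = 'arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/https-tg/3333'
    tcp_tg = 'arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/tcp-tg/4444'

    # ALB: terminate TLS at the load balancer, then re-encrypt to the instances
    # via an HTTPS target group (certificates installed on both).
    elbv2.create_listener(
        LoadBalancerArn=alb_arn,
        Protocol='HTTPS',
        Port=443,
        Certificates=[{'CertificateArn': cert_arn}],
        DefaultActions=[{'Type': 'forward', 'TargetGroupArn': https_tg}],
    )

    # NLB: a TCP listener passes the encrypted stream straight through, so TLS
    # terminates on the EC2 instances.
    elbv2.create_listener(
        LoadBalancerArn=nlb_arn,
        Protocol='TCP',
        Port=443,
        DefaultActions=[{'Type': 'forward', 'TargetGroupArn': tcp_tg}],
    )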

53
Q

An application running video-editing software is using significant memory on an Amazon EC2 instance. How can a user track memory usage on the Amazon EC2 instance?

A. Install the CloudWatch agent on the EC2 instance to push memory usage to an Amazon CloudWatch custom metric
B. Use an instance type that supports memory usage reporting to a metric by default
C. Call Amazon CloudWatch to retrieve the memory usage metric data that exists for the EC2 instance
D. Assign an IAM role to the EC2 instance with an IAM policy granting access to the desired metric

A

A. Install the CloudWatch agent on the EC2 instance to push memory usage to an Amazon CloudWatch custom metric

Explanation:
There is no standard metric in CloudWatch for collecting EC2 memory usage. However, you can use the CloudWatch agent to collect both system metrics and log files from Amazon EC2 instances and on-premises servers. The metrics can be pushed to a CloudWatch custom metric. CORRECT: “Install the CloudWatch agent on the EC2 instance to push memory usage to an Amazon CloudWatch custom metric” is the correct answer. INCORRECT: “Use an instance type that supports memory usage reporting to a metric by default” is incorrect. There is no such thing as an EC2 instance type that supports memory usage reporting to a metric by default. The limitation is not in EC2 but in the metrics that are collected by CloudWatch. INCORRECT: “Call Amazon CloudWatch to retrieve the memory usage metric data that exists for the EC2 instance” is incorrect. As there is no standard metric for collecting EC2 memory usage in CloudWatch the data will not already exist there to be retrieved. INCORRECT: “Assign an IAM role to the EC2 instance with an IAM policy granting access to the desired metric” is incorrect. This is not an issue of permissions.
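
For reference, the memory section of a CloudWatch agent configuration could be generated as in the sketch below (the namespace and output file are illustrative; the key names follow the agent's documented schema):

    import json

    # Minimal agent config that publishes memory usage as a custom metric.
    agent_config = {
        "metrics": {
            "namespace": "CustomEC2",  # hypothetical custom namespace
            "metrics_collected": {
                "mem": {
                    "measurement": ["mem_used_percent"],
                    "metrics_collection_interval": 60,  # seconds
                }
            },
        }
    }

    with open("amazon-cloudwatch-agent.json", "w") as f:
        json.dump(agent_config, f, indent=2)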

54
Q

An organization is migrating data to the AWS cloud. An on-premises application uses Network File System shares and must access the data without code changes. The data is critical and is accessed frequently. Which storage solution should a Solutions Architect recommend to maximize availability and durability?

A. Amazon Elastic Block Store
B. Amazon Simple Storage Service
C. AWS Storage Gateway – File Gateway
D. Amazon Elastic File System

A

C. AWS Storage Gateway – File Gateway

Explanation:
The solution must use NFS file shares to access the migrated data without code modification. This means you can use either Amazon EFS or AWS Storage Gateway – File Gateway. Both of these can be mounted using NFS from on-premises applications. However, EFS is the wrong answer as the question asks to maximize availability and durability. The File Gateway stores its data in Amazon S3, which offers much higher availability and durability than EFS, which is why it is the best solution for this scenario. CORRECT: “AWS Storage Gateway – File Gateway” is the correct answer. INCORRECT: “Amazon Elastic Block Store” is incorrect. Amazon EBS is not a suitable solution as it is a block-based (not file-based like NFS) storage solution that you mount to EC2 instances in the cloud – not from on-premises applications. INCORRECT: “Amazon Simple Storage Service” is incorrect. Amazon S3 does not offer an NFS interface. INCORRECT: “Amazon Elastic File System” is incorrect as explained above.
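
As a hedged sketch, an NFS file share on an existing File Gateway could be created with boto3 as below (the gateway, role, and bucket ARNs are hypothetical, and the IAM role must allow the gateway to access the bucket):

    import boto3

    sgw = boto3.client('storagegateway', region_name='us-east-1')

    # NFS file share whose contents are stored as objects in Amazon S3.
    sgw.create_nfs_file_share(
        ClientToken='unique-request-token-1',
        GatewayARN='arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678',
        Role='arn:aws:iam::123456789012:role/FileGatewayS3Access',
        LocationARN='arn:aws:s3:::migrated-data-bucket',
    )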

55
Q

A Solutions Architect needs to design a solution that will allow Website Developers to deploy static web content without managing server infrastructure. All web content must be accessed over HTTPS with a custom domain name. The solution should be scalable as the company continues to grow. Which of the following will provide the MOST cost-effective solution?

A. Amazon S3 with a static website
B. Amazon CloudFront with an Amazon S3 bucket origin
C. AWS Lambda function with Amazon API Gateway
D. Amazon EC2 instance with Amazon EBS

A

B. Amazon CloudFront with an Amazon S3 bucket origin

Explanation:
You can create an Amazon CloudFront distribution that uses an S3 bucket as the origin. This will allow you to serve the static content using the HTTPS protocol. To serve a static website hosted on Amazon S3, you can deploy a CloudFront distribution using one of these configurations: using a REST API endpoint as the origin with access restricted by an origin access identity (OAI); using a website endpoint as the origin with anonymous (public) access allowed; or using a website endpoint as the origin with access restricted by a Referer header. CORRECT: “Amazon CloudFront with an Amazon S3 bucket origin” is the correct answer. INCORRECT: “Amazon S3 with a static website” is incorrect. You can create a static website using Amazon S3 with a custom domain name. However, you cannot connect to an Amazon S3 static website using HTTPS (only HTTP), so this solution does not work. INCORRECT: “AWS Lambda function with Amazon API Gateway” is incorrect. AWS Lambda and API Gateway are both serverless services; however, this combination does not provide a solution for serving static content over HTTPS. INCORRECT: “Amazon EC2 instance with Amazon EBS” is incorrect. Amazon EC2 with EBS is not a suitable solution as you would need to manage the server infrastructure (which the question states is not desired).
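
A minimal boto3 sketch of the REST-endpoint-plus-OAI configuration might look as follows (the bucket name and OAI ID are hypothetical; serving the custom domain name would additionally require an ACM certificate and Aliases in this config):

    import time
    import boto3

    cf = boto3.client('cloudfront')

    cf.create_distribution(DistributionConfig={
        'CallerReference': str(time.time()),  # any unique string
        'Comment': 'Static web content',
        'Enabled': True,
        'Origins': {'Quantity': 1, 'Items': [{
            'Id': 's3-origin',
            'DomainName': 'my-site-bucket.s3.amazonaws.com',
            'S3OriginConfig': {
                'OriginAccessIdentity': 'origin-access-identity/cloudfront/E1EXAMPLE',
            },
        }]},
        'DefaultCacheBehavior': {
            'TargetOriginId': 's3-origin',
            'ViewerProtocolPolicy': 'redirect-to-https',  # enforce HTTPS to viewers
            'ForwardedValues': {'QueryString': False, 'Cookies': {'Forward': 'none'}},
            'TrustedSigners': {'Enabled': False, 'Quantity': 0},
            'MinTTL': 0,
        },
    })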

56
Q

A Solutions Architect must design a storage solution for incoming billing reports in CSV format. The data will be analyzed infrequently and discarded after 30 days. Which combination of services will be MOST cost-effective in meeting these requirements?

A. Write the files to an S3 bucket and use Amazon Athena to query the data
B. Import the logs to an Amazon Redshift cluster
C. Use AWS Data Pipeline to import the logs into a DynamoDB table
D. Import the logs into an RDS MySQL instance

A

A. Write the files to an S3 bucket and use Amazon Athena to query the data

Explanation:
Amazon S3 is a great solution for storing objects such as these. You only pay for what you use and don’t need to worry about scaling as it will scale as much as you need it to. Using Amazon Athena to analyze the data works well as it is a serverless service, so it will be very cost-effective for use cases where the analysis happens only infrequently. You can also configure Amazon S3 to expire the objects after 30 days. CORRECT: “Write the files to an S3 bucket and use Amazon Athena to query the data” is the correct answer. INCORRECT: “Import the logs to an Amazon Redshift cluster” is incorrect. Importing the log files into an Amazon RedShift cluster would let you perform analytics on the data, as this is the primary use case for RedShift (it’s a data warehouse). However, this is not the most cost-effective solution as RedShift uses EC2 instances (it’s not serverless), so the instances will be running all the time even though the analysis is infrequent. INCORRECT: “Use AWS Data Pipeline to import the logs into a DynamoDB table” is incorrect. AWS Data Pipeline is used to process and move data. You can move data into DynamoDB, but this is not a good storage solution for these log files. Also, there is no analytics solution in this option. INCORRECT: “Import the logs into an RDS MySQL instance” is incorrect. Importing the logs into an RDS MySQL instance is not a good solution. This is not the best storage solution for log files, and its main use case is transactional rather than analytical processing.
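
The 30-day expiry is a one-time lifecycle rule, sketched below with boto3 (the bucket name and prefix are hypothetical):

    import boto3

    s3 = boto3.client('s3')

    # Delete billing reports 30 days after they are written.
    s3.put_bucket_lifecycle_configuration(
        Bucket='billing-reports-bucket',
        LifecycleConfiguration={'Rules': [{
            'ID': 'expire-after-30-days',
            'Status': 'Enabled',
            'Filter': {'Prefix': 'reports/'},
            'Expiration': {'Days': 30},
        }]},
    )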

57
Q

A Solutions Architect must design a solution that encrypts data in Amazon S3. Corporate policy mandates encryption keys be generated and managed on premises. Which solution should the Architect use to meet the security requirements?

A. SSE-C: Server-side encryption with customer-provided encryption keys
B. SSE-S3: Server-side encryption with Amazon-managed master key
C. SSE-KMS: Server-side encryption with AWS KMS managed keys
D. AWS CloudHSM

A

A. SSE-C: Server-side encryption with customer-provided encryption keys

Explanation:
Server-side encryption is about protecting data at rest. Server-side encryption encrypts only the object data, not object metadata. Using server-side encryption with customer-provided encryption keys (SSE-C) allows you to set your own encryption keys. With the encryption key you provide as part of your request, Amazon S3 manages the encryption as it writes to disks and the decryption when you access your objects. Therefore, you don’t need to maintain any code to perform data encryption and decryption. The only thing you do is manage the encryption keys you provide. When you upload an object, Amazon S3 uses the encryption key you provide to apply AES-256 encryption to your data and removes the encryption key from memory. When you retrieve an object, you must provide the same encryption key as part of your request. Amazon S3 first verifies that the encryption key you provided matches and then decrypts the object before returning the object data to you. CORRECT: “SSE-C: Server-side encryption with customer-provided encryption keys” is the correct answer. INCORRECT: “SSE-S3: Server-side encryption with Amazon-managed master key” is incorrect. With SSE-S3, Amazon manages the keys for you, so this is incorrect. INCORRECT: “SSE-KMS: Server-side encryption with AWS KMS managed keys” is incorrect. With SSE-KMS the keys are managed in the AWS Key Management Service, so this is incorrect. INCORRECT: “AWS CloudHSM” is incorrect. With AWS CloudHSM your keys are held in AWS in a hardware security module. Again, the keys are not on premises; they are in AWS, so this is incorrect.
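
With boto3 the SSE-C round trip looks roughly like the sketch below (the bucket name is hypothetical; the key is generated and retained on premises):

    import os
    import boto3

    s3 = boto3.client('s3')

    key = os.urandom(32)  # 256-bit key managed on premises - never stored in AWS

    # Upload: S3 encrypts with the supplied key, then discards it.
    s3.put_object(
        Bucket='sensitive-data-bucket',
        Key='report.csv',
        Body=b'example,data\n',
        SSECustomerAlgorithm='AES256',
        SSECustomerKey=key,  # boto3 computes the key MD5 header for you
    )

    # Download: the same key must be supplied again or the request fails.
    obj = s3.get_object(
        Bucket='sensitive-data-bucket',
        Key='report.csv',
        SSECustomerAlgorithm='AES256',
        SSECustomerKey=key,
    )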

58
Q

A Solutions Architect must select the most appropriate database service for two use cases. A team of data scientists perform complex queries on a data warehouse that take several hours to complete. Another team of scientists need to run fast, repeat queries and update dashboards for customer support staff. Which solution delivers these requirements MOST cost-effectively?

A. RedShift for both use cases
B. RDS for both use cases
C. RedShift for the analytics use case and ElastiCache in front of RedShift for the customer support dashboard
D. RedShift for the analytics use case and RDS for the customer support dashboard

A

A. RedShift for both use cases

Explanation:
RedShift is a columnar data warehouse DB that is ideal for running long complex queries. RedShift can also improve performance for repeat queries by caching the result and returning the cached result when queries are re-run. Dashboard, visualization, and business intelligence (BI) tools that execute repeat queries see a significant boost in performance due to result caching. CORRECT: “RedShift for both use cases” is the correct answer. INCORRECT: “RDS for both use cases” is incorrect. RDS may be a good fit for the fast queries (not for the complex queries) but you now have multiple DBs to manage and multiple sets of data which is not going to be cost-effective. INCORRECT: “RedShift for the analytics use case and ElastiCache in front of RedShift for the customer support dashboard” is incorrect. You could put ElastiCache in front of the RedShift DB and this would provide good performance for the fast, repeat queries. However, it is not essential and would add cost to the solution so is not the most cost-effective option available. INCORRECT: “RedShift for the analytics use case and RDS for the customer support dashboard” is incorrect as RedShift is a better fit for both use cases.

59
Q

A DynamoDB database you manage is randomly experiencing heavy read requests that are causing latency. What is the simplest way to alleviate the performance issues?

A. Create DynamoDB read replicas
B. Enable EC2 Auto Scaling for DynamoDB
C. Create an ElastiCache cluster in front of DynamoDB
D. Enable DynamoDB DAX

A

D. Enable DynamoDB DAX

Explanation:
DynamoDB offers consistent single-digit millisecond latency. However, DynamoDB + DAX further increases performance, with response times in microseconds for millions of requests per second for read-heavy workloads. The DAX cache uses cluster nodes running on Amazon EC2 instances and sits in front of the DynamoDB table. CORRECT: “Enable DynamoDB DAX” is the correct answer. INCORRECT: “Create DynamoDB read replicas” is incorrect. There’s no such thing as DynamoDB Read Replicas (Read Replicas are an RDS concept). INCORRECT: “Enable EC2 Auto Scaling for DynamoDB” is incorrect. You cannot use EC2 Auto Scaling with DynamoDB. You can use Application Auto Scaling to scale DynamoDB, but as the spikes in read traffic are random and Auto Scaling needs time to adjust the capacity of the DB, it wouldn’t be as responsive as using DynamoDB DAX. INCORRECT: “Create an ElastiCache cluster in front of DynamoDB” is incorrect. ElastiCache in front of DynamoDB is not the best answer as DynamoDB DAX is a simpler implementation and provides the required performance.
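
Because the DAX client mirrors the low-level DynamoDB interface, switching the read path requires little code change. A sketch, assuming the amazon-dax-client package and a hypothetical cluster endpoint:

    from amazondax import AmazonDaxClient  # pip install amazon-dax-client

    # Reads go through the DAX cluster; cache hits return in microseconds.
    dax = AmazonDaxClient(
        endpoint_url='dax://my-cluster.abc123.dax-clusters.us-east-1.amazonaws.com:8111',
    )

    item = dax.get_item(
        TableName='Products',            # hypothetical table
        Key={'ProductId': {'S': '42'}},
    )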

60
Q

A large media site has multiple applications running on Amazon ECS. A Solutions Architect needs to use content metadata to route traffic to specific services. What is the MOST efficient method to fulfil this requirement?

A. Use an AWS Classic Load Balancer with a host-based routing rule to route traffic to the correct service
B. Use the AWS CLI to update an Amazon Route 53 hosted zone to route traffic as services get updated
C. Use an AWS Application Load Balancer with a path-based routing rule to route traffic to the correct service
D. Use Amazon CloudFront to manage and route traffic to the correct service

A

C. Use an AWS Application Load Balancer with a path-based routing rule to route traffic to the correct service

Explanation:
The ELB Application Load Balancer can route traffic based on data included in the request, including the host name portion of the URL as well as the URL path. Creating a rule to route traffic based on information in the path will work for this solution, and ALB works well with Amazon ECS. For example, a listener rule can direct traffic that comes in with /orders in the URL path to one target group and all other traffic to another. CORRECT: “Use an AWS Application Load Balancer with a path-based routing rule to route traffic to the correct service” is the correct answer. INCORRECT: “Use an AWS Classic Load Balancer with a host-based routing rule to route traffic to the correct service” is incorrect. The ELB Classic Load Balancer does not support any content-based routing, including host or path-based. INCORRECT: “Use the AWS CLI to update an Amazon Route 53 hosted zone to route traffic as services get updated” is incorrect. Using the AWS CLI to update Route 53 as to how to route traffic may work, but it is definitely not the most efficient way to solve this challenge. INCORRECT: “Use Amazon CloudFront to manage and route traffic to the correct service” is incorrect. Amazon CloudFront does not have the capability to route traffic to different Amazon ECS services based on content metadata.
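
A path-based rule could be added with boto3 as in this sketch (the listener and target group ARNs are hypothetical):

    import boto3

    elbv2 = boto3.client('elbv2', region_name='us-east-1')

    # Send anything under /orders to the orders service's target group;
    # traffic that matches no rule falls through to the default action.
    elbv2.create_rule(
        ListenerArn='arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/media/1111/2222',
        Priority=10,
        Conditions=[{'Field': 'path-pattern', 'Values': ['/orders*']}],
        Actions=[{'Type': 'forward', 'TargetGroupArn':
                  'arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/orders/3333'}],
    )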

61
Q

You have created a file system using Amazon Elastic File System (EFS) which will hold home directories for users. What else needs to be done to enable users to save files to the EFS file system?

A. Create a separate EFS file system for each user and grant read-write-execute permissions on the root directory to the respective user. Then mount the file system to the users’ home directory
B. Modify permissions on the root directory to grant read-write-execute permissions to the users. Then create a subdirectory and mount it to the users’ home directory
C. Instruct the users to create a subdirectory on the file system and mount the subdirectory to their home directory
D. Create a subdirectory for each user and grant read-write-execute permissions to the users. Then mount the subdirectory to the users’ home directory

A

D. Create a subdirectory for each user and grant read-write-execute permissions to the users. Then mount the subdirectory to the users’ home directory

Explanation:
After creating a file system, by default, only the root user (UID 0) has read-write-execute permissions. For other users to modify the file system, the root user must explicitly grant them access. One common use case is to create a “writable” subdirectory under this file system root for each user you create on the EC2 instance and mount it on the user’s home directory. All files and subdirectories the user creates in their home directory are then created on the Amazon EFS file system. CORRECT: “Create a subdirectory for each user and grant read-write-execute permissions to the users. Then mount the subdirectory to the users’ home directory” is the correct answer. INCORRECT: “Create a separate EFS file system for each user and grant read-write-execute permissions on the root directory to the respective user. Then mount the file system to the users’ home directory” is incorrect. You don’t want to create a separate EFS file system for each user; this would be a higher cost and require more management overhead. INCORRECT: “Modify permissions on the root directory to grant read-write-execute permissions to the users. Then create a subdirectory and mount it to the users’ home directory” is incorrect. You don’t want to modify permissions on the root directory as this will mean all users are able to access other users’ files (and this is a home directory, so the contents are typically kept private). INCORRECT: “Instruct the users to create a subdirectory on the file system and mount the subdirectory to their home directory” is incorrect. Instructing the users to create a subdirectory on the file system themselves would not work as they will not have access to write to the directory root.
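
Run as root on the EC2 instance, the per-user setup might look like the sketch below (the EFS mount point and username are hypothetical):

    import os
    import pwd
    import subprocess

    EFS_ROOT = '/mnt/efs'  # where the EFS file system is already mounted
    username = 'alice'

    subdir = os.path.join(EFS_ROOT, username)
    os.makedirs(subdir, exist_ok=True)          # create the per-user subdirectory

    user = pwd.getpwnam(username)
    os.chown(subdir, user.pw_uid, user.pw_gid)  # hand ownership to the user
    os.chmod(subdir, 0o700)                     # read-write-execute for the owner only

    # Bind-mount the subdirectory onto the user's home directory.
    subprocess.run(['mount', '--bind', subdir, user.pw_dir], check=True)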

62
Q

An AWS workload in a VPC is running a legacy database on an Amazon EC2 instance. Data is stored on a 2000GB Amazon EBS (gp2) volume. At peak load times, logs show excessive wait time. What should be implemented to improve database performance using persistent storage?

A. Change the EC2 instance type to one with burstable performance
B. Change the EC2 instance type to one with EC2 instance store volumes
C. Migrate the data on the Amazon EBS volume to an SSD-backed volume
D. Migrate the data on the EBS volume to provisioned IOPS SSD (io1)

A

D. Migrate the data on the EBS volume to provisioned IOPS SSD (io1)

Explanation:
The data is already on an SSD-backed volume (gp2), so to improve performance the best option is to migrate the data onto a Provisioned IOPS SSD (io1) volume type, which provides higher I/O performance and therefore reduces wait times. CORRECT: “Migrate the data on the EBS volume to provisioned IOPS SSD (io1)” is the correct answer. INCORRECT: “Change the EC2 instance type to one with burstable performance” is incorrect. Burstable performance instances provide a baseline of CPU performance with the ability to burst to a higher level when required. However, the issue in this scenario is disk wait time, not CPU performance, so we need to improve I/O, not CPU performance. INCORRECT: “Change the EC2 instance type to one with EC2 instance store volumes” is incorrect. Using an instance store volume may provide high performance but the data is not persistent, so it is not suitable for a database. INCORRECT: “Migrate the data on the Amazon EBS volume to an SSD-backed volume” is incorrect as the data is already on an SSD-backed volume (gp2).
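
The migration can be done in place with an EBS volume modification, sketched below with boto3 (the volume ID and IOPS figure are hypothetical):

    import boto3

    ec2 = boto3.client('ec2', region_name='us-east-1')

    # Convert the gp2 volume to io1 with provisioned IOPS sized for peak load;
    # the change applies without detaching the volume.
    ec2.modify_volume(
        VolumeId='vol-0123456789abcdef0',
        VolumeType='io1',
        Iops=10000,
    )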

63
Q

A data-processing application runs on an i3.large EC2 instance with a single 100 GB EBS gp2 volume. The application stores temporary data in a small database (less than 30 GB) located on the EBS root volume. The application is struggling to process the data fast enough, and a Solutions Architect has determined that the I/O speed of the temporary database is the bottleneck. What is the MOST cost-efficient way to improve the database response times?

A. Put the temporary database on a new 50-GB EBS io1 volume with a 3000 IOPS allocation
B. Move the temporary database onto instance storage
C. Put the temporary database on a new 50-GB EBS gp2 volume
D. Enable EBS optimization on the instance and keep the temporary files on the existing volume

A

B. Move the temporary database onto instance storage

Explanation:
EC2 Instance Stores are high-speed ephemeral storage that is physically attached to the EC2 instance. The i3.large instance type comes with a single 475GB NVMe SSD instance store so it would be a good way to lower cost and improve performance by using the attached instance store. As the files are temporary, it can be assumed that ephemeral storage (which means the data is lost when the instance is stopped) is sufficient. CORRECT: “Move the temporary database onto instance storage” is the correct answer. INCORRECT: “Put the temporary database on a new 50-GB EBS io1 volume with a 3000 IOPS allocation” is incorrect. Moving the DB to a new 50-GB EBS io1 volume with a 3000 IOPS allocation will improve performance but is more expensive so will not be the most cost-efficient solution. INCORRECT: “Put the temporary database on a new 50-GB EBS gp2 volume” is incorrect. Moving the DB to a new 50-GB EBS gp2 volume will not result in a performance improvement as you get IOPS allocated per GB so a smaller volume will have lower performance. INCORRECT: “Enable EBS optimization on the instance and keep the temporary files on the existing volume” is incorrect. Enabling EBS optimization will not lower cost. Also, EBS Optimization is a network traffic optimization, it does not change the I/O performance of the volume.
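
Preparing the instance store is an OS-level task; a sketch, assuming the device appears as /dev/nvme1n1 (NVMe device naming varies by instance):

    import os
    import subprocess

    DEVICE = '/dev/nvme1n1'   # assumed instance store device
    MOUNT_POINT = '/mnt/scratch'

    subprocess.run(['mkfs', '-t', 'ext4', DEVICE], check=True)  # one-time format
    os.makedirs(MOUNT_POINT, exist_ok=True)
    # Ephemeral: the mount (and the data) must be recreated after a stop/start.
    subprocess.run(['mount', DEVICE, MOUNT_POINT], check=True)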

64
Q

An application is hosted on the U.S. west coast. Users there have no problems, but users on the east coast are experiencing performance issues. The users have reported slow response times with the search bar autocomplete and with the display of account listings. How can you improve performance for users on the east coast?

A. Host the static content in an Amazon S3 bucket and distribute it using CloudFront
B. Setup cross-region replication and use Route 53 geolocation routing
C. Create a DynamoDB Read Replica in the U.S east region
D. Create an ElastiCache database in the U.S east region

A

D. Create an ElastiCache database in the U.S east region

Explanation:
ElastiCache can be deployed in the U.S. east region to provide high-speed access to the content. ElastiCache Redis is well suited to autocompletion use cases. CORRECT: “Create an ElastiCache database in the U.S east region” is the correct answer. INCORRECT: “Host the static content in an Amazon S3 bucket and distribute it using CloudFront” is incorrect. This is not static content that can be hosted in an Amazon S3 bucket and distributed using CloudFront. INCORRECT: “Setup cross-region replication and use Route 53 geolocation routing” is incorrect. Cross-region replication is an Amazon S3 concept and the dynamic data that is presented by this application is unlikely to be stored in an S3 bucket. INCORRECT: “Create a DynamoDB Read Replica in the U.S east region” is incorrect. There’s no such thing as a DynamoDB Read Replica (Read Replicas are an RDS concept).
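
A common Redis autocomplete pattern uses a sorted set queried by lexical range, sketched below with redis-py (the cluster endpoint is hypothetical):

    import redis  # pip install redis

    r = redis.Redis(host='my-cache.abc123.use1.cache.amazonaws.com', port=6379)

    # With all scores set to 0, members sort lexicographically.
    r.zadd('autocomplete', {'account': 0, 'accounting': 0,
                            'accounts payable': 0, 'admin': 0})

    def complete(prefix, limit=5):
        # ZRANGEBYLEX returns members in the range [prefix, prefix + 0xff).
        return r.zrangebylex('autocomplete', f'[{prefix}', f'[{prefix}\xff',
                             start=0, num=limit)

    print(complete('acc'))  # [b'account', b'accounting', b'accounts payable']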