sthithapragnakk -- SAA Exam Dumps Jan 24 1-100 Flashcards

1
Q

651# A company uses Amazon FSx for NetApp ONTAP in its primary AWS Region for CIFS and NFS file shares. Applications running on Amazon EC2 instances access the file shares. The company needs a storage disaster recovery (DR) solution in a secondary region. Data that is replicated in the secondary region needs to be accessed using the same protocols as the primary region. Which solution will meet these requirements with the LEAST operational overhead?

A. Create an AWS Lambda function to copy the data to an Amazon S3 bucket. Replicate the S3 bucket to the secondary region.
B. Create an FSx backup for ONTAP volumes using AWS Backup. Copy the volumes to the secondary region. Create a new FSx instance for ONTAP from the backup.
C. Create an FSx instance for ONTAP in the secondary region. Use NetApp SnapMirror to replicate data from the primary region to the secondary region.
D. Create an Amazon Elastic File System (Amazon EFS) volume. Migrate the current data to the volume. Replicate the volume to the secondary region.

A

C. Create an FSx instance for ONTAP in the secondary region. Use NetApp SnapMirror to replicate data from the primary region to the secondary region.

NetApp SnapMirror is a data replication feature built into ONTAP that enables efficient replication between a primary and a secondary file system. It meets the requirement of serving the replicated data over the same protocols (CIFS/SMB and NFS) and involves lower operational overhead than the other options.
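For illustration, here is a minimal boto3 sketch of creating the secondary-Region FSx for ONTAP file system that SnapMirror would replicate into; the Region, subnet ID, capacity, and throughput values are hypothetical placeholders, and the SnapMirror relationship itself is then configured from the ONTAP CLI/API rather than an AWS API.

import boto3

# Hypothetical secondary (DR) Region and subnet; adjust to your environment.
fsx = boto3.client("fsx", region_name="us-west-2")

response = fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,                  # GiB
    SubnetIds=["subnet-0123456789abcdef0"],
    OntapConfiguration={
        "DeploymentType": "SINGLE_AZ_1",
        "ThroughputCapacity": 128,         # MBps
    },
)
print(response["FileSystem"]["FileSystemId"])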

2
Q

Question #: 652
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company has a large data workload that runs for 6 hours each day. The company cannot lose any data while the process is running. A solutions architect is designing an Amazon EMR cluster configuration to support this critical data workload.

Which solution will meet these requirements MOST cost-effectively?

A. Configure a long-running cluster that runs the primary node and core nodes on On-Demand Instances and the task nodes on Spot Instances.
B. Configure a transient cluster that runs the primary node and core nodes on On-Demand Instances and the task nodes on Spot Instances.
C. Configure a transient cluster that runs the primary node on an On-Demand Instance and the core nodes and task nodes on Spot Instances.
D. Configure a long-running cluster that runs the primary node on an On-Demand Instance, the core nodes on Spot Instances, and the task nodes on Spot Instances.

A

B. Configure a transient cluster that runs the primary node and core nodes on On-Demand Instances and the task nodes on Spot Instances.

Options B and C look similar, so what is the difference? B runs the primary node and core nodes on On-Demand Instances, whereas C runs only the primary node On-Demand and puts both the core nodes and the task nodes on Spot Instances. If both core nodes and task nodes are on Spot, you can lose the very instances that store and process your data: even with the primary node On-Demand, you still need the core nodes available, because those are the worker nodes that hold the HDFS data, and Spot capacity can be reclaimed at any time, so you cannot guarantee zero data loss. With option B, both the primary and core nodes are On-Demand, so there is always capacity available to process and store the data; losing Spot task nodes is fine because the core nodes remain. A transient cluster is also more cost-effective than a long-running one here, because the workload runs only 6 hours each day and the cluster terminates when the job finishes. For those reasons, we pick option B.
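As a rough illustration of option B, here is a hedged boto3 sketch of launching a transient EMR cluster with On-Demand primary and core instance groups and Spot task instances; the cluster name, release label, instance types, and log bucket are hypothetical.

import boto3

emr = boto3.client("emr", region_name="us-east-1")   # hypothetical Region

emr.run_job_flow(
    Name="daily-batch",                       # hypothetical name
    ReleaseLabel="emr-6.15.0",
    LogUri="s3://example-emr-logs/",          # hypothetical bucket
    Instances={
        "InstanceGroups": [
            {"Name": "primary", "InstanceRole": "MASTER", "Market": "ON_DEMAND",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "core", "InstanceRole": "CORE", "Market": "ON_DEMAND",
             "InstanceType": "m5.xlarge", "InstanceCount": 3},
            {"Name": "task", "InstanceRole": "TASK", "Market": "SPOT",
             "InstanceType": "m5.xlarge", "InstanceCount": 4},
        ],
        # Transient cluster: terminate automatically when the steps finish.
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)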

3
Q

Question #: 653
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company maintains an Amazon RDS database that maps users to cost centers. The company has accounts in an organization in AWS Organizations. The company needs a solution that will tag all resources that are created in a specific AWS account in the organization. The solution must tag each resource with the cost center ID of the user who created the resource.

Which solution will meet these requirements?

A. Move the specific AWS account to a new organizational unit (OU) in Organizations from the management account. Create a service control policy (SCP) that requires all existing resources to have the correct cost center tag before the resources are created. Apply the SCP to the new OU.
B. Create an AWS Lambda function to tag the resources after the Lambda function looks up the appropriate cost center from the RDS database. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail events to invoke the Lambda function.
C. Create an AWS CloudFormation stack to deploy an AWS Lambda function. Configure the Lambda function to look up the appropriate cost center from the RDS database and to tag resources. Create an Amazon EventBridge scheduled rule to invoke the CloudFormation stack.
D. Create an AWS Lambda function to tag the resources with a default value. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail events to invoke the Lambda function when a resource is missing the cost center tag.

A

B. Create an AWS Lambda function to tag the resources after the Lambda function looks up the appropriate cost center from the RDS database. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail events to invoke the Lambda function.

Option A can be crossed out: it suggests using a service control policy (SCP) to enforce tagging before resource creation, but SCPs do not perform tagging operations and cannot look up the cost center ID, so that will not work. Option C uses CloudFormation with a scheduled EventBridge rule, which introduces unnecessary complexity and does not ensure immediate tagging upon resource creation, because the tagging only runs on a schedule. Option D proposes tagging resources with a default value and then reacting to events to correct the tag; that introduces a delay and does not guarantee resources are tagged with the correct cost center. So we go with option B: a Lambda function tags the resources, and it is configured to look up the appropriate cost center from the RDS database, which ensures each resource is tagged with the correct cost center ID rather than a default value. EventBridge is used together with AWS CloudTrail events, not a schedule, so the Lambda function is invoked whenever a resource is created. This ensures the tagging process starts automatically as soon as the relevant event occurs, which is why option B is the correct one.
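A minimal sketch of the Lambda side, assuming a hypothetical helper lookup_cost_center() that queries the RDS table and a hypothetical event shape; the EventBridge rule would match CloudTrail management events and pass them to this handler, which then applies the tag with the Resource Groups Tagging API.

import boto3

tagging = boto3.client("resourcegroupstaggingapi")

def lookup_cost_center(user_identity_arn):
    # Hypothetical: query the RDS users-to-cost-centers table here.
    return "CC-1234"

def handler(event, context):
    detail = event["detail"]                    # CloudTrail event delivered via EventBridge
    user_arn = detail["userIdentity"]["arn"]
    # Hypothetical: extract the created resource ARN(s) from the event detail;
    # the exact field depends on which service emitted the event.
    resource_arns = [detail["responseElements"]["resourceArn"]]
    tagging.tag_resources(
        ResourceARNList=resource_arns,
        Tags={"CostCenter": lookup_cost_center(user_arn)},
    )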

4
Q

Question #: 654
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company recently migrated its web application to the AWS Cloud. The company uses an Amazon EC2 instance to run multiple processes to host the application. The processes include an Apache web server that serves static content. The Apache web server makes requests to a PHP application that uses a local Redis server for user sessions.

The company wants to redesign the architecture to be highly available and to use AWS managed solutions.

Which solution will meet these requirements?

A. Use AWS Elastic Beanstalk to host the static content and the PHP application. Configure Elastic Beanstalk to deploy its EC2 instance into a public subnet. Assign a public IP address.
B. Use AWS Lambda to host the static content and the PHP application. Use an Amazon API Gateway REST API to proxy requests to the Lambda function. Set the API Gateway CORS configuration to respond to the domain name. Configure Amazon ElastiCache for Redis to handle session information.
C. Keep the backend code on the EC2 instance. Create an Amazon ElastiCache for Redis cluster that has Multi-AZ enabled. Configure the ElastiCache for Redis cluster in cluster mode. Copy the frontend resources to Amazon S3. Configure the backend code to reference the EC2 instance.
D. Configure an Amazon CloudFront distribution with an Amazon S3 endpoint to an S3 bucket that is configured to host the static content. Configure an Application Load Balancer that targets an Amazon Elastic Container Service (Amazon ECS) service that runs AWS Fargate tasks for the PHP application. Configure the PHP application to use an Amazon ElastiCache for Redis cluster that runs in multiple Availability Zones.

A

D. Configure an Amazon CloudFront distribution with an Amazon S3 endpoint to an S3 bucket that is configured to host the static content. Configure an Application Load Balancer that targets an Amazon Elastic Container Service (Amazon ECS) service that runs AWS Fargate tasks for the PHP application. Configure the PHP application to use an Amazon ElastiCache for Redis cluster that runs in multiple Availability Zones.

Whenever you see static content, think Amazon S3 and CloudFront; those two are the go-to combination for serving static content. With that hint you can already see option D is the right answer, but let's look at why the others are not. Option A uses Elastic Beanstalk, a managed service that makes it easy to deploy and run applications, but it is not the best fit for hosting static content directly, and assigning a public IP address to a single EC2 instance in a public subnet is not a highly available architecture and does not leverage a managed solution for static content delivery. Option B uses Lambda, which can host serverless functions but is not a good fit for hosting an entire PHP web application; API Gateway plus Lambda would add complexity here, and although ElastiCache for Redis for session data is good practice, this option lacks a clean separation between static content, dynamic content, and session management. Option C keeps the backend code on the EC2 instance, but the company wants managed solutions; ElastiCache for Redis with Multi-AZ is a good choice for sessions and copying the frontend to S3 helps, but the overall architecture is still not highly available. Option D is the well-architected choice: CloudFront provides global, low-latency delivery of the static content hosted in S3; the PHP application runs on Amazon ECS with Fargate behind an Application Load Balancer, which is scalable and highly available; and an ElastiCache for Redis cluster spanning multiple Availability Zones handles session data. It uses AWS managed services end to end with a clear separation of concerns, which is exactly what the question asks for.
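For the session-store piece of option D, here is a hedged boto3 sketch of creating a Multi-AZ ElastiCache for Redis replication group; the IDs, node type, and subnet group name are hypothetical.

import boto3

elasticache = boto3.client("elasticache")

elasticache.create_replication_group(
    ReplicationGroupId="php-sessions",               # hypothetical
    ReplicationGroupDescription="User session store",
    Engine="redis",
    CacheNodeType="cache.t3.small",                  # hypothetical size
    NumCacheClusters=2,                              # primary plus one replica
    AutomaticFailoverEnabled=True,
    MultiAZEnabled=True,
    CacheSubnetGroupName="app-private-subnets",      # hypothetical subnet group
)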

5
Q

Question #: 655
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company runs a web application on Amazon EC2 instances in an Auto Scaling group that has a target group. The company designed the application to work with session affinity (sticky sessions) for a better user experience.

The application must be available publicly over the internet as an endpoint. A WAF must be applied to the endpoint for additional security. Session affinity (sticky sessions) must be configured on the endpoint.

Which combination of steps will meet these requirements? (Choose two.)

A. Create a public Network Load Balancer. Specify the application target group.
B. Create a Gateway Load Balancer. Specify the application target group.
C. Create a public Application Load Balancer. Specify the application target group.
D. Create a second target group. Add Elastic IP addresses to the EC2 instances.
E. Create a web ACL in AWS WAF. Associate the web ACL with the endpoint

A

C. Create a public Application Load Balancer. Specify the application target group.
E. Create a web ACL in AWS WAF. Associate the web ACL with the endpoint

Option A suggests a public Network Load Balancer. Remember the tip: Network Load Balancers are for TCP/UDP traffic, Application Load Balancers are for HTTP/HTTPS. An NLB operates at layer 4, does not support the cookie-based session affinity (sticky sessions) the application needs, and cannot have AWS WAF associated with it, so an ALB is far more suitable here. Option B, a Gateway Load Balancer, is designed for handling traffic at the network and transport layers, typically in front of inspection appliances; it is not used for HTTP/HTTPS traffic and does not support session affinity. Option D, creating a second target group and adding Elastic IP addresses to the EC2 instances, has nothing to do with session affinity or applying a web application firewall; stickiness is managed by the load balancer, and WAF is a separate service for application security. That leaves options C and E. An ALB is designed for HTTP/HTTPS traffic and supports sticky sessions, so creating a public ALB exposes the web application to the internet with the required routing and load-balancing capabilities. For the WAF requirement, you create a web ACL in AWS WAF; whenever you hear about protecting against common web exploits such as SQL injection or cross-site scripting, think AWS WAF. By creating a web ACL and associating it with the ALB endpoint, you ensure the application is protected by the specified security rules. In summary, C and E together expose the application with session affinity and apply WAF for additional security.
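To make the two steps concrete, a minimal boto3 sketch that turns on sticky sessions on the ALB target group and associates an existing WAF web ACL with the ALB; the ARNs are hypothetical placeholders.

import boto3

elbv2 = boto3.client("elbv2")
wafv2 = boto3.client("wafv2")

target_group_arn = "arn:aws:elasticloadbalancing:...:targetgroup/app/123"   # hypothetical
alb_arn = "arn:aws:elasticloadbalancing:...:loadbalancer/app/my-alb/456"    # hypothetical
web_acl_arn = "arn:aws:wafv2:...:regional/webacl/my-acl/789"                # hypothetical

# Enable cookie-based session affinity (sticky sessions) on the target group.
elbv2.modify_target_group_attributes(
    TargetGroupArn=target_group_arn,
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
    ],
)

# Associate the web ACL (created in AWS WAF) with the public ALB endpoint.
wafv2.associate_web_acl(WebACLArn=web_acl_arn, ResourceArn=alb_arn)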

6
Q

Question #: 656
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company runs a website that stores images of historical events. Website users need the ability to search and view images based on the year that the event in the image occurred. On average, users request each image only once or twice a year. The company wants a highly available solution to store and deliver the images to users.

Which solution will meet these requirements MOST cost-effectively?

A. Store images in Amazon Elastic Block Store (Amazon EBS). Use a web server that runs on Amazon EC2.
B. Store images in Amazon Elastic File System (Amazon EFS). Use a web server that runs on Amazon EC2.
C. Store images in Amazon S3 Standard. Use S3 Standard to directly deliver images by using a static website.
D. Store images in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Use S3 Standard-IA to directly deliver images by using a static website.

A

D. Store images in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Use S3 Standard-IA to directly deliver images by using a static website.

Whenever the question asks for storage for images, audio, or video files, go with S3; there is no more cost-effective way to store that kind of content. That immediately eliminates A and B and leaves options C and D, which both use S3, so which one do we choose? Option C uses S3 Standard and serves the images directly from a static website. That works, but for the MOST cost-effective solution it is too much: users request each image only once or twice a year, and S3 Standard is priced for frequent access. Option D uses S3 Standard-Infrequent Access which, as the name suggests, is the class to use when files are accessed infrequently, which is exactly our case. So while both C and D can handle the scenario, D is the most cost-effective and is the answer. This is really a combination of two or three questions in one; if you do not know what these storage classes are, it is very hard to answer.
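As a small illustration, uploading an image with the Standard-IA storage class via boto3; the bucket, key, and file names are hypothetical.

import boto3

s3 = boto3.client("s3")

with open("eclipse-1999.jpg", "rb") as image:       # hypothetical local file
    s3.put_object(
        Bucket="historical-event-images",           # hypothetical bucket
        Key="1999/eclipse-1999.jpg",
        Body=image,
        StorageClass="STANDARD_IA",                 # infrequent-access pricing
        ContentType="image/jpeg",
    )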

7
Q

Question #: 657
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company has multiple AWS accounts in an organization in AWS Organizations that different business units use. The company has multiple offices around the world. The company needs to update security group rules to allow new office CIDR ranges or to remove old CIDR ranges across the organization. The company wants to centralize the management of security group rules to minimize the administrative overhead that updating CIDR ranges requires.

Which solution will meet these requirements MOST cost-effectively?

A. Create VPC security groups in the organization’s management account. Update the security groups when a CIDR range update is necessary.
B. Create a VPC customer managed prefix list that contains the list of CIDRs. Use AWS Resource Access Manager (AWS RAM) to share the prefix list across the organization. Use the prefix list in the security groups across the organization.
C. Create an AWS managed prefix list. Use an AWS Security Hub policy to enforce the security group update across the organization. Use an AWS Lambda function to update the prefix list automatically when the CIDR ranges change.
D. Create security groups in a central administrative AWS account. Create an AWS Firewall Manager common security group policy for the whole organization. Select the previously created security groups as primary groups in the policy.

A

B. Create a VPC customer managed prefix list that contains the list of CIDRs. Use AWS Resource Access Manager (AWS RAM) to share the prefix list across the organization. Use the prefix list in the security groups across the organization.

Option A is not the correct answer: creating VPC security groups in the organization's management account and updating them manually leads to administrative overhead and is neither scalable nor centralized, because how many security groups would you end up managing across all those accounts? Option C talks about an AWS managed prefix list with a Security Hub policy and a Lambda function; that could provide automation, but it introduces unnecessary complexity and is not the most cost-effective. Option D, similar to A, creates security groups in a central account and adds AWS Firewall Manager; Firewall Manager is generally used for managing AWS WAF, AWS Shield Advanced, and organization-wide security group policies, which is overkill for this use case and introduces additional cost. When the question says minimize administrative overhead, it means use the built-in features of the services mentioned rather than bolting on something else. That makes option B the most suitable: a VPC customer managed prefix list lets you define the list of CIDR ranges in one place, AWS Resource Access Manager (AWS RAM) shares that prefix list across the accounts in the organization, and the security groups in every account reference the shared prefix list in their rules. When an office CIDR range changes, you update the prefix list once and every security group that references it picks up the change. This minimizes administrative overhead, provides centralized control, and is the most cost-effective way to manage security group rules across the organization.
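A hedged boto3 sketch of option B's three steps: create the customer managed prefix list, share it through AWS RAM, and reference it in a security group rule. The CIDRs, organization ARN, and security group ID are hypothetical.

import boto3

ec2 = boto3.client("ec2")
ram = boto3.client("ram")

# 1. Central prefix list with the office CIDR ranges (hypothetical values).
pl = ec2.create_managed_prefix_list(
    PrefixListName="office-cidrs",
    AddressFamily="IPv4",
    MaxEntries=20,
    Entries=[
        {"Cidr": "203.0.113.0/24", "Description": "London office"},
        {"Cidr": "198.51.100.0/24", "Description": "Tokyo office"},
    ],
)
pl_arn = pl["PrefixList"]["PrefixListArn"]
pl_id = pl["PrefixList"]["PrefixListId"]

# 2. Share the prefix list with the organization via AWS RAM.
ram.create_resource_share(
    name="shared-office-cidrs",
    resourceArns=[pl_arn],
    principals=["arn:aws:organizations::111111111111:organization/o-example"],  # hypothetical
)

# 3. Reference the prefix list in security group rules in each account.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",                 # hypothetical
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "PrefixListIds": [{"PrefixListId": pl_id, "Description": "office ranges"}],
    }],
)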

8
Q

674 Question #: 658
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company uses an on-premises network-attached storage (NAS) system to provide file shares to its high performance computing (HPC) workloads. The company wants to migrate its latency-sensitive HPC workloads and its storage to the AWS Cloud. The company must be able to provide NFS and SMB multi-protocol access from the file system.

Which solution will meet these requirements with the LEAST latency? (Choose two.)

A. Deploy compute optimized EC2 instances into a cluster placement group.
B. Deploy compute optimized EC2 instances into a partition placement group.
C. Attach the EC2 instances to an Amazon FSx for Lustre file system.
D. Attach the EC2 instances to an Amazon FSx for OpenZFS file system.
E. Attach the EC2 instances to an Amazon FSx for NetApp ONTAP file system.

A

A. Deploy compute optimized EC2 instances into a cluster placement group.
E. Attach the EC2 instances to an Amazon FSx for NetApp ONTAP file system.

This question really has two parts: A and B are one pair (placement groups) and C, D, and E are the other (file systems). Between A and B, go with A. Cluster placement groups pack instances close together within a single Availability Zone to provide low-latency, high-throughput networking, which is exactly what tightly coupled HPC workloads need; cluster placement groups and HPC go hand in hand. Partition placement groups spread instances across partitions for fault isolation and are not optimized for the lowest latency. Among C, D, and E, pick based on the NFS and SMB multi-protocol requirement: of these three, only Amazon FSx for NetApp ONTAP supports both NFS and SMB. FSx for OpenZFS supports NFS only, and FSx for Lustre supports neither of those protocols (it uses its own Lustre client). So a cluster placement group plus FSx for NetApp ONTAP satisfies the entire question.
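For the compute half of the answer, a minimal boto3 sketch of creating a cluster placement group and launching instances into it; the AMI, instance type, count, and subnet are hypothetical.

import boto3

ec2 = boto3.client("ec2")

# Cluster strategy packs instances close together in one AZ for low latency.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",         # hypothetical AMI
    InstanceType="c6i.8xlarge",              # hypothetical compute optimized type
    MinCount=4,
    MaxCount=4,
    SubnetId="subnet-0123456789abcdef0",     # hypothetical subnet
    Placement={"GroupName": "hpc-cluster"},
)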

9
Q

675 Question #: 483
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C02 Questions]
A company is relocating its data center and wants to securely transfer 50 TB of data to AWS within 2 weeks. The existing data center has a Site-to-Site VPN connection to AWS that is 90% utilized.
Which AWS service should a solutions architect use to meet these requirements?

A. AWS DataSync with a VPC endpoint
B. AWS Direct Connect
C. AWS Snowball Edge Storage Optimized
D. AWS Storage Gateway

A

C. AWS Snowball Edge Storage Optimized

Since the VPN is already 90% utilized, you cannot push 50 TB through the remaining 10% of bandwidth within two weeks. Whenever the question combines a large one-time transfer, a tight deadline, and constrained network bandwidth, it is pointing you toward the Snowball family. AWS DataSync (option A) still sends the data over the network, and there is not enough spare bandwidth, so cross that out. Direct Connect (option B) is a dedicated connection between on premises and AWS, but it is meant for ongoing connectivity, not a one-time secure transfer, and provisioning a new Direct Connect link can take weeks, which is overkill here. That leaves C and D, and D is not right either: Storage Gateway enables hybrid cloud storage so on-premises applications can access cloud storage; it transfers data over the network and is not intended for a one-time bulk migration. So we are left with Snowball Edge Storage Optimized, a physical device designed for transferring large amounts of data to and from AWS. It is shipped to your data center, you load the data onto it, and you ship it back to AWS, where the data is imported into your S3 bucket. Whenever they say the existing bandwidth is mostly used up, they are hinting at a physical transfer device.
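To put rough numbers on it: assuming, say, a 1 Gbps Site-to-Site VPN with only 10% (about 100 Mbps) to spare, 50 TB is roughly 400,000 gigabits, which at 100 Mbps would take about 4,000,000 seconds, or around 46 days of continuous transfer, well past the 2-week deadline. The actual VPN bandwidth is not stated in the question, so these figures are only illustrative.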

10
Q

676 Question #: 660
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company hosts an application on Amazon EC2 On-Demand Instances in an Auto Scaling group. Application peak hours occur at the same time each day. Application users report slow application performance at the start of peak hours. The application performs normally 2-3 hours after peak hours begin. The company wants to ensure that the application works properly at the start of peak hours.

Which solution will meet these requirements?

A. Configure an Application Load Balancer to distribute traffic properly to the instances.
B. Configure a dynamic scaling policy for the Auto Scaling group to launch new instances based on memory utilization.
C. Configure a dynamic scaling policy for the Auto Scaling group to launch new instances based on CPU utilization.
D. Configure a scheduled scaling policy for the Auto Scaling group to launch new instances before peak hours.

A

D. Configure a scheduled scaling policy for the Auto Scaling group to launch new instances before peak hours.

Notice how much the question tells you: the company knows exactly when the peak happens each day and how long the slowdown lasts. Whenever the question says the usage pattern is known, it is hinting at scheduled scaling; if the company did not know when the peaks occur, dynamic scaling would be the hint instead, because you cannot schedule something you cannot predict. With that you can go straight to D, but let's look at why the other options are wrong. Option A configures an Application Load Balancer, which helps distribute traffic but only to the instances that already exist; it does not add capacity, so it does not fix the slow performance at the start of peak hours. Option B scales dynamically on memory utilization; besides being reactive, memory may not reflect the actual demand on this application during peak hours. Option C scales dynamically on CPU utilization, which has the same fundamental problem: dynamic scaling only reacts after the load has arrived, so capacity catches up only after the peak has already started. Option D uses a scheduled scaling policy so that new instances are launched before peak hours, providing the capacity needed at the beginning of the peak period and improving performance at the start of the peak. Scheduled scaling only works when you know when the peak is coming; if you do not, dynamic scaling is the appropriate choice.
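A minimal boto3 sketch of option D, creating a recurring scheduled action that raises capacity shortly before the daily peak; the group name, time, and sizes are hypothetical.

import boto3

autoscaling = boto3.client("autoscaling")

# Scale out at 07:30 UTC every day, before the known peak (hypothetical values).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-app-asg",
    ScheduledActionName="pre-peak-scale-out",
    Recurrence="30 7 * * *",        # cron format, evaluated in UTC by default
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,
)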

11
Q

677 Question #: 661
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company runs applications on AWS that connect to the company’s Amazon RDS database. The applications scale on weekends and at peak times of the year. The company wants to scale the database more effectively for its applications that connect to the database.

Which solution will meet these requirements with the LEAST operational overhead?

A. Use Amazon DynamoDB with connection pooling with a target group configuration for the database. Change the applications to use the DynamoDB endpoint.
B. Use Amazon RDS Proxy with a target group for the database. Change the applications to use the RDS Proxy endpoint.
C. Use a custom proxy that runs on Amazon EC2 as an intermediary to the database. Change the applications to use the custom proxy endpoint.
D. Use an AWS Lambda function to provide connection pooling with a target group configuration for the database. Change the applications to use the Lambda function.

A

B. Use Amazon RDS Proxy with a target group for the database. Change the applications to use the RDS Proxy endpoint.

As I always say, least operational overhead means use a feature of the service already mentioned in the question rather than building your own solution. Options A, C, and D all build something new, while option B uses a native RDS feature, so B is practically a giveaway. Still, let's go through why the others are wrong. Option A is not suitable because DynamoDB is a NoSQL database service and is not a drop-in replacement for the Amazon RDS database the applications already use. Option C, running a custom proxy on EC2, introduces additional operational complexity and maintenance; it can work, but it is not the least-overhead solution. Option D, using a Lambda function for connection pooling, is not a good fit because of Lambda's stateless nature and its limitations around persistent database connections. That leaves option B: Amazon RDS Proxy is a managed database proxy that provides connection pooling, failover, and security features for database applications. It lets applications scale more effectively by managing database connections on their behalf, integrates natively with RDS, and reduces operational overhead.
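A hedged boto3 sketch of creating an RDS Proxy and registering the database as its target; the names, ARNs, and subnet IDs are hypothetical, and the applications would then be pointed at the proxy endpoint instead of the database endpoint.

import boto3

rds = boto3.client("rds")

rds.create_db_proxy(
    DBProxyName="app-db-proxy",                        # hypothetical
    EngineFamily="MYSQL",                              # match the RDS engine family
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:...:secret:db-creds",   # hypothetical
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::111111111111:role/rds-proxy-role",         # hypothetical
    VpcSubnetIds=["subnet-aaa", "subnet-bbb"],                        # hypothetical
)

# Register the existing RDS instance in the proxy's default target group.
rds.register_db_proxy_targets(
    DBProxyName="app-db-proxy",
    DBInstanceIdentifiers=["app-database"],            # hypothetical instance id
)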

12
Q

678 Question #: 662
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company uses AWS Cost Explorer to monitor its AWS costs. The company notices that Amazon Elastic Block Store (Amazon EBS) storage and snapshot costs increase every month. However, the company does not purchase additional EBS storage every month. The company wants to optimize monthly costs for its current storage usage.

Which solution will meet these requirements with the LEAST operational overhead?

A. Use logs in Amazon CloudWatch Logs to monitor the storage utilization of Amazon EBS. Use Amazon EBS Elastic Volumes to reduce the size of the EBS volumes.
B. Use a custom script to monitor space usage. Use Amazon EBS Elastic Volumes to reduce the size of the EBS volumes.
C. Delete all expired and unused snapshots to reduce snapshot costs.
D. Delete all nonessential snapshots. Use Amazon Data Lifecycle Manager to create and manage the snapshots according to the company’s snapshot policy requirements.

A

D. Delete all nonessential snapshots. Use Amazon Data Lifecycle Manager to create and manage the snapshots according to the company’s snapshot policy requirements.

Another least-operational-overhead question, so the same logic as before applies. Option A is not right: CloudWatch Logs can provide insight into utilization, but using Elastic Volumes to resize the EBS volumes involves manual intervention and ongoing operational effort. Option B is similar to A but worse, because a custom monitoring script introduces its own operational overhead, and manually resizing volumes is not the most efficient solution. Option C, deleting expired and unused snapshots, is good practice and lowers cost once, but it is a one-off manual cleanup and does nothing to keep snapshot costs under control month after month.

That leaves option D. It addresses the nonessential snapshots and then uses Amazon Data Lifecycle Manager to create and manage snapshots automatically according to the company's snapshot policy, which is the streamlined, automated approach with the least operational overhead. Think of it like S3 Lifecycle rules, but for EBS: S3 Lifecycle manages S3 objects, while Data Lifecycle Manager manages EBS snapshots.
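A hedged boto3 sketch of a Data Lifecycle Manager policy that snapshots tagged volumes daily and keeps only the last 7 snapshots; the role ARN, target tag, schedule, and retention values are hypothetical stand-ins for the company's real snapshot policy.

import boto3

dlm = boto3.client("dlm")

dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::111111111111:role/AWSDataLifecycleManagerDefaultRole",  # hypothetical
    Description="Daily snapshots with 7-day retention",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "true"}],   # hypothetical tag
        "Schedules": [{
            "Name": "daily",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            "RetainRule": {"Count": 7},
        }],
    },
)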

13
Q

679 Question #: 663
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company is developing a new application on AWS. The application consists of an Amazon Elastic Container Service (Amazon ECS) cluster, an Amazon S3 bucket that contains assets for the application, and an Amazon RDS for MySQL database that contains the dataset for the application. The dataset contains sensitive information. The company wants to ensure that only the ECS cluster can access the data in the RDS for MySQL database and the data in the S3 bucket.

Which solution will meet these requirements?

A. Create a new AWS Key Management Service (AWS KMS) customer managed key to encrypt both the S3 bucket and the RDS for MySQL database. Ensure that the KMS key policy includes encrypt and decrypt permissions for the ECS task execution role.
B. Create an AWS Key Management Service (AWS KMS) AWS managed key to encrypt both the S3 bucket and the RDS for MySQL database. Ensure that the S3 bucket policy specifies the ECS task execution role as a user.
C. Create an S3 bucket policy that restricts bucket access to the ECS task execution role. Create a VPC endpoint for Amazon RDS for MySQL. Update the RDS for MySQL security group to allow access from only the subnets that the ECS cluster will generate tasks in.
D. Create a VPC endpoint for Amazon RDS for MySQL. Update the RDS for MySQL security group to allow access from only the subnets that the ECS cluster will generate tasks in. Create a VPC endpoint for Amazon S3. Update the S3 bucket policy to allow access from only the S3 VPC endpoint.

A

D. Create a VPC endpoint for Amazon RDS for MySQL. Update the RDS for MySQL security group to allow access from only the subnets that the ECS cluster will generate tasks in. Create a VPC endpoint for Amazon S3. Update the S3 bucket policy to allow access from only the S3 VPC endpoint.

Consider what the requirements really are: the company wants only the ECS cluster to be able to access the RDS for MySQL database and the data in the S3 bucket. Whenever the question says only one thing may access another, think about network-level controls. For RDS, that means a security group that allows access only from the subnets the ECS tasks run in. For S3, access normally goes over the public endpoint, but because the data is sensitive the traffic should stay on the private network, and the way to reach S3 privately from a VPC is a VPC endpoint; the bucket policy can then allow access only through that endpoint. That already points at option D, but let's go through the options. Option A creates a customer managed KMS key to encrypt the bucket and the database and gives the ECS task execution role encrypt and decrypt permissions; KMS is about encryption, not about restricting which services or network paths can reach the data, so it is not the most direct way to meet the requirement. Option B is the same idea with an AWS managed key, so it is out for the same reason. Option C creates an S3 bucket policy that restricts access to the ECS task execution role, plus a VPC endpoint and security group rules for RDS; the RDS part is fine, but for S3 it only restricts who can call the bucket, not how the traffic gets there, and it does not create an S3 VPC endpoint. Option D does it properly: a VPC endpoint for RDS with the security group allowing only the subnets the ECS tasks run in, plus a VPC endpoint for S3 with the bucket policy allowing access only from that S3 VPC endpoint. You might think C is almost the same, but it is not: restricting the bucket to the task execution role is not the same as allowing access only through the VPC endpoint, which is what keeps the sensitive data off the public internet.
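To make the S3 half of option D concrete, a minimal sketch of a bucket policy that denies access unless the request arrives through the S3 VPC endpoint, applied with boto3; the bucket name and endpoint ID are hypothetical.

import json
import boto3

s3 = boto3.client("s3")

bucket = "app-assets-bucket"                  # hypothetical
vpce_id = "vpce-0123456789abcdef0"            # hypothetical S3 gateway endpoint ID

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowOnlyViaVpcEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        "Condition": {"StringNotEquals": {"aws:SourceVpce": vpce_id}},
    }],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))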

14
Q

680 Question #: 664
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company has a web application that runs on premises. The application experiences latency issues during peak hours. The latency issues occur twice each month. At the start of a latency issue, the application’s CPU utilization immediately increases to 10 times its normal amount.

The company wants to migrate the application to AWS to improve latency. The company also wants to scale the application automatically when application demand increases. The company will use AWS Elastic Beanstalk for application deployment.

Which solution will meet these requirements?

A. Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited mode. Configure the environment to scale based on requests.
B. Configure an Elastic Beanstalk environment to use compute optimized instances. Configure the environment to scale based on requests.
C. Configure an Elastic Beanstalk environment to use compute optimized instances. Configure the environment to scale on a schedule.
D. Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited mode. Configure the environment to scale on predictive metrics.

A

D. Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited mode. Configure the environment to scale on predictive metrics.

The question already tells you Elastic Beanstalk will be used; what you have to pick is the instance type and scaling configuration, and if you are not aware of these options you will not be able to answer it. A and D are similar (burstable performance instances in unlimited mode) and B and C are similar (compute optimized instances). Option A is not right because, while burstable instances in unlimited mode can absorb CPU bursts, scaling based on the number of requests does not address the specific pattern here, where CPU utilization jumps to 10 times its normal level at the start of a latency event. Options B and C use compute optimized instances, which give better baseline performance, but B scales on requests (the same problem as A), and C scales on a schedule, which is not dynamic at all and may not be responsive enough for spikes that do not arrive exactly on schedule. Option D uses burstable performance instances in unlimited mode, so the instances can burst above their baseline CPU without being throttled, and it scales on predictive metrics, which lets the environment proactively add capacity based on anticipated demand rather than waiting for a schedule or for requests to pile up. That aligns with the requirement to scale automatically when CPU utilization increases 10 times during latency issues, so option D is the most suitable solution.
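As a side note on what "unlimited mode" means for burstable instances, here is the underlying EC2 API call; Elastic Beanstalk would set this through its environment option settings rather than directly, and the instance ID is hypothetical.

import boto3

ec2 = boto3.client("ec2")

# Allow a T-family instance to burst above baseline CPU without being throttled.
ec2.modify_instance_credit_specification(
    InstanceCreditSpecifications=[
        {"InstanceId": "i-0123456789abcdef0", "CpuCredits": "unlimited"},   # hypothetical ID
    ],
)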

15
Q

681 Question #: 665
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company has customers located across the world. The company wants to use automation to secure its systems and network infrastructure. The company’s security team must be able to track and audit all incremental changes to the infrastructure.

Which solution will meet these requirements?

A. Use AWS Organizations to set up the infrastructure. Use AWS Config to track changes.
B. Use AWS CloudFormation to set up the infrastructure. Use AWS Config to track changes.
C. Use AWS Organizations to set up the infrastructure. Use AWS Service Catalog to track changes.
D. Use AWS CloudFormation to set up the infrastructure. Use AWS Service Catalog to track changes.

A

B. Use AWS CloudFormation to set up the infrastructure. Use AWS Config to track changes.

Whenever you want to track and audit incremental changes to your infrastructure, that is, configuration changes to resources over time, the service to reach for is AWS Config; that is exactly what it is built for. Now the options: two of them use AWS Organizations to set up the infrastructure, but Organizations is about managing multiple AWS accounts, not about automating infrastructure, so options A and C lack the automation piece. CloudFormation is what gives you automated, repeatable infrastructure provisioning. Both B and D use CloudFormation, so the decision comes down to AWS Config versus AWS Service Catalog. Service Catalog is designed for creating and managing catalogs of approved IT products; it can help with governance, but it does not record and audit resource configuration changes the way AWS Config does. Cross D out, and option B is the correct answer.

16
Q

683 Question #: 667
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company is moving its data and applications to AWS during a multiyear migration project. The company wants to securely access data on Amazon S3 from the company’s AWS Region and from the company’s on-premises location. The data must not traverse the internet. The company has established an AWS Direct Connect connection between its Region and its on-premises location.

Which solution will meet these requirements?

A. Create gateway endpoints for Amazon S3. Use the gateway endpoints to securely access the data from the Region and the on-premises location.
B. Create a gateway in AWS Transit Gateway to access Amazon S3 securely from the Region and the on-premises location.
C. Create interface endpoints for Amazon S3. Use the interface endpoints to securely access the data from the Region and the on-premises location.
D. Use an AWS Key Management Service (AWS KMS) key to access the data securely from the Region and the on-premises location.

A

C. Create interface endpoints for Amazon S3. Use the interface endpoints to securely access the data from the Region and the on-premises location.

Amazon S3 supports both gateway endpoints and interface endpoints. With a gateway endpoint, you can access Amazon S3 from your VPC, without requiring an internet gateway or NAT device for your VPC, and with no additional cost. However, gateway endpoints do not allow access from on-premises networks, from peered VPCs in other AWS Regions, or through a transit gateway. For those scenarios, you must use an interface endpoint, which is available for an additional cost. For more information, see Types of VPC endpoints for Amazon S3 in the Amazon S3 User Guide. https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html
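A minimal boto3 sketch of creating an S3 interface endpoint (the kind reachable from on premises over Direct Connect); the Region, VPC, subnets, and security group are hypothetical.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")    # hypothetical Region

ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",                     # hypothetical
    ServiceName="com.amazonaws.us-east-1.s3",
    SubnetIds=["subnet-aaa", "subnet-bbb"],            # hypothetical private subnets
    SecurityGroupIds=["sg-0123456789abcdef0"],         # hypothetical
)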

17
Q

684 Question #: 668
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company created a new organization in AWS Organizations. The organization has multiple accounts for the company’s development teams. The development team members use AWS IAM Identity Center (AWS Single Sign-On) to access the accounts. For each of the company’s applications, the development teams must use a predefined application name to tag resources that are created.

A solutions architect needs to design a solution that gives the development team the ability to create resources only if the application name tag has an approved value.

Which solution will meet these requirements?

A. Create an IAM group that has a conditional Allow policy that requires the application name tag to be specified for resources to be created.
B. Create a cross-account role that has a Deny policy for any resource that has the application name tag.
C. Create a resource group in AWS Resource Groups to validate that the tags are applied to all resources in all accounts.
D. Create a tag policy in Organizations that has a list of allowed application names.

A

D. Create a tag policy in Organizations that has a list of allowed application names.

They are asking about enforcing tag values, and since we are working with an organization, the enforcement should happen at the organization level. Option A is not the right one: IAM policies can include conditions, but they are focused on actions and resources and are not well suited for centrally enforcing a list of approved tag values across many accounts. Option B talks about a cross-account role with a Deny policy for any resource that has the application name tag; deny statements are generally not recommended unless absolutely necessary, using them for tag enforcement leads to complex policies, and denying resources that do have the tag is the opposite of what we want. Option C, AWS Resource Groups, helps you organize and search resources based on tags, but it does not enforce or control which tags can be applied. That leaves tag policies, which you create in AWS Organizations with the list of allowed application names; this is the robust way to ensure only approved values are used for the tag. Therefore D, creating a tag policy in Organizations, is the most appropriate solution.
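A hedged boto3 sketch of creating and attaching a tag policy in Organizations; the tag key, allowed values, enforcement scope, and OU ID are hypothetical examples standing in for the company's approved application names.

import json
import boto3

org = boto3.client("organizations")

tag_policy = {
    "tags": {
        "appname": {
            "tag_key": {"@@assign": "appname"},
            "tag_value": {"@@assign": ["inventory-app", "billing-app"]},   # approved names (hypothetical)
            "enforced_for": {"@@assign": ["ec2:instance"]},                # hypothetical enforcement scope
        }
    }
}

policy = org.create_policy(
    Content=json.dumps(tag_policy),
    Description="Approved application name tags",
    Name="app-name-tag-policy",
    Type="TAG_POLICY",
)

org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-11111111",            # hypothetical OU containing the dev accounts
)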

18
Q

685 Question #: 669
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company runs its databases on Amazon RDS for PostgreSQL. The company wants a secure solution to manage the master user password by rotating the password every 30 days.

Which solution will meet these requirements with the LEAST operational overhead?

A. Use Amazon EventBridge to schedule a custom AWS Lambda function to rotate the password every 30 days.
B. Use the modify-db-instance command in the AWS CLI to change the password.
C. Integrate AWS Secrets Manager with Amazon RDS for PostgreSQL to automate password rotation.
D. Integrate AWS Systems Manager Parameter Store with Amazon RDS for PostgreSQL to automate password rotation.

A

C. Integrate AWS Secrets Manager with Amazon RDS for PostgreSQL to automate password rotation.

Whenever the question talks about database credentials, API credentials, and especially rotating them automatically, there is one service that should come to mind: AWS Secrets Manager. That is option C; you do not even have to look at anything else.

password rotation = AWS Secrets Manager

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-secrets-manager.html#rds-secrets-manager-overview
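Two hedged ways to wire this up with boto3: have RDS manage the master password in Secrets Manager, or attach a 30-day rotation rule to an existing secret. The identifiers and ARNs are hypothetical.

import boto3

rds = boto3.client("rds")
secrets = boto3.client("secretsmanager")

# Option 1: let RDS create and manage the master user password in Secrets Manager.
rds.modify_db_instance(
    DBInstanceIdentifier="app-postgres",         # hypothetical
    ManageMasterUserPassword=True,
    ApplyImmediately=True,
)

# Option 2: rotate an existing secret every 30 days with a rotation Lambda.
secrets.rotate_secret(
    SecretId="app-postgres-master",                                          # hypothetical
    RotationLambdaARN="arn:aws:lambda:...:function:SecretsManagerRotation",  # hypothetical
    RotationRules={"AutomaticallyAfterDays": 30},
)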

19
Q

686 Question #: 670
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company performs tests on an application that uses an Amazon DynamoDB table. The tests run for 4 hours once a week. The company knows how many read and write operations the application performs to the table each second during the tests. The company does not currently use DynamoDB for any other use case. A solutions architect needs to optimize the costs for the table.

Which solution will meet these requirements?

A. Choose on-demand mode. Update the read and write capacity units appropriately.
B. Choose provisioned mode. Update the read and write capacity units appropriately.
C. Purchase DynamoDB reserved capacity for a 1-year term.
D. Purchase DynamoDB reserved capacity for a 3-year term.

A

B. Choose provisioned mode. Update the read and write capacity units appropriately.

With provisioned capacity mode, you specify the number of reads and writes per second that you expect your application to require, and you are billed based on that. Furthermore if you can forecast your capacity requirements you can also reserve a portion of DynamoDB provisioned capacity and optimize your costs even further. https://docs.aws.amazon.com/wellarchitected/latest/serverless-applications-lens/capacity.html

"The company knows how many read and write operations the application performs to the table each second during the tests," so ideally they can set that known rate as the provisioned capacity.

Note that option A says "Update the read and write capacity units appropriately," which does not apply in on-demand mode, where capacity is managed automatically.

The solutions architect needs to optimize cost for the table, and the question literally tells you how many read and write operations the workload performs. When the pattern is known, you do not choose on-demand mode; on-demand is for when you cannot predict the traffic or the read/write rates, so cross out option A. And since the tests run only 4 hours once a week, you would not commit to reserved capacity for one or three years either, so C and D are out. That leaves provisioned mode: you manually provision the read and write capacity units based on the known workload, which the question clearly says the company has. Since the company knows the read and write rates during the tests, it can provision exactly the capacity needed for those specific periods, optimizing costs by not paying for unused capacity at other times.
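A minimal boto3 sketch of switching the table to provisioned mode with the known capacity numbers; the table name and capacity units are hypothetical, and the same call could be used to dial capacity back down after the weekly test window.

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.update_table(
    TableName="test-results",                  # hypothetical
    BillingMode="PROVISIONED",
    ProvisionedThroughput={
        "ReadCapacityUnits": 500,              # known read rate during the tests (hypothetical)
        "WriteCapacityUnits": 200,             # known write rate during the tests (hypothetical)
    },
)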

20
Q

687 Question #: 671
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company runs its applications on Amazon EC2 instances. The company performs periodic financial assessments of its AWS costs. The company recently identified unusual spending.

The company needs a solution to prevent unusual spending. The solution must monitor costs and notify responsible stakeholders in the event of unusual spending.

Which solution will meet these requirements?

A. Use an AWS Budgets template to create a zero spend budget.
B. Create an AWS Cost Anomaly Detection monitor in the AWS Billing and Cost Management console.
C. Create AWS Pricing Calculator estimates for the current running workload pricing details.
D. Use Amazon CloudWatch to monitor costs and to identify unusual spending.

A

B. Create an AWS Cost Anomaly Detection monitor in the AWS Billing and Cost Management console.

AWS Cost Anomaly Detection is designed to automatically detect unusual spending patterns based on machine learning algorithms. It can identify anomalies and send notifications when it detects unexpected changes in spending. This aligns well with the requirement to prevent unusual spending and notify stakeholders.

https://aws.amazon.com/aws-cost-management/aws-cost-anomaly-detection/

AWS Pricing Calculator (option C) can be eliminated: it is used to estimate costs before you move to the cloud or deploy a service, not to monitor running workloads. AWS Budgets (option A) notifies you when spending crosses a fixed threshold, for example $100 on the account, but a threshold alert is not the same as detecting unusual spending, and a zero spend budget would alert on any spend at all. Amazon CloudWatch (option D) monitors operational metrics and is not designed for cost anomaly detection. AWS Cost Anomaly Detection (option B) uses machine learning to identify unexpected spending patterns and can automatically notify the responsible stakeholders, much like a credit card issuer flagging a transaction made from an unusual location. That is exactly the feature this scenario calls for.
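
As a rough sketch of how this could be set up with boto3 (assuming the Cost Explorer anomaly-detection APIs; the monitor name, subscription name, email address, and threshold are placeholder values):

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer client hosts the Cost Anomaly Detection APIs

# Monitor spend per AWS service for anomalies.
monitor = ce.create_anomaly_monitor(
    AnomalyMonitor={
        "MonitorName": "service-spend-monitor",
        "MonitorType": "DIMENSIONAL",
        "MonitorDimension": "SERVICE",
    }
)

# Email stakeholders a daily summary of anomalies above $100 of impact.
ce.create_anomaly_subscription(
    AnomalySubscription={
        "SubscriptionName": "finops-alerts",
        "MonitorArnList": [monitor["MonitorArn"]],
        "Subscribers": [{"Type": "EMAIL", "Address": "finops@example.com"}],
        "Frequency": "DAILY",
        "Threshold": 100.0,
    }
)
```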

21
Q

688 Question #: 672
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A marketing company receives a large amount of new clickstream data in Amazon S3 from a marketing campaign. The company needs to analyze the clickstream data in Amazon S3 quickly. Then the company needs to determine whether to process the data further in the data pipeline.

Which solution will meet these requirements with the LEAST operational overhead?

A. Create external tables in a Spark catalog. Configure jobs in AWS Glue to query the data.
B. Configure an AWS Glue crawler to crawl the data. Configure Amazon Athena to query the data.
C. Create external tables in a Hive metastore. Configure Spark jobs in Amazon EMR to query the data.
D. Configure an AWS Glue crawler to crawl the data. Configure Amazon Kinesis Data Analytics to use SQL to query the data.

A

B. Configure an AWS Glue crawler to crawl the data. Configure Amazon Athena to query the data.

Option B - leverages serverless services that minimise management tasks and allows the company to focus on querying and analysing the data with the LEAST operational overhead. AWS Glue with Athena (Option B): AWS Glue is a fully managed extract, transform, and load (ETL) service, and Athena is a serverless query service that allows you to analyze data directly in Amazon S3 using SQL queries. By configuring an AWS Glue crawler to crawl the data, you can create a schema for the data, and then use Athena to query the data directly without the need to load it into a separate database. This minimizes operational overhead.

The requirement is to analyze clickstream data that is already in S3 quickly, with the least operational overhead, and the natural way to analyze it is with SQL. S3 Select is not an option here because, as discussed in an earlier question, it operates on a single object rather than across many files, so Amazon Athena is the serverless way to query the data in place. Option A (Spark catalog with AWS Glue jobs) would work, but Spark jobs require more configuration and maintenance than a serverless query service. Option C (Hive metastore with Spark on Amazon EMR) adds even more complexity, because you must provision and manage an EMR cluster. Option D starts like option B but uses Amazon Kinesis Data Analytics, which is built for real-time analytics on streaming data and is over-engineered for data at rest in S3. Option B uses a Glue crawler, not Glue jobs, to automatically discover and catalog the schema of the clickstream data, and then Athena, a serverless query service, for quick ad hoc SQL queries without any infrastructure to set up or manage. Therefore option B meets the requirement with the least operational overhead.
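
A minimal boto3 sketch of the crawler-plus-Athena flow; the IAM role, database, table, and S3 paths are placeholder values.

```python
import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")

# Crawl the clickstream data in S3 and catalog its schema.
glue.create_crawler(
    Name="clickstream-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",   # placeholder
    DatabaseName="marketing",
    Targets={"S3Targets": [{"Path": "s3://example-clickstream-bucket/raw/"}]},
)
glue.start_crawler(Name="clickstream-crawler")

# Once the table exists, analysts can run ad hoc SQL with Athena.
athena.start_query_execution(
    QueryString=(
        "SELECT page, COUNT(*) AS hits FROM clickstream "
        "GROUP BY page ORDER BY hits DESC LIMIT 10"
    ),
    QueryExecutionContext={"Database": "marketing"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```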

22
Q

689 Question #: 673
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company runs an SMB file server in its data center. The file server stores large files that the company frequently accesses for up to 7 days after the file creation date. After 7 days, the company needs to be able to access the files with a maximum retrieval time of 24 hours.

Which solution will meet these requirements?

A. Use AWS DataSync to copy data that is older than 7 days from the SMB file server to AWS.
B. Create an Amazon S3 File Gateway to increase the company’s storage space. Create an S3 Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.
C. Create an Amazon FSx File Gateway to increase the company’s storage space. Create an Amazon S3 Lifecycle policy to transition the data after 7 days.
D. Configure access to Amazon S3 for each user. Create an S3 Lifecycle policy to transition the data to S3 Glacier Flexible Retrieval after 7 days.

A

B. Create an Amazon S3 File Gateway to increase the company’s storage space. Create an S3 Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.

S3 File Gateway will connect the SMB file share to S3. A Lifecycle policy will move objects to S3 Glacier Deep Archive, which supports standard retrievals within 12 hours. https://aws.amazon.com/blogs/aws/new-amazon-s3-storage-class-glacier-deep-archive/

Amazon S3 File Gateway supports SMB and NFS; Amazon FSx File Gateway provides SMB for Windows workloads backed by Amazon FSx for Windows File Server.

S3 file gateway supports SMB and S3 Glacier Deep Archive can retrieve data within 12 hours. https://aws.amazon.com/storagegateway/file/s3/ https://docs.aws.amazon.com/prescriptive-guidance/latest/backup-recovery/amazon-s3-glacier.html

Option A (AWS DataSync) is a data transfer service for copying data from on premises to AWS; it does not provide an SMB file share backed by cloud storage, so it does not address the scenario. Option C (Amazon FSx File Gateway) does provide SMB access, but it is backed by Amazon FSx for Windows File Server rather than S3, so an S3 Lifecycle policy does not apply, and the option never says which storage class the data would transition to or whether the 24-hour retrieval requirement is met. Option D gives each user direct access to S3, which abandons the SMB file server interface the company is using. Option B is correct: an Amazon S3 File Gateway presents an SMB share backed by S3, keeping files available for the 7 days of frequent access, and an S3 Lifecycle policy then transitions the objects to S3 Glacier Deep Archive. Deep Archive standard retrievals complete within 12 hours, which satisfies the 24-hour maximum, and it is cheaper than S3 Glacier Flexible Retrieval.
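
A minimal boto3 sketch of the Lifecycle rule described above; the bucket name is a placeholder.

```python
import boto3

s3 = boto3.client("s3")

# Transition objects written through the S3 File Gateway to Deep Archive after 7 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-file-gateway-bucket",   # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-7-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [{"Days": 7, "StorageClass": "DEEP_ARCHIVE"}],
            }
        ]
    },
)
```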

23
Q

690 Question #: 674
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company runs a web application on Amazon EC2 instances in an Auto Scaling group. The application uses a database that runs on an Amazon RDS for PostgreSQL DB instance. The application performs slowly when traffic increases. The database experiences a heavy read load during periods of high traffic.

Which actions should a solutions architect take to resolve these performance issues? (Choose two.)

A. Turn on auto scaling for the DB instance.
B. Create a read replica for the DB instance. Configure the application to send read traffic to the read replica.
C. Convert the DB instance to a Multi-AZ DB instance deployment. Configure the application to send read traffic to the standby DB instance.
D. Create an Amazon ElastiCache cluster. Configure the application to cache query results in the ElastiCache cluster.
E. Configure the Auto Scaling group subnets to ensure that the EC2 instances are provisioned in the same Availability Zone as the DB instance.

A

B. Create a read replica for the DB instance. Configure the application to send read traffic to the read replica.
D. Create an Amazon ElastiCache cluster. Configure the application to cache query results in the ElastiCache cluster.

B: Read replicas distribute the read load and improve performance. D: Caching also helps with performance. Remember: “The database experiences a heavy read load during periods of high traffic.”

By creating a read replica, you offload read traffic from the primary DB instance to the replica, distributing the load and improving overall performance during periods of heavy read traffic. Amazon ElastiCache can be used to cache frequently accessed data, reducing the load on the database. This is particularly effective for read-heavy workloads, as it allows the application to retrieve data from the cache rather than making repeated database queries.


https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/creating-elasticache-cluster-with-RDS-settings.html

Option A does not work as described: there is no setting on a single RDS for PostgreSQL DB instance that automatically scales its compute to absorb a heavy read load (RDS storage autoscaling only grows the storage). Option C (Multi-AZ) improves availability and fault tolerance, but in a Multi-AZ DB instance deployment the standby cannot serve read traffic, so it does not help read performance. Option E (placing the EC2 instances in the same Availability Zone as the DB instance) optimizes network locality, not read throughput. Whenever a question describes a heavy read load, look for read replicas: option B creates a read replica so read traffic is offloaded from the primary DB instance to the replica, distributing the load and horizontally scaling the read-heavy workload. Option D adds Amazon ElastiCache, a managed caching service that improves performance by caching frequently accessed query results; repeated reads of the same data are served from the cache instead of hitting the PostgreSQL database again, reducing its load.
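
A minimal boto3 sketch of creating the read replica (the identifiers and instance class are placeholders); the ElastiCache cluster would be created separately and the application updated to check the cache before querying the database.

```python
import boto3

rds = boto3.client("rds")

# Create a read replica of the primary PostgreSQL instance; the application
# then points its read-only queries at the replica's endpoint.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-read-replica",     # placeholder
    SourceDBInstanceIdentifier="app-db-primary",    # placeholder
    DBInstanceClass="db.r6g.large",                 # placeholder
)
```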

24
Q

Question #: 675
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company uses Amazon EC2 instances and Amazon Elastic Block Store (Amazon EBS) volumes to run an application. The company creates one snapshot of each EBS volume every day to meet compliance requirements. The company wants to implement an architecture that prevents the accidental deletion of EBS volume snapshots. The solution must not change the administrative rights of the storage administrator user.

Which solution will meet these requirements with the LEAST administrative effort?

A. Create an IAM role that has permission to delete snapshots. Attach the role to a new EC2 instance. Use the AWS CLI from the new EC2 instance to delete snapshots.
B. Create an IAM policy that denies snapshot deletion. Attach the policy to the storage administrator user.
C. Add tags to the snapshots. Create retention rules in Recycle Bin for EBS snapshots that have the tags.
D. Lock the EBS snapshots to prevent deletion.

A

D. Lock the EBS snapshots to prevent deletion.

The “lock” feature in AWS allows you to prevent accidental deletion of resources, including EBS snapshots. This can be set at the snapshot level, providing a straightforward and effective way to meet the requirements without changing the administrative rights of the storage administrator user. Exactly what a locked EBS snapshot is used for https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-snapshot-lock.html

When a question asks for the least administrative effort, look for a built-in feature instead of building something new. Options A and B build something new, while C and D use existing snapshot features. Option A creates a new IAM role, attaches it to a new EC2 instance, and deletes snapshots with the AWS CLI from that instance; it is a valid approach but introduces additional components and far more administrative effort than needed. Option B attaches an explicit deny policy to the storage administrator user, which prevents accidental deletion but changes that user's effective administrative rights, and the question says those rights must not change. Option C tags the snapshots and creates Recycle Bin retention rules; Recycle Bin lets you recover deleted snapshots, but it adds tagging and another service for a simple requirement. Option D uses the built-in EBS snapshot lock feature: a locked snapshot cannot be deleted for the specified period, no extra IAM roles, policies, or tags are needed, and the administrator's rights are untouched. That is the least administrative effort.
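
A minimal sketch, assuming the EC2 LockSnapshot API available in recent boto3 releases; the snapshot ID and duration are placeholder values.

```python
import boto3

ec2 = boto3.client("ec2")

# Lock a daily snapshot so it cannot be deleted for 30 days.
# In governance mode the lock itself can still be adjusted by users with the
# right IAM permissions; compliance mode cannot be shortened once the
# cooling-off period ends.
ec2.lock_snapshot(
    SnapshotId="snap-0123456789abcdef0",  # placeholder
    LockMode="governance",
    LockDuration=30,                      # days
)
```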

25
Q

692 Question #: 676
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company’s application uses Network Load Balancers, Auto Scaling groups, Amazon EC2 instances, and databases that are deployed in an Amazon VPC. The company wants to capture information about traffic to and from the network interfaces in near real time in its Amazon VPC. The company wants to send the information to Amazon OpenSearch Service for analysis.

Which solution will meet these requirements?

A. Create a log group in Amazon CloudWatch Logs. Configure VPC Flow Logs to send the log data to the log group. Use Amazon Kinesis Data Streams to stream the logs from the log group to OpenSearch Service.
B. Create a log group in Amazon CloudWatch Logs. Configure VPC Flow Logs to send the log data to the log group. Use Amazon Kinesis Data Firehose to stream the logs from the log group to OpenSearch Service.
C. Create a trail in AWS CloudTrail. Configure VPC Flow Logs to send the log data to the trail. Use Amazon Kinesis Data Streams to stream the logs from the trail to OpenSearch Service.
D. Create a trail in AWS CloudTrail. Configure VPC Flow Logs to send the log data to the trail. Use Amazon Kinesis Data Firehose to stream the logs from the trail to OpenSearch Service.

A

B. Create a log group in Amazon CloudWatch Logs. Configure VPC Flow Logs to send the log data to the log group. Use Amazon Kinesis Data Firehose to stream the logs from the log group to OpenSearch Service.

CloudTrail logs administrative (API) actions, not network traffic, so CloudWatch Logs is the right flow log destination. Because we want the data delivered into another AWS service (OpenSearch Service), we need Kinesis Data Firehose rather than Kinesis Data Streams. VPC Flow Logs capture information about the IP traffic going to and from network interfaces in a VPC. By configuring VPC Flow Logs to send the log data to a log group in Amazon CloudWatch Logs, you can then use Amazon Kinesis Data Firehose to stream the logs from the log group to Amazon OpenSearch Service for analysis. This approach provides near real-time streaming of logs to the analytics service.

Two requirements stand out: near real time delivery, and capturing information about traffic to and from the network interfaces in the VPC. Traffic to and from network interfaces means VPC Flow Logs, and since every option uses flow logs, the real differences are the log destination and the streaming service. Options C and D send the flow logs to an AWS CloudTrail trail, but CloudTrail records API activity in the account; it is not a destination for VPC Flow Logs and does not capture detailed network traffic, so both are out. That leaves options A and B, which differ only in the Kinesis service used. Kinesis Data Streams (option A) is feasible, but you would have to build and manage consumers to move the records into OpenSearch Service, which adds unnecessary complexity. Kinesis Data Firehose (option B) is a managed delivery service that can deliver data directly to supported destinations such as OpenSearch Service in near real time, so option B is the better solution for this scenario.
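
A minimal boto3 sketch of the two configuration steps (flow logs to CloudWatch Logs, then a subscription filter to Firehose); the VPC ID, log group, and ARNs are placeholder values, and the Firehose delivery stream targeting OpenSearch Service is assumed to exist already.

```python
import boto3

ec2 = boto3.client("ec2")
logs = boto3.client("logs")

# 1) Publish VPC Flow Logs to a CloudWatch Logs log group.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],             # placeholder
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="cloud-watch-logs",
    LogGroupName="/vpc/flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/FlowLogsToCloudWatch",
)

# 2) Stream the log group to a Kinesis Data Firehose delivery stream whose
#    destination is the OpenSearch Service domain.
logs.put_subscription_filter(
    logGroupName="/vpc/flow-logs",
    filterName="to-opensearch",
    filterPattern="",  # forward everything
    destinationArn="arn:aws:firehose:us-east-1:123456789012:deliverystream/vpc-flow-to-opensearch",
    roleArn="arn:aws:iam::123456789012:role/CWLtoFirehose",
)
```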

26
Q

693 Question #: 677
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company is developing an application that will run on a production Amazon Elastic Kubernetes Service (Amazon EKS) cluster. The EKS cluster has managed node groups that are provisioned with On-Demand Instances.

The company needs a dedicated EKS cluster for development work. The company will use the development cluster infrequently to test the resiliency of the application. The EKS cluster must manage all the nodes.

Which solution will meet these requirements MOST cost-effectively?

A. Create a managed node group that contains only Spot Instances.
B. Create two managed node groups. Provision one node group with On-Demand Instances. Provision the second node group with Spot Instances.
C. Create an Auto Scaling group that has a launch configuration that uses Spot Instances. Configure the user data to add the nodes to the EKS cluster.
D. Create a managed node group that contains only On-Demand Instances.

A

B. Create two managed node groups. Provision one node group with On-Demand Instances. Provision the second node group with Spot Instances.

The keywords are infrequent and resiliency. This solution allows you to have a mix of On-Demand Instances and Spot Instances within the same EKS cluster. You can use the On-Demand Instances for the development work where you need dedicated resources and then leverage Spot Instances for testing the resiliency of the application. Spot Instances are generally more cost-effective but can be terminated with short notice, so using a combination of On-Demand and Spot Instances provides a balance between cost savings and stability. Option A (Create a managed node group that contains only Spot Instances) might be cost-effective, but it could introduce potential challenges for tasks that require dedicated resources and might not be the best fit for all scenarios.

The keywords are “infrequently”, “test the resiliency”, “most cost-effectively”, and the requirement that the EKS cluster must manage all the nodes. Option A (Spot Instances only) is the cheapest on paper, but Spot capacity can be interrupted or temporarily unavailable, which could leave the development cluster with no nodes at all. Option C uses a self-managed Auto Scaling group with user data to join nodes to the cluster, which violates the requirement that EKS manage all the nodes and adds operational work. Option D (On-Demand Instances only) is reliable but the most expensive choice for a cluster that is used infrequently. Option B creates two managed node groups: the On-Demand group provides a stable, predictable baseline so the development cluster is always usable, and the Spot group provides cheap capacity for the infrequent resiliency tests, where Spot interruptions are acceptable and even useful for the testing itself. This combination manages costs effectively while meeting all the requirements, so option B is the most cost-effective solution.
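
A minimal boto3 sketch of the two managed node groups, assuming the cluster, subnets, and node role already exist; all names and sizes are placeholder values.

```python
import boto3

eks = boto3.client("eks")

common = {
    "clusterName": "dev-cluster",                                   # placeholder
    "subnets": ["subnet-aaa111", "subnet-bbb222"],                  # placeholders
    "nodeRole": "arn:aws:iam::123456789012:role/EksNodeRole",       # placeholder
    "instanceTypes": ["m5.large"],
}

# Small On-Demand baseline so the cluster is always usable.
eks.create_nodegroup(
    nodegroupName="dev-on-demand",
    capacityType="ON_DEMAND",
    scalingConfig={"minSize": 1, "maxSize": 2, "desiredSize": 1},
    **common,
)

# Spot capacity for the infrequent resiliency tests.
eks.create_nodegroup(
    nodegroupName="dev-spot",
    capacityType="SPOT",
    scalingConfig={"minSize": 0, "maxSize": 4, "desiredSize": 0},
    **common,
)
```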

27
Q

694 Question #: 678
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company stores sensitive data in Amazon S3. A solutions architect needs to create an encryption solution. The company needs to fully control the ability of users to create, rotate, and disable encryption keys with minimal effort for any data that must be encrypted.

Which solution will meet these requirements?

A. Use default server-side encryption with Amazon S3 managed encryption keys (SSE-S3) to store the sensitive data.
B. Create a customer managed key by using AWS Key Management Service (AWS KMS). Use the new key to encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
C. Create an AWS managed key by using AWS Key Management Service (AWS KMS). Use the new key to encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
D. Download S3 objects to an Amazon EC2 instance. Encrypt the objects by using customer managed keys. Upload the encrypted objects back into Amazon S3.

A

B. Create a customer managed key by using AWS Key Management Service (AWS KMS). Use the new key to encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).

This option allows you to create a customer managed key using AWS KMS. With a customer managed key, you have full control over key lifecycle management, including the ability to create, rotate, and disable keys with minimal effort. SSE-KMS also integrates with AWS Identity and Access Management (IAM) for fine-grained access control.

Encryption questions usually point to AWS KMS, but there are other options such as SSE-S3, so the key detail here is who controls creating, rotating, and disabling the keys. With SSE-S3 (option A), the keys are created and managed entirely by Amazon S3, so it provides encryption but not the required level of control over key management. Option C uses an AWS managed KMS key, and AWS managed keys do not give the company control over key creation, rotation, or disabling either, because KMS manages their lifecycle. Option D, downloading objects to an EC2 instance, encrypting them manually, and re-uploading them, is inefficient and error prone compared with server-side encryption. Option B creates a customer managed KMS key, which gives the company full control over the key lifecycle, including creating, rotating, and disabling keys as needed, and using SSE-KMS ensures the S3 objects are encrypted with that customer managed key. This provides a secure and managed approach to encrypting the sensitive data in Amazon S3.
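
A minimal boto3 sketch of creating a customer managed key and making it the bucket's default SSE-KMS key; the description and bucket name are placeholders.

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Create a customer managed key and turn on automatic annual rotation.
key = kms.create_key(Description="Key for sensitive S3 data")  # placeholder description
key_id = key["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)

# Default-encrypt the bucket with SSE-KMS using that key.
s3.put_bucket_encryption(
    Bucket="example-sensitive-bucket",   # placeholder
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": key_id,
                }
            }
        ]
    },
)
```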

28
Q

695 Question #: 679
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company wants to back up its on-premises virtual machines (VMs) to AWS. The company’s backup solution exports on-premises backups to an Amazon S3 bucket as objects. The S3 backups must be retained for 30 days and must be automatically deleted after 30 days.

Which combination of steps will meet these requirements? (Choose three.)

A. Create an S3 bucket that has S3 Object Lock enabled.
B. Create an S3 bucket that has object versioning enabled.
C. Configure a default retention period of 30 days for the objects.
D. Configure an S3 Lifecycle policy to protect the objects for 30 days.
E. Configure an S3 Lifecycle policy to expire the objects after 30 days.
F. Configure the backup solution to tag the objects with a 30-day retention period

A

A. Create an S3 bucket that has S3 Object Lock enabled.
C. Configure a default retention period of 30 days for the objects.
E. Configure an S3 Lifecycle policy to expire the objects after 30 days.

In theory, E alone would be enough, because the objects are “retained for 30 days” without any configuration as long as no one deletes them. But let's assume the intent is to prevent deletion. A: Yes, required to prevent deletion. Object Lock requires versioning, so a bucket created with S3 Object Lock enabled also has object versioning enabled; otherwise we would not be able to create it. C: Yes, the default retention period specifies how long Object Lock is applied to new objects by default, and we need this to protect the objects from deletion.

https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html

A. Create an S3 bucket that has S3 Object Lock enabled -> enables the S3 Object Lock feature on the bucket.
C. Configure a default retention period of 30 days for the objects -> locks each new object for 30 days.
E. Configure an S3 Lifecycle policy to expire the objects after 30 days -> deletes the objects after 30 days.

The requirements are that the backups must be retained (protected from deletion) for 30 days and then deleted automatically after 30 days. Option B (object versioning) is not needed on its own: versioning manages multiple versions of objects and does not by itself enforce retention or deletion (although enabling Object Lock does require versioning, which is turned on when the bucket is created with Object Lock enabled). Option D (a Lifecycle policy to protect the objects for 30 days) does not match how Lifecycle policies work; they transition or expire objects, they do not protect them, and the question asks for deletion, not protection beyond 30 days. Option F (tagging the objects with a 30-day retention period) is organizational metadata and does not enforce anything. The working combination is: A, create the bucket with S3 Object Lock enabled so objects cannot be deleted or modified during their retention period; C, configure a default retention period of 30 days so every new backup object is automatically locked for 30 days; and E, configure an S3 Lifecycle rule that expires the objects after 30 days so they are deleted automatically once the retention period ends.
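
A minimal boto3 sketch of the three steps (A, C, E); the bucket name is a placeholder, and a CreateBucketConfiguration would be added for Regions other than us-east-1.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-vm-backups"  # placeholder

# A: Object Lock must be enabled at bucket creation (this also enables versioning).
s3.create_bucket(Bucket=bucket, ObjectLockEnabledForBucket=True)

# C: default retention of 30 days for every new backup object.
s3.put_object_lock_configuration(
    Bucket=bucket,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "GOVERNANCE", "Days": 30}},
    },
)

# E: expire (delete) the objects once the 30-day retention has passed.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "delete-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Expiration": {"Days": 30},
            }
        ]
    },
)
```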

29
Q

696 Question #: 680
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A solutions architect needs to copy files from an Amazon S3 bucket to an Amazon Elastic File System (Amazon EFS) file system and another S3 bucket. The files must be copied continuously. New files are added to the original S3 bucket consistently. The copied files should be overwritten only if the source file changes.

Which solution will meet these requirements with the LEAST operational overhead?

A. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task for the destination S3 bucket and the EFS file system. Set the transfer mode to transfer only data that has changed.
B. Create an AWS Lambda function. Mount the file system to the function. Set up an S3 event notification to invoke the function when files are created and changed in Amazon S3. Configure the function to copy files to the file system and the destination S3 bucket.
C. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task for the destination S3 bucket and the EFS file system. Set the transfer mode to transfer all data.
D. Launch an Amazon EC2 instance in the same VPC as the file system. Mount the file system. Create a script to routinely synchronize all objects that changed in the origin S3 bucket to the destination S3 bucket and the mounted file system.

A

A. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task for the destination S3 bucket and the EFS file system. Set the transfer mode to transfer only data that has changed.

A fulfils the requirement of “copied files should be overwritten only if the source file changes” so A is correct.

Transfer only data that has changed – DataSync copies only the data and metadata that differs between the source and destination location. Transfer all data – DataSync copies everything in the source to the destination without comparing differences between the locations. https://docs.aws.amazon.com/datasync/latest/userguide/configure-metadata.html
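
A minimal boto3 sketch of the DataSync task with the incremental transfer mode; the location ARNs are placeholders for locations created beforehand (for example with create_location_s3 and create_location_efs).

```python
import boto3

datasync = boto3.client("datasync")

datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:123456789012:location/loc-source-s3",   # placeholder
    DestinationLocationArn="arn:aws:datasync:us-east-1:123456789012:location/loc-dest-efs",  # placeholder
    Name="s3-to-efs-incremental",
    Options={
        "TransferMode": "CHANGED",   # copy only data/metadata that differs
        "OverwriteMode": "ALWAYS",   # overwrite destination files when the source changes
    },
)
```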

30
Q

602# A company has five organizational units (OUs) as part of its organization in AWS Organizations. Each OU correlates with the five businesses the company owns. The company’s research and development (R&D) business is being separated from the company and will need its own organization. A solutions architect creates a new, separate management account for this purpose. What should the solutions architect do next in the new management account?

A. Make the AWS R&D account part of both organizations during the transition.
B. Invite the AWS R&D account to be part of the new organization after the R&D AWS account has left the old organization.
C. Create a new AWS R&D account in the new organization. Migrate resources from the old AWS R&D account to the new AWS R&D account.
D. Have the AWS R&D account join the new organization. Make the new management account a member of the old organization.

A

B. Invite the AWS R&D account to be part of the new organization after the R&D AWS account has left the old organization.

31
Q

603# A company is designing a solution to capture customer activity across different web applications to process analytics and make predictions. Customer activity in web applications is unpredictable and can increase suddenly. The company needs a solution that integrates with other web applications. The solution must include an authorization step for security reasons. What solution will meet these requirements?

A. Configure a gateway load balancer (GWLB) in front of an Amazon Elastic Container Service (Amazon ECS) container instance that stores the information that the business receives on an Amazon Elastic File System (Amazon EFS) file system. Authorization is resolved in the GWLB.
B. Configure an Amazon API Gateway endpoint in front of an Amazon Kinesis data stream that stores the information the business receives in an Amazon S3 bucket. Use an AWS Lambda function to resolve authorization.
C. Configure an Amazon API Gateway endpoint in front of an Amazon Kinesis Data Firehose that stores the information the business receives in an Amazon S3 bucket. Use an API Gateway Lambda authorizer to resolve authorization.
D. Configure a gateway load balancer (GWLB) in front of an Amazon Elastic Container Service (Amazon ECS) container instance that stores the information that the business receives on an Amazon Elastic File System (Amazon EFS) file system. Use an AWS Lambda function to resolve authorization.

A

C. Configure an Amazon API Gateway endpoint in front of an Amazon Kinesis Data Firehose that stores the information the business receives in an Amazon S3 bucket. Use an API Gateway Lambda authorizer to resolve authorization.

Amazon API Gateway: Acts as a managed service to create, publish, and secure APIs at scale. Allows the creation of API endpoints that can be integrated with other web applications. Amazon Kinesis Data Firehose: Used to capture and upload streaming data to other AWS services. In this case, you can store the information in an Amazon S3 bucket. API Gateway Lambda Authorizer: Provides a way to control access to your APIs using Lambda functions. Allows you to implement custom authorization logic. This solution offers scalability, the ability to handle unpredictable surges in activity, and integration capabilities. Using a Lambda API Gateway authorizer ensures that the authorization step is performed securely.

The other options have limitations or are less aligned with the specified requirements:
A. GWLB in front of Amazon ECS: this introduces a Gateway Load Balancer, ECS container instances, and EFS storage, which is unnecessary complexity for this ingestion use case, and a GWLB does not perform request authorization.
B. Amazon API Gateway in front of an Amazon Kinesis data stream: this option uses a plain Lambda function for authorization rather than an API Gateway Lambda authorizer, and a Kinesis data stream requires you to manage consumers to land the data in Amazon S3, unlike Kinesis Data Firehose.
D. GWLB in front of Amazon ECS with a Lambda function for authorization: similar to option A, this is more complex than necessary for the given requirements. In summary, option C offers a streamlined solution with the necessary scalability, integration capabilities, and authorization control.
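
To illustrate the authorization step, here is a minimal REQUEST-type API Gateway Lambda authorizer sketch; the header name and the token check are stand-ins for whatever real validation (for example JWT verification) the company would use.

```python
# Minimal API Gateway Lambda authorizer (REQUEST type).
def handler(event, context):
    # Placeholder check: compare a header value against a shared secret.
    token = event.get("headers", {}).get("authorization", "")
    effect = "Allow" if token == "expected-shared-secret" else "Deny"
    return {
        "principalId": "client-app",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": effect,
                    "Resource": event["methodArn"],
                }
            ],
        },
    }
```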

32
Q

604# An e-commerce company wants a disaster recovery solution for its Amazon RDS DB instances running Microsoft SQL Server Enterprise Edition. The company’s current recovery point objective (RPO) and recovery time objective (RTO) are 24 hours. Which solution will meet these requirements in the MOST cost-effective way?

A. Create a cross-region read replica and promote the read replica to the primary instance.
B. Use AWS Database Migration Service (AWS DMS) to create RDS replication between regions.
C. Use 24-hour cross-region replication to copy native backups to an Amazon S3 bucket.
D. Copy automatic snapshots to another region every 24 hours.

A

D. Copy automatic snapshots to another region every 24 hours

Amazon RDS automatically creates snapshots of your DB instance during the backup window and retains them according to the backup retention period that you specify, so you can restore the instance to any point within that period. Copying those automated snapshots to another Region every 24 hours satisfies the 24-hour RPO and RTO at the lowest cost: you pay only for snapshot storage and cross-Region data transfer, instead of continuously running a cross-Region replica (option A) or AWS DMS replication infrastructure (option B), which would be over-provisioned for a 24-hour recovery objective.
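
A minimal boto3 sketch of the cross-Region snapshot copy, assuming an unencrypted snapshot (an encrypted one would also need KmsKeyId); the Regions and identifiers are placeholders, and in practice this call would run on a 24-hour schedule, for example from a scheduled Lambda function.

```python
import boto3

# Run the copy from the DR (destination) Region.
rds_dr = boto3.client("rds", region_name="us-west-2")  # placeholder DR Region

rds_dr.copy_db_snapshot(
    SourceDBSnapshotIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:snapshot:rds:sqlserver-prod-2024-01-24-05-00"  # placeholder
    ),
    TargetDBSnapshotIdentifier="sqlserver-prod-dr-copy",
    SourceRegion="us-east-1",  # lets boto3 generate the required pre-signed URL
)
```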

33
Q

605# A company runs a web application on Amazon EC2 instances in an auto-scaling group behind an application load balancer that has sticky sessions enabled. The web server currently hosts the user’s session state. The company wants to ensure high availability and prevent the loss of user session state in the event of a web server outage. What solution will meet these requirements?

A. Use an Amazon ElastiCache for Memcached instance to store session data. Update the application to use ElastiCache for Memcached to store session state.
B. Use Amazon ElastiCache for Redis to store session state. Update the application to use ElastiCache for Redis to store session state.
C. Use an AWS Storage Gateway cached volume to store session data. Update the application to use the AWS Storage Gateway cached volume to store session state.
D. Use Amazon RDS to store session state. Update your application to use Amazon RDS to store session state.

A

B. Use Amazon ElastiCache for Redis to store session state. Update the application to use ElastiCache for Redis to store session state.

In summary, option B (Amazon ElastiCache for Redis) is a common and effective solution for maintaining user session state in a web application, providing high availability and preventing loss of session state during web server outages.

Amazon ElastiCache for Redis: Redis is an in-memory data store that can be used to store session data. Unlike Memcached, it offers replication, automatic failover, and persistence options, making it suitable for maintaining session state. Sticky sessions and the Auto Scaling group: moving session state into ElastiCache for Redis centralizes it, so sessions survive even if an EC2 instance becomes unavailable or is replaced by automatic scaling.
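
A minimal application-side sketch using the redis-py client; the endpoint, key naming, and 30-minute TTL are placeholder choices.

```python
import json
import redis  # redis-py client

# Placeholder for the ElastiCache for Redis primary endpoint.
r = redis.Redis(host="my-sessions.xxxxxx.use1.cache.amazonaws.com", port=6379, ssl=True)

def save_session(session_id: str, data: dict, ttl_seconds: int = 1800) -> None:
    # Store session state centrally with a 30-minute expiry.
    r.setex(f"session:{session_id}", ttl_seconds, json.dumps(data))

def load_session(session_id: str):
    # Any web server in the Auto Scaling group can read the same session.
    raw = r.get(f"session:{session_id}")
    return json.loads(raw) if raw else None
```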

34
Q

606# A company migrated a MySQL database from the company’s on-premises data center to an Amazon RDS for MySQL DB instance. The company sized the RDS database instance to meet the company’s average daily workload. Once a month, the database runs slowly when the company runs queries for a report. The company wants the ability to run reports and maintain performance of daily workloads. What solution will meet these requirements?

A. Create a read replica of the database. Direct report queries to the read replica.
B. Create a backup of the database. Restore the backup to another database instance. Direct queries to the new database.
C. Export the data to Amazon S3. Use Amazon Athena to query the S3 bucket.
D. Resize the database instance to accommodate the additional workload.

A

A. Create a read replica of the database. Direct report queries to the read replica.

This is the most cost-effective solution because it does not require resizing the instance or introducing additional services. A read replica is a copy of the database that is kept in sync with the primary; routing the monthly report queries to the read replica keeps them from impacting the performance of the daily workload.

35
Q

607# A company runs a container application using Amazon Elastic Kubernetes Service (Amazon EKS). The application includes microservices that manage customers and place orders. The business needs to direct incoming requests to the appropriate microservices. Which solution will meet this requirement in the MOST cost-effective way?

A. Use the AWS Load Balancer Controller to provision a network load balancer.
B. Use the AWS Load Balancer Controller to provision an application load balancer.
C. Use an AWS Lambda function to connect requests to Amazon EKS.
D. Use Amazon API Gateway to connect requests to Amazon EKS.

A

D. Use Amazon API Gateway to connect requests to Amazon EKS.

You are charged for each hour or partial hour that an application load balancer is running, and the number of load balancer capacity units (LCUs) used per hour. With Amazon API Gateway, you only pay when your APIs are in use. https://aws.amazon.com/blogs/containers/integrate-amazon-api-gateway-with-amazon-eks/

36
Q

608# A company uses AWS and sells access to copyrighted images. The company’s global customer base needs to be able to access these images quickly. The company must deny access to users from specific countries. The company wants to minimize costs as much as possible. What solution will meet these requirements?

A. Use Amazon S3 to store images. Enable multi-factor authentication (MFA) and public access to the bucket. Provide clients with a link to the S3 bucket.
B. Use Amazon S3 to store images. Create an IAM user for each customer. Add users to a group that has permission to access the S3 bucket.
C. Use the Amazon EC2 instances that are behind the application load balancers (ALBs) to store the images. Deploy instances only to countries where your company serves. Provide customers with links to ALBs for their country-specific instances.
D. Use Amazon S3 to store images. Use Amazon CloudFront to distribute geo-restricted images. Provide a signed URL for each client to access data in CloudFront.

A

D. Use Amazon S3 to store images. Use Amazon CloudFront to distribute geo-restricted images. Provide a signed URL for each client to access data in CloudFront.

In summary, option D (use Amazon S3 to store the images, use Amazon CloudFront to distribute the geo-restricted images, and provide a signed URL for each client to access the data in CloudFront) is the most appropriate solution to meet the specified requirements.

Amazon S3 for storage: Amazon S3 is used to store the images. It provides scalable, durable, low-latency storage for the images.

Amazon CloudFront for content delivery: CloudFront is used as a content delivery network (CDN) to distribute images globally. This reduces latency and ensures fast access for customers around the world.

Geo restrictions in CloudFront: CloudFront supports geo restrictions, allowing the company to deny access to users from specific countries. This satisfies the requirement of controlling access based on the user's location.

Signed URLs for secure access: Signed URLs are provided to clients for secure access to images. This ensures that only authorized customers can access the content.

Cost Minimization: CloudFront is a cost-effective solution for content delivery, and can significantly reduce data transfer costs by serving content from edge locations close to end users.
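
A minimal sketch of generating a CloudFront signed URL with botocore's CloudFrontSigner and the rsa package; the key pair ID, private key path, and distribution domain are placeholder values, and geo restriction itself is configured on the distribution.

```python
import datetime
import rsa  # pip install rsa
from botocore.signers import CloudFrontSigner

KEY_PAIR_ID = "K2JCJMDEHXQW5F"               # placeholder CloudFront key ID
PRIVATE_KEY_FILE = "cloudfront-private.pem"  # placeholder path

def rsa_signer(message: bytes) -> bytes:
    with open(PRIVATE_KEY_FILE, "rb") as f:
        private_key = rsa.PrivateKey.load_pkcs1(f.read())
    return rsa.sign(message, private_key, "SHA-1")

signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)

# URL valid for 24 hours for one specific image.
signed_url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/images/photo-001.jpg",  # placeholder
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=24),
)
print(signed_url)
```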

37
Q

609# A solutions architect is designing a solution based on Amazon ElastiCache for highly available Redis. The solutions architect must ensure that failures do not result in performance degradation or data loss locally and within an AWS Region. The solution must provide high availability at the node level and at the region level. What solution will meet these requirements?

A. Use Multi-AZ Redis replication groups with shards containing multiple nodes.
B. Use Redis shards containing multiple nodes with Redis append-only files (AOF) enabled.
C. Use a Multi-AZ Redis cluster with more than one read replica in the replication group.
D. Use Redis shards containing multiple nodes with auto-scaling enabled.

A

A. Use Multi-AZ Redis replication groups with shards containing multiple nodes.

In summary, option A (Use Multi-AZ Redis Replication Groups with shards containing multiple nodes) is the most appropriate option to achieve high availability at both the node level and the AWS Region level in Amazon ElastiCache for Redis.

Multi-AZ Redis Replication Groups: Amazon ElastiCache provides Multi-AZ support for Redis, allowing the creation of replication groups that span multiple availability zones (AZs) within a region. This guarantees high availability at a regional level.

Shards with multiple nodes: shards within the replication group can contain multiple nodes, providing scalability and redundancy at the node level. This contributes to high availability and performance.
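
A minimal boto3 sketch of such a replication group (cluster mode enabled, two shards, two replicas per shard, Multi-AZ with automatic failover); the ID, description, and node type are placeholder values.

```python
import boto3

elasticache = boto3.client("elasticache")

elasticache.create_replication_group(
    ReplicationGroupId="ha-redis",                       # placeholder
    ReplicationGroupDescription="Highly available Redis",
    Engine="redis",
    CacheNodeType="cache.r6g.large",                     # placeholder
    NumNodeGroups=2,            # shards
    ReplicasPerNodeGroup=2,     # replicas per shard, spread across AZs
    MultiAZEnabled=True,
    AutomaticFailoverEnabled=True,
)
```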

38
Q

610# A company plans to migrate to AWS and use Amazon EC2 on-demand instances for its application. During the migration testing phase, a technical team observes that the application takes a long time to start and load memory to be fully productive. Which solution will reduce the app launch time during the next testing phase?

A. Start two or more EC2 instances on demand. Enable auto-scaling features and make EC2 on-demand instances available during the next testing phase.
B. Start EC2 Spot Instances to support the application and scale the application to be available during the next testing phase.
C. Start the EC2 on-demand instances with hibernation enabled. Configure EC2 Auto Scaling warm pools during the next testing phase.
D. Start EC2 on-demand instances with capacity reservations. Start additional EC2 instances during the next testing phase.

A

C. Start the EC2 on-demand instances with hibernation enabled. Configure EC2 Auto Scaling warm pools during the next testing phase.

In summary, option C (start EC2 on-demand instances with hibernation enabled and configure EC2 Auto Scaling warm pools during the next testing phase) reduces launch time by combining hibernation with warm pools of pre-initialized instances that can respond quickly.

EC2 On-Demand Instances with Hibernation: Hibernation allows EC2 instances to persist their in-memory state to Amazon EBS. When an instance is hibernated, it can quickly resume with its previous memory state intact. This is particularly useful for reducing startup time and loading memory quickly.

EC2 Auto Scaling Warm Pools: A warm pool keeps a set of pre-initialized instances (stopped, hibernated, or running) alongside the Auto Scaling group, in a state where they can be brought into service quickly when demand increases. This helps reduce the time it takes for an instance to become fully productive.
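
A minimal boto3 sketch of attaching a hibernated warm pool to the group; the group name and size are placeholders, and the launch template is assumed to enable hibernation (HibernationOptions) with an encrypted root volume.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep two pre-initialized, hibernated instances ready next to the group.
autoscaling.put_warm_pool(
    AutoScalingGroupName="app-asg",   # placeholder
    PoolState="Hibernated",
    MinSize=2,
)
```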

39
Q

611# An enterprise’s applications run on Amazon EC2 instances in auto-scaling groups. The company notices that its apps experience traffic spikes on random days of the week. The company wants to maintain application performance during traffic surges. Which solution will meet these requirements in the MOST cost-effective way?

A. Use manual scaling to change the size of the auto-scaling group.
B. Use predictive scaling to change the size of the Auto Scaling group.
C. Use dynamic scaling to change the size of the Auto Scaling group.
D. Use scheduled scaling to change the size of the auto-scaling group.

A

C. Use dynamic scaling to change the size of the Auto Scaling group.

Dynamic Scaling: Dynamic scaling adjusts the size of the Auto Scaling group in response to changing demand, automatically adding or removing instances based on defined policies. This is well suited to traffic spikes on random days, because the group scales out when the spike arrives and scales back in afterwards, without paying for idle capacity or guessing a schedule.
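
A minimal boto3 sketch of a target tracking policy, the most common form of dynamic scaling; the group name and 50% CPU target are placeholder values.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep average CPU near 50% by adding or removing instances automatically
# whenever a random traffic spike hits.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="app-asg",        # placeholder
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)
```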

40
Q

612# An e-commerce application uses a PostgreSQL database running on an Amazon EC2 instance. During a monthly sales event, database usage increases and causes database connection issues for the application. Traffic is unpredictable for subsequent monthly sales events, which affects sales forecasting. The business needs to maintain performance when there is an unpredictable increase in traffic. Which solution solves this problem in the MOST cost-effective way?

A. Migrate the PostgreSQL database to Amazon Aurora Serverless v2.
B. Enable PostgreSQL database auto-scaling on the EC2 instance to accommodate increased usage.
C. Migrate the PostgreSQL database to Amazon RDS for PostgreSQL with a larger instance type.
D. Migrate the PostgreSQL database to Amazon Redshift to accommodate increased usage.

A

A. Migrate the PostgreSQL database to Amazon Aurora Serverless v2.

Amazon Aurora Serverless v2: Aurora Serverless v2 is designed for variable and unpredictable workloads. Automatically adjusts database capacity based on actual usage, allowing you to scale down during low demand periods and scale up during peak periods. This ensures that the application can handle increased traffic without manual intervention.
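
A minimal boto3 sketch of an Aurora Serverless v2 cluster; the identifiers, credentials, and 0.5-16 ACU range are placeholder values, and details such as engine version, networking, and parameter groups are omitted.

```python
import boto3

rds = boto3.client("rds")

# Serverless v2 cluster that scales its capacity with load.
rds.create_db_cluster(
    DBClusterIdentifier="ecommerce-aurora",          # placeholder
    Engine="aurora-postgresql",
    MasterUsername="appadmin",                       # placeholder
    MasterUserPassword="REPLACE_ME",                 # placeholder
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 16},
)

# Instances in a Serverless v2 cluster use the special db.serverless class.
rds.create_db_instance(
    DBInstanceIdentifier="ecommerce-aurora-writer",
    DBClusterIdentifier="ecommerce-aurora",
    DBInstanceClass="db.serverless",
    Engine="aurora-postgresql",
)
```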

41
Q

613# A company hosts an internal serverless application on AWS using Amazon API Gateway and AWS Lambda. Company employees report issues with high latency when they start using the app every day. The company wants to reduce latency. What solution will meet these requirements?

A. Increase the API gateway throttling limit.
B. Configure scheduled scaling to increase Lambda-provisioned concurrency before employees start using the application each day.
C. Create an Amazon CloudWatch alarm to start a Lambda function as a target for the alarm at the beginning of each day.
D. Increase the memory of the Lambda function.

A

B. Configure scheduled scaling to increase Lambda-provisioned concurrency before employees start using the application each day.

Scheduled scaling for provisioned concurrency: Provisioned concurrency ensures that a specified number of function instances are available and hot to handle requests. By configuring scheduled scaling to increase provisioned concurrency ahead of anticipated maximum usage each day, you ensure that there are enough warm instances to handle incoming requests, reducing cold starts and latency.
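
A rough sketch of scheduling provisioned concurrency with Application Auto Scaling (assuming a published alias on the function); the function name, alias, cron expression, and capacity of 50 are placeholder values.

```python
import boto3

aas = boto3.client("application-autoscaling")
resource_id = "function:report-api:live"  # placeholder function name and alias

aas.register_scalable_target(
    ServiceNamespace="lambda",
    ResourceId=resource_id,
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    MinCapacity=1,
    MaxCapacity=100,
)

# Warm up 50 provisioned instances shortly before the workday starts.
aas.put_scheduled_action(
    ServiceNamespace="lambda",
    ScheduledActionName="warm-before-office-hours",
    ResourceId=resource_id,
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    Schedule="cron(45 7 ? * MON-FRI *)",   # placeholder schedule
    ScalableTargetAction={"MinCapacity": 50, "MaxCapacity": 50},
)
```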

42
Q

614# A research company uses local devices to generate data for analysis. The company wants to use the AWS cloud to analyze the data. The devices generate .csv files and support writing the data to an SMB file share. Business analysts must be able to use SQL commands to query the data. Analysts will run queries periodically throughout the day. What combination of steps will meet these requirements in the MOST cost-effective way? (Choose three.)

A. Deploy an on-premises AWS Storage Gateway in Amazon S3 File Gateway mode.
B. Deploy an on-premises AWS Storage Gateway in Amazon FSx File Gateway mode.
C. Configure an AWS Glue crawler to create a table based on the data that is in Amazon S3.
D. Set up an Amazon EMR cluster with EMR File System (EMRFS) to query data that is in Amazon S3. Provide access to analysts.
E. Set up an Amazon Redshift cluster to query data in Amazon S3. Provide access to analysts.
F. Configure Amazon Athena to query data that is in Amazon S3. Provide access to analysts.

A

MORE FOLLOW UP NEEDED!

A. Deploy an on-premises AWS Storage Gateway in Amazon S3 File Gateway mode.
C. Configure an AWS Glue crawler to create a table based on the data that is in Amazon S3.
F. Configure Amazon Athena to query data that is in Amazon S3. Provide access to analysts.

Deploy an on-premises AWS Storage Gateway in Amazon S3 File Gateway mode (Option A): This allows on-premises devices to write data to an SMB file share, and the data is stored in Amazon S3. This option provides a scalable and cost-effective way to ingest data into the cloud. Configure an AWS Glue crawler to create a table based on data in Amazon S3 (Option C): AWS Glue can automatically discover the schema of the data in Amazon S3 and create a table in the AWS Glue data catalog. This makes it easier for analysts to query data using SQL commands.

Set up Amazon Athena to Query Data in Amazon S3 (Option F): Amazon Athena is a serverless query service that allows analysts to run SQL queries directly on data stored in Amazon S3. It’s cost-effective because you charge per query, and there’s no need to provision or manage infrastructure.
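
As a rough illustration, once the Glue crawler has cataloged the CSV data, analysts can query it with Athena; the database, table, and results bucket below are hypothetical.

import boto3

athena = boto3.client("athena")

# Hypothetical database/table created by the Glue crawler and a results bucket.
response = athena.start_query_execution(
    QueryString="SELECT device_id, AVG(reading) AS avg_reading FROM sensor_csv GROUP BY device_id",
    QueryExecutionContext={"Database": "research_data"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(response["QueryExecutionId"])  # poll get_query_execution() for completion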

43
Q

615# A company wants to use Amazon Elastic Container Service (Amazon ECS) clusters and Amazon RDS DB instances to build and run a payment processing application. The company will run the application in its local data center for compliance purposes. A solutions architect wants to use AWS Outposts as part of the solution. The solutions architect is working with the company’s operational team to build the application. What activities are the responsibility of the company’s operational team? (Choose three.)

A. Provide resilient power and network connectivity to the Outposts racks
B. Management of the virtualization hypervisor, storage systems, and AWS services running on the Outposts
C. Physical security and access controls of the Outposts data center environment
D. Availability of Outposts infrastructure, including power supplies, servers, and networking equipment within Outposts racks
E. Physical maintenance of Outposts components
F. Provide additional capacity for Amazon ECS clusters to mitigate server failures and maintenance events

A

A. Provide resilient power and network connectivity to the Outposts racks
C. Physical security and access controls of the Outposts data center environment
F. Provide additional capacity for Amazon ECS clusters to mitigate server failures and maintenance events

From https://docs.aws.amazon.com/whitepapers/latest/aws-outposts-high-availability-design/aws-outposts-high-availability-design.html With Outposts, you are responsible for providing resilient power and network connectivity to the Outpost racks to meet your availability requirements for workloads running on Outposts. You are responsible for the physical security and access controls of the data center environment. You must provide sufficient power, space, and cooling to keep the Outpost operational and network connections to connect the Outpost back to the Region. Since Outpost capacity is finite and determined by the size and number of racks AWS installs at your site, you must decide how much EC2, EBS, and S3 on Outposts capacity you need to run your initial workloads, accommodate future growth, and to provide extra capacity to mitigate server failures and maintenance events.

44
Q

616# A company is planning to migrate a TCP-based application to the company’s VPC. The application is publicly accessible on a non-standard TCP port through a hardware device in the company’s data center. This public endpoint can process up to 3 million requests per second with low latency. The company requires the same level of performance for the new public endpoint on AWS. What should a solutions architect recommend to meet this requirement?

A. Implement a network load balancer (NLB). Configure the NLB to be publicly accessible over the TCP port that the application requires.
B. Implement an application load balancer (ALB). Configure the ALB to be publicly accessible over the TCP port that the application requires.
C. Deploy an Amazon CloudFront distribution that listens on the TCP port that the application requires. Use an application load balancer as the origin.
D. Deploy an Amazon API Gateway API that is configured with the TCP port required by the application. Configure AWS Lambda functions with provisioned concurrency to process requests.

A

A. Implement a network load balancer (NLB). Configure the NLB to be publicly accessible over the TCP port that the application requires.

Network Load Balancer (NLB): NLB is designed to handle TCP traffic with extremely low latency. It is a Layer 4 (TCP/UDP) load balancer that provides high performance and scales horizontally. NLB is suitable for scenarios where low latency and high throughput are critical, making it a good choice for TCP-based applications with strict performance requirements.

Publicly Accessible: NLB can be configured to be publicly accessible, allowing it to accept incoming requests from the Internet.

TCP Port Configuration: NLB allows you to configure it to listen on the specific non-standard TCP port required by the application.

Options B, C, and D are less suitable for the given requirements: Application Load Balancer (ALB) (Option B): ALB is designed for HTTP/HTTPS traffic and operates at the application layer (Layer 7). It introduces additional overhead and is not suitable for non-HTTP, TCP-based applications. Amazon CloudFront (Option C): CloudFront is a content delivery network designed primarily for HTTP/HTTPS content delivery, so it is not the best option for handling arbitrary TCP traffic. Amazon API Gateway (Option D): API Gateway is designed for RESTful APIs and is not optimized for arbitrary TCP traffic, so it may not provide the low-latency performance required for this scenario.

Therefore, NLB is the recommended option to maintain high throughput and low latency for a TCP-based application on a non-standard port.

NLB is able to handle up to tens of millions of requests per second, while providing high performance and low latency. https://aws.amazon.com/blogs/aws/new-network-load-balancer-effortless-scaling-to-millions-of-requests-per-second/

https://aws.amazon.com/elasticloadbalancing/network-load-balancer Network Load Balancer operates at the connection level (Layer 4), routing connections to targets (Amazon EC2 instances, microservices, and containers) within Amazon VPC, based on IP protocol data. Ideal for load balancing of both TCP and UDP traffic, Network Load Balancer is capable of handling millions of requests per second while maintaining ultra-low latencies. Network Load Balancer is optimized to handle sudden and volatile traffic patterns while using a single static IP address per Availability Zone. It is integrated with other popular AWS services such as Auto Scaling, Amazon EC2 Container Service (ECS), Amazon CloudFormation, and AWS Certificate Manager (ACM).
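
A hedged boto3 sketch of creating such an NLB; the subnet IDs, listener port, and target group ARN are placeholders, not values from the question.

import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical internet-facing NLB in two subnets.
nlb = elbv2.create_load_balancer(
    Name="payments-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
)

# TCP listener on the non-standard port, forwarding to a pre-created target group.
elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="TCP",
    Port=8443,  # hypothetical non-standard TCP port
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/payments/abc1234567890",
    }],
)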

45
Q

617# A company runs its critical database on an Amazon RDS for PostgreSQL DB instance. The company wants to migrate to Amazon Aurora PostgreSQL with minimal downtime and data loss. Which solution will meet these requirements with the LEAST operational overhead?

A. Create a DB snapshot of the RDS instance for the PostgreSQL database to populate a new Aurora PostgreSQL DB cluster.
B. Create an Aurora read replica of the RDS for PostgreSQL DB instance. Promote the Aurora read replica to a new Aurora PostgreSQL DB cluster.
C. Use data import from Amazon S3 to migrate the database to an Aurora PostgreSQL database cluster.
D. Use the pg_dump utility to backup the RDS for PostgreSQL database. Restore the backup to a new Aurora PostgreSQL database cluster.

A

B. Create an Aurora read replica of the RDS for PostgreSQL DB instance. Promote the Aurora read replica to a new Aurora PostgreSQL DB cluster.

Aurora Read Replica: Creates an Aurora Read Replica of the existing RDS instance for the PostgreSQL database. This read replica is continually updated with changes to the source database.

Promotion: Promote the Aurora read replica to become the primary instance of the new Aurora PostgreSQL database cluster. This process involves minimal downtime as it does not affect the source RDS for the PostgreSQL database instance.

Advantages of Option B:
Low Downtime: Read replica can be promoted with minimal downtime, allowing for a smooth transition.

Continuous Replication: Read replication ensures continuous replication of changes from the source database to the Aurora PostgreSQL database cluster.

Operating Overhead: This approach minimizes operating overhead compared to other options. Take advantage of Aurora’s replication capabilities for seamless migration.
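
For illustration only, the migration could be scripted roughly as follows with boto3 (identifiers and ARNs are hypothetical; the promotion is done once replica lag reaches zero).

import boto3

rds = boto3.client("rds")

# Create an Aurora PostgreSQL cluster that replicates from the RDS instance.
rds.create_db_cluster(
    DBClusterIdentifier="aurora-pg-replica",
    Engine="aurora-postgresql",
    ReplicationSourceIdentifier="arn:aws:rds:us-east-1:111122223333:db:prod-postgres",
)

# Add an instance to the replica cluster so it can serve traffic after promotion.
rds.create_db_instance(
    DBInstanceIdentifier="aurora-pg-replica-1",
    DBClusterIdentifier="aurora-pg-replica",
    Engine="aurora-postgresql",
    DBInstanceClass="db.r6g.large",
)

# Cut over once replication lag is zero.
rds.promote_read_replica_db_cluster(DBClusterIdentifier="aurora-pg-replica")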

46
Q

618# An enterprise’s infrastructure consists of hundreds of Amazon EC2 instances using Amazon Elastic Block Store (Amazon EBS) storage. A solutions architect must ensure that each EC2 instance can be recovered after a disaster. What should the solutions architect do to meet this requirement with the LEAST amount of effort?

A. Take a snapshot of the EBS storage that is attached to each EC2 instance. Create an AWS CloudFormation template to launch new EC2 instances from EBS storage.
B. Take a snapshot of the EBS storage that is attached to each EC2 instance. Use AWS Elastic Beanstalk to establish the environment based on the EC2 template and attach the EBS storage.
C. Use AWS Backup to configure a backup plan for the entire EC2 instance group. Use the AWS Backup API or AWS CLI to speed up the process of restoring multiple EC2 instances.
D. Create an AWS Lambda function to take a snapshot of the EBS storage that is connected to each EC2 instance and copy the Amazon Machine Images (AMIs). Create another Lambda function to perform the restores with the copied AMIs and attach the EBS storage.

A

C. Use AWS Backup to configure a backup plan for the entire EC2 instance group. Use the AWS Backup API or AWS CLI to speed up the process of restoring multiple EC2 instances.

AWS Backup: AWS Backup is a fully managed backup service that centralizes and automates data backup across all AWS services. Supports backup of Amazon EBS volumes and enables efficient backup management.

Backup plan: Create a backup plan in AWS Backup that includes the entire EC2 instance group. This ensures a centralized and consistent backup strategy for all instances.

API or CLI: AWS Backup provides an API and CLI that can be used to automate and speed up the process of restoring multiple EC2 instances. This allows for a simplified disaster recovery process.

Advantages of Option C:
Centralized Management: AWS Backup provides a centralized management interface for backup plans, making it easy to manage and track backups of a large number of resources.

Automation: Using the AWS Backup API or CLI allows automation of backup and restore processes, reducing manual effort.

Consistent backups: AWS Backup ensures consistent and reliable backups of EBS volumes associated with EC2 instances.
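
A minimal sketch, assuming the EC2 instances are tagged for backup and the default vault and default AWS Backup service role are used; all names are hypothetical.

import boto3

backup = boto3.client("backup")

# Daily backup plan.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "ec2-dr-plan",
        "Rules": [{
            "RuleName": "daily",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 3 * * ? *)",
        }],
    }
)

# Select every resource tagged backup=true (e.g., the EC2 instances and their EBS volumes).
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "all-tagged-instances",
        "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
        "ListOfTags": [
            {"ConditionType": "STRINGEQUALS", "ConditionKey": "backup", "ConditionValue": "true"}
        ],
    },
)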

47
Q

619# A company recently migrated to the AWS cloud. The company wants a serverless solution for large-scale on-demand parallel processing of a semi-structured data set. The data consists of logs, media files, sales transactions, and IoT sensor data that is stored in Amazon S3. The company wants the solution to process thousands of items in the data set in parallel. Which solution will meet these requirements with the MOST operational efficiency?

A. Use the AWS Step Functions Map state in Inline mode to process data in parallel.
B. Use the AWS Step Functions Map state in Distributed mode to process data in parallel.
C. Use AWS Glue to process data in parallel.
D. Use multiple AWS Lambda functions to process data in parallel.

A

B. Use the AWS Step Functions Map state in Distributed mode to process data in parallel.

AWS Step Functions allow you to orchestrate and scale distributed processing using map state. Map state can process elements in a large data set in parallel by distributing work across multiple resources.

Using map state in distributed mode will automatically take care of parallel processing and scaling. Step Functions will add more workers to process the data as needed.

Step Functions is serverless, so there are no servers to manage. It will automatically scale based on demand.

https://docs.aws.amazon.com/step-functions/latest/dg/use-dist-map-orchestrate-large-scale-parallel-workloads.html
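
A hedged example of what a Distributed Map state machine definition might look like; the bucket, Lambda function, role ARN, and concurrency values are assumptions, not values from the question.

import json
import boto3

sfn = boto3.client("stepfunctions")

# The Distributed Map state lists objects in S3 and fans out one child
# workflow execution per object.
definition = {
    "StartAt": "ProcessDataset",
    "States": {
        "ProcessDataset": {
            "Type": "Map",
            "ItemReader": {
                "Resource": "arn:aws:states:::s3:listObjectsV2",
                "Parameters": {"Bucket": "example-dataset-bucket"},
            },
            "ItemProcessor": {
                "ProcessorConfig": {"Mode": "DISTRIBUTED", "ExecutionType": "STANDARD"},
                "StartAt": "ProcessItem",
                "States": {
                    "ProcessItem": {
                        "Type": "Task",
                        "Resource": "arn:aws:lambda:us-east-1:111122223333:function:process-item",
                        "End": True,
                    }
                },
            },
            "MaxConcurrency": 1000,
            "End": True,
        }
    },
}

sfn.create_state_machine(
    name="parallel-dataset-processing",
    roleArn="arn:aws:iam::111122223333:role/sfn-dist-map-role",
    definition=json.dumps(definition),
)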

48
Q

620# A company will migrate 10PB of data to Amazon S3 in 6 weeks. The current data center has a 500 Mbps uplink to the Internet. Other local applications share the uplink. The company can use 80% of the Internet bandwidth for this one-time migration task. What solution will meet these requirements?

A. Configure AWS DataSync to migrate data to Amazon S3 and verify it automatically.
B. Use rsync to transfer data directly to Amazon S3.
C. Use the AWS CLI and multiple copy processes to send data directly to Amazon S3.
D. Order multiple AWS Snowball devices. Copy data to devices. Send the devices to AWS to copy the data to Amazon S3.

A

D. Order multiple AWS Snowball devices. Copy data to devices. Send the devices to AWS to copy the data to Amazon S3.

At 80% of the 500 Mbps uplink (400 Mbps), the theoretical maximum is only about 4.3 TB per day, and real-world throughput is usually lower. Moving 10 PB online would therefore take on the order of 2,300 days (roughly 330 weeks), far beyond the 6-week window, so shipping the data on Snowball devices is the only viable option.
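
The arithmetic can be checked quickly; this assumes decimal units and perfect efficiency, so the real figure is even worse.

# Back-of-the-envelope transfer-time check (10 PB = 10,000 TB).
usable_mbps = 500 * 0.8                        # 400 Mbps usable
tb_per_day = usable_mbps / 8 / 1e6 * 86_400    # ~4.3 TB per day
days_needed = 10_000 / tb_per_day              # ~2,300 days (~330 weeks)
print(round(tb_per_day, 1), round(days_needed))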

49
Q

621# A company has several on-premises Internet Small Computer Systems Interface (iSCSI) network storage servers. The company wants to reduce the number of these servers by moving to the AWS cloud. A solutions architect must provide low-latency access to frequently used data and reduce dependency on on-premises servers with minimal infrastructure changes. What solution will meet these requirements?

A. Deploy an Amazon S3 file gateway.
B. Deploy Amazon Elastic Block Store (Amazon EBS) storage with backups to Amazon S3.
C. Deploy an AWS Storage Gateway volume gateway that is configured with stored volumes.
D. Deploy an AWS Storage Gateway volume gateway that is configured with cached volumes.

A

D. Deploy an AWS Storage Gateway volume gateway that is configured with cached volumes.

AWS Storage Gateway: AWS Storage Gateway is a hybrid cloud storage service that provides seamless, secure integration between on-premises IT environments and AWS storage services. Supports different gateway configurations, including volume gateways.

Cached volumes provide low-latency access to frequently used data because frequently accessed data is cached locally on premises, while the entire data set is stored durably in Amazon S3, ensuring durability and accessibility.

Minimal Changes to Infrastructure:
Using a cached volume gateway minimizes the need for significant changes to existing infrastructure. It allows the company to keep frequently accessed data on-premises while taking advantage of the scalability and durability of Amazon S3.

Incorrect Option C (AWS Storage Gateway volume gateway with stored volumes): Stored volumes keep the entire data set on premises, so they do not reduce the dependency on on-premises storage servers.

Therefore, option D, which uses an AWS Storage Gateway volume gateway with cached volumes, is the most appropriate option for the given requirements.

50
Q

622# A solutions architect is designing an application that will allow business users to upload objects to Amazon S3. The solution must maximize the durability of the object. Objects must also be available at any time and for any period of time. Users will access objects frequently within the first 30 days after objects are uploaded, but users are much less likely to access objects older than 30 days. Which solution meets these requirements in the MOST cost-effective way?

A. Store all objects in S3 Standard with an S3 lifecycle rule to transition objects to S3 Glacier after 30 days.
B. Store all objects in S3 Standard with an S3 lifecycle rule to transition the objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.
C. Store all objects in S3 Standard with an S3 lifecycle rule to transition objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days.
D. Store all objects in S3 Intelligent-Tiering with an S3 lifecycle rule to transition the objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.

A

B. Store all objects in S3 Standard with an S3 lifecycle rule to transition the objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.

Before you transition objects to S3 Standard-IA or S3 One Zone-IA, you must store them for at least 30 days in Amazon S3. For example, you cannot create a Lifecycle rule to transition objects to the S3 Standard-IA storage class one day after you create them. Amazon S3 doesn’t support this transition within the first 30 days because newer objects are often accessed more frequently or deleted sooner than is suitable for S3 Standard-IA or S3 One Zone-IA storage. Similarly, if you are transitioning noncurrent objects (in versioned buckets), you can transition only objects that are at least 30 days noncurrent to S3 Standard-IA or S3 One Zone-IA storage. https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-considerations.html
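
For illustration, the lifecycle rule from option B could be applied with boto3 roughly as follows; the bucket name is hypothetical.

import boto3

s3 = boto3.client("s3")

# Transition every object to S3 Standard-IA 30 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-uploads-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "to-standard-ia-after-30-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
        }]
    },
)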

51
Q

623# A company has migrated a two-tier application from its on-premises data center to the AWS cloud. The data tier is a multi-AZ implementation of Amazon RDS for Oracle with 12 TB of general-purpose Amazon Elastic Block Store (Amazon EBS) SSD storage. The application is designed to process and store documents in the database as large binary objects (blobs) with an average document size of 6 MB. Database size has grown over time, reducing performance and increasing the cost of storage. The company must improve database performance and needs a solution that is highly available and resilient. Which solution will meet these requirements in the MOST cost-effective way?

A. Reduce the size of the RDS database instance. Increase storage capacity to 24 TiB. Change the storage type to Magnetic.
B. Increase the size of the RDS database instance. Increase storage capacity to 24 TiB. Change the storage type to Provisioned IOPS.
C. Create an Amazon S3 bucket. Update the app to store documents in S3 bucket. Store the object metadata in the existing database.
D. Create an Amazon DynamoDB table. Update the application to use DynamoDB. Use AWS Database Migration Service (AWS DMS) to migrate data from Oracle database to DynamoDB.

A

C. Create an Amazon S3 bucket. Update the app to store documents in S3 bucket. Store the object metadata in the existing database.

Storing blobs in the database is more expensive than storing them in S3 with references in the database. DynamoDB limits each item to 400 KB, which cannot hold 6 MB documents, so D is wrong.

Considerations: Storing large objects (blobs) in Amazon S3 is a scalable and cost-effective solution. Storing metadata in the existing database allows you to maintain the necessary information for each document. The load on the RDS instance has been reduced as large objects are stored in S3.

Conclusion: This option is recommended as it leverages the strengths of Amazon S3 and RDS, providing scalability, cost-effectiveness, and maintaining metadata. Option C stands out as the most suitable to address the requirements while taking into account factors such as performance, scalability and cost-effectiveness.

52
Q

624# A company has an application that serves customers that are deployed in more than 20,000 retail stores around the world. The application consists of backend web services that are exposed over HTTPS on port 443. The application is hosted on Amazon EC2 instances behind an application load balancer (ALB). Points of sale communicate with the web application over the public Internet. The company allows each retail location to register the IP address to which the retail location has been assigned by its local ISP. The company’s security team recommends increasing application endpoint security by restricting access to only IP addresses registered by retail locations. What should a solutions architect do to meet these requirements?

A. Associate an AWS WAF web ACL with the ALB. Use IP rule sets in the ALB to filter traffic. Update the IP addresses in the rule to include the registered IP addresses.
B. Deploy AWS Firewall Manager to manage the ALB. Configure firewall rules to restrict traffic to the ALB. Modify the firewall rules to include the registered IP addresses.
C. Store the IP addresses in an Amazon DynamoDB table. Configure an AWS Lambda authorization function in the ALB to validate that incoming requests are from the registered IP addresses.
D. Configure the network ACL on the subnet that contains the ALB public interface. Update the inbound rules in the network ACL with entries for each of the registered IP addresses.

A

A. Associate an AWS WAF web ACL with the ALB. Use IP rule sets in the ALB to filter traffic. Update the IP addresses in the rule to include the registered IP addresses.

AWS Web Application Firewall (WAF) is designed to protect web applications from common web exploits. By associating a WAF web ACL with the ALB, you can configure IP rule sets to filter incoming traffic based on source IP addresses.
Updating the IP addresses in the rule to include registered IP addresses allows you to control and restrict access to only authorized locations. Conclusion: This option provides a secure and scalable solution to restrict web application access based on registered IP addresses.
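
A rough sketch of the WAF setup with boto3; the IP addresses, names, and ALB ARN are hypothetical. The web ACL blocks by default and allows only the registered IP set.

import boto3

wafv2 = boto3.client("wafv2")

# IP set holding the registered retail-location addresses.
ip_set = wafv2.create_ip_set(
    Name="registered-store-ips",
    Scope="REGIONAL",
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.10/32", "198.51.100.24/32"],
)

# Web ACL: default Block, single Allow rule referencing the IP set.
acl = wafv2.create_web_acl(
    Name="retail-allowlist",
    Scope="REGIONAL",
    DefaultAction={"Block": {}},
    Rules=[{
        "Name": "allow-registered-ips",
        "Priority": 0,
        "Statement": {"IPSetReferenceStatement": {"ARN": ip_set["Summary"]["ARN"]}},
        "Action": {"Allow": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "allow-registered-ips",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "retail-allowlist",
    },
)

# Attach the web ACL to the ALB.
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/retail-alb/abc1234567890",
)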

53
Q

625# A company is building a data analytics platform on AWS using AWS Lake Formation. The platform will ingest data from different sources, such as Amazon S3 and Amazon RDS. The company needs a secure solution to prevent access to parts of the data that contain sensitive information. Which solution will meet these requirements with the LEAST operational overhead?

A. Create an IAM role that includes permissions to access Lake Formation tables.
B. Create data filters to implement row-level security and cell-level security.
C. Create an AWS Lambda function that removes sensitive information before Lake Formation ingests the data.
D. Create an AWS Lambda function that periodically queries and deletes sensitive information from Lake Formation tables.

A

B. Create data filters to implement row-level security and cell-level security.

Data filters in AWS Lake Formation are designed to implement row-level and cell-level security. This option aligns with the requirement to control access at the data level and is an appropriate approach for this scenario.

54
Q

626# A company deploys Amazon EC2 instances running in a VPC. EC2 instances upload source data to Amazon S3 buckets so that the data can be processed in the future. In accordance with compliance laws, data must not be transmitted over the public Internet. Servers in the company’s on-premises data center will consume the output of an application running on the EC2 instances. What solution will meet these requirements?

A. Deploy an interface VPC endpoint for Amazon EC2. Create an AWS Site-to-Site VPN connection between the on-premises network and the VPC.
B. Deploy a gateway VPC endpoint for Amazon S3. Set up an AWS Direct Connect connection between your on-premises network and the VPC.
C. Configure an AWS Transit Gateway connection from the VPC to the S3 buckets. Create an AWS Site-to-Site VPN connection between the on-premises network and the VPC.
D. Configure EC2 proxy instances that have routes to NAT gateways. Configure EC2 proxy instances to fetch data from S3 and feed the application instances.

A

B. Deploy a gateway VPC endpoint for Amazon S3. Set up an AWS Direct Connect connection between your on-premises network and the VPC.

Deploy a gateway VPC endpoint for Amazon S3: This allows EC2 instances in the VPC to access Amazon S3 directly without traversing the public Internet. Ensures that data is transmitted securely over the AWS network. Set up an AWS Direct Connect connection: Direct Connect provides a dedicated network connection between the on-premises network and the VPC, ensuring a private, trusted link.
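
Creating the gateway endpoint itself is a single call; the VPC, Region, and route table IDs below are hypothetical.

import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint for S3; the listed route tables get an S3 prefix-list route.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)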

55
Q

627# A company has an application with a REST-based interface that allows it to receive data in near real time from an external provider. Once received, the application processes and stores the data for later analysis. The application runs on Amazon EC2 instances. The third-party provider has received many 503 Service Unavailable errors when sending data to the application. When the data volume increases, the computing capacity reaches its maximum limit and the application cannot process all requests. What design should a solutions architect recommend to provide a more scalable solution?

A. Use Amazon Kinesis Data Streams to ingest the data. Process data using AWS Lambda functions.
B. Use Amazon API Gateway on top of the existing application. Create a usage plan with a quota limit for the third-party provider.
C. Use Amazon Simple Notification Service (Amazon SNS) to ingest the data. Put the EC2 instances in an auto-scaling group behind an application load balancer.
D. Repackage the application as a container. Deploy the application using Amazon Elastic Container Service (Amazon ECS) using the EC2 release type with an auto-scaling group.

A

A. Use Amazon Kinesis Data Streams to ingest the data. Process data using AWS Lambda functions.

Amazon Kinesis Data Streams can handle large volumes of streaming data, providing a scalable and resilient solution. AWS Lambda functions can be triggered by Kinesis Data Streams, allowing the application to process data in near real time. Lambda automatically scales based on the rate of incoming events, ensuring the system can handle spikes in data volume.

Amazon Kinesis Data Streams, for ingesting data and processing it with AWS Lambda functions, is the recommended design for handling near real-time streaming data at scale. Provides the scalability and resilience needed to process large volumes of data.

The keyword is “real time”. Kinesis data streams are meant for real time data processing.

https://aws.amazon.com/about-aws/whats-new/2021/11/amazon-kinesis-data-streams-on-demand/

56
Q

628# A company has an application running on Amazon EC2 instances in a private subnet. The application needs to process sensitive information from an Amazon S3 bucket. The application must not use the Internet to connect to the S3 bucket. What solution will meet these requirements?

A. Configure an Internet gateway. Update the S3 bucket policy to allow access from the Internet gateway. Update the app to use the new Internet gateway.
B. Set up a VPN connection. Update the S3 bucket policy to allow access from the VPN connection. Update the app to use the new VPN connection.
C. Configure a NAT gateway. Update the S3 bucket policy to allow access from the NAT gateway. Update the app to use the new NAT gateway.
D. Configure a VPC endpoint. Update the S3 bucket policy to allow access from the VPC endpoint. Update the application to use the new VPC endpoint.

A

D. Configure a VPC endpoint. Update the S3 bucket policy to allow access from the VPC endpoint. Update the application to use the new VPC endpoint.

A VPC endpoint allows your EC2 instances to connect to services like Amazon S3 directly within the AWS network without traversing the Internet. An Internet gateway or NAT gateway is not required for this solution, ensuring that the application does not use the Internet to connect to the S3 bucket. Improves security by keeping traffic within the AWS network and avoiding exposure to the public Internet.

57
Q

629# A company uses Amazon Elastic Kubernetes Service (Amazon EKS) to run a container application. The EKS cluster stores sensitive information in the Kubernetes secrets object. The company wants to make sure that the information is encrypted. Which solution will meet these requirements with the LEAST operational overhead?

A. Use the container application to encrypt information using AWS Key Management Service (AWS KMS).
B. Enable secret encryption on the EKS cluster using the AWS Key Management Service (AWS KMS).
C. Implement an AWS Lambda function to encrypt information using the AWS Key Management Service (AWS KMS).
D. Use the AWS Systems Manager parameter store to encrypt information using the AWS Key Management Service (AWS KMS).

A

B. Enable secret encryption on the EKS cluster using the AWS Key Management Service (AWS KMS).

Amazon EKS offers the option to encrypt Kubernetes secrets at rest using AWS Key Management Service (AWS KMS). This is a native and managed solution within the EKS service, reducing operational overhead. Kubernetes secrets are envelope-encrypted with a KMS key that you specify when you enable secrets encryption on the cluster. This ensures that sensitive information stored in Kubernetes secrets is encrypted, providing security.
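
On an existing cluster this can be enabled with one API call; the cluster name and KMS key ARN below are hypothetical.

import boto3

eks = boto3.client("eks")

# Enable envelope encryption of Kubernetes secrets with a customer-managed KMS key.
eks.associate_encryption_config(
    clusterName="prod-eks",
    encryptionConfig=[{
        "resources": ["secrets"],
        "provider": {"keyArn": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"},
    }],
)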

58
Q

630# A company is designing a new multi-tier web application that consists of the following components:
* Web and application servers running on Amazon EC2 instances as part of Auto Scaling groups.
* An Amazon RDS DB instance for data storage.

A solutions architect needs to limit access to application servers so that only web servers can access them. What solution will meet these requirements?

A. Deploy AWS PrivateLink in front of the application servers. Configure the network ACL to allow only web servers to access application servers.

B. Deploy a VPC endpoint in front of the application servers. Configure the security group to allow only web servers to access application servers.

C. Deploy a network load balancer with a target group that contains the auto-scaling group of application servers. Configure the network ACL to allow only web servers to access application servers.

D. Deploy an application load balancer with a target group that contains the auto-scaling group of application servers. Configure the security group to allow only web servers to access application servers.

A

D. Deploy an application load balancer with a target group that contains the auto-scaling group of application servers. Configure the security group to allow only web servers to access application servers.

An application load balancer (ALB) can be used to distribute incoming web traffic across multiple Amazon EC2 instances. The ALB can be configured with a target group that contains the Auto Scaling group of application servers. Security groups can be used to control inbound and outbound traffic to instances. By configuring the security group associated with the application servers to allow inbound traffic only from the web servers' security group, you limit access to the web servers alone.
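
A minimal sketch of that security group rule (group IDs and port are hypothetical): the application tier accepts traffic only when the source is the web tier's security group.

import boto3

ec2 = boto3.client("ec2")

# Application-tier SG accepts TCP 8080 only from the web-tier SG.
ec2.authorize_security_group_ingress(
    GroupId="sg-0app1234567890abc",  # application servers
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8080,
        "ToPort": 8080,
        "UserIdGroupPairs": [{"GroupId": "sg-0web1234567890abc"}],  # web servers
    }],
)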

59
Q

631# A company runs a critical, customer-facing application on Amazon Elastic Kubernetes Service (Amazon EKS). The application has a microservices architecture. The company needs to implement a solution that collects, aggregates, and summarizes application metrics and logs in a centralized location. Which solution meets these requirements?

A. Run the Amazon CloudWatch agent on the existing EKS cluster. View metrics and logs in the CloudWatch console.
B. Run AWS App Mesh on the existing EKS cluster. View metrics and logs in the App Mesh console.
C. Configure AWS CloudTrail to capture data events. Query CloudTrail using the Amazon OpenSearch service.
D. Configure Amazon CloudWatch Container Insights on the existing EKS cluster. View metrics and logs in the CloudWatch console.

A

D. Configure Amazon CloudWatch Container Insights on the existing EKS cluster. View metrics and logs in the CloudWatch console.

Amazon CloudWatch Container Insights provides a comprehensive solution for monitoring and analyzing containerized applications, including those running on Amazon Elastic Kubernetes Service (Amazon EKS). Collects performance metrics, logs, and events from EKS clusters and containerized applications, allowing you to gain insight into their performance and health. CloudWatch Container Insights integrates with CloudWatch Logs, allowing you to view logs and metrics in the CloudWatch console for analysis. Provides a centralized location to collect, aggregate, and summarize metrics and logs for your customer-facing application’s microservices architecture.

60
Q

632# A company has deployed its new product on AWS. The product runs in an auto-scaling group behind a network load balancer. The company stores product objects in an Amazon S3 bucket. The company recently experienced malicious attacks against its systems. The company needs a solution that continuously monitors malicious activity in the AWS account, workloads, and S3 bucket access patterns. The solution should also report suspicious activity and display the information in a dashboard. What solution will meet these requirements?

A. Configure Amazon Macie to monitor and report findings to AWS Config.
B. Configure Amazon Inspector to monitor and report findings to AWS CloudTrail.
C. Configure Amazon GuardDuty to monitor and report findings to AWS Security Hub.
D. Configure AWS Config to monitor and report findings to Amazon EventBridge.

A

C. Configure Amazon GuardDuty to monitor and report findings to AWS Security Hub.

Amazon GuardDuty is a threat detection service that continuously monitors malicious activity and unauthorized behavior in AWS accounts. Analyzes VPC flow logs, AWS CloudTrail event logs, and DNS logs for potential threats. GuardDuty findings can be sent to AWS Security Hub, which acts as a central hub for monitoring security alerts and compliance status across all AWS accounts. AWS Security Hub consolidates and prioritizes findings from multiple AWS services, including GuardDuty, and provides a unified view of security alerts. Security Hub can integrate with third-party security tools and allows the creation of custom actions to remediate security findings. This solution provides continuous monitoring, detection, and reporting of malicious activities in your AWS account, including S3 bucket access patterns.

61
Q

633# A company wants to migrate an on-premises data center to AWS. The data center houses a storage server that stores data on an NFS-based file system. The storage server contains 200 GB of data. The company needs to migrate data without interruption to existing services. Various resources on AWS must be able to access data using the NFS protocol. What combination of steps will most cost-effectively meet these requirements? (Choose two.)

A. Create an Amazon FSx for Lustre file system.

B. Create an Amazon Elastic File System (Amazon EFS) file system.

C. Create an Amazon S3 bucket to receive the data.

D. Manually use an operating system copy command to push the data to the AWS destination.

E. Install an AWS DataSync agent in the on-premises data center. Use a data synchronization task between on-premises and AWS.

A

B. Create an Amazon Elastic File System (Amazon EFS) file system.
E. Install an AWS DataSync agent in the on-premises data center. Use a data synchronization task between on-premises and AWS.

Option B: Amazon EFS provides a fully managed, scalable NFS file system that can be mounted by multiple Amazon EC2 instances simultaneously. You can create an Amazon EFS file system and then mount it to the necessary AWS resources.
Option E: AWS DataSync is a data transfer service that simplifies and accelerates data migration between on-premises storage systems and AWS. By installing a DataSync agent in your on-premises data center, you can use DataSync tasks to efficiently transfer data to Amazon EFS. This approach helps minimize downtime and ensure a smooth migration.

62
Q

634# A company wants to use Amazon FSx for Windows File Server for its Amazon EC2 instances that have an SMB file share mounted as a volume in the us-east-1 region. The company has a recovery point objective (RPO) of 5 minutes for planned system maintenance or unplanned service interruptions. The company needs to replicate the file system in the us-west-2 region. Replicated data must not be deleted by any user for 5 years. What solution will meet these requirements?

A. Create an FSx file system for Windows File Server on us-east-1 that has a Single-AZ 2 deployment type. Use AWS Backup to create a daily backup plan that includes a backup rule that copies the backup to us-west-2. Configure AWS Backup Vault Lock in compliance mode for a target vault on us-west-2. Set a minimum duration of 5 years.
B. Create an FSx file system for Windows File Server on us-east-1 that has a Multi-AZ deployment type. Use AWS Backup to create a daily backup plan that includes a backup rule that copies the backup to us-west-2. Configure AWS Backup Vault Lock in governance mode for a target vault on us-west-2. Set a minimum duration of 5 years.
C. Create an FSx file system for Windows File Server on us-east-1 that has a Multi-AZ deployment type. Use AWS Backup to create a daily backup plan that includes a backup rule that copies the backup to us-west-2. Configure AWS Backup Vault Lock in compliance mode for a target vault on us-west-2. Set a minimum duration of 5 years.
D. Create an FSx file system for Windows File Server on us-east-1 that has a Single-AZ 2 deployment type. Use AWS Backup to create a daily backup plan that includes a backup rule that copies the backup to us-west-2. Configure AWS Backup Vault Lock in governance mode for a target vault on us-west-2. Set a minimum duration of 5 years.

A

C. Create an FSx file system for Windows File Server on us-east-1 that has a Multi-AZ deployment type. Use AWS Backup to create a daily backup plan that includes a backup rule that copies the backup to us-west-2. Configure AWS Backup Vault Lock in compliance mode for a target vault on us-west-2. Set a minimum duration of 5 years.

A Multi-AZ deployment type in us-east-1 provides high availability within the region. AWS Backup can be used to automate the backup process and create a backup copy on us-west-2. Using AWS Backup Vault Lock in compliance mode ensures that data is retained for the specified duration (5 years) and cannot be deleted by any user. In summary, Option C with Multi-AZ deployment and compliance mode for Vault Lock is considered the most robust solution to ensure high availability and long-term data retention with strict controls.

63
Q

635# A solutions architect is designing a security solution for a company that wants to provide developers with individual AWS accounts across AWS organizations, while maintaining standard security controls. Because individual developers will have AWS account root-level access to their own accounts, the solutions architect wants to ensure that the mandatory AWS CloudTrail settings that apply to new developer accounts are not changed. What action meets these requirements?

A. Create an IAM policy that prohibits changes to CloudTrail and attach it to the root user.
B. Create a new trail in CloudTrail from developer accounts with the organization trails option enabled.
C. Create a service control policy (SCP) that prohibits changes to CloudTrail and attach it to the developer accounts.
D. Create a service-linked role for CloudTrail with a policy condition that allows changes only from an Amazon Resource Name (ARN) in the management account.

A

C. Create a service control policy (SCP) that prohibits changes to CloudTrail and attach it to the developer accounts.

Service control policies (SCPs) are applied at the root level of an AWS organization to set fine-grained permissions for all accounts in the organization. By creating an SCP that explicitly prohibits changes to CloudTrail, you can enforce this policy across all developer accounts. This approach ensures that even if individual developers have root access to their AWS accounts, they will not be able to modify CloudTrail settings due to SCP restrictions.
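
For illustration, an SCP like the following could be created and attached with boto3; the denied action list, policy name, and OU ID are illustrative assumptions, not the exam's answer content.

import json
import boto3

org = boto3.client("organizations")

# Hypothetical SCP: deny changes to CloudTrail in all attached accounts.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": [
            "cloudtrail:StopLogging",
            "cloudtrail:DeleteTrail",
            "cloudtrail:UpdateTrail",
            "cloudtrail:PutEventSelectors",
        ],
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Name="deny-cloudtrail-changes",
    Description="Prevent developers from altering mandatory CloudTrail settings",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach to the OU (or accounts) that holds the developer accounts.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-examp-12345678",
)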

64
Q

636# A company is planning to deploy a business-critical application to the AWS cloud. The application requires durable storage with consistent, low-latency performance. What type of storage should a solutions architect recommend to meet these requirements?

A. Instance store volume
B. Amazon ElastiCache for the Memcached cluster
C. Provisioned IOPS SSD Amazon Elastic Block Store (Amazon EBS) volume
D. Amazon Elastic Block Store (Amazon EBS) optimized hard drive volume

A

C. Provisioned IOPS SSD Amazon Elastic Block Store (Amazon EBS) volume

Provisioned IOPS SSD volumes are designed for applications that require predictable and consistent I/O performance. You can provision a specific number of IOPS when creating the volume to ensure consistent low-latency performance. These volumes provide durability and are suitable for business-critical applications.

65
Q

637# An online photo sharing company stores its photos in an Amazon S3 bucket that exists in the us-west-1 region. The company needs to store a copy of all new photos in the us-east-1 region. Which solution will meet this requirement with the LEAST operational effort?

A. Create a second S3 bucket on us-east-1. Use S3 cross-region replication to copy photos from the existing S3 bucket to the second S3 bucket.
B. Create a cross-origin resource sharing (CORS) configuration on the existing S3 bucket. Specify us-east-1 in the AllowedOrigin element of the CORS rule.
C. Create a second S3 bucket on us-east-1 in multiple availability zones. Create an S3 lifecycle rule to save photos to the second S3 bucket.
D. Create a second S3 bucket on us-east-1. Configure S3 event notifications on object creation and update events to invoke an AWS Lambda function to copy photos from the existing S3 bucket to the second S3 bucket.

A

A. Create a second S3 bucket on us-east-1. Use S3 cross-region replication to copy photos from the existing S3 bucket to the second S3 bucket.

This is a simple and fully managed solution.

To automatically replicate new objects as they are written to the bucket, use live replication, such as Cross-Region Replication (CRR).
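
A hedged boto3 sketch of the replication configuration; bucket names and the IAM role are hypothetical, and versioning must already be enabled on both buckets.

import boto3

s3 = boto3.client("s3")

# Replicate all new objects from the us-west-1 bucket to the us-east-1 bucket.
s3.put_bucket_replication(
    Bucket="photos-us-west-1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-crr-role",
        "Rules": [{
            "ID": "replicate-new-photos",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {"Prefix": ""},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::photos-us-east-1"},
        }],
    },
)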

66
Q

638# A company is creating a new web application for its subscribers. The application will consist of a single static page and a persistent database layer. The app will have millions of users for 4 hours in the morning, but the app will only have a few thousand users for the rest of the day. The company’s data architects have requested the ability to quickly evolve their schema. Which solutions will meet these requirements and provide the MOST scalability? (Choose two.)

A. Implement Amazon DynamoDB as a database solution. Provision on-demand capacity.
B. Deploy Amazon Aurora as the database solution. Choose Serverless Database Engine mode.
C. Implement Amazon DynamoDB as a database solution. Make sure DynamoDB auto-scaling is enabled.
D. Deploy the static content to an Amazon S3 bucket. Provision an Amazon CloudFront distribution with the S3 bucket as the origin.
E. Deploy web servers for static content to a fleet of Amazon EC2 instances in Auto Scaling groups. Configure instances to periodically update the contents of an Amazon Elastic File System (Amazon EFS) volume.

A

C. Implement Amazon DynamoDB as a database solution. Make sure DynamoDB auto-scaling is enabled.
D. Deploy the static content to an Amazon S3 bucket. Provision an Amazon CloudFront distribution with the S3 bucket as the origin.

C. DynamoDB auto-scaling dynamically adjusts provisioned capacity based on actual traffic. This is a good option to handle different workloads and ensure optimal performance.
D. This is a valid approach to serving static content with low latency globally using Amazon CloudFront. It helps in scalability and improves performance by distributing content to edge locations.

Based on scalability and ease of management, the recommended options are C (DynamoDB with auto-scaling) and D (S3 with CloudFront). These options take advantage of fully managed services and provide scalability without the need for manual intervention.

With provisioned capacity you can also use auto scaling to automatically adjust your table’s capacity based on the specified utilization rate to ensure application performance, and also to potentially reduce costs. To configure auto scaling in DynamoDB, set the minimum and maximum levels of read and write capacity in addition to the target utilization percentage.

It is important to note that DynamoDB auto scaling modifies provisioned throughput settings only when the actual workload stays elevated or depressed for a sustained period of several minutes.

This means that provisioned capacity (with auto scaling) is probably best for you if you have relatively predictable application traffic, run applications whose traffic is consistent, and ramps up or down gradually.

Whereas on-demand capacity mode is probably best when you have new tables with unknown workloads, unpredictable application traffic and also if you only want to pay exactly for what you use. The on-demand pricing model is ideal for bursty, new, or unpredictable workloads whose traffic can spike in seconds or minutes, and when under-provisioned capacity would impact the user experience.
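
For option C, DynamoDB auto scaling is configured through Application Auto Scaling; a minimal sketch for write capacity follows (table name, limits, and target are hypothetical, and the same calls would be repeated for read capacity).

import boto3

aas = boto3.client("application-autoscaling")

# Scale the table's write capacity between 5 and 40,000 WCUs.
aas.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/subscribers",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=5,
    MaxCapacity=40000,
)

# Target-tracking policy keeps write utilization near 70%.
aas.put_scaling_policy(
    ServiceNamespace="dynamodb",
    ResourceId="table/subscribers",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    PolicyName="write-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {"PredefinedMetricType": "DynamoDBWriteCapacityUtilization"},
    },
)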

67
Q

639# A company uses Amazon API Gateway to manage its REST APIs that are accessed by third-party service providers. The enterprise must protect REST APIs from SQL injection and cross-site scripting attacks. What is the most operationally efficient solution that meets these requirements?

A. Configure AWS Shield.
B. Configure AWS WAF.
C. Configure the API gateway with an Amazon CloudFront distribution. Configure AWS Shield on CloudFront.
D. Configure the API gateway with an Amazon CloudFront distribution. Configure AWS WAF on CloudFront.

A

B. Configure AWS WAF.

AWS WAF provides managed and custom rules that block common web exploits such as SQL injection and cross-site scripting, and a web ACL can be associated directly with the API Gateway REST API stage. This protects the APIs without adding a CloudFront distribution, making it the most operationally efficient option.

68
Q

640# A company wants to provide users with access to AWS resources. The company has 1,500 users and manages their access to local resources through Active Directory user groups on the corporate network. However, the company does not want users to have to maintain another identity to access resources. A solutions architect must manage user access to AWS resources while preserving access to local resources. What should the solutions architect do to meet these requirements?

A. Create an IAM user for each user in the company. Attach the appropriate policies to each user.
B. Use Amazon Cognito with a group of Active Directory users. Create roles with the appropriate policies attached.
C. Define cross-account roles with appropriate policies attached. Assign roles to Active Directory groups.
D. Configure Security Assertion Markup Language 2.0 (SAML 2.0)-based federation. Create roles with the appropriate policies attached. Assign roles to Active Directory groups.

A

D. Configure Security Assertion Markup Language 2.0 (SAML 2.0)-based federation. Create roles with the appropriate policies attached. Assign roles to Active Directory groups.

SAML enables single sign-on (SSO) between the company’s Active Directory and AWS. Users can use their existing corporate credentials to access AWS resources without having to manage a separate set of credentials for AWS.

Roles and policies: With SAML-based federation, roles are created in AWS that define the permissions that users will have. Policies are attached to these roles to specify what actions users can take. Assignment to Active Directory Groups: Roles in AWS can be assigned to Active Directory groups. This allows you to centrally manage permissions across Active Directory groups, and users inherit these permissions when they assume the associated roles in AWS. In summary, SAML-based federation provides a standardized way to enable single sign-on between AWS and the enterprise Active Directory, ensuring a seamless experience for users while maintaining centralized access control across Active Directory groups.

69
Q

641# A company is hosting a website behind multiple application load balancers. The company has different distribution rights for its content around the world. A solutions architect must ensure that users receive the correct content without violating distribution rights. What configuration should the solutions architect choose to meet these requirements?

A. Configure Amazon CloudFront with AWS WAF.
B. Configure application load balancers with AWS WAF
C. Configure Amazon Route 53 with a geolocation policy
D. Configure Amazon Route 53 with a geoproximity routing policy

A

C. Configure Amazon Route 53 with a geolocation policy

With Amazon Route 53, you can create a geolocation routing policy that routes traffic based on the user’s geographic location. This allows you to serve different content or direct users to different application load balancers based on their geographic location. By setting geolocation policies on Amazon Route 53, you can achieve the desired content distribution while complying with distribution rights. In summary, for the specific requirement of serving different content based on the geographic location of users, the most appropriate option is to use geolocation routing policies with Amazon Route 53.

70
Q

642# A company stores its data on premises. The amount of data is growing beyond the company’s available capacity. The company wants to migrate its data from on-premises to an Amazon S3 bucket. The company needs a solution that automatically validates the integrity of data after transfer. What solution will meet these requirements?

A. Order an AWS Snowball Edge device. Configure the Snowball Edge device to perform online data transfer to an S3 bucket.
B. Deploy an AWS DataSync agent on-premises. Configure the DataSync agent to perform online data transfer to an S3 bucket.
C. Create an on-premises Amazon S3 file gateway. Configure the S3 File Gateway to perform online data transfer to an S3 bucket.
D. Configure an accelerator in Amazon S3 Transfer Acceleration on-premises. Configure the accelerator to perform online data transfer to an S3 bucket.

A

B. Deploy an AWS DataSync agent on-premises. Configure the DataSync agent to perform online data transfer to an S3 bucket.

AWS DataSync is a service designed for online data transfer to and from AWS. Deploying a DataSync agent on-premises enables efficient and secure transfers over the network. DataSync automatically verifies data integrity, ensuring that Amazon S3 data matches the source

71
Q

643# A company wants to migrate two DNS servers to AWS. The servers host a total of approximately 200 zones and receive 1 million requests each day on average. The company wants to maximize availability while minimizing operational overhead related to managing the two servers. What should a solutions architect recommend to meet these requirements?

A. Create 200 new hosted zones in the Amazon Route 53 console. Import zone files.
B. Start a single large Amazon EC2 instance. Import zone files. Configure Amazon CloudWatch alarms and notifications to alert the company of any downtime.
C. Migrate the servers to AWS using the AWS Server Migration Service (AWS SMS). Configure Amazon CloudWatch alarms and notifications to alert the company of any downtime.
D. Start an Amazon EC2 instance in an auto-scaling group in two availability zones. Import zone files. Set the desired capacity to 1 and the maximum capacity to 3 for the Auto Scaling group. Configure scaling alarms to scale based on CPU utilization.

A

A. Create 200 new hosted zones in the Amazon Route 53 console. Import zone files.

https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/migrate-dns-domain-in-use.html

72
Q

644# A global company runs its applications on multiple AWS accounts in AWS Organizations. The company's applications use multipart uploads to upload data to multiple Amazon S3 buckets across AWS Regions. The company wants to report incomplete multipart uploads for cost compliance purposes. Which solution will meet these requirements with the LEAST operational overhead?

A. Configure AWS Config with a rule to report the incomplete multipart upload object count.
B. Create a service control policy (SCP) to report the incomplete multipart upload object count.
C. Configure S3 Storage Lens to report the incomplete multipart upload object count.
D. Create an S3 Multi-Region Access Point to report the incomplete multipart upload object count.

A

C. Configure S3 Storage Lens to report the incomplete multipart upload object count.

S3 Storage Lens is a fully managed analytics solution that provides organization-wide visibility into object storage usage, activity trends, and helps identify cost-saving opportunities. It is designed to minimize operational overhead and provides comprehensive information about your S3 usage.

Incomplete Upload Reporting: S3 Storage Lens lets you configure metrics, including incomplete multipart uploads, without complex configuration. It provides a holistic view of your storage usage across accounts, making it suitable for compliance and cost monitoring purposes.

S3 Storage Lens is specifically designed to provide insights into S3 usage.

73
Q

645# A company has a production database on Amazon RDS for MySQL. The company wants to update the database version for security compliance reasons. Because the database contains critical data, the company wants a quick solution to update and test functionality without losing any data. Which solution will meet these requirements with the LEAST operational overhead?

A. Create a manual RDS snapshot. Upgrade to the new version of Amazon RDS for MySQL.
B. Use native backup and restore. Restore the data to the new, updated version of Amazon RDS for MySQL.
C. Use the AWS Database Migration Service (AWS DMS) to replicate the data to the new updated version of Amazon RDS for MySQL.
D. Use Amazon RDS blue/green deployments to deploy and test production changes.

A

D. Use Amazon RDS blue/green deployments to deploy and test production changes.

You can make changes to RDS database instances in the green environment without affecting production workloads. For example, you can upgrade the major or minor version of the database engine, update the underlying file system configuration, or change database parameters in the staging environment. You can thoroughly test the changes in the green environment. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/blue-green-deployments-overview.html

74
Q

646# A solutions architect is creating a data processing job that runs once a day and can take up to 2 hours to complete. If the job is interrupted, it has to be restarted from the beginning. How should the solutions architect address this problem in the MOST cost-effective way?

A. Create a script that runs locally on an Amazon EC2 Reserved Instance that is triggered by a cron job.
B. Create an AWS Lambda function triggered by an Amazon EventBridge scheduled event.
C. Use an Amazon Elastic Container Service (Amazon ECS) Fargate task triggered by an Amazon EventBridge scheduled event.
D. Use an Amazon Elastic Container Service (Amazon ECS) task running on Amazon EC2 triggered by an Amazon EventBridge scheduled event.

A

C. Use an Amazon Elastic Container Service (Amazon ECS) Fargate task triggered by an Amazon EventBridge scheduled event.

Fargate is serverless and suitable for long-running tasks; a Lambda function (option B) cannot run a 2-hour job because of its 15-minute limit. Fargate automatically scales and manages the underlying infrastructure, and the always-on EC2 resources in options A and D would cost more.

75
Q

647# A social media company wants to store its database of user profiles, relationships, and interactions in the AWS cloud. The company needs an application to monitor any changes to the database. The application needs to analyze the relationships between data entities and provide recommendations to users. Which solution will meet these requirements with the LEAST operational overhead?

A. Use Amazon Neptune to store the information. Use Amazon Kinesis Data Streams to process changes to the database.
B. Use Amazon Neptune to store the information. Use Neptune Streams to process changes to the database.
C. Use the Amazon Quantum Ledger database (Amazon QLDB) to store the information. Use Amazon Kinesis Data Streams to process changes to the database.
D. Use the Amazon Quantum Ledger database (Amazon QLDB) to store the information. Use Neptune Streams to process changes to the database.

A

B. Use Amazon Neptune to store the information. Use Neptune Streams to process changes to the database.

Amazon Neptune is a fully managed graph database, and Neptune Streams allows you to capture changes to the database. This option provides a fully managed solution for storing and monitoring database changes, minimizing operational overhead. Both storage and change monitoring are handled by Amazon Neptune.

76
Q

648# A company is creating a new application that will store a large amount of data. Data will be analyzed hourly and modified by multiple Amazon EC2 Linux instances that are deployed across multiple availability zones. The amount of storage space needed will continue to grow over the next 6 months. Which storage solution should a solutions architect recommend to meet these requirements?

A. Store data in Amazon S3 Glacier. Update the S3 Glacier vault policy to allow access to the application instances.
B. Store the data on an Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on the application instances.
C. Store the data in an Amazon Elastic File System (Amazon EFS) file system. Mount the file system on the application instances.
D. Store the data on an Amazon Elastic Block Store (Amazon EBS) provisioned IOPS volume shared between the application instances.

A

C. Store the data in an Amazon Elastic File System (Amazon EFS) file system. Mount the file system on the application instances.

Amazon EFS is a scalable, elastic file storage service that can be mounted on multiple EC2 instances simultaneously. It is suitable for applications that require shared access to data across multiple instances. This option is a good option for the scenario described.
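
A rough boto3 sketch (subnet and security group IDs are placeholders): the same file system gets a mount target in each Availability Zone so every instance can mount it.

```python
import boto3

efs = boto3.client("efs")

# Create a shared, elastic file system that grows with the data set.
fs = efs.create_file_system(
    CreationToken="shared-analytics-data",
    PerformanceMode="generalPurpose",
    Encrypted=True,
)

# One mount target per Availability Zone so instances in each AZ mount locally.
for subnet_id in ["subnet-aaa111", "subnet-bbb222"]:  # placeholder subnets
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],  # placeholder security group
    )
```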

77
Q

649# A company manages an application that stores data in an Amazon RDS for PostgreSQL Multi-AZ DB instance. Increases in traffic are causing performance issues. The company determines that database queries are the main reason for slow performance. What should a solutions architect do to improve application performance?

A. Serve read traffic from the Multi-AZ standby replica.
B. Configure the database instance to use transfer acceleration.
C. Create a read replica from the source database instance. Serve read traffic from the read replica.
D. Use Amazon Kinesis Data Firehose between the application and Amazon RDS to increase the concurrency of database requests.

A

C. Create a read replica from the source database instance. Serve read traffic from the read replica.

The standby of a Multi-AZ DB instance cannot serve read traffic, so the standard way to relieve query pressure is a read replica: the application sends read queries to the replica endpoint while writes continue to go to the primary instance.
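
A minimal boto3 sketch of creating the replica (instance identifiers and class are placeholders):

```python
import boto3

rds = boto3.client("rds")

# Create a read replica of the production PostgreSQL instance.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica-1",       # placeholder replica name
    SourceDBInstanceIdentifier="app-db-primary",   # placeholder source instance
    DBInstanceClass="db.r6g.large",                # assumed instance class
)
# The application then points read-only queries at the replica's endpoint
# while writes continue to go to the primary instance.
```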

78
Q

650# A company collects 10 GB of telemetry data daily from multiple machines. The company stores the data in an Amazon S3 bucket in a source data account. The company has hired several consulting agencies to use this data for analysis. Each agency needs read access to the data for its analysts. The company must share the data from the source data account with a solution that maximizes security and operational efficiency. Which solution will meet these requirements?

A. Configure global S3 tables to replicate data for each agency.
B. Make the S3 bucket public for a limited time. Inform only the agencies.
C. Configure cross-account access for S3 bucket to accounts owned by agencies.
D. Configure an IAM user for each analyst in the source data account. Grant each user access to the S3 bucket.

A

C. Configure cross-account access for S3 bucket to accounts owned by agencies.

This is a suitable option. You can configure cross-account access by creating AWS Identity and Access Management (IAM) roles in the source data account and allowing consulting agency AWS accounts to assume these roles. This way you can grant temporary and secure access to the S3 bucket.
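
A sketch of such a cross-account bucket policy applied with boto3; the bucket name and agency account ID are placeholders:

```python
import boto3
import json

s3 = boto3.client("s3")
bucket = "source-telemetry-data"  # placeholder bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AgencyReadOnly",
        "Effect": "Allow",
        "Principal": {"AWS": ["arn:aws:iam::444455556666:root"]},  # placeholder agency account
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
    }],
}

# Grant read-only, cross-account access without making the bucket public.
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```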

79
Q

652# A development team is building an event-driven application that uses AWS Lambda functions. Events will be raised when files are added to an Amazon S3 bucket. The development team currently has Amazon Simple Notification Service (Amazon SNS) configured as the Amazon S3 event target. What should a solutions architect do to process Amazon S3 events in a scalable way?

A. Create an SNS subscription that processes the event in Amazon Elastic Container Service (Amazon ECS) before the event runs in Lambda.
B. Create an SNS subscription that processes the event in Amazon Elastic Kubernetes Service (Amazon EKS) before the event runs in Lambda
C. Create an SNS subscription that sends the event to Amazon Simple Queue Service (Amazon SQS). Configure the SQS queue to trigger a Lambda function.
D. Create an SNS subscription that sends the event to the AWS Server Migration Service (AWS SMS). Configure the Lambda function to poll from the SMS event.

A

C. Create an SNS subscription that sends the event to Amazon Simple Queue Service (Amazon SQS). Configure the SQS queue to trigger a Lambda function.

It follows the pattern of using an SQS queue as an intermediary for event handling, providing scalable and decoupled event processing. SQS can handle bursts of events, and the Lambda function can be triggered from the SQS queue. This solution is the recommended scalable approach to handle Amazon S3 events in a decoupled manner.

The primary advantage of placing SQS between SNS and Lambda is reprocessing. If the Lambda function fails to process an event (for example, because of a timeout or insufficient memory), you can adjust the function's timeout or memory configuration and then reprocess the older events, because the messages stay in the SQS queue until they are successfully consumed or expire.

This is not possible when SNS invokes Lambda directly: if the function fails, the event is effectively lost after the retries are exhausted. Even if you configure a dead-letter queue (DLQ), you still have to build a separate mechanism to read and reprocess those messages.
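
A hedged boto3 sketch of the wiring (topic, queue, and function names are placeholders):

```python
import boto3
import json

sns = boto3.client("sns")
sqs = boto3.client("sqs")
lam = boto3.client("lambda")

topic_arn = "arn:aws:sns:us-east-1:111122223333:s3-events"  # placeholder topic
queue_url = sqs.create_queue(QueueName="s3-events-queue")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Allow the SNS topic to deliver messages to the queue.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"Policy": json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "sns.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": queue_arn,
            "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
        }],
    })},
)

# Subscribe the queue to the topic, then let the queue trigger the Lambda function.
sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)
lam.create_event_source_mapping(
    EventSourceArn=queue_arn,
    FunctionName="process-s3-events",  # placeholder function name
    BatchSize=10,
)
```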

80
Q

653# A solutions architect is designing a new service behind Amazon API Gateway. Request patterns for the service will be unpredictable and may suddenly change from 0 requests to over 500 per second. The total size of data that must be persisted in a backend database is currently less than 1 GB with unpredictable future growth. Data can be queried using simple key value requests. What combination of AWS services would meet these requirements? (Choose two.)

A. AWS Fargate
B. AWS Lambda
C. Amazon DynamoDB
D. Amazon EC2 Auto Scaling
E. Amazon Aurora with MySQL support

A

B. AWS Lambda
C. Amazon DynamoDB

B. AWS Lambda is a serverless computing service that automatically scales in response to incoming request traffic. It is suitable for handling unpredictable request patterns, and you pay only for the compute time consumed. Lambda integrates with other AWS services, including API Gateway, to handle the backend logic of your service.

C. DynamoDB is a fully managed NoSQL database that provides fast, predictable performance with seamless scalability. It is well suited for simple key-value queries and can handle different workloads. DynamoDB automatically scales based on demand, making it suitable for unpredictable request patterns. It also supports automatic scaling of read and write capacity.
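
A short boto3 sketch of an on-demand key-value table (the table and attribute names are illustrative):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# On-demand capacity scales automatically from zero to bursty traffic.
dynamodb.create_table(
    TableName="service-items",  # placeholder table name
    AttributeDefinitions=[{"AttributeName": "item_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "item_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)
dynamodb.get_waiter("table_exists").wait(TableName="service-items")

# Simple key-value access pattern used by the Lambda backend.
dynamodb.put_item(TableName="service-items",
                  Item={"item_id": {"S": "abc-123"}, "payload": {"S": "hello"}})
item = dynamodb.get_item(TableName="service-items",
                         Key={"item_id": {"S": "abc-123"}})
```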

81
Q

654# A company collects and shares research data with the company’s employees around the world. The company wants to collect and store the data in an Amazon S3 bucket and process it in the AWS cloud. The company will share the data with the company’s employees. The business needs a secure AWS cloud solution that minimizes operational overhead. What solution will meet these requirements?

A. Use an AWS Lambda function to create an S3 presigned URL. Instruct employees to use the URL.
B. Create an IAM user for each employee. Create an IAM policy for each employee to allow access to S3. Instruct employees to use the AWS Management Console.
C. Create an S3 File Gateway. Create a file share for uploading and a file share for downloading. Instruct employees to mount the file shares on their local computers to use the S3 File Gateway.
D. Configure AWS Transfer Family SFTP endpoints. Select custom identity provider options. Use AWS Secrets Manager to manage user credentials. Instruct employees to use Transfer Family.

A

A. Use an AWS Lambda function to create an S3 presigned URL. Instruct employees to use the URL.

A. By using an AWS Lambda function, you can generate S3 presigned URLs on the fly. These URLs grant temporary access to specific S3 objects, so employees can securely upload or download data without needing their own AWS credentials. This minimizes operational overhead: you only manage the Lambda function, there is no complex user management, and because Lambda is serverless there is no underlying infrastructure to maintain. The function can be triggered by specific events (for example, an S3 upload) and can generate the presigned URLs automatically.

This solution simplifies the process of securely sharing data without the need for extensive user management or additional infrastructure management.
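
A minimal sketch of such a Lambda handler, assuming the bucket name and object key arrive in the invocation event (the event field names are assumptions):

```python
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Return a time-limited download URL for the requested object."""
    bucket = event["bucket"]  # assumed event fields, not from the question
    key = event["key"]
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=3600,  # URL is valid for one hour
    )
    return {"statusCode": 200, "body": url}
```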

82
Q

655# A company is building a new furniture inventory application. The company has deployed the application on a fleet of Amazon EC2 instances across multiple Availability Zones. The EC2 instances run behind an Application Load Balancer (ALB) in the company's VPC. A solutions architect has observed that incoming traffic appears to favor one EC2 instance, resulting in latency for some requests. What should the solutions architect do to solve this problem?

A. Disable session affinity (sticky sessions) on the ALB
B. Replace the ALB with a network load balancer
C. Increase the number of EC2 instances in each Availability Zone
D. Adjust the frequency of health checks in the ALB target group

A

A. Disable session affinity (sticky sessions) on the ALB

Session affinity (sticky sessions): When session affinity is enabled, the load balancer routes requests from a particular client to the same backend EC2 instance. While this can be beneficial in certain scenarios, it can lead to uneven traffic distribution and higher latency if one instance receives more requests than others. Disabling sticky sessions: with session affinity disabled, the ALB distributes incoming requests more evenly across all healthy instances, helping to balance load and reduce latency.
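
Disabling stickiness is a single target-group attribute change; a boto3 sketch with a placeholder target group ARN:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Turn off session affinity so the ALB spreads requests evenly.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                   "targetgroup/furniture-app/0123456789abcdef",  # placeholder ARN
    Attributes=[{"Key": "stickiness.enabled", "Value": "false"}],
)
```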

83
Q

656# A company has an application workflow that uses an AWS Lambda function to download and decrypt files from Amazon S3. These files are encrypted using AWS Key Management Service (AWS KMS) keys. A solutions architect needs to design a solution that ensures the required permissions are set correctly. What combination of actions accomplishes this? (Choose two.)

A. Attach the kms:decrypt permission to the Lambda function’s resource policy.
B. Grant decryption permission for the Lambda IAM role in the KMS key’s policy
C. Grant the decryption permission for the Lambda resource policy in the KMS key policy.
D. Create a new IAM policy with the kms:decrypt permission and attach the policy to the Lambda function.
E. Create a new IAM role with the kms:decrypt permission and attach the execute role to the Lambda function.

A

B. Grant decryption permission for the Lambda IAM role in the KMS key’s policy
* This action ensures that the IAM role associated with the Lambda function has the necessary permission to decrypt files using the specified KMS key.

E. Create a new IAM role with the kms:decrypt permission and attach the execute role to the Lambda function.
* If the existing IAM role lacks the required kms:decrypt permission, you may need to create a new IAM role with this permission and attach it to the Lambda function.
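
A sketch of both sides of the permission in boto3: an identity policy with kms:Decrypt attached to the Lambda execution role, plus the matching key policy statement (the account ID, role name, and key ARN are placeholders):

```python
import boto3
import json

iam = boto3.client("iam")

key_arn = "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"

# Identity side: allow the Lambda execution role to call kms:Decrypt on this key.
iam.put_role_policy(
    RoleName="download-decrypt-lambda-role",  # placeholder execution role
    PolicyName="AllowKmsDecrypt",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": "kms:Decrypt", "Resource": key_arn}],
    }),
)

# Resource side: the KMS key policy must also allow the role (statement shown only).
key_policy_statement = {
    "Sid": "AllowLambdaRoleDecrypt",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:role/download-decrypt-lambda-role"},
    "Action": "kms:Decrypt",
    "Resource": "*",
}
```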

84
Q

657# A company wants to monitor its AWS costs for its financial review. The Cloud Operations team is architecting the AWS Organizations management account to query AWS cost and usage reports for all member accounts. The team must run this query once a month and provide a detailed analysis of the bill. Which solution is the MOST scalable and cost-effective way to meet these requirements?

A. Enable cost and usage reporting in the management account. Deliver reports to Amazon Kinesis. Use Amazon EMR for analysis.
B. Enable cost and usage reporting in the management account. Deliver reports to Amazon S3. Use Amazon Athena for analysis.
C. Enable cost and usage reporting for member accounts. Deliver reports to Amazon S3. Use Amazon Redshift for analysis.
D. Enable cost and usage reporting for member accounts. Deliver the reports to Amazon Kinesis. Use Amazon QuickSight analytics.

A

B. Enable cost and usage reporting in the management account. Deliver reports to Amazon S3. Use Amazon Athena for analysis.

Directly stores reports in S3 and leverages Athena for SQL-based analysis.
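
Running the monthly analysis is then a standard Athena query against the Cost and Usage Report table; a boto3 sketch with placeholder database, table, partition column, and output location:

```python
import boto3

athena = boto3.client("athena")

# Monthly cost breakdown per member account from the CUR table.
athena.start_query_execution(
    QueryString="""
        SELECT line_item_usage_account_id,
               SUM(line_item_unblended_cost) AS monthly_cost
        FROM cur_db.cost_and_usage            -- assumed CUR database/table names
        WHERE billing_period = '2024-01'      -- assumed partition column
        GROUP BY line_item_usage_account_id
        ORDER BY monthly_cost DESC
    """,
    QueryExecutionContext={"Database": "cur_db"},
    ResultConfiguration={"OutputLocation": "s3://cur-athena-results/"},  # placeholder bucket
)
```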

85
Q

658# A company wants to run a gaming application on Amazon EC2 instances that are part of an Auto Scaling group in the AWS cloud. The application will transmit data using UDP packets. The company wants to make sure the app can come and go as traffic rises and falls. What should a solutions architect do to meet these requirements?

A. Connect a network load balancer to the Auto Scaling group.
B. Connect an application load balancer to the Auto Scaling group.
C. Deploy an Amazon Route 53 record set with a weighted policy to route traffic appropriately.
D. Deploy a NAT instance that is configured with port forwarding to the EC2 instances in the Auto Scaling group.

A

A. Connect a network load balancer to the Auto Scaling group.

Network Load Balancers (NLBs) are designed to handle TCP, UDP, and TLS traffic.
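
A boto3 sketch of an internet-facing NLB with a UDP listener in front of the Auto Scaling group's target group (ports, VPC, and subnet IDs are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")

lb = elbv2.create_load_balancer(
    Name="game-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-aaa111", "subnet-bbb222"],  # placeholder subnets
)["LoadBalancers"][0]

tg = elbv2.create_target_group(
    Name="game-udp-targets",
    Protocol="UDP",
    Port=3074,                      # assumed game port
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC
    TargetType="instance",
    HealthCheckProtocol="TCP",      # UDP target groups health-check over TCP
)["TargetGroups"][0]

elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancerArn"],
    Protocol="UDP",
    Port=3074,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
# The Auto Scaling group is then attached to the target group so instances
# register and deregister automatically as the group scales.
```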

86
Q

659# A company has several websites on AWS for its different brands. Each website generates tens of gigabytes of web traffic logs every day. A solutions architect needs to design a scalable solution to give company developers the ability to analyze traffic patterns across all company websites. This analysis by the developers will be done on demand once a week over the course of several months. The solution must support standard SQL queries. Which solution will meet these requirements in the MOST cost-effective way?

A. Store logs in Amazon S3. Use Amazon Athena for analytics.
B. Store the logs in Amazon RDS. Use a database client for analysis.
C. Store the logs in Amazon OpenSearch Service. Use OpenSearch Service for analysis.
D. Store the logs in an Amazon EMR cluster. Use a supported open source framework for SQL-based analysis.

A

A. Store logs in Amazon S3. Use Amazon Athena for analytics.

Amazon S3 is a highly scalable object storage service, and Amazon Athena allows you to run SQL queries directly on data stored in S3. This option is cost-effective as you only pay for the queries you run. It is suitable for on-demand analysis with standard SQL queries.

Given the requirement for cost-effectiveness, scalability, and on-demand analysis with standard SQL queries, option A (Amazon S3 with Amazon Athena) is probably the most suitable option. It enables efficient storage, scalable queries, and cost-effective on-demand analysis for large amounts of web traffic logs.

87
Q

660# An international company has a subdomain for each country in which the company operates. The subdomains are formatted as example.com, country1.example.com, and country2.example.com. The company's workloads are behind an Application Load Balancer. The company wants to encrypt website data that is in transit. Which combination of steps will meet these requirements? (Choose two.)

A. Use the AWS Certificate Manager (ACM) console to request a public certificate for the apex domain example.com and a wildcard certificate for *.example.com.
B. Use the AWS Certificate Manager (ACM) console to request a private certificate for the apex domain example.com and a wildcard certificate for *.example.com.
C. Use the AWS Certificate Manager (ACM) console to request a public and a private certificate for the apex domain example.com.
D. Validate domain ownership by email address. Switch to DNS validation by adding the necessary DNS records to the DNS provider.
E. Validate domain ownership by adding the necessary DNS records to the DNS provider.

A

A. Use the AWS Certificate Manager (ACM) console to request a public certificate for the apex domain example.com and a wildcard certificate for *.example.com.
E. Validate domain ownership by adding the necessary DNS records to the DNS provider.

A. This option is valid to protect both the apex domain and its subdomains with a single wildcard certificate.

E. This is part of the domain validation process. DNS validation is commonly used for issuing SSL/TLS certificates.

https://docs.aws.amazon.com/acm/latest/userguide/domain-ownership-validation.html
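
A sketch of requesting such a certificate with DNS validation in boto3; the validation CNAME records returned by ACM are what you create at the DNS provider:

```python
import boto3

acm = boto3.client("acm")

# One public certificate covering the apex domain and all country subdomains.
cert = acm.request_certificate(
    DomainName="example.com",
    SubjectAlternativeNames=["*.example.com"],
    ValidationMethod="DNS",
)

# Each domain gets a CNAME record to create at the DNS provider for validation.
details = acm.describe_certificate(CertificateArn=cert["CertificateArn"])
for option in details["Certificate"]["DomainValidationOptions"]:
    record = option.get("ResourceRecord", {})  # may take a few seconds to appear
    print(record.get("Name"), record.get("Type"), record.get("Value"))
```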

88
Q

661# A company is required to use cryptographic keys in its on-premises key manager. The key manager is outside of the AWS cloud because of regulatory and compliance requirements. The company wants to manage encryption and decryption by using cryptographic keys that are kept outside the AWS cloud and that support a variety of external key managers from different vendors. Which solution will meet these requirements with the LEAST operational overhead?

A. Use the AWS CloudHSM key vault backed by a CloudHSM cluster.
B. Use an AWS Key Management Service (AWS KMS) external keystore backed by an external key manager.
C. Use the default AWS Key Management Service (AWS KMS) managed key vault.
D. Use a custom keystore backed by an AWS CloudHSM cluster.

A

B. Use an AWS Key Management Service (AWS KMS) external keystore backed by an external key manager.

With an AWS KMS external key store (XKS), AWS KMS can use an external key manager for key material. This allows the company to use its own key management infrastructure outside of the AWS cloud. This option provides flexibility and supports a variety of external key managers from different vendors while minimizing operational overhead on the AWS side.

https://docs.aws.amazon.com/kms/latest/developerguide/keystore-external.html
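
A hedged boto3 sketch of registering an external key store and creating a KMS key backed by it; the proxy endpoint, path, credentials, and external key ID are placeholders, and the parameter names should be verified against the current KMS API documentation:

```python
import boto3

kms = boto3.client("kms")

# Register the external key manager's XKS proxy as a custom key store.
store = kms.create_custom_key_store(
    CustomKeyStoreName="on-prem-key-manager",
    CustomKeyStoreType="EXTERNAL_KEY_STORE",
    XksProxyConnectivity="PUBLIC_ENDPOINT",
    XksProxyUriEndpoint="https://xks.example.com",  # placeholder proxy endpoint
    XksProxyUriPath="/example/kms/xks/v1",          # placeholder proxy path
    XksProxyAuthenticationCredential={
        "AccessKeyId": "XKS_ACCESS_KEY_ID",         # placeholder credential
        "RawSecretAccessKey": "XKS_SECRET_ACCESS_KEY",
    },
)
kms.connect_custom_key_store(CustomKeyStoreId=store["CustomKeyStoreId"])

# Create a KMS key whose key material lives in the external key manager.
kms.create_key(
    Origin="EXTERNAL_KEY_STORE",
    CustomKeyStoreId=store["CustomKeyStoreId"],
    XksKeyId="ext-key-0001",  # ID of the key inside the external manager (placeholder)
    KeySpec="SYMMETRIC_DEFAULT",
    KeyUsage="ENCRYPT_DECRYPT",
)
```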

89
Q

662# A solutions architect needs to host a high-performance computing (HPC) workload in the AWS cloud. The workload will run on hundreds of Amazon EC2 instances and will require parallel access to a shared file system to enable distributed processing of large data sets. Data sets will be accessed through multiple instances simultaneously. The workload requires access latency within 1 ms. Once processing is complete, engineers will need access to the data set for manual post-processing. What solution will meet these requirements?

A. Use Amazon Elastic File System (Amazon EFS) as a shared file system. Access the data set from Amazon EFS.
B. Mount an Amazon S3 bucket to serve as a shared file system. Perform post-processing directly from the S3 bucket.
C. Use Amazon FSx for Lustre as a shared file system. Link the file system to an Amazon S3 bucket for further processing.
D. Configure AWS Resource Access Manager to share an Amazon S3 bucket so that it can be mounted on all instances for processing and post-processing.

A

C. Use Amazon FSx for Lustre as a shared file system. Link the file system to an Amazon S3 bucket for further processing.

Amazon FSx for Lustre is well suited for HPC workloads, providing parallel access with sub-millisecond latency. Linking the file system to an S3 bucket allows seamless integration for post-processing, leveraging the strengths of both services.
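
A hedged boto3 sketch of a Lustre file system linked to an S3 bucket; capacity, throughput, subnet, and bucket names are placeholders, and PERSISTENT_1 is assumed because that deployment type accepts ImportPath/ExportPath directly:

```python
import boto3

fsx = boto3.client("fsx")

fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=2400,                    # GiB, placeholder size
    SubnetIds=["subnet-0123456789abcdef0"],  # placeholder subnet
    LustreConfiguration={
        "DeploymentType": "PERSISTENT_1",
        "PerUnitStorageThroughput": 200,             # MB/s per TiB, assumed tier
        "ImportPath": "s3://hpc-datasets",           # lazy-load the data set from S3
        "ExportPath": "s3://hpc-datasets/processed", # write results back for post-processing
    },
)
```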

90
Q

663# A gaming company is building an application with voice over IP capabilities. The application will serve traffic to users around the world. The application must be highly available with automated failover across all AWS regions. The company wants to minimize user latency without relying on IP address caching on user devices. What should a solutions architect do to meet these requirements?

A. Use AWS Global Accelerator with health checks.
B. Use Amazon Route 53 with a geolocation routing policy.
C. Create an Amazon CloudFront distribution that includes multiple origins.
D. Create an application load balancer that uses path-based routing.

A

A. Use AWS Global Accelerator with health checks.

A. AWS Global Accelerator provides static anycast IP addresses that act as a fixed entry point to your application and routes users over the AWS global network to the nearest healthy endpoint. It supports health checks that enable automatic failover across Regions, providing high availability and low latency without relying on DNS or client-side IP address caching.

B. Amazon Route 53 can route traffic based on users’ geographic location. While it supports failover, failover may not be as automated or fast as using AWS Global Accelerator.

C. Amazon CloudFront is a content delivery network that can distribute content globally. While it can use multiple origins, it is built for HTTP(S) delivery and does not provide the same control over non-HTTP (VoIP/UDP) traffic, failover, and latency as AWS Global Accelerator.
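
A boto3 sketch of the accelerator setup, assuming the regional endpoints are Network Load Balancers (ARNs and the UDP port are placeholders):

```python
import boto3

# Global Accelerator is a global service; its API endpoint lives in us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accel = ga.create_accelerator(Name="voip-accelerator", IpAddressType="IPV4", Enabled=True)

listener = ga.create_listener(
    AcceleratorArn=accel["Accelerator"]["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 5004, "ToPort": 5004}],  # assumed VoIP media port
)

# One endpoint group per Region; health checks steer traffic away from unhealthy endpoints.
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                      "loadbalancer/net/voip-nlb/0123456789abcdef",  # placeholder NLB
        "Weight": 128,
    }],
    HealthCheckProtocol="TCP",
    HealthCheckPort=5004,
)
```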

91
Q

664# A weather forecasting company needs to process hundreds of gigabytes of data with sub-millisecond latency. The company has a high-performance computing (HPC) environment in its data center and wants to expand its forecasting capabilities. A solutions architect must identify a highly available cloud storage solution that can deliver large amounts of sustained throughput. Files stored in the solution must be accessible to thousands of compute instances that simultaneously access and process the entire data set. What should the solutions architect do to meet these requirements?

A. Use Amazon FSx for Lustre scratch file systems.
B. Use Amazon FSx for Lustre persistent file systems.
C. Use Amazon Elastic File System (Amazon EFS) with Bursting Throughput mode.
D. Use Amazon Elastic File System (Amazon EFS) with provisioned performance mode.

A

B. Use Amazon FSx for Lustre persistent file systems.

Amazon FSx for Lustre persistent file systems are durable and retain data for long-running workloads. They provide high, sustained performance and are designed for HPC and other compute-intensive workloads.

A. Amazon FSx for Lustre scratch file systems also deliver high performance, but they are intended for temporary storage and shorter-term processing; their data is not replicated and does not persist if a file server fails.

For a large-scale HPC workload with sub-millisecond latency requirements and a need for sustained throughput, Amazon FSx for Lustre persistent file systems (option B) are the best fit. They are designed to deliver high performance for compute-intensive workloads, which matches the weather forecasting company's requirements.

92
Q

665# An e-commerce company runs a PostgreSQL database on premises. The database stores data using high-IOPS Amazon Elastic Block Store (Amazon EBS) block storage. Maximum daily I/O transactions per second do not exceed 15,000 IOPS. The company wants to migrate the database to Amazon RDS for PostgreSQL and provision disk IOPS performance regardless of disk storage capacity. Which solution will meet these requirements in the most cost-effective way?

A. Configure General Purpose SSD (gp2) EBS volume storage type and provision 15,000 IOPS.
B. Configure the Provisioned IOPS SSD (io1) EBS volume storage type and provision 15,000 IOPS.
C. Configure the General Purpose SSD (gp3) EBS volume storage type and provision 15,000 IOPS.
D. Configure the EBS magnetic volume type to achieve maximum IOPS.

A

C. Configure the General Purpose SSD (gp3) EBS volume storage type and provision 15,000 IOPS.

gp3 volumes let you provision IOPS independently of the allocated storage capacity at a lower price per provisioned IOPS than io1, so gp3 meets the 15,000 IOPS requirement most cost-effectively. With gp2, IOPS scale with volume size, and io1 offers the same independence only at a higher cost.
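
A boto3 sketch of provisioning the instance with gp3 and 15,000 IOPS; identifiers and sizes are placeholders, and for RDS for PostgreSQL customizing gp3 IOPS generally requires at least 400 GiB of allocated storage:

```python
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="ecom-postgres",  # placeholder identifier
    Engine="postgres",
    DBInstanceClass="db.m6g.xlarge",       # assumed instance class
    AllocatedStorage=500,                  # GiB, placeholder size above the gp3 threshold
    StorageType="gp3",
    Iops=15000,                            # provisioned independently of storage capacity
    MasterUsername="dbadmin",
    ManageMasterUserPassword=True,         # let Secrets Manager handle the password
    MultiAZ=True,
)
```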

93
Q

Question #: 651
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company stores a large volume of image files in an Amazon S3 bucket. The images need to be readily available for the first 180 days. The images are infrequently accessed for the next 180 days. After 360 days, the images need to be archived but must be available instantly upon request. After 5 years, only auditors can access the images. The auditors must be able to retrieve the images within 12 hours. The images cannot be lost during this process.

A developer will use S3 Standard storage for the first 180 days. The developer needs to configure an S3 Lifecycle rule.

Which solution will meet these requirements MOST cost-effectively?

A. Transition the objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 180 days. S3 Glacier Instant Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.
B. Transition the objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 180 days. S3 Glacier Flexible Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.
C. Transition the objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 180 days, S3 Glacier Instant Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.
D. Transition the objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 180 days, S3 Glacier Flexible Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.

A

C. Transition the objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 180 days, S3 Glacier Instant Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.

S3 Glacier Instant Retrieval returns objects in milliseconds, whereas S3 Glacier Flexible Retrieval typically takes minutes to hours (up to 12 hours for bulk retrievals). Because the images must be available instantly after 360 days, only Glacier Instant Retrieval satisfies that requirement. S3 Standard-IA is preferred over One Zone-IA because the data cannot be lost, and after 5 years S3 Glacier Deep Archive meets the auditors' 12-hour retrieval window at the lowest cost.
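
The lifecycle rule itself is a single API call; a boto3 sketch with a placeholder bucket name and 5 years approximated as 1,825 days:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="historical-images",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "image-archival",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object
            "Transitions": [
                {"Days": 180, "StorageClass": "STANDARD_IA"},
                {"Days": 360, "StorageClass": "GLACIER_IR"},
                {"Days": 1825, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }],
    },
)
```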

94
Q

666# A company wants to migrate its on-premises database from Microsoft SQL Server Enterprise edition to AWS. The company's online application uses the database to process transactions. The data analytics team uses the same production database to run reports for analytical processing. The company wants to reduce operational overhead by moving to managed services wherever possible. Which solution will meet these requirements with the LEAST operational overhead?

A. Migrate to Amazon RDS for Microsoft SQL Server. Use read replicas for reporting purposes.
B. Migrate to Microsoft SQL Server on Amazon EC2. Use Always On read replicas for reporting purposes.
C. Migrate to Amazon DynamoDB. Use DynamoDB on-demand replicas for reporting purposes.
D. Migrate to Amazon Aurora MySQL. Use Aurora read replicas for reporting purposes.

A

A. Migrate to Amazon RDS for Microsoft SQL Server. Use read replicas for reporting purposes.

Amazon RDS for SQL Server is a managed service that keeps the existing engine, so the transactional application migrates with minimal changes, and read replicas let the analytics team run reports without adding load to the production workload.

95
Q

667# A company uses AWS CloudFormation to deploy an application that uses an Amazon API Gateway REST API with AWS Lambda function integration. The application uses Amazon DynamoDB for data persistence. The application has three stages: development, testing and production. Each stage uses its own DynamoDB table. The company has encountered unexpected problems when promoting changes at the production stage. The changes were successful in the development and testing stages. A developer needs to route 20% of traffic to the new production API with the next production release. The developer needs to direct the remaining 80% of traffic to the existing production stage. The solution should minimize the number of errors that any individual customer experiences. What approach should the developer take to meet these requirements?

A. Update 20% of planned changes to production. Deploy the new production stage. Monitor results. Repeat this process five times to test all planned changes.
B. Update the Amazon Route 53 DNS record entry for the production API to use a weighted routing policy. Set the weight to a value of 80. Add a second record for the production domain name. Change the second routing policy to a weighted routing policy. Set the weight of the second policy to a value of 20. Change the alias of the second policy to use the test stage API.
C. Deploy an application load balancer (ALB) in front of the REST API. Change the Amazon Route 53 production API registration to point traffic to the ALB. Record the production and test stages as ALB targets with weights of 80% and 20%, respectively.
D. Configure canary settings for the production stage API. Change the percentage of traffic directed to the canary deployment to 20%. Make planned updates to the production stage. Implement the changes

A

D. Configure canary settings for the production stage API. Change the percentage of traffic directed to the canary deployment to 20%. Make planned updates to the production stage. Implement the changes

Canary release is a software development strategy in which a new version of an API (as well as other software) is deployed for testing purposes, and the base version remains deployed as a production release for normal operations on the same stage. For purposes of discussion, we refer to the base version as a production release in this documentation. Although this is reasonable, you are free to apply canary release on any non-production version for testing.

In a canary release deployment, total API traffic is separated at random into a production release and a canary release with a pre-configured ratio. Typically, the canary release receives a small percentage of API traffic and the production release takes up the rest. The updated API features are only visible to API traffic through the canary. You can adjust the canary traffic percentage to optimize test coverage or performance.

https://docs.aws.amazon.com/apigateway/latest/developerguide/canary-release.html
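
A boto3 sketch of starting a 20% canary on the prod stage and later promoting it (the REST API ID is a placeholder):

```python
import boto3

apigw = boto3.client("apigateway")

# Deploy the planned changes as a canary that receives 20% of prod traffic.
deployment = apigw.create_deployment(
    restApiId="a1b2c3d4e5",  # placeholder REST API ID
    stageName="prod",
    canarySettings={"percentTraffic": 20.0, "useStageCache": False},
)

# Once the canary looks healthy, promote it and clear the canary settings.
apigw.update_stage(
    restApiId="a1b2c3d4e5",
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/deploymentId", "value": deployment["id"]},
        {"op": "remove", "path": "/canarySettings"},
    ],
)
```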

96
Q

668 # A company has a large data workload that lasts 6 hours a day. The company cannot lose any data while the process is running. A solutions architect is designing an Amazon EMR cluster configuration to support this critical data workload. Which solution will meet these requirements in the MOST cost-effective way?

A. Configure a long-running cluster that runs the primary node and core nodes on On-Demand Instances and the task nodes on Spot Instances.
B. Configure a transient cluster that runs the primary node and core nodes on On-Demand Instances and the task nodes on Spot Instances.
C. Configure a transient cluster that runs the primary node on an On-Demand Instance and the core nodes and task nodes on Spot Instances.
D. Configure a long-running cluster that runs the primary node on an On-Demand Instance, the core nodes on Spot Instances, and the task nodes on Spot Instances.

A

B. Configure a transient cluster that runs the primary node and core nodes on On-Demand Instances and the task nodes on Spot Instances.

In this option, the cluster is configured as transient, which fits a workload that runs for only 6 hours each day. The primary node and core nodes run on On-Demand Instances for stability, so the HDFS data stored on the core nodes cannot be lost to a Spot interruption. Task nodes run on Spot Instances, which lowers cost; because they do not store HDFS data, an interruption slows processing but does not lose data.

From the documentation: When you configure termination after step execution, the cluster starts, runs bootstrap actions, and then runs the steps that you specify. As soon as the last step completes, Amazon EMR terminates the cluster’s Amazon EC2 instances. Clusters that you launch with the Amazon EMR API have step execution enabled by default. Termination after step execution is effective for clusters that perform a periodic processing task, such as a daily data processing run. Step execution also helps you ensure that you are billed only for the time required to process your data.

https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-longrunning-transient.html
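
A hedged sketch of such a transient cluster using boto3 run_job_flow; the release label, instance types and counts, the Spark step, and S3 paths are placeholders:

```python
import boto3

emr = boto3.client("emr")

emr.run_job_flow(
    Name="daily-data-workload",
    ReleaseLabel="emr-6.15.0",  # assumed EMR release
    Applications=[{"Name": "Spark"}],
    ServiceRole="EMR_DefaultRole",
    JobFlowRole="EMR_EC2_DefaultRole",
    Instances={
        "KeepJobFlowAliveWhenNoSteps": False,  # transient: terminate after the steps finish
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "Market": "ON_DEMAND",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "Market": "ON_DEMAND",   # HDFS data stays safe
             "InstanceType": "m5.xlarge", "InstanceCount": 3},
            {"InstanceRole": "TASK", "Market": "SPOT",        # cheap, interruptible compute
             "InstanceType": "m5.xlarge", "InstanceCount": 6},
        ],
    },
    Steps=[{
        "Name": "daily-processing",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://example-jobs/daily_job.py"],  # placeholder script
        },
    }],
)
```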

97
Q

669 # A company maintains an Amazon RDS database that maps users to cost centers. The company has accounts in an organization in AWS Organizations. The company needs a solution that tags all resources that are created in a specific AWS account in the organization. The solution must tag each resource with the cost center ID of the user who created the resource. Which solution will meet these requirements?

A. Move the specific AWS account to a new organizational unit (OU) in Organizations from the management account. Create a service control policy (SCP) that requires all resources to have the correct cost center tag before the resources are created. Apply the SCP to the new OU.
B. Create an AWS Lambda function to tag the resources after the Lambda function finds the appropriate cost center from the RDS database. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail events to invoke the Lambda function.
C. Create an AWS CloudFormation stack to implement an AWS Lambda function. Configure the Lambda function to look up the appropriate cost center in the RDS database and to tag the resources. Creates an Amazon EventBridge scheduled rule to invoke the CloudFormation stack.
D. Create an AWS Lambda function to tag resources with a default value. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail events to invoke the Lambda function when a resource is missing the cost center tag.

A

B. Create an AWS Lambda function to tag the resources after the Lambda function looks up the appropriate cost center from the RDS database. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail events to invoke the Lambda function.

In option B, an AWS Lambda function is used to tag the resources. This Lambda function is configured to look up the appropriate cost center from the RDS database. This ensures that each resource is tagged with the correct cost center ID. Using Amazon EventBridge in conjunction with AWS CloudTrail events allows you to trigger the Lambda function when resources are created. This ensures that the tagging process is automatically started whenever a relevant event occurs.
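
A sketch of the event wiring in boto3; the pattern matches EC2 RunInstances calls recorded by CloudTrail as an example, and the function name and ARNs are placeholders (the cost-center lookup and tagging logic live inside the Lambda function):

```python
import boto3
import json

events = boto3.client("events")
lam = boto3.client("lambda")

rule_arn = events.put_rule(
    Name="tag-new-resources",
    EventPattern=json.dumps({
        "source": ["aws.ec2"],  # example service; add other sources as needed
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {"eventName": ["RunInstances"]},
    }),
)["RuleArn"]

events.put_targets(
    Rule="tag-new-resources",
    Targets=[{"Id": "cost-center-tagger",
              "Arn": "arn:aws:lambda:us-east-1:111122223333:function:cost-center-tagger"}],
)

# Allow EventBridge to invoke the tagging function.
lam.add_permission(
    FunctionName="cost-center-tagger",
    StatementId="allow-eventbridge",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule_arn,
)
```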

98
Q

670 # A company recently migrated its web application to the AWS cloud. The company uses an Amazon EC2 instance to run multiple processes to host the application. The processes include an Apache web server that serves static content. The Apache web server makes requests to a PHP application that uses a local Redis server for user sessions. The company wants to redesign the architecture to be highly available and use solutions managed by AWS. What solution will meet these requirements?

A. Use AWS Elastic Beanstalk to host the static content and the PHP application. Configure Elastic Beanstalk to deploy your EC2 instance on a public subnet. Assign a public IP address.
B. Use AWS Lambda to host the static content and the PHP application. Use an Amazon API Gateway REST API to proxy requests to the Lambda function. Set the API Gateway CORS configuration to respond to the domain name. Configure Amazon ElastiCache for Redis to handle session information.
C. Keep the backend code on the EC2 instance. Create an Amazon ElastiCache for Redis cluster that has Multi-AZ enabled. Configure the ElastiCache for Redis cluster in cluster mode. Copy the frontend resources to Amazon S3. Configure the backend code to reference the EC2 instance.
D. Configure an Amazon CloudFront distribution with an Amazon S3 endpoint to an S3 bucket that is configured to host the static content. Configure an application load balancer that targets an Amazon Elastic Container Service (Amazon ECS) service that runs AWS Fargate tasks for the PHP application. Configure the PHP application to use an Amazon ElastiCache for Redis cluster that runs in multiple availability zones.

A

D. Configure an Amazon CloudFront distribution with an Amazon S3 endpoint to an S3 bucket that is configured to host the static content. Configure an application load balancer that targets an Amazon Elastic Container Service (Amazon ECS) service that runs AWS Fargate tasks for the PHP application. Configure the PHP application to use an Amazon ElastiCache for Redis cluster that runs in multiple availability zones.

This option leverages Amazon CloudFront for global content delivery, providing low-latency access to static resources. Separating static content (hosted on S3) and dynamic content (running on ECS with Fargate) is a good practice for scalability and manageability. Using an application load balancer with AWS Fargate tasks for your PHP application provides a highly available and scalable environment. The Amazon ElastiCache for Redis cluster with Multi-AZ is used for session management, ensuring high availability. Option D is a well-designed solution that leverages multiple AWS managed services to achieve high availability, scalability, and separation of concerns. It aligns with best practices for hosting web applications on AWS.
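
For the session-management piece, a boto3 sketch of a Multi-AZ ElastiCache for Redis replication group (identifiers and node type are placeholders):

```python
import boto3

elasticache = boto3.client("elasticache")

elasticache.create_replication_group(
    ReplicationGroupId="php-sessions",  # placeholder ID
    ReplicationGroupDescription="Session store for the PHP application",
    Engine="redis",
    CacheNodeType="cache.t4g.small",    # assumed node type
    NumCacheClusters=2,                 # primary plus one replica in another AZ
    AutomaticFailoverEnabled=True,
    MultiAZEnabled=True,
)
# The PHP application's session handler then points at the replication group's
# primary endpoint instead of the local Redis process.
```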

99
Q

671 # A company runs a web application on Amazon EC2 instances in an Auto Scaling group that has a target group. The company designed the app to work with session affinity (sticky sessions) for a better user experience. The application must be publicly available via the Internet as an endpoint. A WAF should be applied to the endpoint for added security. Session affinity (sticky sessions) must be configured on the endpoint. What combination of steps will meet these requirements? (Choose two.)
A. Create a public network load balancer. Specify the target group for the application.
B. Create a gateway load balancer. Specify the target group for the application.
C. Create a public application load balancer. Specify the application target group.
D. Create a second target group. Add elastic IP addresses to EC2 instances.
E. Create a web ACL in AWS WAF. Associate the web ACL with the endpoint

A

C. Create a public application load balancer. Specify the application target group.
E. Create a web ACL in AWS WAF. Associate the web ACL with the endpoint

An application load balancer (ALB) is designed for HTTP/HTTPS traffic and supports session affinity (sticky sessions). By creating a public ALB, you can expose your web application to the Internet with the necessary routing and load balancing capabilities.

AWS WAF (Web Application Firewall) provides protection against common web exploits and attacks. By creating a web ACL, you can define rules to filter and control web traffic.

Associating the web ACL with the endpoint ensures that the web application is protected by the specified security policies.
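
A sketch of the two pieces in boto3: enabling sticky sessions on the ALB target group and associating the web ACL with the load balancer (all ARNs are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")
wafv2 = boto3.client("wafv2")

alb_arn = ("arn:aws:elasticloadbalancing:us-east-1:111122223333:"
           "loadbalancer/app/web-alb/0123456789abcdef")  # placeholder ALB

# Enable duration-based sticky sessions on the target group.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                   "targetgroup/web-app/0123456789abcdef",  # placeholder target group
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
    ],
)

# Attach the regional web ACL to the public ALB endpoint.
wafv2.associate_web_acl(
    WebACLArn="arn:aws:wafv2:us-east-1:111122223333:regional/webacl/"
              "web-acl/11111111-2222-3333-4444-555555555555",  # placeholder web ACL
    ResourceArn=alb_arn,
)
```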

100
Q

672 # A company has a website that stores images of historical events. Website users need the ability to search and view images based on the year the event in the image occurred. On average, users request each image only once or twice a year. The company wants a highly available solution for storing and delivering images to users. Which solution will meet these requirements in the MOST cost-effective way?

A. Store images in Amazon Elastic Block Store (Amazon EBS). Use a web server running on Amazon EC2.
B. Store images in Amazon Elastic File System (Amazon EFS). Use a web server running on Amazon EC2.
C. Store images in Amazon S3 Standard. Use S3 Standard to deliver images directly using a static website.
D. Store images in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Use S3 Standard-IA to deliver images directly using a static website.

A

D. Store images in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Use S3 Standard-IA to deliver images directly using a static website.

S3 Standard-IA stores data redundantly across multiple Availability Zones for high availability, charges a lower per-GB storage price than S3 Standard, and still serves objects with millisecond latency. Because each image is requested only once or twice a year, the lower storage cost outweighs the per-request retrieval charge, making this the most cost-effective option.