sthithapragnakk -- SAA Exam Dumps Jan 24 1-100 Flashcards
(100 cards)
Question #: 651
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company uses Amazon FSx for NetApp ONTAP in its primary AWS Region for CIFS and NFS file shares. Applications running on Amazon EC2 instances access the file shares. The company needs a storage disaster recovery (DR) solution in a secondary Region. Data that is replicated in the secondary Region needs to be accessed by using the same protocols as in the primary Region. Which solution will meet these requirements with the LEAST operational overhead?
A. Create an AWS Lambda function to copy the data to an Amazon S3 bucket. Replicate the S3 bucket to the secondary Region.
B. Create an FSx backup for ONTAP volumes using AWS Backup. Copy the volumes to the secondary region. Create a new FSx instance for ONTAP from the backup.
C. Create an FSx instance for ONTAP in the secondary region. Use NetApp SnapMirror to replicate data from the primary region to the secondary region.
D. Create an Amazon Elastic File System (Amazon EFS) volume. Migrate the current data to the volume. Replicate the volume to the secondary Region.
C. Create an FSx instance for ONTAP in the secondary region. Use NetApp SnapMirror to replicate data from the primary region to the secondary region.
NetApp SnapMirror is ONTAP's native replication feature, designed for efficient replication between a primary and a secondary ONTAP system. It meets the requirement of serving the replicated data over the same protocols (CIFS and NFS) in the secondary Region, and because replication is built into the service, it has the least operational overhead of the options.
Question #: 652
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company has a large data workload that runs for 6 hours each day. The company cannot lose any data while the process is running. A solutions architect is designing an Amazon EMR cluster configuration to support this critical data workload.
Which solution will meet these requirements MOST cost-effectively?
A. Configure a long-running cluster that runs the primary node and core nodes on On-Demand Instances and the task nodes on Spot Instances.
B. Configure a transient cluster that runs the primary node and core nodes on On-Demand Instances and the task nodes on Spot Instances.
C. Configure a transient cluster that runs the primary node on an On-Demand Instance and the core nodes and task nodes on Spot Instances.
D. Configure a long-running cluster that runs the primary node on an On-Demand Instance, the core nodes on Spot Instances, and the task nodes on Spot Instances.
B. Configure a transient cluster that runs the primary node and core nodes on On-Demand Instances and the task nodes on Spot Instances.
Options B and C look similar; the difference is where Spot Instances are used. In option C, both the core nodes and the task nodes run on Spot Instances. If those Spot Instances are reclaimed, the cluster has no workers left at all, and data can be lost, because core nodes run HDFS and store the cluster's data. Having the primary node on On-Demand does not help if every worker disappears. Option B keeps both the primary node and the core nodes on On-Demand Instances, so the cluster always has reliable workers and its data is safe; only the task nodes, which are stateless, run on Spot. A transient cluster is also more cost-effective than a long-running one (options A and D) for a workload that runs only 6 hours each day. For these reasons, option B is correct.
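The layout option B describes can be sketched as a boto3 instance-group configuration. This is a minimal sketch: the instance types, counts, and cluster name are illustrative assumptions, not values from the question, and the actual `run_job_flow` call is shown commented out.

```python
# Sketch of the EMR layout from option B: transient cluster, On-Demand
# primary and core nodes, Spot task nodes. Types/counts are assumptions.

def emr_instance_groups():
    return [
        {"Name": "Primary", "InstanceRole": "MASTER",
         "Market": "ON_DEMAND", "InstanceType": "m5.xlarge", "InstanceCount": 1},
        # Core nodes run HDFS and store the cluster's data: never put them on Spot
        # when data loss is unacceptable.
        {"Name": "Core", "InstanceRole": "CORE",
         "Market": "ON_DEMAND", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        # Task nodes are stateless compute, so Spot interruption only slows the job.
        {"Name": "Task", "InstanceRole": "TASK",
         "Market": "SPOT", "InstanceType": "m5.xlarge", "InstanceCount": 4},
    ]

# A transient cluster is expressed with KeepJobFlowAliveWhenNoSteps=False
# in the run_job_flow call (not executed here):
# emr = boto3.client("emr")
# emr.run_job_flow(Name="daily-batch",
#                  Instances={"InstanceGroups": emr_instance_groups(),
#                             "KeepJobFlowAliveWhenNoSteps": False})
print([g["Market"] for g in emr_instance_groups()])
```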
Question #: 653
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company maintains an Amazon RDS database that maps users to cost centers. The company has accounts in an organization in AWS Organizations. The company needs a solution that will tag all resources that are created in a specific AWS account in the organization. The solution must tag each resource with the cost center ID of the user who created the resource.
Which solution will meet these requirements?
A. Move the specific AWS account to a new organizational unit (OU) in Organizations from the management account. Create a service control policy (SCP) that requires all existing resources to have the correct cost center tag before the resources are created. Apply the SCP to the new OU.
B. Create an AWS Lambda function to tag the resources after the Lambda function looks up the appropriate cost center from the RDS database. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail events to invoke the Lambda function.
C. Create an AWS CloudFormation stack to deploy an AWS Lambda function. Configure the Lambda function to look up the appropriate cost center from the RDS database and to tag resources. Create an Amazon EventBridge scheduled rule to invoke the CloudFormation stack.
D. Create an AWS Lambda function to tag the resources with a default value. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail events to invoke the Lambda function when a resource is missing the cost center tag.
B. Create an AWS Lambda function to tag the resources after the Lambda function looks up the appropriate cost center from the RDS database. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail events to invoke the Lambda function.
Option A can be crossed out: it suggests using a service control policy (SCP) to enforce tagging before resource creation, but SCPs only allow or deny actions; they cannot perform tagging operations or look up a cost center. Option C uses a scheduled EventBridge rule to invoke a CloudFormation stack, which introduces unnecessary complexity and, because it runs on a schedule, does not ensure that resources are tagged immediately upon creation. Option D tags resources with a default value and then reacts to events to correct the tag; this introduces delay and does not guarantee that resources are tagged correctly from the start. Option B is correct: a Lambda function looks up the appropriate cost center in the RDS database, so each resource is tagged with the correct cost center ID rather than a default value, and an EventBridge rule reacting to CloudTrail events invokes the function as soon as a resource is created, rather than on a schedule. This ensures that tagging happens automatically whenever a relevant creation event occurs.
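A minimal sketch of the Lambda handler option B describes. The cost-center lookup is stubbed with a hardcoded dictionary (a real function would query the RDS database), the user names and cost-center IDs are invented for illustration, and the actual tagging call is shown commented out. The event shape follows the CloudTrail-via-EventBridge format.

```python
# Hypothetical tagging Lambda: look up the creator's cost center, build tags.

def lookup_cost_center(user_name):
    # Stub: a real implementation would SELECT from the RDS users table.
    return {"alice": "CC-1001"}.get(user_name, "CC-UNKNOWN")

def handler(event, context=None):
    detail = event["detail"]  # CloudTrail event delivered via EventBridge
    user = detail["userIdentity"]["userName"]
    tags = {"CostCenter": lookup_cost_center(user)}
    # The real tagging call, not executed in this sketch:
    # boto3.client("resourcegroupstaggingapi").tag_resources(
    #     ResourceARNList=[...], Tags=tags)
    return tags

sample = {"detail": {"userIdentity": {"userName": "alice"},
                     "eventName": "RunInstances"}}
print(handler(sample))  # {'CostCenter': 'CC-1001'}
```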
Question #: 654
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company recently migrated its web application to the AWS Cloud. The company uses an Amazon EC2 instance to run multiple processes to host the application. The processes include an Apache web server that serves static content. The Apache web server makes requests to a PHP application that uses a local Redis server for user sessions.
The company wants to redesign the architecture to be highly available and to use AWS managed solutions.
Which solution will meet these requirements?
A. Use AWS Elastic Beanstalk to host the static content and the PHP application. Configure Elastic Beanstalk to deploy its EC2 instance into a public subnet. Assign a public IP address.
B. Use AWS Lambda to host the static content and the PHP application. Use an Amazon API Gateway REST API to proxy requests to the Lambda function. Set the API Gateway CORS configuration to respond to the domain name. Configure Amazon ElastiCache for Redis to handle session information.
C. Keep the backend code on the EC2 instance. Create an Amazon ElastiCache for Redis cluster that has Multi-AZ enabled. Configure the ElastiCache for Redis cluster in cluster mode. Copy the frontend resources to Amazon S3. Configure the backend code to reference the EC2 instance.
D. Configure an Amazon CloudFront distribution with an Amazon S3 endpoint to an S3 bucket that is configured to host the static content. Configure an Application Load Balancer that targets an Amazon Elastic Container Service (Amazon ECS) service that runs AWS Fargate tasks for the PHP application. Configure the PHP application to use an Amazon ElastiCache for Redis cluster that runs in multiple Availability Zones.
D. Configure an Amazon CloudFront distribution with an Amazon S3 endpoint to an S3 bucket that is configured to host the static content. Configure an Application Load Balancer that targets an Amazon Elastic Container Service (Amazon ECS) service that runs AWS Fargate tasks for the PHP application. Configure the PHP application to use an Amazon ElastiCache for Redis cluster that runs in multiple Availability Zones.
Whenever you see static content, think Amazon S3 plus Amazon CloudFront; those two are the standard combination for serving static content. With that hint, option D clearly stands out, but let's see why the other options are wrong. Option A: Elastic Beanstalk is a managed service that simplifies deploying and running applications, but a single EC2 instance with a public IP address in a public subnet is not a highly available architecture, and the option does not use a managed service for static content delivery. Option B: Lambda behind an API Gateway REST API is a common serverless pattern, but it is a poor fit for hosting an entire PHP web application, and the option lacks a clean separation between static content, dynamic content, and session management. Option C keeps the backend code on the EC2 instance, but the company explicitly wants managed solutions, and a single instance is not highly available. Using ElastiCache for Redis with Multi-AZ is a good choice for session management, and copying the frontend to S3 is a step toward better scalability, but the architecture as a whole still depends on one unmanaged instance.
That leaves option D. CloudFront delivers the static content hosted in S3 globally with low latency. An Application Load Balancer targeting an ECS service running Fargate tasks gives the PHP application a scalable, highly available, fully managed compute tier. An ElastiCache for Redis cluster running in multiple Availability Zones handles session state with high availability. The design separates static content, dynamic content, and sessions, uses managed services throughout, and aligns with AWS best practices for hosting web applications.
Question #: 655
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company runs a web application on Amazon EC2 instances in an Auto Scaling group that has a target group. The company designed the application to work with session affinity (sticky sessions) for a better user experience.
The application must be available publicly over the internet as an endpoint. A WAF must be applied to the endpoint for additional security. Session affinity (sticky sessions) must be configured on the endpoint.
Which combination of steps will meet these requirements? (Choose two.)
A. Create a public Network Load Balancer. Specify the application target group.
B. Create a Gateway Load Balancer. Specify the application target group.
C. Create a public Application Load Balancer. Specify the application target group.
D. Create a second target group. Add Elastic IP addresses to the EC2 instances.
E. Create a web ACL in AWS WAF. Associate the web ACL with the endpoint
C. Create a public Application Load Balancer. Specify the application target group.
E. Create a web ACL in AWS WAF. Associate the web ACL with the endpoint
Option A: create a public Network Load Balancer. NLBs are one of the load balancer types, alongside Application Load Balancers, Gateway Load Balancers, and the older Classic Load Balancer. Here is a tip worth remembering: NLB means TCP/UDP, ALB means HTTP/HTTPS. NLBs are designed for TCP and UDP traffic and have no native support for cookie-based session affinity (sticky sessions), so an ALB is far more suitable for this HTTP application. Option B: a Gateway Load Balancer handles traffic at the network and transport layers, typically in front of third-party appliances; it is not used for HTTP/HTTPS traffic and does not support session affinity. Option D: adding Elastic IP addresses to the EC2 instances is unrelated to session affinity or to applying a web application firewall; session affinity is managed by the load balancer, and WAF is a separate security service.
That leaves options C and E. An Application Load Balancer is designed for HTTP/HTTPS traffic and supports sticky sessions, so creating a public ALB exposes the web application to the internet with the required routing, load balancing, and session affinity (option C). For the WAF requirement: AWS WAF protects against common web exploits; as you may remember from the Cloud Practitioner material, whenever you hear SQL injection or cross-site scripting, think AWS WAF. You create a web ACL with rules to filter and control web traffic, and associating the web ACL with the endpoint ensures the application is protected by those policies (option E). In summary, C and E together expose the web application with session affinity and apply WAF for additional security.
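The two chosen steps can be sketched as boto3 calls. The target-group attribute keys below are the real ALB stickiness attributes; the ARNs and the 86400-second cookie duration are placeholder assumptions, and the API calls themselves are shown commented out since they require live AWS resources.

```python
# Step C: enable sticky sessions on the ALB target group.
# Step E: associate a WAF web ACL with the load balancer.

def stickiness_attributes(duration_seconds=86400):
    # Target-group attributes that turn on ALB cookie-based stickiness.
    return [
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds",
         "Value": str(duration_seconds)},
    ]

# Not executed here; TG_ARN / ACL_ARN / ALB_ARN are placeholders:
# elbv2 = boto3.client("elbv2")
# elbv2.modify_target_group_attributes(TargetGroupArn=TG_ARN,
#                                      Attributes=stickiness_attributes())
# wafv2 = boto3.client("wafv2")
# wafv2.associate_web_acl(WebACLArn=ACL_ARN, ResourceArn=ALB_ARN)
print(stickiness_attributes())
```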
Question #: 656
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company runs a website that stores images of historical events. Website users need the ability to search and view images based on the year that the event in the image occurred. On average, users request each image only once or twice a year. The company wants a highly available solution to store and deliver the images to users.
Which solution will meet these requirements MOST cost-effectively?
A. Store images in Amazon Elastic Block Store (Amazon EBS). Use a web server that runs on Amazon EC2.
B. Store images in Amazon Elastic File System (Amazon EFS). Use a web server that runs on Amazon EC2.
C. Store images in Amazon S3 Standard. Use S3 Standard to directly deliver images by using a static website.
D. Store images in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Use S3 Standard-IA to directly deliver images by using a static website.
D. Store images in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Use S3 Standard-IA to directly deliver images by using a static website.
Whenever a question asks for storage for images, audio, or video files, go with S3; there is no more cost-effective solution for that kind of content. That immediately eliminates options A and B, leaving C and D, which are both S3. Option C uses S3 Standard to directly deliver the images from a static website. It would work, but for "most cost-effective" it is too much: Standard is the most expensive S3 storage class and is priced for frequent, anytime access, while here users request each image only once or twice a year. Option D uses S3 Standard-Infrequent Access, which, as the name suggests, is designed for files that are accessed infrequently, exactly our case: storage is cheaper, with a small per-GB retrieval fee that barely matters at one or two reads a year. All four options could technically serve the images, and both S3 classes are highly available and can host a static website directly, but D is the most cost-effective for this access pattern. This question really combines three or four concepts, so if you do not know these services and storage classes, it is hard to answer.
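A back-of-the-envelope comparison makes the cost argument concrete. The prices below are illustrative assumptions in the style of us-east-1 list pricing (Standard ~$0.023/GB-month, Standard-IA ~$0.0125/GB-month plus ~$0.01/GB retrieval); check current pricing before relying on them.

```python
# Annual cost: storage per month x 12, plus per-retrieval data charges.

def annual_cost(gb, storage_per_gb_month, retrievals_per_year=0,
                retrieval_per_gb=0.0):
    storage = gb * storage_per_gb_month * 12
    retrieval = gb * retrieval_per_gb * retrievals_per_year
    return storage + retrieval

gb = 100  # assumed dataset size for illustration
standard = annual_cost(gb, 0.023)
ia = annual_cost(gb, 0.0125, retrievals_per_year=2, retrieval_per_gb=0.01)
print(round(standard, 2), round(ia, 2))  # 27.6 17.0
```

At one or two reads per object per year, the retrieval fee is tiny compared to the storage savings, which is why Standard-IA wins; at dozens of reads per year the retrieval charges would tip the balance back toward Standard.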
Question #: 657
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company has multiple AWS accounts in an organization in AWS Organizations that different business units use. The company has multiple offices around the world. The company needs to update security group rules to allow new office CIDR ranges or to remove old CIDR ranges across the organization. The company wants to centralize the management of security group rules to minimize the administrative overhead that updating CIDR ranges requires.
Which solution will meet these requirements MOST cost-effectively?
A. Create VPC security groups in the organization’s management account. Update the security groups when a CIDR range update is necessary.
B. Create a VPC customer managed prefix list that contains the list of CIDRs. Use AWS Resource Access Manager (AWS RAM) to share the prefix list across the organization. Use the prefix list in the security groups across the organization.
C. Create an AWS managed prefix list. Use an AWS Security Hub policy to enforce the security group update across the organization. Use an AWS Lambda function to update the prefix list automatically when the CIDR ranges change.
D. Create security groups in a central administrative AWS account. Create an AWS Firewall Manager common security group policy for the whole organization. Select the previously created security groups as primary groups in the policy.
B. Create a VPC customer managed prefix list that contains the list of CIDRs. Use AWS Resource Access Manager (AWS RAM) to share the prefix list across the organization. Use the prefix list in the security groups across the organization.
Option A is not correct for our case: creating VPC security groups in the organization's management account does not actually centralize anything, because security groups are regional and account-scoped; you would still have to create and update security groups in every account, which leads to administrative overhead and does not scale. Option C talks about an AWS managed prefix list plus Security Hub policies and a Lambda function; customers cannot modify AWS-managed prefix lists, and the extra machinery introduces unnecessary complexity and cost. Option D brings in AWS Firewall Manager, which is generally used for managing AWS WAF, AWS Shield Advanced, and security group policies at scale; it could work, but it is overkill for this use case and introduces additional cost. When a question asks to minimize administrative overhead, it means: use the features of the services already mentioned, and do not introduce something new.
That is why option B is the most suitable. A VPC customer-managed prefix list lets you define the list of office CIDR ranges in one place, providing centralized management. AWS Resource Access Manager (AWS RAM) enables resource sharing across AWS accounts, including prefix lists, so sharing the customer-managed prefix list centralizes the CIDR ranges across the organization. Security groups across the organization then reference the shared prefix list in their rules instead of hard-coded CIDRs. When an office CIDR is added or removed, you update the prefix list once, and every security group that references it picks up the change. This approach minimizes administrative overhead, allows centralized control, and provides a scalable, cost-effective way to manage security group rules globally.
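The three steps of option B can be sketched as boto3 calls. The CIDRs, account ID, and organization ARN are placeholder assumptions, and the AWS API calls are shown commented out since they require an AWS environment; only the entry-building helper runs here.

```python
# Option B sketch: prefix list -> share via RAM -> reference in security groups.

def prefix_list_entries(cidrs):
    # Build the Entries structure expected by create_managed_prefix_list.
    return [{"Cidr": c, "Description": f"office {i}"}
            for i, c in enumerate(cidrs, 1)]

entries = prefix_list_entries(["203.0.113.0/24", "198.51.100.0/24"])
print(entries)

# Not executed here (placeholder ARNs/IDs):
# ec2 = boto3.client("ec2")
# pl = ec2.create_managed_prefix_list(PrefixListName="office-cidrs",
#         AddressFamily="IPv4", MaxEntries=10, Entries=entries)
# ram = boto3.client("ram")
# ram.create_resource_share(name="office-cidrs",
#         resourceArns=[pl["PrefixList"]["PrefixListArn"]],
#         principals=["arn:aws:organizations::111111111111:organization/o-example"])
# Security-group rules then reference the prefix list instead of raw CIDRs:
# ec2.authorize_security_group_ingress(GroupId=SG_ID, IpPermissions=[
#     {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
#      "PrefixListIds": [{"PrefixListId": PL_ID}]}])
```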
Question #: 658
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company uses an on-premises network-attached storage (NAS) system to provide file shares to its high performance computing (HPC) workloads. The company wants to migrate its latency-sensitive HPC workloads and its storage to the AWS Cloud. The company must be able to provide NFS and SMB multi-protocol access from the file system.
Which solutions will meet these requirements with the LEAST latency? (Choose two.)
A. Deploy compute optimized EC2 instances into a cluster placement group.
B. Deploy compute optimized EC2 instances into a partition placement group.
C. Attach the EC2 instances to an Amazon FSx for Lustre file system.
D. Attach the EC2 instances to an Amazon FSx for OpenZFS file system.
E. Attach the EC2 instances to an Amazon FSx for NetApp ONTAP file system.
A. Deploy compute optimized EC2 instances into a cluster placement group.
E. Attach the EC2 instances to an Amazon FSx for NetApp ONTAP file system.
This question splits into two groups: A and B are one choice (placement group), and C, D, and E are the other (file system). Between A and B, go with A. Cluster placement groups pack instances close together within a single Availability Zone to provide the lowest-latency network performance, which is exactly what tightly coupled HPC workloads need; cluster placement groups and HPC go hand in hand. Partition placement groups can also provide low-latency networking, but they are designed to spread instances across partitions for fault isolation, so cluster placement groups are generally preferred when HPC network performance is the priority. Among C, D, and E, pick based on the NFS and SMB multi-protocol requirement: of the three, only Amazon FSx for NetApp ONTAP supports both NFS and SMB. FSx for OpenZFS supports NFS only, and FSx for Lustre uses the Lustre client rather than NFS or SMB. So A and E together satisfy the entire question.
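The protocol comparison the explanation refers to can be written down as a small lookup table. The table is my summary of FSx protocol support, not part of the original question, so verify it against the FSx documentation.

```python
# Protocol support per FSx flavor (summary; iSCSI included for completeness).
FSX_PROTOCOLS = {
    "FSx for NetApp ONTAP": {"NFS", "SMB", "iSCSI"},
    "FSx for OpenZFS": {"NFS"},
    "FSx for Lustre": set(),  # POSIX Lustre client, not NFS/SMB
    "FSx for Windows File Server": {"SMB"},
}

needed = {"NFS", "SMB"}
matches = [fs for fs, protos in FSX_PROTOCOLS.items() if needed <= protos]
print(matches)  # ['FSx for NetApp ONTAP']
```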
Question #: 483
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C02 Questions]
A company is relocating its data center and wants to securely transfer 50 TB of data to AWS within 2 weeks. The existing data center has a Site-to-Site VPN connection to AWS that is 90% utilized.
Which AWS service should a solutions architect use to meet these requirements?
A. AWS DataSync with a VPC endpoint
B. AWS Direct Connect
C. AWS Snowball Edge Storage Optimized
D. AWS Storage Gateway
C. AWS Snowball Edge Storage Optimized
Since the VPN is already 90% utilized, the remaining 10% of bandwidth cannot move 50 TB of data within two weeks. And whenever a question asks to move a large amount of data within a week or two over a constrained network, it is usually pointing you to the Snowball family. Option A, DataSync with a VPC endpoint, still sends the data over the network, and there is not enough bandwidth available, so cross it out. Option B, Direct Connect, is a dedicated connection between on premises and AWS, but we would not set one up for a one-time secure transfer: provisioning Direct Connect typically takes a month or more, which makes it infeasible and overkill for a two-week deadline. Option D, Storage Gateway, enables hybrid cloud storage so on-premises environments can access cloud storage; it is not a one-time bulk migration service, and it also runs over the already saturated network. That leaves option C: Snowball Edge Storage Optimized is a physical device designed for transferring large amounts of data to and from AWS. AWS ships the device to your data center, you load the data onto it locally, and you ship it back; AWS then imports the data into your S3 bucket.
So here is the hint: whenever a question says the network bandwidth is already heavily utilized, it is steering you toward a physical transfer device.
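A quick calculation shows why the VPN is out of the question. The 1 Gbps link speed is an illustrative assumption (the question only says the VPN is 90% utilized); even with that generous figure, the remaining 10% of bandwidth cannot finish in two weeks.

```python
# Time to push 50 TB through the 10% of a 1 Gbps VPN that is still free.
link_gbps = 1.0            # assumed VPN throughput
headroom = 0.10            # 90% already utilized
usable_gbps = link_gbps * headroom   # 0.1 Gbps ~ 12.5 MB/s

data_tb = 50
data_gbits = data_tb * 8 * 1000      # 400,000 gigabits

seconds = data_gbits / usable_gbps
days = seconds / 86400
print(round(days, 1))  # ~46.3 days, far beyond the 2-week window
```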
Question #: 660
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company hosts an application on Amazon EC2 On-Demand Instances in an Auto Scaling group. Application peak hours occur at the same time each day. Application users report slow application performance at the start of peak hours. The application performs normally 2-3 hours after peak hours begin. The company wants to ensure that the application works properly at the start of peak hours.
Which solution will meet these requirements?
A. Configure an Application Load Balancer to distribute traffic properly to the instances.
B. Configure a dynamic scaling policy for the Auto Scaling group to launch new instances based on memory utilization.
C. Configure a dynamic scaling policy for the Auto Scaling group to launch new instances based on CPU utilization.
D. Configure a scheduled scaling policy for the Auto Scaling group to launch new instances before peak hours.
D. Configure a scheduled scaling policy for the Auto Scaling group to launch new instances before peak hours.
Notice how much the question tells you: the company knows exactly when the peak happens, that it occurs at the same time each day, and how long the slowdown lasts afterward. When a question establishes that the usage pattern is fully known and predictable, it is hinting at scheduled scaling. If it instead said the company does not know when peaks occur, it would be hinting at dynamic scaling, because you cannot schedule something you cannot predict. With that, you can go straight to option D, but let's look at why the other options are wrong. Option A configures an Application Load Balancer; an ALB distributes traffic across existing instances but does not add capacity, so it cannot fix insufficient capacity at the start of peak hours. Option B uses a dynamic scaling policy based on memory utilization; dynamic scaling is reactive, so new instances launch only after load has already risen, which is exactly the 2-3 hour lag users are experiencing, and memory utilization may not align with actual application demand anyway. Option C has the same problem with CPU utilization: it reacts after the peak begins rather than preparing for it.
That leaves option D. Instead of reacting to utilization metrics, a scheduled scaling policy launches new instances before peak hours, so the necessary capacity is already in place when the increased demand arrives and the application performs properly from the start of the peak. By using a scheduled scaling policy, you proactively ensure sufficient capacity for the expected peak. Remember, this only works when you know when the peak occurs; if you do not, dynamic scaling is the appropriate choice.
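Option D can be sketched as a boto3 scheduled action. The cron expression, capacities, and group name are illustrative assumptions (scale out at 08:30 UTC ahead of an assumed 09:00 peak); the API call is shown commented out.

```python
# Scheduled scale-out before daily peak hours (parameters are assumptions).

def scheduled_action_params():
    return {
        "AutoScalingGroupName": "web-asg",
        "ScheduledActionName": "pre-peak-scale-out",
        "Recurrence": "30 8 * * *",   # cron, UTC: fire before the known peak
        "MinSize": 6,
        "MaxSize": 12,
        "DesiredCapacity": 8,
    }

# Not executed here:
# autoscaling = boto3.client("autoscaling")
# autoscaling.put_scheduled_update_group_action(**scheduled_action_params())
print(scheduled_action_params()["Recurrence"])
```

A second scheduled action after peak hours would scale the group back down so the extra On-Demand Instances are not paid for all day.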
Question #: 661
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company runs applications on AWS that connect to the company’s Amazon RDS database. The applications scale on weekends and at peak times of the year. The company wants to scale the database more effectively for its applications that connect to the database.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon DynamoDB with connection pooling with a target group configuration for the database. Change the applications to use the DynamoDB endpoint.
B. Use Amazon RDS Proxy with a target group for the database. Change the applications to use the RDS Proxy endpoint.
C. Use a custom proxy that runs on Amazon EC2 as an intermediary to the database. Change the applications to use the custom proxy endpoint.
D. Use an AWS Lambda function to provide connection pooling with a target group configuration for the database. Change the applications to use the Lambda function.
B. Use Amazon RDS Proxy with a target group for the database. Change the applications to use the RDS Proxy endpoint.
As I always say, least operational overhead means use a feature that is part of the service mentioned in the question; don't build your own solution. With that logic you can pick the correct answer without even going through each option: A, C, and D all build a new solution, whereas option B uses a feature of RDS itself, so that's practically a free giveaway. But as usual, let's go through why the other options are wrong. Option A is not suitable because DynamoDB is a NoSQL database service and is not a drop-in replacement for Amazon RDS, which the applications already depend on. Option C, managing a custom proxy on EC2, introduces additional operational complexity and maintenance overhead; it may work, but it is not the least-overhead solution. Option D: while Lambda can be used for specific use cases, it is not well suited for connection pooling and managing database connections because of its stateless nature and limits on connection persistence. That leaves option B, RDS Proxy, which is a managed database proxy that provides connection pooling, failover, and security features for database applications. It allows applications to scale more effectively by managing database connections on their behalf, it integrates well with RDS, and it reduces operational overhead.
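For a concrete picture of option B, here is a hedged sketch of creating an RDS Proxy with boto3. All names and ARNs are hypothetical, and the call itself is commented out; it assumes an existing database, a Secrets Manager secret holding the DB credentials, and an IAM role the proxy can use.

```python
# Sketch of creating an RDS Proxy in front of an RDS for MySQL database.
# Every identifier below is hypothetical.
proxy_params = {
    "DBProxyName": "app-db-proxy",
    "EngineFamily": "MYSQL",
    "Auth": [{
        "AuthScheme": "SECRETS",   # proxy reads DB credentials from Secrets Manager
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-creds",
        "IAMAuth": "DISABLED",
    }],
    "RoleArn": "arn:aws:iam::111122223333:role/rds-proxy-role",
    "VpcSubnetIds": ["subnet-aaaa", "subnet-bbbb"],
}
# import boto3
# boto3.client("rds").create_db_proxy(**proxy_params)
# Applications then connect to the proxy endpoint instead of the DB endpoint.
```

The only application-side change is swapping the connection string to the proxy endpoint, which is exactly why this is the least-overhead option.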
678 Question #: 662
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company uses AWS Cost Explorer to monitor its AWS costs. The company notices that Amazon Elastic Block Store (Amazon EBS) storage and snapshot costs increase every month. However, the company does not purchase additional EBS storage every month. The company wants to optimize monthly costs for its current storage usage.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use logs in Amazon CloudWatch Logs to monitor the storage utilization of Amazon EBS. Use Amazon EBS Elastic Volumes to reduce the size of the EBS volumes.
B. Use a custom script to monitor space usage. Use Amazon EBS Elastic Volumes to reduce the size of the EBS volumes.
C. Delete all expired and unused snapshots to reduce snapshot costs.
D. Delete all nonessential snapshots. Use Amazon Data Lifecycle Manager to create and manage the snapshots according to the company’s snapshot policy requirements.
D. Delete all nonessential snapshots. Use Amazon Data Lifecycle Manager to create and manage the snapshots according to the company’s snapshot policy requirements.
So again, another least operational overhead question; the previous logic applies here as well. Now let's look at the options. Option A is not the right one. Why? Because while CloudWatch Logs can provide insights into utilization, using Elastic Volumes to resize volumes involves manual intervention and is operationally intensive. Option B is similar to option A, but it uses a custom script, which introduces its own operational overhead, and manually resizing volumes is still not the most efficient solution. Then we have option C: deleting expired and unused snapshots is a good practice, but it's a one-time cleanup with no ongoing automation, so it doesn't fully address the month-over-month cost growth on its own. That leaves us with option D.
What about option D? It addresses the specific concern of nonessential snapshots and uses Amazon Data Lifecycle Manager to automate the snapshot management process going forward, which is a more streamlined and automated approach with less operational overhead. Therefore, option D provides the most efficient and automated way to manage snapshots and optimize costs for the company's current storage usage. Think of S3 Lifecycle rules: those are for S3 objects, whereas Data Lifecycle Manager does the equivalent job for EBS volumes and snapshots.
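To make option D concrete, here is a sketch of a Data Lifecycle Manager policy. The tag, schedule, and role ARN are hypothetical, and the API call is commented out; this is just the shape of the policy, not a definitive implementation.

```python
# Sketch of a DLM policy: snapshot tagged volumes daily, keep only the last 7.
dlm_policy_details = {
    "PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
    "ResourceTypes": ["VOLUME"],
    "TargetTags": [{"Key": "Backup", "Value": "true"}],   # hypothetical tag
    "Schedules": [{
        "Name": "daily-snapshots",
        "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
        "RetainRule": {"Count": 7},   # older snapshots are deleted automatically
    }],
}
# import boto3
# boto3.client("dlm").create_lifecycle_policy(
#     ExecutionRoleArn="arn:aws:iam::111122223333:role/dlm-role",  # hypothetical
#     Description="Daily EBS snapshots with 7-day retention",
#     State="ENABLED",
#     PolicyDetails=dlm_policy_details,
# )
```

The `RetainRule` is what stops the monthly cost creep: old snapshots expire automatically instead of piling up.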
679 Question #: 663
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company is developing a new application on AWS. The application consists of an Amazon Elastic Container Service (Amazon ECS) cluster, an Amazon S3 bucket that contains assets for the application, and an Amazon RDS for MySQL database that contains the dataset for the application. The dataset contains sensitive information. The company wants to ensure that only the ECS cluster can access the data in the RDS for MySQL database and the data in the S3 bucket.
Which solution will meet these requirements?
A. Create a new AWS Key Management Service (AWS KMS) customer managed key to encrypt both the S3 bucket and the RDS for MySQL database. Ensure that the KMS key policy includes encrypt and decrypt permissions for the ECS task execution role.
B. Create an AWS Key Management Service (AWS KMS) AWS managed key to encrypt both the S3 bucket and the RDS for MySQL database. Ensure that the S3 bucket policy specifies the ECS task execution role as a user.
C. Create an S3 bucket policy that restricts bucket access to the ECS task execution role. Create a VPC endpoint for Amazon RDS for MySQL. Update the RDS for MySQL security group to allow access from only the subnets that the ECS cluster will generate tasks in.
D. Create a VPC endpoint for Amazon RDS for MySQL. Update the RDS for MySQL security group to allow access from only the subnets that the ECS cluster will generate tasks in. Create a VPC endpoint for Amazon S3. Update the S3 bucket policy to allow access from only the S3 VPC endpoint.
D. Create a VPC endpoint for Amazon RDS for MySQL. Update the RDS for MySQL security group to allow access from only the subnets that the ECS cluster will generate tasks in. Create a VPC endpoint for Amazon S3. Update the S3 bucket policy to allow access from only the S3 VPC endpoint.
Consider the requirements: the RDS for MySQL database contains the sensitive dataset, and the company wants to ensure that only the ECS cluster can access it. Whenever they say only one thing should access something else, think about security groups: you restrict the RDS security group so it allows access only from the subnets the ECS tasks run in. And for the data in the S3 bucket, as you already know, S3 traffic normally goes over the public endpoint; since the data is sensitive, they don't want it going over the public internet, they want private access. We already discussed that if you want to connect to S3 privately instead of publicly, the service you use is a VPC endpoint. The combination of VPC endpoints plus security group and subnet restrictions is indirectly hinting at option D. But let's go through all the options. Option A talks about creating a new AWS KMS customer managed key to encrypt both the S3 bucket and the RDS for MySQL database, and ensuring the KMS key policy includes encrypt and decrypt permissions for the ECS task execution role. But that is not a direct way to restrict access to the data sources: KMS is about encryption, not about network access restrictions, so forget about that. Option B you can cross out for the same reason; it's the same KMS idea with an AWS managed key, so we ignore it.
Option C looks close: it creates an S3 bucket policy that restricts bucket access to the ECS task execution role, which is a reasonable practice, and it creates a VPC endpoint for RDS and locks the RDS security group down to the ECS subnets, which handles the database side. But for S3 it relies only on a bucket policy tied to the task execution role; there is no VPC endpoint for S3, so the S3 traffic does not stay private. Option D does it properly: create a VPC endpoint for Amazon RDS for MySQL, update the RDS for MySQL security group to allow access only from the subnets the ECS cluster will generate tasks in, then create a VPC endpoint for S3 and update the S3 bucket policy to allow access only from that S3 VPC endpoint. You might be thinking C is almost the same, but almost is not the same: C restricts the bucket to the ECS task execution role, whereas you should be allowing access only through the VPC endpoint.
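The S3 half of option D, a bucket policy that only allows access through the S3 VPC endpoint, can be sketched like this. The bucket name and endpoint ID are hypothetical, and the `put_bucket_policy` call is commented out.

```python
import json

# Sketch of an S3 bucket policy that denies any access NOT coming through
# a specific S3 VPC endpoint (hypothetical bucket and endpoint IDs).
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowOnlyThroughVpcEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::app-assets",        # hypothetical bucket
            "arn:aws:s3:::app-assets/*",
        ],
        "Condition": {
            "StringNotEquals": {"aws:sourceVpce": "vpce-0123456789abcdef0"}
        },
    }],
}
policy_json = json.dumps(bucket_policy)
# import boto3
# boto3.client("s3").put_bucket_policy(Bucket="app-assets", Policy=policy_json)
```

Note that a blanket Deny like this also blocks console and admin access from outside the VPC, so in practice you would carve out exceptions for administration before applying it.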
680 Question #: 664
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company has a web application that runs on premises. The application experiences latency issues during peak hours. The latency issues occur twice each month. At the start of a latency issue, the application’s CPU utilization immediately increases to 10 times its normal amount.
The company wants to migrate the application to AWS to improve latency. The company also wants to scale the application automatically when application demand increases. The company will use AWS Elastic Beanstalk for application deployment.
Which solution will meet these requirements?
A. Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited mode. Configure the environment to scale based on requests.
B. Configure an Elastic Beanstalk environment to use compute optimized instances. Configure the environment to scale based on requests.
C. Configure an Elastic Beanstalk environment to use compute optimized instances. Configure the environment to scale on a schedule.
D. Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited mode. Configure the environment to scale on predictive metrics.
D. Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited mode. Configure the environment to scale on predictive metrics.
Well, basically they already gave away part of the answer: they're going to use Elastic Beanstalk. The question is which instance type and scaling configuration of the Beanstalk environment you want, and if you're not aware of those, you won't be able to answer it, so let's take this moment to learn. Notice that A and D are similar (burstable performance instances in unlimited mode) and B and C are similar (compute optimized instances). Option A: using burstable performance instances in unlimited mode can help with bursty workloads, but configuring the environment to scale based on requests may not address the specific requirement, because CPU utilization jumps to 10 times normal at the start of a latency issue; request-based scaling only reacts after the spike has begun. Options B and C are also wrong; both use compute optimized instances, which provide better sustained performance but sit idle most of the month for a spike that only happens twice a month. Scaling based on requests in option B has the same reactive problem as option A, and option C schedules the scaling instead, which is not dynamic at all; the spikes don't come with an exact timetable, so a schedule may not be responsive enough to handle them and is not an effective solution here. So we pick option D, which scales on predictive metrics. First, it uses burstable performance instances in unlimited mode, which can sustain high CPU bursts beyond the baseline without being throttled. Second, configuring the environment to scale on predictive metrics lets it proactively scale based on anticipated demand, not on a fixed schedule and not purely in reaction to requests. This aligns well with the requirement to scale automatically when CPU utilization increases 10 times during latency issues.
Therefore, option D is the most suitable solution for improving latency and automatically scaling the application during peak demand.
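Elastic Beanstalk environments sit on top of an Auto Scaling group, so a predictive scaling policy like option D describes can be sketched against that group. This is an assumption-laden sketch: the group name is hypothetical, the target value is illustrative, and the call is commented out.

```python
# Sketch of a predictive scaling policy on the Auto Scaling group behind
# an Elastic Beanstalk environment (hypothetical names and values).
predictive_policy = {
    "AutoScalingGroupName": "awseb-myenv-asg",   # hypothetical Beanstalk ASG
    "PolicyName": "predictive-cpu",
    "PolicyType": "PredictiveScaling",
    "PredictiveScalingConfiguration": {
        "MetricSpecifications": [{
            "TargetValue": 50.0,   # keep forecasted CPU around 50%
            "PredefinedMetricPairSpecification": {
                "PredefinedMetricType": "ASGCPUUtilization"
            },
        }],
        "Mode": "ForecastAndScale",   # forecast demand and scale ahead of it
    },
}
# import boto3
# boto3.client("autoscaling").put_scaling_policy(**predictive_policy)
```

`ForecastAndScale` is the piece that matters for this question: capacity is added ahead of the predicted spike rather than after CPU has already jumped 10x.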
681 Question #: 665
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company has customers located across the world. The company wants to use automation to secure its systems and network infrastructure. The company’s security team must be able to track and audit all incremental changes to the infrastructure.
Which solution will meet these requirements?
A. Use AWS Organizations to set up the infrastructure. Use AWS Config to track changes.
B. Use AWS CloudFormation to set up the infrastructure. Use AWS Config to track changes.
C. Use AWS Organizations to set up the infrastructure. Use AWS Service Catalog to track changes.
D. Use AWS CloudFormation to set up the infrastructure. Use AWS Service Catalog to track changes.
B. Use AWS CloudFormation to set up the infrastructure. Use AWS Config to track changes.
Whenever you want to track and audit incremental changes to the infrastructure, meaning the configuration changes of services over time, the service you use is AWS Config; that is specifically what it's built for. Let's go through the options anyway. Two options use Organizations and two use CloudFormation, combined with either Config or Service Catalog, so let's sort out what each does so we can pick the right one. Organizations, as we already know, is focused on managing multiple AWS accounts within an organization; it doesn't give you automation for setting up infrastructure, so options A and C lack the automation requirement. CloudFormation is what actually gives you that automation capability: you define infrastructure as code and provision it repeatably. But both B and D use CloudFormation, so the tiebreaker is AWS Config versus AWS Service Catalog. Service Catalog is designed for creating and managing catalogs of approved IT resources; while it can help with governance, it does not record and audit resource configuration changes the way AWS Config does. For that reason we cross out D, and option B is the correct answer.
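To illustrate the CloudFormation half of option B, here is a minimal template sketch built as a Python dict. The resource and stack names are hypothetical; the `create_stack` call is commented out, and AWS Config (enabled separately) would then record every incremental change to the provisioned resources.

```python
import json

# Sketch of a minimal CloudFormation template as a dict (hypothetical resource).
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppSecurityGroup": {
            "Type": "AWS::EC2::SecurityGroup",
            "Properties": {"GroupDescription": "App tier security group"},
        }
    },
}
template_body = json.dumps(template)
# import boto3
# boto3.client("cloudformation").create_stack(
#     StackName="network-baseline", TemplateBody=template_body)
# AWS Config, once its recorder is on, captures each configuration change
# to these resources so the security team can audit the history.
```

CloudFormation gives the repeatable, automated setup; Config gives the per-resource change timeline, which is exactly the split the question is testing.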
683 Question #: 667
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company is moving its data and applications to AWS during a multiyear migration project. The company wants to securely access data on Amazon S3 from the company’s AWS Region and from the company’s on-premises location. The data must not traverse the internet. The company has established an AWS Direct Connect connection between its Region and its on-premises location.
Which solution will meet these requirements?
A. Create gateway endpoints for Amazon S3. Use the gateway endpoints to securely access the data from the Region and the on-premises location.
B. Create a gateway in AWS Transit Gateway to access Amazon S3 securely from the Region and the on-premises location.
C. Create interface endpoints for Amazon S3. Use the interface endpoints to securely access the data from the Region and the on-premises location.
D. Use an AWS Key Management Service (AWS KMS) key to access the data securely from the Region and the on-premises location.
C. Create interface endpoints for Amazon S3. Use the interface endpoints to securely access the data from the Region and the on-premises location.
Amazon S3 supports both gateway endpoints and interface endpoints. With a gateway endpoint, you can access Amazon S3 from your VPC, without requiring an internet gateway or NAT device for your VPC, and with no additional cost. However, gateway endpoints do not allow access from on-premises networks, from peered VPCs in other AWS Regions, or through a transit gateway. For those scenarios, you must use an interface endpoint, which is available for an additional cost. For more information, see Types of VPC endpoints for Amazon S3 in the Amazon S3 User Guide. https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html
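Since the on-premises requirement forces an interface endpoint, here is a sketch of what creating one looks like. All IDs are hypothetical, and the call is commented out; it assumes the Direct Connect connection and VPC networking already exist.

```python
# Sketch of creating an S3 *interface* endpoint, reachable from on premises
# over Direct Connect (hypothetical IDs; region assumed to be us-east-1).
endpoint_params = {
    "VpcEndpointType": "Interface",
    "VpcId": "vpc-0abc",
    "ServiceName": "com.amazonaws.us-east-1.s3",
    "SubnetIds": ["subnet-aaaa", "subnet-bbbb"],
    "SecurityGroupIds": ["sg-0123"],
    # S3 interface endpoints are typically addressed via endpoint-specific
    # DNS names rather than the default private DNS behavior.
    "PrivateDnsEnabled": False,
}
# import boto3
# boto3.client("ec2").create_vpc_endpoint(**endpoint_params)
```

The contrast with a gateway endpoint is the `VpcEndpointType`: a gateway endpoint is a route-table entry that only works from inside the VPC, while an interface endpoint is an ENI with a private IP that on-premises clients can reach over Direct Connect.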
684 Question #: 668
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company created a new organization in AWS Organizations. The organization has multiple accounts for the company’s development teams. The development team members use AWS IAM Identity Center (AWS Single Sign-On) to access the accounts. For each of the company’s applications, the development teams must use a predefined application name to tag resources that are created.
A solutions architect needs to design a solution that gives the development team the ability to create resources only if the application name tag has an approved value.
Which solution will meet these requirements?
A. Create an IAM group that has a conditional Allow policy that requires the application name tag to be specified for resources to be created.
B. Create a cross-account role that has a Deny policy for any resource that has the application name tag.
C. Create a resource group in AWS Resource Groups to validate that the tags are applied to all resources in all accounts.
D. Create a tag policy in Organizations that has a list of allowed application names.
D. Create a tag policy in Organizations that has a list of allowed application names.
Okay, they're talking about tag policies, and since we're working with AWS Organizations, the policy has to be created at the organization level. So let's see which option actually does that. Option A is not the right one: IAM policies can include conditions, but they are focused on actions and resources, and maintaining approved tag values across many accounts through IAM policies is not well suited to enforcing specific tag values. Then we have option B, which talks about a cross-account role with a Deny policy. Deny policies are generally not recommended unless absolutely necessary, and using them for tag enforcement would lead to complex policies, so you usually don't do it that way. Then we have option C, AWS Resource Groups, which can help with organizing and searching resources based on tags, but it does not enforce or control which tag values can be applied. So that takes us to tag policies, which you can create in Organizations with a list of allowed application names, and which are a very robust way to standardize tags. This is an effective way to ensure that only approved application names are used as tags. Therefore D, creating a tag policy in Organizations, is the most appropriate solution for enforcing the required tag values.
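A tag policy like the one option D describes looks roughly like this. The tag key, allowed values, and policy name are hypothetical; the Organizations call is commented out.

```python
import json

# Sketch of an Organizations tag policy: only the listed application names
# are allowed as values of the AppName tag (all names hypothetical).
tag_policy = {
    "tags": {
        "appname": {
            "tag_key": {"@@assign": "AppName"},
            "tag_value": {"@@assign": ["inventory-app", "billing-app"]},
            # enforcement makes non-compliant tagging fail, not just report
            "enforced_for": {"@@assign": ["ec2:instance"]},
        }
    }
}
policy_content = json.dumps(tag_policy)
# import boto3
# boto3.client("organizations").create_policy(
#     Name="app-name-tags", Type="TAG_POLICY",
#     Description="Allowed application name tags", Content=policy_content)
```

Without the `enforced_for` block a tag policy only reports non-compliance; with it, tagging operations using unapproved values are rejected for the listed resource types, which is what "create resources only if the tag has an approved value" requires.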
685 Question #: 669
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company runs its databases on Amazon RDS for PostgreSQL. The company wants a secure solution to manage the master user password by rotating the password every 30 days.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon EventBridge to schedule a custom AWS Lambda function to rotate the password every 30 days.
B. Use the modify-db-instance command in the AWS CLI to change the password.
C. Integrate AWS Secrets Manager with Amazon RDS for PostgreSQL to automate password rotation.
D. Integrate AWS Systems Manager Parameter Store with Amazon RDS for PostgreSQL to automate password rotation.
C. Integrate AWS Secrets Manager with Amazon RDS for PostgreSQL to automate password rotation.
Whenever you see a question talking about rotating database credentials, API credentials, and so on, there is only one service that should come to your mind: AWS Secrets Manager. That's option C; you don't even have to look at anything else.
password rotation = AWS Secrets Manager
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-secrets-manager.html#rds-secrets-manager-overview
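A sketch of the rotation setup, assuming the master-user secret is already stored in (or managed by) Secrets Manager; the secret ARN is hypothetical and the call is commented out.

```python
# Sketch of enabling 30-day automatic rotation on an RDS master-user secret
# (hypothetical ARN; assumes an RDS-integrated secret in Secrets Manager).
rotation_params = {
    "SecretId": "arn:aws:secretsmanager:us-east-1:111122223333:secret:rds-master",
    "RotationRules": {"AutomaticallyAfterDays": 30},   # rotate every 30 days
}
# import boto3
# boto3.client("secretsmanager").rotate_secret(**rotation_params)
# For secrets you manage yourself you would also supply a RotationLambdaARN;
# for RDS-managed master user passwords, RDS handles the rotation mechanics.
```

This is why it's the least operational overhead: no EventBridge schedule, no custom Lambda, no manual CLI runs; the rotation cadence is one setting.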
686 Question #: 670
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company performs tests on an application that uses an Amazon DynamoDB table. The tests run for 4 hours once a week. The company knows how many read and write operations the application performs to the table each second during the tests. The company does not currently use DynamoDB for any other use case. A solutions architect needs to optimize the costs for the table.
Which solution will meet these requirements?
A. Choose on-demand mode. Update the read and write capacity units appropriately.
B. Choose provisioned mode. Update the read and write capacity units appropriately.
C. Purchase DynamoDB reserved capacity for a 1-year term.
D. Purchase DynamoDB reserved capacity for a 3-year term.
B. Choose provisioned mode. Update the read and write capacity units appropriately.
With provisioned capacity mode, you specify the number of reads and writes per second that you expect your application to require, and you are billed based on that. Furthermore if you can forecast your capacity requirements you can also reserve a portion of DynamoDB provisioned capacity and optimize your costs even further. https://docs.aws.amazon.com/wellarchitected/latest/serverless-applications-lens/capacity.html
“The company knows how many read and write operations the application performs to the table each second during the tests,” so they can provision exactly that capacity.
Note also that option A says “Update the read and write capacity units appropriately,” but in on-demand mode there are no capacity units to set; DynamoDB manages capacity automatically, which makes option A self-contradictory.
The solutions architect needs to optimize the cost for the table, and the question literally tells you how many read and write operations the workload performs. When you already know the pattern, you don't go for on-demand; on-demand is for when you don't know the pattern, when the traffic peaks, or how many read/write operations you'll need. So we can cancel out option A. And since the tests run for only 4 hours once a week, do you want to commit to reserved capacity for one or three years? Of course not; reserved capacity is for steady, predictable, long-term usage, so C and D are out too. That leaves the other mode available for DynamoDB, which is provisioned mode. What does it do? In provisioned mode you manually provision the read and write capacity units based on your known workload, and the question clearly says the company knows it. Since the company knows the read and write operations during the tests, they can provision the exact capacity needed for those specific periods, optimizing costs by not paying for unused capacity during other times.
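Concretely, "update the capacity units appropriately" looks something like this. The table name and throughput numbers are hypothetical, and the calls are commented out.

```python
# Sketch of sizing provisioned capacity from the known test workload
# (hypothetical table name and measured numbers).
known_reads_per_sec = 400    # measured during the weekly 4-hour tests
known_writes_per_sec = 150

table_update = {
    "TableName": "test-results",
    "ProvisionedThroughput": {
        "ReadCapacityUnits": known_reads_per_sec,
        "WriteCapacityUnits": known_writes_per_sec,
    },
}
# import boto3
# ddb = boto3.client("dynamodb")
# ddb.update_table(**table_update)   # scale up before the 4-hour test window
# ...and after the tests, update again with minimal capacity units
# so the table costs almost nothing for the rest of the week.
```

Scaling the units up before the window and back down afterward is the cost lever here; with on-demand you would pay a per-request premium instead.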
687 Question #: 671
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company runs its applications on Amazon EC2 instances. The company performs periodic financial assessments of its AWS costs. The company recently identified unusual spending.
The company needs a solution to prevent unusual spending. The solution must monitor costs and notify responsible stakeholders in the event of unusual spending.
Which solution will meet these requirements?
A. Use an AWS Budgets template to create a zero spend budget.
B. Create an AWS Cost Anomaly Detection monitor in the AWS Billing and Cost Management console.
C. Create AWS Pricing Calculator estimates for the current running workload pricing details.
D. Use Amazon CloudWatch to monitor costs and to identify unusual spending.
B. Create an AWS Cost Anomaly Detection monitor in the AWS Billing and Cost Management console.
AWS Cost Anomaly Detection is designed to automatically detect unusual spending patterns based on machine learning algorithms. It can identify anomalies and send notifications when it detects unexpected changes in spending. This aligns well with the requirement to prevent unusual spending and notify stakeholders.
https://aws.amazon.com/aws-cost-management/aws-cost-anomaly-detection/
Clearly, there is a tool that does exactly this. Pricing Calculator you can eliminate right away: that's something you use before moving to the cloud, when the services aren't deployed yet, to estimate costs. Budgets, as we already know, notifies you when a particular cost threshold is breached; for example, if you set $100 on the account and spending goes beyond that, you get notified. But that doesn't catch unusual spending, because the goal here isn't to cap costs at a certain number, it's to be alerted whenever spending deviates from the normal pattern. Then you have option D, CloudWatch: while CloudWatch can be used for monitoring various metrics, it is not specifically designed for cost anomaly detection. For that purpose there is an actual feature, option B, AWS Cost Anomaly Detection. It uses machine learning to identify unexpected spending patterns and anomalies in your costs, and it can automatically detect unusual spending and send notifications, making it suitable for the scenario described. Think of it like your credit card: if you swipe it in a location far from where you usually are, the bank calls or texts you asking whether you really made the transaction. That's anomaly detection, and AWS has the same idea built in as Cost Anomaly Detection monitors.
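Option B can be sketched as a monitor plus a notification subscription. The names and email address are hypothetical, the calls are commented out, and a real subscription also needs a threshold expression before it will deliver alerts.

```python
# Sketch of a Cost Anomaly Detection monitor and an email subscription
# (hypothetical names/addresses).
monitor = {
    "MonitorName": "service-spend-monitor",
    "MonitorType": "DIMENSIONAL",
    "MonitorDimension": "SERVICE",   # watch for anomalies per AWS service
}
subscription = {
    "SubscriptionName": "notify-finance",
    "Frequency": "IMMEDIATE",
    "Subscribers": [{"Type": "EMAIL", "Address": "finance@example.com"}],
}
# import boto3
# ce = boto3.client("ce")
# arn = ce.create_anomaly_monitor(AnomalyMonitor=monitor)["MonitorArn"]
# ce.create_anomaly_subscription(AnomalySubscription={
#     **subscription, "MonitorArnList": [arn]})  # plus a threshold expression
```

The `SERVICE` dimension matches this question's scenario: a sudden jump in, say, EC2 spend gets flagged and the responsible stakeholders are emailed immediately.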
688 Question #: 672
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A marketing company receives a large amount of new clickstream data in Amazon S3 from a marketing campaign. The company needs to analyze the clickstream data in Amazon S3 quickly. Then the company needs to determine whether to process the data further in the data pipeline.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create external tables in a Spark catalog. Configure jobs in AWS Glue to query the data.
B. Configure an AWS Glue crawler to crawl the data. Configure Amazon Athena to query the data.
C. Create external tables in a Hive metastore. Configure Spark jobs in Amazon EMR to query the data.
D. Configure an AWS Glue crawler to crawl the data. Configure Amazon Kinesis Data Analytics to use SQL to query the data.
B. Configure an AWS Glue crawler to crawl the data. Configure Amazon Athena to query the data.
Option B - leverages serverless services that minimise management tasks and allows the company to focus on querying and analysing the data with the LEAST operational overhead. AWS Glue with Athena (Option B): AWS Glue is a fully managed extract, transform, and load (ETL) service, and Athena is a serverless query service that allows you to analyze data directly in Amazon S3 using SQL queries. By configuring an AWS Glue crawler to crawl the data, you can create a schema for the data, and then use Athena to query the data directly without the need to load it into a separate database. This minimizes operational overhead.
All they have to do is analyze the clickstream data in S3 quickly. If you want to analyze data, what do you usually do? You run SQL queries. Somebody might think of S3 Select, since S3 has it, but the problem we already discussed in a previous question is that S3 Select works on one object at a time, not across a bunch of files. So if it's not S3 itself, the next quick solution is Athena, and one of the options is exactly that; it's the least operational overhead because Athena is serverless. But let's go through the other options as well. Option A talks about a Spark catalog with AWS Glue jobs. While Glue can query data using Spark, Spark jobs typically require more configuration and maintenance, which introduces additional operational overhead, so we won't use that. Option C talks about a Hive metastore and Spark jobs on Amazon EMR; similar to option A (Spark with Glue there, Spark with EMR here), it introduces additional complexity compared to a serverless solution, so it's not happening either. Option D is somewhat similar to option B, at least the first half with the Glue crawler, but Amazon Kinesis Data Analytics is more suitable for real-time analytics on streaming data, and it would be an over-engineered solution for analyzing clickstream data already stored in S3. So we go with option B, where we use a Glue crawler (not Glue jobs), which can automatically discover and catalog metadata about the clickstream data in S3, and then Athena, a serverless query service, which allows quick ad hoc SQL queries on the data without the need to set up and manage infrastructure.
Therefore, option B, configuring an AWS Glue crawler to crawl the data and using Amazon Athena to query it, is the most suitable solution for quickly analyzing the data with minimal operational overhead.
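Once the crawler has built the table, the Athena side is a single API call. The database, table, and results-bucket names below are hypothetical, and the call is commented out.

```python
# Sketch of an ad hoc Athena query over the crawled clickstream table
# (hypothetical database/table/bucket names).
query_params = {
    "QueryString": (
        "SELECT page, COUNT(*) AS clicks "
        "FROM clickstream_db.events "        # table created by the Glue crawler
        "GROUP BY page ORDER BY clicks DESC LIMIT 10"
    ),
    "QueryExecutionContext": {"Database": "clickstream_db"},
    "ResultConfiguration": {"OutputLocation": "s3://athena-results-bucket/"},
}
# import boto3
# boto3.client("athena").start_query_execution(**query_params)
```

No cluster, no nodes to manage: you pay per query over the data where it already sits in S3, which is the whole "least operational overhead" argument for option B.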
689 Question #: 673
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company runs an SMB file server in its data center. The file server stores large files that the company frequently accesses for up to 7 days after the file creation date. After 7 days, the company needs to be able to access the files with a maximum retrieval time of 24 hours.
Which solution will meet these requirements?
A. Use AWS DataSync to copy data that is older than 7 days from the SMB file server to AWS.
B. Create an Amazon S3 File Gateway to increase the company’s storage space. Create an S3 Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.
C. Create an Amazon FSx File Gateway to increase the company’s storage space. Create an Amazon S3 Lifecycle policy to transition the data after 7 days.
D. Configure access to Amazon S3 for each user. Create an S3 Lifecycle policy to transition the data to S3 Glacier Flexible Retrieval after 7 days.
B. Create an Amazon S3 File Gateway to increase the company’s storage space. Create an S3 Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.
S3 File Gateway will connect SMB to S3. Lifecycle policy will move objects to S3 Glacier Deep Archive which support 12 hours retrieval https://aws.amazon.com/blogs/aws/new-amazon-s3-storage-class-glacier-deep-archive/
Amazon S3 File Gateway supports SMB and NFS, Amazon FSx File Gateway SMB for windows workloads.
S3 file gateway supports SMB and S3 Glacier Deep Archive can retrieve data within 12 hours. https://aws.amazon.com/storagegateway/file/s3/ https://docs.aws.amazon.com/prescriptive-guidance/latest/backup-recovery/amazon-s3-glacier.html
Option A is clearly DataSync. So far we have learned that DataSync is used for transferring or copying data from on premises to the cloud; it is a data transfer service, so "copy data that is older than 7 days from the SMB file server" is not a scenario where we actually use it. Then we have option C, which uses an Amazon FSx File Gateway and an S3 lifecycle policy to transition data after 7 days. That sounds plausible, but this option never says which storage class the data transitions to, so it does not explicitly address the specified maximum retrieval time of 24 hours. It just says "lifecycle policy to transition the data after 7 days" -- but what about the retrieval time? Where are you transitioning it to? Which storage class? So let's forget about that one. Then we have option D, which gives each user direct access to S3 and creates a lifecycle policy to transition data to S3 Glacier Flexible Retrieval after 7 days. Direct access to S3 may not be the most efficient solution here: where are the files stored? On an SMB file server. This option says "access to S3 for each user" but never addresses how users keep accessing those files over SMB. Hence, we will go with option B, wherein we create an S3 File Gateway to increase the company's storage space -- we already know a File Gateway is used to access cloud storage from on premises, which addresses the access requirement -- and then create a lifecycle policy to transition the data to S3 Glacier Deep Archive. Glacier Flexible Retrieval takes up to 5 to 12 hours, while Deep Archive takes around 12 hours (standard) up to 48 hours (bulk); the company is OK with 24 hours, which standard Deep Archive retrieval at 12 hours satisfies, and Deep Archive is cheaper than Flexible Retrieval.
So we would go with option B instead.
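The lifecycle-policy half of option B can be sketched as the configuration dict you would hand to S3. This is a minimal sketch; the bucket name is hypothetical, and the rule shape follows S3's PutBucketLifecycleConfiguration API.

```python
def deep_archive_after(days):
    """Build an S3 lifecycle configuration that transitions every object
    in the bucket to Glacier Deep Archive after the given number of days."""
    return {
        "Rules": [
            {
                "ID": f"deep-archive-after-{days}-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # empty prefix = apply to all objects
                "Transitions": [
                    {"Days": days, "StorageClass": "DEEP_ARCHIVE"}
                ],
            }
        ]
    }

# With boto3 the dict is passed as-is, e.g.:
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="file-gateway-bucket",  # hypothetical S3 File Gateway bucket
#       LifecycleConfiguration=deep_archive_after(7))
config = deep_archive_after(7)
```

The File Gateway keeps recently used files cached locally for the frequent-access window, so the 7-day transition only affects how the cold copies are stored in S3.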
Question #: 674
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company runs a web application on Amazon EC2 instances in an Auto Scaling group. The application uses a database that runs on an Amazon RDS for PostgreSQL DB instance. The application performs slowly when traffic increases. The database experiences a heavy read load during periods of high traffic.
Which actions should a solutions architect take to resolve these performance issues? (Choose two.)
A. Turn on auto scaling for the DB instance.
B. Create a read replica for the DB instance. Configure the application to send read traffic to the read replica.
C. Convert the DB instance to a Multi-AZ DB instance deployment. Configure the application to send read traffic to the standby DB instance.
D. Create an Amazon ElastiCache cluster. Configure the application to cache query results in the ElastiCache cluster.
E. Configure the Auto Scaling group subnets to ensure that the EC2 instances are provisioned in the same Availability Zone as the DB instance.
B. Create a read replica for the DB instance. Configure the application to send read traffic to the read replica.
D. Create an Amazon ElastiCache cluster. Configure the application to cache query results in the ElastiCache cluster.
B: Read replicas distribute load and help improve performance. D: Caching of any kind will help with performance. Remember: “The database experiences a heavy read load during periods of high traffic.”
By creating a read replica, you offload read traffic from the primary DB instance to the replica, distributing the load and improving overall performance during periods of heavy read traffic. Amazon ElastiCache can be used to cache frequently accessed data, reducing the load on the database. This is particularly effective for read-heavy workloads, as it allows the application to retrieve data from the cache rather than making repeated database queries.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/creating-elasticache-cluster-with-RDS-settings.html
Turn on auto scaling for the DB instance -- what happens when you do that? RDS auto scaling applies to storage, not to compute or read capacity; auto scaling of instances is typically an EC2 concept, so option A does nothing for a heavy read load. Then we have option C, which is talking about converting the DB instance to a Multi-AZ deployment. This is mainly for enhancing availability and fault tolerance; the standby in a Multi-AZ instance deployment cannot serve read traffic, so it will not improve read performance. And then we have option E, which is talking about configuring the Auto Scaling group subnets so the EC2 instances are provisioned in the same Availability Zone as the DB instance. That is about optimizing for locality; it does not directly address the read performance issue during heavy read load. Whenever you see a question that talks about read load, the options will usually include read replicas. Read replicas are used specifically to help database performance, because the read workload is offloaded from the main database to the read replica -- which is option B. It says: create a read replica and offload read traffic from the primary DB instance to the replica, distributing the read load and improving overall performance. This is a common approach to horizontally scale read-heavy database workloads. The other answer is option D, wherein we use Amazon ElastiCache. We already know it is a managed caching service that can improve application performance by caching frequently accessed data. Caching query results in ElastiCache reduces the load on the PostgreSQL database, especially for repeated read queries where you read the same data again and again.
From the second time onwards, instead of hitting the main database, the application hits the caching layer to get the same data.
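The "second time onwards" behavior described above is the cache-aside pattern. Here is a minimal sketch: a plain dict stands in for ElastiCache (Redis/Memcached), and `db_query` stands in for a query against the RDS primary or a read replica -- both are illustrative stand-ins, not real AWS clients.

```python
class CacheAsideRepo:
    """Cache-aside pattern: check the cache first, fall back to the
    database only on a miss, then populate the cache for next time."""

    def __init__(self, db_query):
        self._db_query = db_query
        self._cache = {}   # stand-in for an ElastiCache cluster
        self.db_hits = 0   # how often we actually query the database

    def get(self, key):
        if key in self._cache:       # cache hit: database is untouched
            return self._cache[key]
        self.db_hits += 1            # cache miss: read from the DB...
        value = self._db_query(key)
        self._cache[key] = value     # ...and cache the result for next time
        return value

repo = CacheAsideRepo(db_query=lambda k: f"row-for-{k}")
first = repo.get("user:42")   # miss -> hits the database
second = repo.get("user:42")  # hit  -> served from cache, db_hits stays at 1
```

In a real deployment the dict would be a Redis client with a TTL on each key, so stale entries expire and the next read refreshes them from the database.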
Question #: 675
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company uses Amazon EC2 instances and Amazon Elastic Block Store (Amazon EBS) volumes to run an application. The company creates one snapshot of each EBS volume every day to meet compliance requirements. The company wants to implement an architecture that prevents the accidental deletion of EBS volume snapshots. The solution must not change the administrative rights of the storage administrator user.
Which solution will meet these requirements with the LEAST administrative effort?
A. Create an IAM role that has permission to delete snapshots. Attach the role to a new EC2 instance. Use the AWS CLI from the new EC2 instance to delete snapshots.
B. Create an IAM policy that denies snapshot deletion. Attach the policy to the storage administrator user.
C. Add tags to the snapshots. Create retention rules in Recycle Bin for EBS snapshots that have the tags.
D. Lock the EBS snapshots to prevent deletion.
D. Lock the EBS snapshots to prevent deletion.
The snapshot lock feature in AWS allows you to prevent accidental deletion of EBS snapshots. It is set at the snapshot level, providing a straightforward and effective way to meet the requirements without changing the administrative rights of the storage administrator user -- exactly what a locked EBS snapshot is for: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-snapshot-lock.html
Whenever the question talks about LEAST effort, there is usually some feature already available -- here on EBS -- that you just have to enable; don't try to implement a new solution. Glancing through these options, A and B are trying to build something new, whereas C and D use existing EBS-related features. Let's go through the options anyway. Option A is not the right answer, so cross it out. Why? It involves creating a new IAM role with permission to delete snapshots and attaching it to an EC2 instance; the idea is to delegate the delete permission to the instance, and the user would run the AWS CLI from there. While it is a valid approach, it introduces additional components and complexity. Don't reinvent the wheel: this works, but it is too much administrative effort, hence it is not the right answer. Option B involves creating an IAM policy that explicitly denies permission to delete snapshots and attaching it to the storage administrator user. While it achieves the goal of preventing accidental deletion, it modifies the rights of the storage administrator user, and the question clearly says not to change the administrative rights. Then we have option C, wherein we tag the snapshots and create Recycle Bin retention rules based on the tags. It adds tagging work and introduces another service, which is more than what is needed for the simple requirement of preventing accidental deletion. We don't have to reinvent the wheel: all we have to do is use the feature that is already there for EBS snapshots.
So here, all we are doing is locking the snapshot. How? EBS provides a built-in feature to lock snapshots, preventing them from being deleted. This is a straightforward and effective solution that does not involve creating additional IAM roles, policies, or tags. It directly addresses the requirement of preventing accidental deletion with minimal administrative effort.
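As a sketch, here are the parameters you would pass to EC2's LockSnapshot API via boto3. The snapshot ID and duration are hypothetical; governance mode is assumed here because it blocks deletion without permanently removing the ability of authorized users to manage the lock, which fits "don't change the storage administrator's rights."

```python
def build_snapshot_lock_request(snapshot_id, days):
    """Build parameters for EC2's LockSnapshot API call.

    In governance mode the snapshot cannot be deleted while locked,
    but users with the appropriate IAM permissions can still modify
    or release the lock -- no change to anyone's administrative rights.
    """
    return {
        "SnapshotId": snapshot_id,
        "LockMode": "governance",  # "compliance" would make the lock unremovable
        "LockDuration": days,      # lock duration in days
    }

# With boto3:
#   ec2 = boto3.client("ec2")
#   ec2.lock_snapshot(**request)
request = build_snapshot_lock_request("snap-0123456789abcdef0", 30)
```

Since the company snapshots every volume daily, in practice you would call this from the same automation that creates the snapshots, so each new snapshot is locked as soon as it exists.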