SAA Exam Dumps Jan 24 101-200 Flashcards
673 # A company has multiple AWS accounts in an organization in AWS Organizations that different business units use. The company has several offices around the world. The company needs to update the security group rules to allow new office CIDR ranges or to remove old CIDR ranges across the organization. The company wants to centralize the management of security group rules to minimize the administrative overhead of updating CIDR ranges. Which solution will meet these requirements in the MOST cost-effective way?
A. Create VPC security groups in the organization’s management account. Update security groups when a CIDR range update is necessary.
B. Create a VPC customer-managed prefix list that contains the list of CIDRs. Use AWS Resource Access Manager (AWS RAM) to share the prefix list across your organization. Use the prefix list in the security groups throughout your organization.
C. Create a list of prefixes managed by AWS. Use an AWS Security Hub policy to enforce security group updates across your organization. Use an AWS Lambda function to update the prefix list automatically when CIDR ranges change.
D. Create security groups in a central AWS administrative account. Create a common AWS Firewall Manager security group policy for your entire organization. Select the previously created security groups as primary groups in the policy.
B. Create a VPC customer-managed prefix list that contains the list of CIDRs. Use AWS Resource Access Manager (AWS RAM) to share the prefix list across your organization. Use the prefix list in the security groups throughout your organization.
A VPC customer-managed prefix list allows you to define a list of CIDR ranges that can be shared across AWS accounts and Regions, providing a centralized way to manage CIDR ranges. AWS Resource Access Manager (AWS RAM) allows you to share resources between AWS accounts, including prefix lists; by sharing the customer-managed prefix list, the management of CIDR ranges is centralized. You can then reference the shared prefix list in security group rules, which ensures that security groups in multiple AWS accounts use the same centralized set of CIDR ranges. This approach minimizes administrative overhead, enables centralized control, and provides a scalable solution for managing security group rules globally.
https://docs.aws.amazon.com/vpc/latest/userguide/managed-prefix-lists.html
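As an end-to-end illustration of option B, here is a minimal boto3 sketch; all names, CIDRs, account IDs, and resource IDs are placeholders, not values from the question.

```python
import boto3

ec2 = boto3.client("ec2")
ram = boto3.client("ram")

# Create a customer-managed prefix list holding the office CIDR ranges.
pl = ec2.create_managed_prefix_list(
    PrefixListName="office-cidrs",
    AddressFamily="IPv4",
    MaxEntries=20,
    Entries=[
        {"Cidr": "203.0.113.0/24", "Description": "London office"},
        {"Cidr": "198.51.100.0/24", "Description": "Tokyo office"},
    ],
)["PrefixList"]

# Share the prefix list with the whole organization through AWS RAM.
ram.create_resource_share(
    name="office-cidrs-share",
    resourceArns=[pl["PrefixListArn"]],
    principals=["arn:aws:organizations::111122223333:organization/o-example"],
)

# Member accounts reference the prefix list ID instead of raw CIDRs.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "PrefixListIds": [{"PrefixListId": pl["PrefixListId"]}],
    }],
)
```

When an office CIDR changes, only the prefix list needs a modify_managed_prefix_list call; every security group that references it picks up the change automatically.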
674 # A company uses an on-premises network attached storage (NAS) system to provide file shares to its high-performance computing (HPC) workloads. The company wants to migrate its latency-sensitive HPC workloads and their storage to the AWS Cloud. The company must be able to provide NFS and SMB multiprotocol access from the file system. Which solutions will meet these requirements with the lowest latency? (Choose two.)
A. Deploy compute-optimized EC2 instances in a cluster placement group.
B. Deploy compute-optimized EC2 instances in a partition placement group.
C. Attach the EC2 instances to an Amazon FSx for Lustre file system.
D. Connect the EC2 instances to an Amazon FSx for OpenZFS file system.
E. Connect the EC2 instances to an Amazon FSx for NetApp ONTAP file system.
A. Deploy compute-optimized EC2 instances in a cluster placement group.
E. Connect the EC2 instances to an Amazon FSx for NetApp ONTAP file system.
https://aws.amazon.com/fsx/when-to-choose-fsx/
Cluster placement groups allow you to group instances within a single Availability Zone to provide low-latency network performance. This is suitable for tightly coupled HPC workloads.
Amazon FSx for NetApp ONTAP supports both NFS and SMB protocols, making it suitable for multi-protocol access.
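A rough boto3 sketch of the placement-group half of the answer; the AMI ID, instance type, and group name are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Cluster placement groups pack instances close together in one AZ
# for low-latency, high-throughput networking.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

# Launch the compute-optimized nodes into the placement group.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c6i.8xlarge",
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-cluster"},
)
```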
675 # A company is relocating its data center and wants to securely transfer 50 TB of data to AWS within 2 weeks. The existing data center has a site-to-site VPN connection to AWS that is 90% utilized. Which AWS service should a solutions architect use to meet these requirements?
A. AWS DataSync with a VPC endpoint
B. AWS Direct Connect
C. AWS Snowball Edge Storage Optimized
D. AWS Storage Gateway
C. AWS Snowball Edge Storage Optimized
The existing site-to-site VPN is already 90% utilized, so transferring 50 TB over the network within 2 weeks is impractical. A Snowball Edge Storage Optimized device moves the data offline, securely and without consuming the connection's remaining bandwidth.
676 # A company hosts an application on Amazon EC2 On-Demand Instances in an Auto Scaling group. The application's peak times occur at the same time every day. Application users report slow performance at the start of peak times. The application performs normally 2-3 hours after peak times begin. The company wants to make sure the application works properly at the start of peak times. Which solution will meet these requirements?
A. Configure an application load balancer to correctly distribute traffic to the instances.
B. Configure a dynamic scaling policy so that the Auto Scaling group launches new instances based on memory utilization.
C. Configure a dynamic scaling policy for the Auto Scaling group to start new instances based on CPU utilization.
D. Configure a scheduled scaling policy so that the Auto Scaling group launches new instances before peak times.
D. Configure a scheduled scaling policy so that the Auto Scaling group launches new instances before peak times.
Because the peak occurs at the same time every day, scheduled scaling can launch the extra capacity shortly before the peak starts, so instances are already in service when traffic arrives.
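A minimal boto3 sketch of such scheduled actions; the group name, times, and sizes are assumed values.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out 30 minutes before the daily peak (times/sizes are examples).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="pre-peak-scale-out",
    Recurrence="30 8 * * *",   # every day at 08:30 UTC
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,
)

# Scale back in after the peak window ends.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="post-peak-scale-in",
    Recurrence="0 18 * * *",
    MinSize=2,
    MaxSize=12,
    DesiredCapacity=2,
)
```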
677 # A company runs applications on AWS that connect to the company's Amazon RDS database. The applications scale on weekends and during peak times of the year. The company wants to scale the database more effectively for the applications that connect to it. Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon DynamoDB with connection pooling with a target pool configuration for the database. Switch applications to use the DynamoDB endpoint.
B. Use Amazon RDS Proxy with a target group for the database. Change the applications to use the RDS Proxy endpoint.
C. Use a custom proxy running on Amazon EC2 as the database broker. Change applications to use the custom proxy endpoint.
D. Use an AWS Lambda function to provide a connection pool with a target pool configuration for the database. Change applications to use the Lambda function.
B. Use Amazon RDS Proxy with a target group for the database. Change the applications to use the RDS Proxy endpoint.
Amazon RDS Proxy is a managed database proxy that provides connection pooling, failover handling, and security features for database applications. It lets applications scale more effectively by pooling and sharing database connections on their behalf, integrates natively with RDS, and requires no proxy infrastructure to operate, which keeps operational overhead low.
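A hedged boto3 sketch of the RDS Proxy setup; all ARNs and identifiers are placeholders.

```python
import boto3

rds = boto3.client("rds")

# Create the proxy in front of the RDS database.
proxy = rds.create_db_proxy(
    DBProxyName="app-db-proxy",
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-creds",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-role",
    VpcSubnetIds=["subnet-aaa", "subnet-bbb"],
)

# Register the DB instance with the proxy's default target group.
rds.register_db_proxy_targets(
    DBProxyName="app-db-proxy",
    DBInstanceIdentifiers=["app-db-instance"],
)

# Applications then connect to proxy["DBProxy"]["Endpoint"] instead of
# the instance endpoint; no other application changes are needed.
```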
678 # A company uses AWS Cost Explorer to monitor its AWS costs. The company notices that Amazon Elastic Block Store (Amazon EBS) storage and snapshot costs increase every month. However, the company does not purchase additional EBS storage every month. The company wants to optimize its monthly costs for its current storage usage. Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon CloudWatch Logs to monitor Amazon EBS storage utilization. Use Amazon EBS Elastic Volumes to reduce the size of EBS volumes.
B. Use a custom script to monitor space usage. Use Amazon EBS Elastic Volumes to reduce the size of EBS volumes.
C. Delete all expired and unused snapshots to reduce snapshot costs.
D. Delete all non-essential snapshots. Use Amazon Data Lifecycle Manager to create and manage snapshots according to your company’s snapshot policy requirements.
D. Delete all non-essential snapshots. Use Amazon Data Lifecycle Manager to create and manage snapshots according to your company’s snapshot policy requirements.
Delete all nonessential snapshots: This reduces costs by eliminating unnecessary snapshot storage. Use Amazon Data Lifecycle Manager (DLM): DLM can automate the creation and deletion of snapshots based on defined policies. This reduces operational overhead by automating snapshot management according to the company’s snapshot policy requirements.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html
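A minimal Amazon Data Lifecycle Manager policy via boto3, assuming volumes are tagged Backup=true; the role ARN and schedule values are placeholders.

```python
import boto3

dlm = boto3.client("dlm")

# Daily snapshots of tagged volumes, keeping 30 days of history.
dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::111122223333:role/AWSDataLifecycleManagerDefaultRole",
    Description="Daily EBS snapshots per company policy",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "true"}],
        "Schedules": [{
            "Name": "daily-snapshots",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            "RetainRule": {"Count": 30},  # older snapshots are deleted automatically
        }],
    },
)
```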
679 # A company is developing a new application on AWS. The application consists of an Amazon Elastic Container Service (Amazon ECS) cluster, an Amazon S3 bucket that contains assets for the application, and an Amazon RDS for MySQL database that contains the application’s data set. The data set contains sensitive information. The company wants to ensure that only the ECS cluster can access data in the RDS for MySQL database and data in the S3 bucket. What solution will meet these requirements?
A. Create a new AWS Key Management Service (AWS KMS) customer-managed key to encrypt both the S3 bucket and the RDS database for MySQL. Ensure that the KMS key policy includes encryption and decryption permissions for the ECS task execution role.
B. Create an AWS Key Management Service (AWS KMS) AWS Managed Key to encrypt both the S3 bucket and the RDS database for MySQL. Ensure that the S3 bucket policy specifies the ECS task execution role as user.
C. Create an S3 bucket policy that restricts bucket access to the ECS task execution role. Create a VPC endpoint for Amazon RDS for MySQL. Update the RDS security group for MySQL to allow access only from the subnets on which the ECS cluster will generate tasks.
D. Create a VPC endpoint for Amazon RDS for MySQL. Update the RDS security group for MySQL to allow access only from the subnets on which the ECS cluster will generate tasks. Create a VPC endpoint for Amazon S3. Update the S3 bucket policy to allow access only from the S3 VPC endpoint.
D. Create a VPC endpoint for Amazon RDS for MySQL. Update the RDS security group for MySQL to allow access only from the subnets on which the ECS cluster will generate tasks. Create a VPC endpoint for Amazon S3. Update the S3 bucket policy to allow access only from the S3 VPC endpoint.
This approach controls access at the network level by ensuring that the RDS database and S3 bucket are accessible only through the specified VPC endpoints, limiting access to resources within the ECS cluster VPC.
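The S3 half of option D can be sketched as a bucket policy keyed on the aws:SourceVpce condition; the bucket name and endpoint ID are placeholders.

```python
import json
import boto3

s3 = boto3.client("s3")

# Deny all access to the assets bucket unless the request arrives
# through the S3 VPC endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowVpceOnly",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:::app-assets", "arn:aws:s3:::app-assets/*"],
        "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}},
    }],
}
s3.put_bucket_policy(Bucket="app-assets", Policy=json.dumps(policy))
```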
680 # A company has a web application that runs on premises. The app experiences latency issues during peak hours. Latency issues occur twice a month. At the beginning of a latency issue, the application’s CPU utilization immediately increases to 10 times its normal amount. The company wants to migrate the application to AWS to improve latency. The company also wants to scale the app automatically when demand for the app increases. The company will use AWS Elastic Beanstalk for application deployment. What solution will meet these requirements?
A. Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited mode. Configure the environment to scale based on requests.
B. Configure an Elastic Beanstalk environment to use compute-optimized instances. Configure the environment to scale based on requests.
C. Configure an Elastic Beanstalk environment to use compute-optimized instances. Configure the environment to scale on a schedule.
D. Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited mode. Configure the environment to scale on predictive metrics.
D. Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited mode. Configure the environment to scale on predictive metrics.
Burstable performance instances in unlimited mode can absorb sudden CPU spikes beyond their baseline, and configuring the environment to scale on predictive metrics allows proactive scaling based on anticipated demand. This aligns well with the requirement to scale automatically when CPU utilization increases to 10 times its normal amount, making option D the most suitable solution to improve latency and automatically scale the application during peak hours.
Predictive scaling works by analyzing historical load data to detect daily or weekly patterns in traffic flows. It uses this information to forecast future capacity needs so Amazon EC2 Auto Scaling can proactively increase the capacity of your Auto Scaling group to match the anticipated load.
Predictive scaling is well suited for situations where you have:
Cyclical traffic, such as high use of resources during regular business hours and low use of resources during evenings and weekends
Recurring on-and-off workload patterns, such as batch processing, testing, or periodic data analysis
Applications that take a long time to initialize, causing a noticeable latency impact on application performance during scale-out events
In general, if you have regular patterns of traffic increases and applications that take a long time to initialize, you should consider using predictive scaling. Predictive scaling can help you scale faster by launching capacity in advance of forecasted load, compared to using only dynamic scaling, which is reactive in nature. Predictive scaling can also potentially save you money on your EC2 bill by helping you avoid the need to over-provision capacity.
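For reference, a predictive scaling policy might be attached to the environment's Auto Scaling group like this; the group name, target value, and buffer time are assumptions.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Predictive scaling driven by the group's CPU utilization history.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="beanstalk-asg",   # placeholder group name
    PolicyName="cpu-predictive-scaling",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [{
            "TargetValue": 50.0,
            "PredefinedMetricPairSpecification": {
                "PredefinedMetricType": "ASGCPUUtilization",
            },
        }],
        "Mode": "ForecastAndScale",   # forecast and act, not forecast-only
        "SchedulingBufferTime": 300,  # launch capacity 5 minutes early
    },
)
```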
681 # A company has customers located all over the world. The company wants to use automation to protect its systems and network infrastructure. The company’s security team must be able to track and audit all incremental changes to the infrastructure. What solution will meet these requirements?
A. Use AWS Organizations to configure infrastructure. Use AWS Config to track changes.
B. Use AWS CloudFormation to configure the infrastructure. Use AWS Config to track changes.
C. Use AWS Organizations to configure the infrastructure. Use the AWS Service Catalog to track changes.
D. Use AWS CloudFormation to configure the infrastructure. Use the AWS Service Catalog to track changes.
B. Use AWS CloudFormation to configure the infrastructure. Use AWS Config to track changes.
AWS CloudFormation is an infrastructure as code (IaC) service that allows you to define and provision AWS infrastructure. Using CloudFormation ensures automation in infrastructure configuration, and AWS Config can be used to track changes and maintain an inventory of resources.
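As a small illustration of the IaC side; the stack name and template file are placeholders. AWS Config then records a configuration timeline for each resulting resource.

```python
import boto3

cfn = boto3.client("cloudformation")

# Provision infrastructure from a version-controlled template.
with open("network-baseline.yaml") as f:
    template_body = f.read()

cfn.create_stack(
    StackName="network-baseline",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],  # template creates IAM resources
)

# Subsequent changes go through update_stack or change sets, so every
# incremental change is expressed in code and auditable via AWS Config.
```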
682 # A startup is hosting a website for its customers on an Amazon EC2 instance. The website consists of a stateless Python application and a MySQL database. The website only serves a small amount of traffic. The company is concerned about instance reliability and needs to migrate to a highly available architecture. The company cannot modify the application code. What combination of actions should a solutions architect take to achieve high availability for the website? (Choose two.)
A. Provision an Internet gateway in each availability zone in use.
B. Migrate the database to an Amazon RDS for MySQL Multi-AZ DB instance.
C. Migrate the database to Amazon DynamoDB and enable DynamoDB auto-scaling.
D. Use AWS DataSync to synchronize database data across multiple EC2 instances.
E. Create an application load balancer to distribute traffic to an auto-scaling group of EC2 instances that are spread across two availability zones.
B. Migrate the database to an Amazon RDS for MySQL Multi-AZ DB instance.
E. Create an application load balancer to distribute traffic to an auto-scaling group of EC2 instances that are spread across two availability zones.
Amazon RDS Multi-AZ deployments provide high availability for database instances: the database is automatically replicated to a standby instance in a different Availability Zone, ensuring failover in the event of a failure in the primary AZ.
An Application Load Balancer in front of an Auto Scaling group spread across two Availability Zones distributes traffic across multiple EC2 instances and enables demand-based scaling, providing high availability for the web tier.
Therefore, options B and E together achieve high availability for the website without modifying the application code.
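A sketch of the Multi-AZ database half; the identifiers and credentials are placeholders.

```python
import boto3

rds = boto3.client("rds")

# Multi-AZ keeps a synchronous standby in another AZ and fails over
# automatically on primary failure.
rds.create_db_instance(
    DBInstanceIdentifier="website-mysql",
    Engine="mysql",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=50,
    MasterUsername="admin",
    MasterUserPassword="example-password",  # use Secrets Manager in practice
    MultiAZ=True,
)
```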
683 # A company is moving its data and applications to AWS during a multi-year migration project. The company wants to securely access Amazon S3 data from the company's AWS Region and from the company's on-premises location. The data must not traverse the internet. The company has established an AWS Direct Connect connection between the Region and the on-premises location. Which solution will meet these requirements?
A. Create gateway endpoints for Amazon S3. Use gateway endpoints to securely access the data from the Region and the on-premises location.
B. Create a gateway on AWS Transit Gateway to access Amazon S3 securely from your region and on-premises location.
C. Create interface endpoints for Amazon S3. Use the interface endpoints to securely access the data from the Region and the on-premises location.
D. Use an AWS Key Management Service (AWS KMS) key to securely access data from your region and on-premises location.
C. Create interface endpoints for Amazon S3. Use the interface endpoints to securely access the data from the Region and the on-premises location.
Gateway endpoints do not allow access from on-premises networks, from peered VPCs in other AWS Regions, or through a transit gateway. For those scenarios, you must use an interface endpoint, which is available for an additional cost. https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html
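A minimal sketch of creating an S3 interface endpoint with boto3; all IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# An interface endpoint for S3 is reachable over Direct Connect,
# unlike a gateway endpoint.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Interface",
    SubnetIds=["subnet-aaa", "subnet-bbb"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)
# On-premises clients reach S3 through the endpoint-specific DNS names
# resolved over the Direct Connect connection.
```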
684 # A company created a new organization in AWS Organizations. The organization has multiple accounts for the company's development teams. Development team members use AWS IAM Identity Center (AWS Single Sign-On) to access the accounts. For each of the company's applications, the development teams must use a predefined application name to tag the resources that they create. A solutions architect needs to design a solution that gives the development teams the ability to create resources only if the application name tag has an approved value. Which solution will meet these requirements?
A. Create an IAM group that has a conditional permission policy that requires the application name tag to be specified to create resources.
B. Create a cross-account role that has a deny policy for any resources that have the application name tag.
C. Create a resource group in AWS Resource Groups to validate that the tags are applied to all resources in all accounts.
D. Create a tag policy in Organizations that has a list of allowed application names.
D. Create a tag policy in Organizations that has a list of allowed application names.
AWS Organizations allows you to create tag policies that define which tags should be applied to resources and which values are allowed. This is an effective way to ensure that only approved application names are used as tag values. Therefore, option D, creating a tag policy in Organizations with a list of allowed application names, is the most appropriate solution to enforce the required tag values.
Wrong: A. Create an IAM group that has a conditional permission policy that requires the application name tag to be specified to create resources. While IAM policies can include conditions, they focus on actions and resources and are harder to maintain for enforcing specific tag values consistently across an entire organization.
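A sketch of such a tag policy following the documented tag-policy syntax; the tag key, allowed values, and IDs are assumptions.

```python
import json
import boto3

orgs = boto3.client("organizations")

# Tag policy allowing only approved application names; "enforced_for"
# blocks noncompliant tagging for the listed resource types.
tag_policy = {
    "tags": {
        "appname": {
            "tag_key": {"@@assign": "appname"},
            "tag_value": {"@@assign": ["inventory", "billing", "reporting"]},
            "enforced_for": {"@@assign": ["ec2:instance", "s3:bucket"]},
        }
    }
}

policy = orgs.create_policy(
    Name="approved-application-names",
    Description="Allowed values for the appname tag",
    Type="TAG_POLICY",
    Content=json.dumps(tag_policy),
)

# Attach the policy to the organization root or an OU (ID is a placeholder).
orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid",
)
```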
685 # A company runs its databases on Amazon RDS for PostgreSQL. The company wants a secure solution to manage the master user password by rotating the password every 30 days. Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon EventBridge to schedule a custom AWS Lambda function to rotate the password every 30 days.
B. Use the modify-db-instance command in the AWS CLI to change the password.
C. Integrate AWS Secrets Manager with Amazon RDS for PostgreSQL to automate password rotation.
D. Integrate AWS Systems Manager Parameter Store with Amazon RDS for PostgreSQL to automate password rotation.
C. Integrate AWS Secrets Manager with Amazon RDS for PostgreSQL to automate password rotation.
AWS Secrets Manager provides a managed solution for rotating database credentials, including built-in support for Amazon RDS. It enables automatic rotation of the master user password for RDS for PostgreSQL on a 30-day schedule with minimal operational overhead.
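One possible boto3 sketch, assuming the RDS-managed master user secret accepts a Secrets Manager rotation schedule; the identifiers are placeholders.

```python
import boto3

rds = boto3.client("rds")
secrets = boto3.client("secretsmanager")

# Let RDS manage the master password as a secret in Secrets Manager.
resp = rds.modify_db_instance(
    DBInstanceIdentifier="app-postgres",   # placeholder identifier
    ManageMasterUserPassword=True,
    ApplyImmediately=True,
)
secret_arn = resp["DBInstance"]["MasterUserSecret"]["SecretArn"]

# Rotate the managed secret every 30 days.
secrets.rotate_secret(
    SecretId=secret_arn,
    RotationRules={"AutomaticallyAfterDays": 30},
)
```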
686 # A company tests an application that uses an Amazon DynamoDB table. The tests run for 4 hours once a week. The company knows how many read and write operations the application performs on the table each second during testing. The company does not currently use DynamoDB for any other use case. A solutions architect needs to optimize the table's costs. Which solution will meet these requirements?
A. Choose on-demand mode. Update read and write capacity units appropriately.
B. Choose provisioned mode. Update read and write capacity units appropriately.
C. Purchase DynamoDB reserved capacity for a period of 1 year.
D. Purchase DynamoDB reserved capacity for a period of 3 years.
B. Choose provisioned mode. Update read and write capacity units appropriately.
In provisioned capacity mode, you manually provision read and write capacity units based on your known workload. Because the company knows the read and write operations per second during testing, it can provision the exact capacity needed for those specific periods, optimizing costs by not paying for unused capacity at other times.
On-demand mode in DynamoDB automatically adjusts read and write capacity based on actual usage. However, since the workload is known and occurs only during specific periods, provisioned mode is likely more cost-effective.
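A sketch of toggling capacity around the weekly test window; the table name and capacity values are placeholders.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Before the weekly 4-hour test window: provision the known throughput.
dynamodb.update_table(
    TableName="test-table",
    BillingMode="PROVISIONED",
    ProvisionedThroughput={"ReadCapacityUnits": 500, "WriteCapacityUnits": 200},
)

# After the test window: scale back down to minimal capacity.
dynamodb.update_table(
    TableName="test-table",
    ProvisionedThroughput={"ReadCapacityUnits": 1, "WriteCapacityUnits": 1},
)
```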
687 # A company runs its applications on Amazon EC2 instances. The company conducts periodic financial evaluations of its AWS costs. The company recently identified an unusual expense. The company needs a solution to avoid unusual expenses. The solution should monitor costs and notify responsible stakeholders in case of unusual expenses. What solution will meet these requirements?
A. Use an AWS budget template to create a zero-spend budget.
B. Create an AWS Cost Anomaly Detection Monitor in the AWS Billing and Cost Management Console.
C. Create AWS Pricing Calculator estimates for current running workload pricing details.
D. Use Amazon CloudWatch to monitor costs and identify unusual expenses.
B. Create an AWS Cost Anomaly Detection Monitor in the AWS Billing and Cost Management Console.
AWS Cost Anomaly Detection uses machine learning to identify unexpected spending patterns and anomalies in your costs. It can automatically detect unusual expenses and send notifications, making it suitable for the described scenario.
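A minimal sketch of option B with boto3; the monitor name, threshold, and email address are assumptions.

```python
import boto3

ce = boto3.client("ce")  # the Cost Explorer API hosts anomaly detection

# Monitor each AWS service's spend for anomalies.
monitor = ce.create_anomaly_monitor(
    AnomalyMonitor={
        "MonitorName": "service-spend-monitor",
        "MonitorType": "DIMENSIONAL",
        "MonitorDimension": "SERVICE",
    }
)

# Email stakeholders when an anomaly's total impact reaches $100.
ce.create_anomaly_subscription(
    AnomalySubscription={
        "SubscriptionName": "stakeholder-alerts",
        "MonitorArnList": [monitor["MonitorArn"]],
        "Subscribers": [{"Type": "EMAIL", "Address": "finance@example.com"}],
        "Frequency": "DAILY",
        "ThresholdExpression": {
            "Dimensions": {
                "Key": "ANOMALY_TOTAL_IMPACT_ABSOLUTE",
                "MatchOptions": ["GREATER_THAN_OR_EQUAL"],
                "Values": ["100"],
            }
        },
    }
)
```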
688 # A marketing company receives a large amount of new clickstream data in Amazon S3 from a marketing campaign. The company needs to analyze the clickstream data in Amazon S3 quickly. Then the company needs to determine whether to process the data further in the data pipeline. Which solution will meet these requirements with the LEAST operational overhead?
A. Create external tables in a Spark catalog. Set up jobs in AWS Glue to query the data.
B. Configure an AWS Glue crawler to crawl the data. Configure Amazon Athena to query data.
C. Create external tables in a Hive metastore. Configure Spark jobs in Amazon EMR to query data.
D. Configure an AWS Glue crawler to crawl the data. Configure Amazon Kinesis Data Analytics to use SQL to query data.
B. Configure an AWS Glue crawler to crawl the data. Configure Amazon Athena to query data.
An AWS Glue crawler can automatically discover and catalog metadata about the clickstream data in S3. Amazon Athena, as a serverless query service, allows you to run fast ad hoc SQL queries on the data without needing to configure and manage infrastructure.
AWS Glue is a fully managed extract, transform, and load (ETL) service, and Athena is a serverless query service that allows you to analyze data directly in Amazon S3 using SQL queries. By configuring an AWS Glue crawler to crawl the data, you create a schema for the data, and you can then use Athena to query the data in place without loading it into a separate database. This minimizes operational overhead.
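A sketch of the crawler-plus-Athena flow; the role ARN, database, bucket paths, table name, and query are placeholders.

```python
import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")

# Crawl the clickstream prefix and catalog its schema.
glue.create_crawler(
    Name="clickstream-crawler",
    Role="arn:aws:iam::111122223333:role/glue-crawler-role",
    DatabaseName="clickstream_db",
    Targets={"S3Targets": [{"Path": "s3://campaign-clickstream/raw/"}]},
)
glue.start_crawler(Name="clickstream-crawler")

# Once the table exists, query it ad hoc with Athena.
athena.start_query_execution(
    QueryString=(
        "SELECT page, COUNT(*) AS hits FROM clicks "
        "GROUP BY page ORDER BY hits DESC LIMIT 10"
    ),
    QueryExecutionContext={"Database": "clickstream_db"},
    ResultConfiguration={"OutputLocation": "s3://campaign-clickstream/athena-results/"},
)
```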
689 # A company runs an SMB file server in its data center. The file server stores large files that the company accesses frequently for up to 7 days after the file creation date. After 7 days, the company must still be able to access the files, with a maximum retrieval time of 24 hours. Which solution will meet these requirements?
A. Use AWS DataSync to copy data older than 7 days from the SMB file server to AWS.
B. Create an Amazon S3 file gateway to increase the company’s storage space. Create an S3 lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.
C. Create an Amazon FSx file gateway to increase your company’s storage space. Create an Amazon S3 lifecycle policy to transition data after 7 days.
D. Configure access to Amazon S3 for each user. Create an S3 lifecycle policy to transition data to S3 Glacier Flexible Retrieval after 7 days.
B. Create an Amazon S3 file gateway to increase the company’s storage space. Create an S3 lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.
An S3 File Gateway presents an S3 bucket as an SMB (or NFS) file share, so the company can keep its existing workflow. Transitioning the data to S3 Glacier Deep Archive after 7 days with an S3 lifecycle policy cuts storage costs, and Deep Archive's standard retrieval time (up to 12 hours) is within the required 24 hours.
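The lifecycle half can be sketched as follows; the bucket name is a placeholder.

```python
import boto3

s3 = boto3.client("s3")

# Move file-gateway objects to Glacier Deep Archive 7 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="file-gateway-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "deep-archive-after-7-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},   # apply to the whole bucket
            "Transitions": [{"Days": 7, "StorageClass": "DEEP_ARCHIVE"}],
        }]
    },
)
```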
690 # A company runs a web application on Amazon EC2 instances in an Auto Scaling group. The application uses a database running on an Amazon RDS for PostgreSQL DB instance. The app runs slowly when traffic increases. The database experiences heavy read load during high traffic periods. What actions should a solutions architect take to resolve these performance issues? (Choose two.)
A. Enable auto-scaling for the database instance.
B. Create a read replica for the database instance. Configure the application to send read traffic to the read replica.
C. Convert the database instance to a Multi-AZ DB instance deployment. Configure the application to send read traffic to the standby database instance.
D. Create an Amazon ElastiCache cluster. Configure the application to cache query results in the ElastiCache cluster.
E. Configure Auto Scaling group subnets to ensure that EC2 instances are provisioned in the same availability zone as the database instance.
B. Create a read replica for the database instance. Configure the application to send read traffic to the read replica.
D. Create an Amazon ElastiCache cluster. Configure the application to cache query results in the ElastiCache cluster.
By creating a read replica, you offload read traffic from the primary database instance to the replica, distributing the read load and improving overall performance. This is a common approach to scale out read-heavy database workloads.
Amazon ElastiCache is a managed caching service that can help improve application performance by caching frequently accessed data. Caching query results in ElastiCache reduces the load on the PostgreSQL database, especially for repeated read queries.
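A sketch of the read-replica half; the identifiers are placeholders.

```python
import boto3

rds = boto3.client("rds")

# Offload read traffic to a replica of the primary instance.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-postgres-replica",
    SourceDBInstanceIdentifier="app-postgres",
    DBInstanceClass="db.r6g.large",
)
# The application sends SELECT traffic to the replica's endpoint and
# writes to the primary; ElastiCache then absorbs repeated hot reads.
```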
691 # A company uses Amazon EC2 instances and Amazon Elastic Block Store (Amazon EBS) volumes to run an application. The company creates a snapshot of each EBS volume every day to meet compliance requirements. The company wants to implement an architecture that prevents accidental deletion of EBS volume snapshots. The solution should not change the administrative rights of the storage administrator user. Which solution will meet these requirements with the LEAST administrative effort?
A. Create an IAM role that has permission to delete snapshots. Attach the role to a new EC2 instance. Use the AWS CLI from the new EC2 instance to delete snapshots.
B. Create an IAM policy that denies the deletion of snapshots. Attach the policy to the storage administrator user.
C. Add tags to the snapshots. Create Recycle Bin retention rules for EBS snapshots that have the tags.
D. Lock EBS snapshots to prevent deletion.
D. Lock EBS snapshots to prevent deletion.
Amazon EBS provides a built-in feature to lock snapshots, preventing them from being deleted. This is a straightforward and effective solution that does not involve creating additional IAM roles, policies, or tags. It directly addresses the requirement to prevent accidental deletion with minimal administrative effort.
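A hedged sketch using the EC2 LockSnapshot API; the snapshot ID and duration are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Lock a snapshot in governance mode for 30 days; locked snapshots
# cannot be deleted until the lock expires or is released.
ec2.lock_snapshot(
    SnapshotId="snap-0123456789abcdef0",
    LockMode="governance",
    LockDuration=30,   # days
)
```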
692 # A company's application uses Network Load Balancers, Auto Scaling groups, Amazon EC2 instances, and databases that are deployed in an Amazon VPC. The company wants to capture information about traffic to and from the network interfaces in its Amazon VPC in near real time. The company wants to send the information to Amazon OpenSearch Service for analysis. Which solution will meet these requirements?
A. Create a log group in Amazon CloudWatch Logs. Configure VPC flow logs to send log data to the log group. Use Amazon Kinesis Data Streams to stream log group logs to the OpenSearch service.
B. Create a log group in Amazon CloudWatch Logs. Configure VPC flow logs to send log data to the log group. Use Amazon Kinesis Data Firehose to stream log group logs to the OpenSearch service.
C. Create a trail in AWS CloudTrail. Configure VPC flow logs to send log data to the trail. Use Amazon Kinesis Data Streams to stream the trail records to the OpenSearch Service.
D. Create a trail in AWS CloudTrail. Configure VPC flow logs to send log data to the trail. Use Amazon Kinesis Data Firehose to stream the trail logs to the OpenSearch Service.
B. Create a log group in Amazon CloudWatch Logs. Configure VPC flow logs to send log data to the log group. Use Amazon Kinesis Data Firehose to stream log group logs to the OpenSearch service.
Other answers:
A. Create a log group in Amazon CloudWatch Logs. Configure VPC flow logs to send log data to the log group. Use Amazon Kinesis Data Streams to stream log group logs to the OpenSearch Service. This option configures VPC flow logs to capture network traffic information and send the logs to an Amazon CloudWatch log group, then uses Amazon Kinesis Data Streams to stream the log group's logs to Amazon OpenSearch Service. While this is technically feasible, Kinesis Data Streams requires managing consumers and introduces unnecessary complexity for this use case, whereas Kinesis Data Firehose delivers to OpenSearch Service directly.
C. Create a trail in AWS CloudTrail. Configure VPC flow logs to send log data to the trail. Use Amazon Kinesis Data Streams to stream the trail records to the OpenSearch Service. This option uses AWS CloudTrail to capture VPC flow logs and then streams them with Amazon Kinesis Data Streams. However, CloudTrail logs API activity and does not provide the detailed network traffic information captured by VPC flow logs.
D. Create a trail in AWS CloudTrail. Configure VPC flow logs to send log data to the trail. Use Amazon Kinesis Data Firehose to stream the trail logs to the OpenSearch Service. Like option C, this option routes VPC flow logs through AWS CloudTrail, but suggests Amazon Kinesis Data Firehose instead of Kinesis Data Streams. Again, CloudTrail is not the right mechanism for capturing detailed network traffic information.
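A sketch of the two wiring steps in option B; all ARNs and IDs are placeholders, and the Firehose delivery stream pointing at the OpenSearch domain is assumed to already exist.

```python
import boto3

ec2 = boto3.client("ec2")
logs = boto3.client("logs")

# 1. Send VPC flow logs to a CloudWatch Logs group.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="cloud-watch-logs",
    LogGroupName="vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::111122223333:role/flow-logs-role",
)

# 2. Subscribe the log group to a Firehose delivery stream whose
#    destination is the OpenSearch Service domain.
logs.put_subscription_filter(
    logGroupName="vpc-flow-logs",
    filterName="to-opensearch",
    filterPattern="",   # forward every event
    destinationArn="arn:aws:firehose:us-east-1:111122223333:deliverystream/flow-to-opensearch",
    roleArn="arn:aws:iam::111122223333:role/cwl-to-firehose-role",
)
```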
693 # A company is developing an application that will run on a production Amazon Elastic Kubernetes Service (Amazon EKS) cluster. The EKS cluster has managed node groups that are provisioned with On-Demand Instances. The company needs a dedicated EKS cluster for development work. The company will use the development cluster infrequently to test the resilience of the application. The EKS cluster must manage all the nodes. Which solution will meet these requirements in the MOST cost-effective way?
A. Create a managed node group that contains only spot instances.
B. Create two managed node groups. Provision one node group with On-Demand Instances. Provision the second node group with Spot Instances.
C. Create an Auto Scaling group that has a launch configuration that uses Spot Instances. Configure the user data to add the nodes to the EKS cluster.
D. Create a managed node group that contains only on-demand instances.
B. Create two managed node groups. Provision one node group with On-Demand Instances. Provision the second node group with Spot Instances.
This option gives the company a dedicated EKS cluster for development work while keeping all nodes under EKS management. By creating two managed node groups, one with On-Demand Instances and the other with Spot Instances, the company can manage costs effectively. On-Demand Instances provide the stability and predictable performance needed for day-to-day development work.
Spot Instances offer significant cost savings but come with the trade-off of potential interruption at short notice. For infrequent testing and resilience experiments, however, Spot Instances are an effective way to optimize costs.
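A sketch of the Spot-backed managed node group; the cluster name, instance types, subnets, and role are placeholders.

```python
import boto3

eks = boto3.client("eks")

# Spot-backed managed node group for the development cluster.
eks.create_nodegroup(
    clusterName="dev-cluster",
    nodegroupName="dev-spot-nodes",
    capacityType="SPOT",
    instanceTypes=["m5.large", "m5a.large", "m4.large"],  # diversify Spot pools
    subnets=["subnet-aaa", "subnet-bbb"],
    nodeRole="arn:aws:iam::111122223333:role/eks-node-role",
    scalingConfig={"minSize": 0, "maxSize": 6, "desiredSize": 0},
)
```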
694 # A company stores sensitive data in Amazon S3. A solutions architect needs to create an encryption solution. The company needs full control over the ability of users to create, rotate, and disable encryption keys, with minimal effort for any data that must be encrypted. Which solution will meet these requirements?
A. Use default server-side encryption with Amazon S3 Managed Encryption Keys (SSE-S3) to store sensitive data.
B. Create a customer-managed key using AWS Key Management Service (AWS KMS). Use the new key to encrypt S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
C. Create an AWS managed key using the AWS Key Management Service (AWS KMS). Use the new key to encrypt S3 objects using server-side encryption with AWS KMS keys (SSE-KMS).
D. Download the S3 objects to an Amazon EC2 instance. Encrypt the objects by using customer-managed keys. Upload the encrypted objects back to Amazon S3.
B. Create a customer-managed key using AWS Key Management Service (AWS KMS). Use the new key to encrypt S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
AWS KMS allows you to create and manage customer managed keys (CMKs), giving you full control over the key lifecycle. This includes the ability to create, rotate, and deactivate keys as needed. Using server-side encryption with AWS KMS keys (SSE-KMS) ensures that S3 objects are encrypted with the specified customer-managed key. This provides a secure, managed approach to encrypt sensitive data in Amazon S3.
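A minimal sketch of option B with boto3; the bucket name is a placeholder.

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Customer-managed key: the company controls its policy, rotation,
# and enable/disable lifecycle.
key = kms.create_key(Description="S3 sensitive-data key")
key_id = key["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)   # automatic annual rotation

# Default bucket encryption with the customer-managed key.
s3.put_bucket_encryption(
    Bucket="sensitive-data-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": key_id,
            },
            "BucketKeyEnabled": True,   # reduces KMS request costs
        }]
    },
)
```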
695 # A company wants to back up its on-premises virtual machines (VMs) to AWS. The company's backup solution exports on-premises backups to an Amazon S3 bucket as objects. The S3 backups must be retained for 30 days and must be automatically deleted after 30 days. Which combination of steps will meet these requirements? (Choose three.)
A. Create an S3 bucket that has S3 object locking enabled.
B. Create an S3 bucket that has object versioning enabled.
C. Set a default retention period of 30 days for the objects.
D. Configure an S3 lifecycle policy to protect objects for 30 days.
E. Configure an S3 lifecycle policy to expire the objects after 30 days.
F. Configure the backup solution to tag objects with a 30-day retention period
A. Create an S3 bucket that has S3 object locking enabled.
C. Set a default retention period of 30 days for the objects.
E. Configure an S3 lifecycle policy to expire the objects after 30 days.
S3 Object Lock ensures that objects in the bucket cannot be deleted or modified for a specified retention period, which meets the requirement to retain backups for 30 days.
Setting a default retention period of 30 days on the bucket enforces that retention policy on every object the backup solution writes.
An S3 lifecycle policy then automatically expires (deletes) the objects after the 30-day retention period, ensuring that backups are deleted automatically once retention lapses.
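Sketched with boto3; the bucket name is a placeholder.

```python
import boto3

s3 = boto3.client("s3")
bucket = "vm-backups-example"   # placeholder name

# Object Lock must be enabled at bucket creation (versioning comes with it).
s3.create_bucket(Bucket=bucket, ObjectLockEnabledForBucket=True)

# Default retention: every new object is locked for 30 days.
s3.put_object_lock_configuration(
    Bucket=bucket,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)

# Expire (delete) objects once the 30-day retention has passed.
# Because the bucket is versioned, consider NoncurrentVersionExpiration
# as well to fully remove old versions.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-after-30-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Expiration": {"Days": 30},
        }]
    },
)
```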
696 # A solutions architect needs to copy files from an Amazon S3 bucket to an Amazon Elastic File System (Amazon EFS) file system and to another S3 bucket. The files must be copied continuously. New files are added to the source S3 bucket consistently. The copied files should be overwritten only if the source file changes. Which solution will meet these requirements with the LEAST operational overhead?
A. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task for the target S3 bucket and EFS file system. Set the transfer mode to transfer only the data that has changed.
B. Create an AWS Lambda function. Mount the file system in the function. Configure an S3 event notification to invoke the function when files are created and changed in Amazon S3. Configure the function to copy files to the file system and the destination S3 bucket.
C. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task for the target S3 bucket and EFS file system. Set the transfer mode to transfer all data.
D. Start an Amazon EC2 instance in the same VPC as the file system. Mount the file system. Create a script to routinely synchronize all objects that changed in the source S3 bucket with the destination S3 bucket and the mounted file system.
A. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task for the target S3 bucket and EFS file system. Set the transfer mode to transfer only the data that has changed.
AWS DataSync is a managed service that automates, accelerates, and simplifies data transfers between storage systems and AWS storage services, including transfers between AWS storage services themselves. By setting the transfer mode to transfer only data that has changed, the task copies (and overwrites) a file only when the source file differs from the destination copy, which minimizes operational overhead.
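A sketch of option A's locations and tasks; all ARNs are placeholders, and a schedule is added to approximate continuous copying.

```python
import boto3

datasync = boto3.client("datasync")

# Locations for the source bucket, the destination bucket, and the EFS
# file system.
src = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::source-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/datasync-s3"},
)
dst_s3 = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::destination-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/datasync-s3"},
)
dst_efs = datasync.create_location_efs(
    EfsFilesystemArn="arn:aws:elasticfilesystem:us-east-1:111122223333:file-system/fs-0123456789abcdef0",
    Ec2Config={
        "SubnetArn": "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-aaa",
        "SecurityGroupArns": ["arn:aws:ec2:us-east-1:111122223333:security-group/sg-bbb"],
    },
)

# One task per destination; TransferMode="CHANGED" copies only files
# that differ from the destination, so unchanged copies stay untouched.
for dest in (dst_s3["LocationArn"], dst_efs["LocationArn"]):
    datasync.create_task(
        SourceLocationArn=src["LocationArn"],
        DestinationLocationArn=dest,
        Options={"TransferMode": "CHANGED", "OverwriteMode": "ALWAYS"},
        Schedule={"ScheduleExpression": "rate(1 hour)"},  # near-continuous
    )
```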