sthithapragnakk -- SAA Exam Dumps Jan 24 COPY Flashcards

1
Q

773 # A company wants to deploy its containerized application workloads in a VPC across three availability zones. The business needs a solution that is highly available across all availability zones. The solution should require minimal changes to the application. Which solution will meet these requirements with the LEAST operational overhead?

A. Use Amazon Elastic Container Service (Amazon ECS). Configure Amazon ECS service auto-scaling to use target tracking scaling. Set the minimum capacity to 3. Set the task placement strategy type to spread with an availability zone attribute.
B. Use Amazon Elastic Kubernetes Service (Amazon EKS) self-managed nodes. Configure application auto-scaling to use target tracking scaling. Set the minimum capacity to 3.
C. Use Amazon EC2 Reserved Instances. Start three EC2 instances in a spread placement group. Configure an auto-scaling group to use target tracking scaling. Set the minimum capacity to 3.
D. Use an AWS Lambda function. Configure the Lambda function to connect to a VPC. Configure application auto-scaling to use Lambda as a scalable target. Set the minimum capacity to 3.

A

A. Use Amazon Elastic Container Service (Amazon ECS). Configure Amazon ECS service auto-scaling to use target tracking scaling. Set the minimum capacity to 3. Set the task placement strategy type to spread with an availability zone attribute.

This option uses Amazon ECS for container orchestration. Amazon ECS service auto scaling allows you to automatically adjust the number of tasks running in a service. Setting the task placement strategy to "spread" with an Availability Zone attribute ensures that tasks are distributed evenly across Availability Zones. This solution is designed for high availability with minimal application changes.
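A minimal boto3 sketch of what option A describes; the cluster, service, and task definition names, the capacity range, and the CPU target are hypothetical.

```python
import boto3

ecs = boto3.client("ecs")
autoscaling = boto3.client("application-autoscaling")

# Create the service with at least 3 tasks, spread across Availability Zones.
# (Placement strategies apply to the EC2 launch type; Fargate spreads tasks
# across AZs automatically.)
ecs.create_service(
    cluster="app-cluster",
    serviceName="web-app",
    taskDefinition="web-app:1",
    desiredCount=3,
    placementStrategy=[
        {"type": "spread", "field": "attribute:ecs.availability-zone"}
    ],
)

# Register the service's desired count as a scalable target (minimum 3 tasks)
# and attach a target tracking policy on average CPU utilization.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/app-cluster/web-app",
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=3,
    MaxCapacity=12,
)
autoscaling.put_scaling_policy(
    PolicyName="web-app-cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId="service/app-cluster/web-app",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,  # assumed CPU target; tune for the workload
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```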

2
Q

774 # A media company stores movies on Amazon S3. Each movie is stored in a single video file ranging from 1 GB to 10 GB in size. The company must be able to provide streaming content for a movie within 5 minutes of a user purchasing it. There is a greater demand for films less than 20 years old than for films more than 20 years old. The company wants to minimize the costs of the hosting service based on demand. What solution will meet these requirements?

A. Store all media in Amazon S3. Use S3 lifecycle policies to move media data to the infrequent access tier when demand for a movie decreases.
B. Store newer movie video files in S3 Standard. Store older movie video files in S3 Standard-Infrequent Access (S3 Standard-IA). When a user requests an older movie, retrieve the video file using standard retrieval.
C. Store newer movie video files in S3 Intelligent-Tiering. Store older movie video files in S3 Glacier Flexible Retrieval. When a user requests an older movie, retrieve the video file using expedited retrieval.
D. Store newer movie video files in S3 Standard. Store older movie video files in S3 Glacier Flexible Retrieval. When a user requests an older movie, retrieve the video file using bulk retrieval.

A

C. Store newer movie video files in S3 Intelligent-Tiering. Store older movie video files in S3 Glacier Flexible Retrieval. When a user requests an older movie, retrieve the video file using expedited retrieval.

This option uses S3 Intelligent-Tiering for newer movies, automatically optimizing costs based on access patterns. Older movies are stored in S3 Glacier Flexible Retrieval, and expedited retrieval is used when a user requests an older movie. Expedited retrieval from S3 Glacier Flexible Retrieval typically makes data available within 1-5 minutes, which meets the 5-minute streaming requirement.
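A minimal boto3 sketch of the expedited restore described above; the bucket and key names are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Request an expedited restore of an archived movie file from
# S3 Glacier Flexible Retrieval (typically available in 1-5 minutes).
s3.restore_object(
    Bucket="movie-archive",          # hypothetical bucket
    Key="older/movie-1998.mp4",      # hypothetical key
    RestoreRequest={
        "Days": 1,                   # keep the restored copy available for 1 day
        "GlacierJobParameters": {"Tier": "Expedited"},
    },
)

# Poll the restore status; the x-amz-restore header surfaces here.
head = s3.head_object(Bucket="movie-archive", Key="older/movie-1998.mp4")
print(head.get("Restore"))
```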

3
Q

775 # A solutions architect needs to design the architecture of an application that a vendor provides as a Docker container image. The container needs 50 GB of available storage for temporary files. The infrastructure must be serverless. Which solution meets these requirements with the LEAST operational overhead?

A. Create an AWS Lambda function that uses the Docker container image with a volume mounted on Amazon S3 that has more than 50 GB of space.
B. Create an AWS Lambda function that uses the Docker container image with an Amazon Elastic Block Store (Amazon EBS) volume that has more than 50 GB of space.
C. Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the AWS Fargate launch type. Create a task definition for the container image with an Amazon Elastic File System (Amazon EFS) volume. Create a service with that task definition.
D. Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the Amazon EC2 launch type with an Amazon Elastic Block Store (Amazon EBS) volume that has more than 50 GB of space. Create a task definition for the container image. Create a service with that task definition.

A

C. Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the AWS Fargate launch type. Create a task definition for the container image with an Amazon Elastic File System (Amazon EFS) volume. Create a service with that task definition.

The key here is the 50 GB requirement. REMEMBER: Lambda supports container images up to 10 GB in size and has ephemeral storage restrictions.

Lambda functions have limited temporary storage in the ephemeral /tmp directory. You can increase the default size of 512 MB up to a maximum of 10 GB. https://blog.awsfundamentals.com/lambda-limitations

This option uses Amazon ECS with AWS Fargate, a serverless compute engine for containers. Using Amazon EFS provides persistent storage that can be shared across multiple containers and tasks. This approach meets the requirement of providing 50 GB of storage and is serverless because it uses Fargate.

AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers. AWS Fargate is compatible with both Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS). https://docs.aws.amazon.com/AmazonECS/latest/developerguide/fargate-task-storage.html Each Amazon ECS task hosted on AWS Fargate receives ephemeral storage (temporary file storage) for bind mounts, which can be mounted and shared among containers using the volumes, mountPoints, and volumesFrom parameters in the task definition.
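A minimal boto3 sketch of a Fargate task definition that mounts an EFS file system for the 50 GB of temporary files; the file system ID, image URI, and sizes are placeholders.

```python
import boto3

ecs = boto3.client("ecs")

# Fargate task definition with an EFS volume mounted at /mnt/scratch.
ecs.register_task_definition(
    family="vendor-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="1024",
    memory="2048",
    containerDefinitions=[
        {
            "name": "vendor-container",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/vendor-app:latest",
            "essential": True,
            "mountPoints": [
                {"sourceVolume": "scratch", "containerPath": "/mnt/scratch"}
            ],
        }
    ],
    volumes=[
        {
            "name": "scratch",
            "efsVolumeConfiguration": {
                "fileSystemId": "fs-0123456789abcdef0",  # hypothetical file system
                "transitEncryption": "ENABLED",
            },
        }
    ],
)
```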

4
Q

776 # A company needs to use its on-premises LDAP directory service to authenticate its users to the AWS Management Console. The directory service does not support Security Assertion Markup Language (SAML). Which solution meets these requirements?

A. Enable AWS IAM Identity Center (AWS Single Sign-On) between AWS and the on-premises LDAP directory.
B. Create an IAM policy that uses AWS credentials and integrate the policy into LDAP.
C. Configure a process that rotates IAM credentials each time LDAP credentials are updated.
D. Develop an on-premises custom identity broker application or process that uses AWS Security Token Service (AWS STS) to obtain short-lived credentials.

A

D. Develop an on-premises custom identity broker application or process that uses AWS Security Token Service (AWS STS) to obtain short-lived credentials.

This option involves creating a custom on-premises identity broker application or process that calls AWS Security Token Service (AWS STS) to obtain short-lived credentials. This custom solution acts as an intermediary between the on-premises LDAP directory and AWS, providing a way to obtain temporary security credentials without requiring SAML support in the directory. This is a common approach for scenarios where SAML is not an option.
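A minimal sketch of the broker flow from option D, assuming the broker has already authenticated the user against LDAP; the user name, inline policy, issuer, and destination are placeholders, and the sign-in URL is built with the documented AWS federation endpoint.

```python
import json
import urllib.parse

import boto3
import requests

# The broker calls STS with long-term IAM credentials that it controls.
sts = boto3.client("sts")

# Scope the temporary credentials with an inline policy (placeholder policy).
policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "s3:ListAllMyBuckets", "Resource": "*"}],
}
creds = sts.get_federation_token(
    Name="ldap-user-jdoe",            # hypothetical LDAP user
    Policy=json.dumps(policy),
    DurationSeconds=3600,
)["Credentials"]

# Exchange the temporary credentials for a console sign-in token.
session = json.dumps({
    "sessionId": creds["AccessKeyId"],
    "sessionKey": creds["SecretAccessKey"],
    "sessionToken": creds["SessionToken"],
})
token = requests.get(
    "https://signin.aws.amazon.com/federation",
    params={"Action": "getSigninToken", "Session": session},
).json()["SigninToken"]

# Build the sign-in URL that the broker hands back to the user.
login_url = (
    "https://signin.aws.amazon.com/federation?Action=login"
    "&Issuer=" + urllib.parse.quote("https://broker.example.com")
    + "&Destination=" + urllib.parse.quote("https://console.aws.amazon.com/")
    + "&SigninToken=" + urllib.parse.quote(token)
)
print(login_url)
```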

**Option A: Enable AWS IAM Identity Center (AWS Single Sign-On) between AWS and on-premises LDAP.**
- Explanation: IAM Identity Center is designed to simplify AWS access management for enterprise users and administrators. It supports integration with on-premises directories, but it relies on SAML for federation. Because the on-premises LDAP directory does not support SAML, option A is not suitable for this scenario.

**Option B: Create an IAM policy that uses AWS credentials and integrate the policy into LDAP.**
- Explanation: AWS IAM policies are associated with AWS identities; they are not integrated directly into LDAP. This option does not align with common practices for federated authentication.

**Option C: Configure a process that rotates IAM credentials each time LDAP credentials are updated.**
- Explanation: Rotating IAM credentials every time LDAP credentials are updated introduces complexity and operational overhead. In addition, IAM credentials are typically long-lived, and this approach does not provide the single sign-on (SSO) experience that federated authentication solutions offer.

5
Q

777 # A company stores multiple Amazon Machine Images (AMIs) in an AWS account to launch its Amazon EC2 instances. AMIs contain critical data and configurations that are necessary for business operations. The company wants to implement a solution that recovers accidentally deleted AMIs quickly and efficiently. Which solution will meet these requirements with the LEAST operational overhead?

A. Create Amazon Elastic Block Store (Amazon EBS) snapshots of the AMIs. Store snapshots in a separate AWS account.
B. Copy all AMIs to another AWS account periodically.
C. Create a Recycle Bin retention rule.
D. Upload the AMIs to an Amazon S3 bucket that has cross-region replication.

A

C. Create a Recycle Bin retention rule.
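Recycle Bin retention rules keep deregistered AMIs (and deleted EBS snapshots) recoverable for a defined period, with nothing to copy and no second account to manage. A minimal boto3 sketch, assuming a 14-day retention period:

```python
import boto3

rbin = boto3.client("rbin")

# Retain deregistered AMIs for 14 days so they can be restored if deleted
# accidentally. The retention period is an assumption.
rbin.create_rule(
    Description="Recover accidentally deregistered AMIs",
    ResourceType="EC2_IMAGE",
    RetentionPeriod={
        "RetentionPeriodValue": 14,
        "RetentionPeriodUnit": "DAYS",
    },
)
```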

6
Q

778 # A company has 150 TB of archived image data stored on-premises that needs to be moved to the AWS cloud within the next month. The company's current network connection allows uploads of up to 100 Mbps for this purpose, and only during the night. What is the MOST cost-effective mechanism to move this data and meet the migration deadline?

A. Use AWS Snowmobile to send data to AWS.
B. Order multiple AWS Snowball devices to send data to AWS.
C. Enable Amazon S3 transfer acceleration and upload data securely.
D. Create an Amazon S3 VPC endpoint and establish a VPN to upload the data.

A

B. Order multiple AWS Snowball devices to send data to AWS.
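Rough arithmetic (assuming about 10 hours of usable upload time per night) shows why the network link cannot meet the one-month deadline and why a couple of Snowball Edge devices can:

```python
# 150 TB over a 100 Mbps link, usable ~10 hours per night (assumption).
data_bits = 150e12 * 8                 # 150 TB in bits
link_bps = 100e6                       # 100 Mbps
seconds_per_night = 10 * 3600

nights = data_bits / link_bps / seconds_per_night
print(f"~{nights:.0f} nights of uploads")   # ~333 nights, far beyond 1 month

# A Snowball Edge Storage Optimized device holds about 80 TB of usable
# storage, so two devices ordered in parallel cover 150 TB.
print(f"devices needed: {150 / 80:.1f}")    # ~1.9 -> order 2 devices
```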

7
Q

779 # A company wants to migrate its three-tier application from on-premises to AWS. The web tier and application tier run on third-party virtual machines (VMs). The database tier is running on MySQL. The company needs to migrate the application by making as few architectural changes as possible. The company also needs a database solution that can restore data to a specific point in time. Which solution will meet these requirements with the LESS operating overhead?

A. Migrate the web tier and application tier to Amazon EC2 instances in private subnets. Migrate the database tier to Amazon RDS for MySQL on private subnets.
B. Migrate the web tier to Amazon EC2 instances in public subnets. Migrate the application tier to EC2 instances in private subnets. Migrate the database tier to Amazon Aurora MySQL in private subnets.
C. Migrate the web tier to Amazon EC2 instances on public subnets. Migrate the application tier to EC2 instances in private subnets. Migrate the database tier to Amazon RDS for MySQL on private subnets.
D. Migrate the web tier and application tier to Amazon EC2 instances on public subnets. Migrate the database tier to Amazon Aurora MySQL on public subnets.

A

B. Migrate the web tier to Amazon EC2 instances in public subnets. Migrate the application tier to EC2 instances in private subnets. Migrate the database tier to Amazon Aurora MySQL in private subnets.

This option introduces Amazon Aurora MySQL for the database tier, a fully managed relational database service compatible with MySQL. Aurora supports point-in-time recovery. While it is a managed service, it also changes the database engine, which can introduce some operational considerations.

Aurora provides automated backup and point-in-time recovery, simplifying backup management and data protection. Continuous incremental backups are taken automatically and stored in Amazon S3, and data retention periods can be specified to meet compliance requirements.
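A minimal boto3 sketch of an Aurora point-in-time restore; the cluster identifiers, instance class, and restore timestamp are placeholders.

```python
from datetime import datetime, timezone

import boto3

rds = boto3.client("rds")

# Restore the Aurora MySQL cluster to a specific point in time; a new
# cluster is created from the continuous backups.
rds.restore_db_cluster_to_point_in_time(
    SourceDBClusterIdentifier="app-aurora-mysql",          # hypothetical cluster
    DBClusterIdentifier="app-aurora-mysql-restored",
    RestoreToTime=datetime(2024, 1, 15, 3, 30, tzinfo=timezone.utc),
)

# The restored cluster needs at least one DB instance to accept connections.
rds.create_db_instance(
    DBInstanceIdentifier="app-aurora-mysql-restored-1",
    DBClusterIdentifier="app-aurora-mysql-restored",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
)
```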

NOTE: **Option A: Migrate the web tier and application tier to Amazon EC2 instances in private subnets. Migrate the database tier to Amazon RDS for MySQL on private subnets. ** - Explanation: This option migrates the web and application tier to EC2 instances and the database tier to Amazon RDS for MySQL. RDS for MySQL provides point-in-time recovery capabilities, allowing you to restore the database to a specific point in time. This option minimizes architectural changes and operational overhead while using managed services for the database.

8
Q

780 # A development team is collaborating with another company to create an integrated product. The other company needs to access an Amazon Simple Queue Service (Amazon SQS) queue that is contained in the development team’s account. The other company wants to poll the queue without giving up its own account permissions to do so. How should a solutions architect provide access to the SQS queue?

A. Create an instance profile that provides the other company with access to the SQS queue.
B. Create an IAM policy that gives the other company access to the SQS queue.
C. Create an SQS access policy that provides the other company with access to the SQS queue.
D. Create an Amazon Simple Notification Service (Amazon SNS) access policy that provides the other company with access to the SQS queue.

A

C. Create an SQS access policy that provides the other company with access to the SQS queue.

SQS access policies are specifically designed to control access to SQS resources. You can create an SQS access policy that allows the other company's AWS account, or specific identities in it, to access the SQS queue. This is a suitable option for sharing access to an SQS queue across accounts.

Summary: Option B (create an IAM policy) and Option C (create an SQS access policy) are both valid and common approaches to granting cross-account access to an SQS queue. Choosing between them depends on whether access is managed through IAM in the other account or directly through the SQS queue policy; because the other company does not want to manage permissions in its own account, the SQS access (queue) policy is the better fit. Both options allow fine-grained permissions for the other company to poll the SQS queue without exposing broader permissions in your AWS account.
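A minimal sketch of the SQS access (queue) policy from option C; the account IDs, region, and queue names are placeholders.

```python
import json

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111111111111/integration-queue"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPartnerAccountToPoll",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::222222222222:root"},  # partner account
            "Action": [
                "sqs:ReceiveMessage",
                "sqs:DeleteMessage",
                "sqs:GetQueueAttributes",
            ],
            "Resource": "arn:aws:sqs:us-east-1:111111111111:integration-queue",
        }
    ],
}

# Attach the access policy directly to the queue.
sqs.set_queue_attributes(QueueUrl=queue_url, Attributes={"Policy": json.dumps(policy)})
```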

9
Q

781 # A company’s developers want a secure way to gain SSH access to the company’s Amazon EC2 instances running the latest version of Amazon Linux. Developers work remotely and in the corporate office. The company wants to use AWS services as part of the solution. EC2 instances are hosted in a private VPC subnet and access the Internet through a NAT gateway that is deployed on a public subnet. What should a solutions architect do to meet these requirements in the most cost-effective way?

A. Create a bastion host on the same subnet as the EC2 instances. Grant the ec2:CreateVpnConnection IAM permission to developers. Install EC2 Instance Connect so that developers can connect to EC2 instances.
B. Create an AWS Site-to-Site VPN connection between the corporate network and the VPC. Instruct developers to use the site-to-site VPN connection to access EC2 instances when the developers are on the corporate network. Instruct developers to set up another VPN connection to access when working remotely.
C. Create a bastion host on the VPC public subnet. Configure the bastion host’s security groups and SSH keys to only allow SSH connections and authentication from developers’ remote and corporate networks. Instruct developers to connect through the bastion host using SSH to reach the EC2 instances.
D. Attach the AmazonSSMManagedInstanceCore IAM policy to an IAM role that is associated with the EC2 instances. Instruct developers to use AWS Systems Manager Session Manager to access EC2 instances.

A

D. Attach the AmazonSSMManagedInstanceCore IAM policy to an IAM role that is associated with the EC2 instances. Instruct developers to use AWS Systems Manager Session Manager to access EC2 instances.

This option uses AWS Systems Manager Session Manager, which provides a secure and auditable way to access EC2 instances. It eliminates the need for a bastion host and allows access directly through the AWS Management Console or CLI. This is a cost-effective and efficient solution.

Summary: Option A (create a bastion host with EC2 Instance Connect), Option C (create a bastion host in the public subnet), and Option D (use AWS Systems Manager Session Manager) are all viable options for secure SSH access.
- Option D (AWS Systems Manager Session Manager) is often considered a cost-effective and secure solution because it needs no separate bastion host, simplifies access, and provides audit trails.
- Options A and C involve bastion hosts but differ in implementation details: Option A focuses on EC2 Instance Connect, while Option C uses a traditional bastion host with restricted access.

Conclusion: Option D (AWS Systems Manager Session Manager) is the most cost-effective and operationally efficient solution for secure SSH access to EC2 instances in a private subnet. It aligns with AWS best practices and simplifies management without the need for a separate bastion host.
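A minimal boto3 sketch of option D, assuming the instances already run the SSM Agent and have an instance profile; the role and instance IDs are placeholders. In practice developers typically run "aws ssm start-session --target i-..." from the CLI (which needs the Session Manager plugin) or use the console.

```python
import boto3

iam = boto3.client("iam")
ssm = boto3.client("ssm")

# Grant the instance role the managed policy Session Manager needs.
iam.attach_role_policy(
    RoleName="app-instance-role",  # hypothetical role attached to the instances
    PolicyArn="arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
)

# Start a session against a managed instance (the CLI/console normally wraps
# this call and opens the interactive shell).
response = ssm.start_session(Target="i-0123456789abcdef0")
print(response["SessionId"])
```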

10
Q

782 # A pharmaceutical company is developing a new medicine. The volume of data that the company generates has grown exponentially in recent months. The company’s researchers regularly require that a subset of the entire data set be made available immediately with minimal delay. However, it is not necessary to access the entire data set daily. All data currently resides on local storage arrays, and the company wants to reduce ongoing capital expenditures. Which storage solution should a solutions architect recommend to meet these requirements?

A. Run AWS DataSync as a scheduled cron job to migrate data to an Amazon S3 bucket continuously.
B. Deploy an AWS Storage Gateway file gateway with an Amazon S3 bucket as target storage. Migrate data to the Storage Gateway appliance.
C. Deploy an AWS Storage Gateway volume gateway with cached volumes with an Amazon S3 bucket as target storage. Migrate data to the Storage Gateway appliance.
D. Configure an AWS site-to-site VPN connection from the on-premises environment to AWS. Migrate data to an Amazon Elastic File System (Amazon EFS) file system.

A

C. Deploy an AWS Storage Gateway volume gateway with cached volumes with an Amazon S3 bucket as target storage. Migrate data to the Storage Gateway appliance.

This option involves using Storage Gateway with cached volumes, storing frequently accessed data locally for low-latency access, and asynchronously backing up the entire data set to Amazon S3.

  • For the specific requirement of having a subset of the data set immediately available with minimal delay, Option C (Storage Gateway volume gateway with cached volumes) is well aligned. It supports low-latency access to frequently accessed data stored on-premises, while ensuring durability of the overall data set in Amazon S3.
11
Q

783 # A company has a business-critical application running on Amazon EC2 instances. The application stores data in an Amazon DynamoDB table. The company must be able to revert the table to any point within the last 24 hours. Which solution meets these requirements with the LEAST operational overhead?

A. Configure point-in-time recovery for the table.
B. Use AWS Backup for the table.
C. Use an AWS Lambda function to make an on-demand backup of the table every hour.
D. Turn on streams on the table to capture a log of all changes to the table in the last 24 hours. Store a copy of the stream in an Amazon S3 bucket.

A

A. Configure point-in-time recovery for the table.
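Point-in-time recovery keeps continuous backups for up to 35 days and can restore the table to any second in that window, which comfortably covers the 24-hour requirement. A minimal boto3 sketch; the table names and restore time are placeholders.

```python
from datetime import datetime, timedelta, timezone

import boto3

dynamodb = boto3.client("dynamodb")

# Enable point-in-time recovery on the table (one-time setting).
dynamodb.update_continuous_backups(
    TableName="app-data",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# Restore to a new table at a chosen moment within the retention window.
dynamodb.restore_table_to_point_in_time(
    SourceTableName="app-data",
    TargetTableName="app-data-restored",
    RestoreDateTime=datetime.now(timezone.utc) - timedelta(hours=6),
)
```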

12
Q

784 # A company hosts an application that is used to upload files to an Amazon S3 bucket. Once uploaded, files are processed to extract metadata, which takes less than 5 seconds. The volume and frequency of uploads varies from a few files every hour to hundreds of simultaneous uploads. The company has asked a solutions architect to design a cost-effective architecture that meets these requirements. What should the solutions architect recommend?

A. Configure AWS CloudTrail trails to record S3 API calls. Use AWS AppSync to process the files.
B. Configure an object-created event notification within the S3 bucket to invoke an AWS Lambda function to process the files.
C. Configure Amazon Kinesis data streams to process and send data to Amazon S3. Invoke an AWS Lambda function to process the files.
D. Configure an Amazon Simple Notification Service (Amazon SNS) topic to process files uploaded to Amazon S3. Invoke an AWS Lambda function to process the files.

A

B. Configure an object-created event notification within the S3 bucket to invoke an AWS Lambda function to process the files.

  • This option leverages Amazon S3 event notifications to trigger an AWS Lambda function when an object (file) is created in the S3 bucket.
  • AWS Lambda provides a serverless computing service, enabling code execution without the need to provision or manage servers.
  • Lambda can be programmed to process the files, extract metadata and perform any other necessary tasks.
  • Lambda can automatically scale based on the number of incoming events, making it suitable for variable upload volumes, from a few files per hour to hundreds of simultaneous uploads (see the sketch below).
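A minimal boto3 sketch of option B, assuming the Lambda function already exists and S3 has permission to invoke it; the bucket and function names are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Point object-created events at the metadata-extraction Lambda function.
s3.put_bucket_notification_configuration(
    Bucket="upload-bucket",  # hypothetical bucket
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:111111111111:function:extract-metadata",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)


# Inside the Lambda function, each record identifies the uploaded object.
def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"extracting metadata from s3://{bucket}/{key}")
```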
13
Q

785 # An enterprise application is deployed on Amazon EC2 instances and uses AWS Lambda functions for an event-driven architecture. The company uses non-production development environments in a different AWS account to test new features before the company deploys the features to production. Production instances show constant usage due to clients in different time zones. The company uses non-production instances only during business hours Monday through Friday. The company does not use non-production instances on weekends. The company wants to optimize costs for running its application on AWS. Which solution will meet these requirements in the MOST cost-effective way?

A. Use on-demand instances for production instances. Use dedicated hosts for non-production instances only on weekends.
B. Use reserved instances for production and non-production instances. Shut down non-production instances when they are not in use.
C. Use compute savings plans for production instances. Use on-demand instances for non-production instances. Shut down non-production instances when they are not in use.
D. Use dedicated hosts for production instances. Use EC2 instance savings plans for non-production instances.

A

C. Use compute savings plans for production instances. Use on-demand instances for non-production instances. Shut down non-production instances when they are not in use.

  • Compute Savings Plans provide significant cost savings for a commitment to a constant amount of compute usage (measured in $/hr) over a 1 or 3 year term. This is suitable for production instances that show constant usage.
  • Using on-demand instances for non-production instances provides flexibility without a long-term commitment, and shutting down non-production instances when they are not in use (for example, on a schedule; see the sketch below) helps minimize costs.
  • This approach takes advantage of the cost-effectiveness of savings plans for predictable workloads and the flexibility of on-demand instances for sporadic use.
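One hedged way to automate the shutdown (not prescribed by the question): an EventBridge-scheduled Lambda function that stops instances carrying an assumed env=nonprod tag.

```python
import boto3

ec2 = boto3.client("ec2")


def handler(event, context):
    """Stop all running instances tagged env=nonprod (tag name is an assumption)."""
    paginator = ec2.get_paginator("describe_instances")
    instance_ids = []
    for page in paginator.paginate(
        Filters=[
            {"Name": "tag:env", "Values": ["nonprod"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    ):
        for reservation in page["Reservations"]:
            instance_ids += [i["InstanceId"] for i in reservation["Instances"]]

    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return {"stopped": instance_ids}
```

An EventBridge schedule would invoke this function on weekday evenings, and a similar function could start the instances again each weekday morning.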
14
Q

786 # A company stores data in an on-premises Oracle relational database. The company needs the data to be available in Amazon Aurora PostgreSQL for analysis. The company uses an AWS site-to-site VPN connection to connect its on-premises network to AWS. The company must capture changes that occur to the source database during migration to Aurora PostgreSQL. What solution will meet these requirements?

A. Use the AWS Schema Conversion Tool (AWS SCT) to convert the Oracle schema to an Aurora PostgreSQL schema. Use the AWS Database Migration Service (AWS DMS) full load migration task to migrate the data.
B. Use AWS DataSync to migrate data to an Amazon S3 bucket. Import data from S3 to Aurora PostgreSQL using the Aurora PostgreSQL aws_s3 extension.
C. Use the AWS Schema Conversion Tool (AWS SCT) to convert the Oracle schema to an Aurora PostgreSQL schema. Use AWS Database Migration Service (AWS DMS) to migrate existing data and replicate ongoing changes.
D. Use an AWS Snowball device to migrate data to an Amazon S3 bucket. Import data from S3 to Aurora PostgreSQL using the Aurora PostgreSQL aws_s3 extension.

A

C. Use the AWS Schema Conversion Tool (AWS SCT) to convert the Oracle schema to an Aurora PostgreSQL schema. Use AWS Database Migration Service (AWS DMS) to migrate existing data and replicate ongoing changes.

  • AWS Schema Conversion Tool (AWS SCT): This tool helps convert the source database schema to a format compatible with the target database. In this case, it converts the Oracle schema to an Aurora PostgreSQL schema.
  • AWS Database Migration Service (AWS DMS), full load migration: used initially to migrate the existing data from the on-premises Oracle database to Aurora PostgreSQL.
  • AWS DMS ongoing change replication (CDC): AWS DMS can be configured for continuous replication, capturing changes to the source database and applying them to the target Aurora PostgreSQL database (see the sketch below). This ensures that changes made to the Oracle database during the migration process are also reflected in Aurora PostgreSQL.
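A minimal boto3 sketch of the DMS task from the answer; the endpoint and replication instance ARNs and the table mapping are placeholders.

```python
import json

import boto3

dms = boto3.client("dms")

# Full load plus change data capture (CDC), so changes made during the
# migration are replicated to Aurora PostgreSQL.
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-postgresql",
    SourceEndpointArn="arn:aws:dms:us-east-1:111111111111:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:111111111111:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111111111111:rep:INSTANCE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [
            {
                "rule-type": "selection",
                "rule-id": "1",
                "rule-name": "include-all",
                "object-locator": {"schema-name": "%", "table-name": "%"},
                "rule-action": "include",
            }
        ]
    }),
)
```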
15
Q

787 # A company built an application with Docker containers and needs to run the application in the AWS cloud. The company wants to use a managed service to host the application. The solution must scale appropriately according to the demand for individual container services. The solution should also not result in additional operational overhead or infrastructure to manage. What solutions will meet these requirements? (Choose two.)

A. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate.
B. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate.
C. Provision an Amazon API Gateway API. Connect the API to AWS Lambda to run the containers.
D. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes.
E. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker nodes.

A

A. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate.
B. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate.

  • AWS Fargate is a serverless compute engine for containers, eliminating the need to manage the underlying EC2 instances.
  • Automatically scales to meet application demand without manual intervention.
  • Abstracts infrastructure management, providing a serverless experience for containerized applications.
16
Q

788 # An e-commerce company is running a seasonal online sale. The company hosts its website on Amazon EC2 instances that span multiple availability zones. The company wants its website to handle traffic surges during the sale. Which solution will meet these requirements in the MOST cost-effective way?

A. Create an auto-scaling group that is large enough to handle the maximum traffic load. Stop half of your Amazon EC2 instances. Configure the auto-scaling group to use stopped instances to scale when traffic increases.
B. Create an Auto Scaling group for the website. Set the minimum auto-scaling group size so that it can handle large volumes of traffic without needing to scale.
C. Use Amazon CloudFront and Amazon ElastiCache to cache dynamic content with an auto-scaling group set as the origin. Configure the Auto Scaling group with the instances necessary to populate CloudFront and ElastiCache. Scale out after the cache is completely full.
D. Configure an auto-scaling group to scale as traffic increases. Create a launch template to start new instances from a preconfigured Amazon Machine Image (AMI).

A

D. Configure an auto-scaling group to scale as traffic increases. Create a launch template to start new instances from a preconfigured Amazon Machine Image (AMI).

This provides elasticity, automatically scaling to handle increased traffic, and the launch template ensures consistent instance configuration.

In summary, while each option has its merits, Option D, with its focus on dynamic scaling using auto scaling and a launch template, is preferred for its balance of cost-effectiveness and responsiveness to different traffic patterns. It aligns with best practices for scaling web applications on AWS.
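A minimal boto3 sketch of option D; the AMI ID, instance type, subnets, capacity range, and CPU target are assumptions.

```python
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# Launch template built from the preconfigured AMI (placeholder IDs).
ec2.create_launch_template(
    LaunchTemplateName="web-sale",
    LaunchTemplateData={"ImageId": "ami-0123456789abcdef0", "InstanceType": "m6i.large"},
)

# Auto Scaling group spanning subnets in multiple Availability Zones.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-sale-asg",
    LaunchTemplate={"LaunchTemplateName": "web-sale", "Version": "$Latest"},
    MinSize=2,
    MaxSize=20,
    VPCZoneIdentifier="subnet-aaa,subnet-bbb,subnet-ccc",
)

# Target tracking: add or remove instances to hold average CPU near 60%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-sale-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 60.0,
    },
)
```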

17
Q

789 # A solutions architect must provide an automated solution for an enterprise’s compliance policy that states that security groups cannot include a rule that allows SSH from 0.0.0.0/0. The company must be notified if there is any violation of the policy.

A solution is needed as soon as possible. What should the solutions architect do to meet these requirements with the least operational overhead?

A. Write an AWS Lambda script that monitors security groups for SSH open to 0.0.0.0/0 and creates a notification whenever it finds one.

B. Enable the restricted-ssh AWS Config managed rule and generate an Amazon Simple Notification Service (Amazon SNS) notification when a noncompliant rule is created.

C. Create an IAM role with permissions to globally open security groups and network ACLs. Create an Amazon Simple Notification Service (Amazon SNS) topic to generate a notification each time a user assumes the role.

D. Configure a service control policy (SCP) that prevents non-administrative users from creating or editing security groups. Create a notification in the ticket system when a user requests a rule that requires administrator permissions.

A

B. Enable the restricted-ssh AWS Config managed rule and generate an Amazon Simple Notification Service (Amazon SNS) notification when a noncompliant rule is created.

This takes advantage of the AWS Config managed rule, minimizing manual scripting. AWS Config provides automated compliance checks, and the compliance change can trigger an Amazon SNS notification.
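A minimal boto3 sketch of option B, assuming AWS Config is already recording resources; the managed rule identifier INCOMING_SSH_DISABLED corresponds to restricted-ssh, and the SNS topic ARN is a placeholder.

```python
import json

import boto3

config = boto3.client("config")
events = boto3.client("events")

# Enable the managed rule that flags security groups allowing SSH from 0.0.0.0/0.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "restricted-ssh",
        "Source": {"Owner": "AWS", "SourceIdentifier": "INCOMING_SSH_DISABLED"},
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::SecurityGroup"]},
    }
)

# Notify on violations: route Config compliance-change events for this rule
# to an SNS topic (topic ARN is a placeholder).
events.put_rule(
    Name="restricted-ssh-noncompliant",
    EventPattern=json.dumps({
        "source": ["aws.config"],
        "detail-type": ["Config Rules Compliance Change"],
        "detail": {
            "configRuleName": ["restricted-ssh"],
            "newEvaluationResult": {"complianceType": ["NON_COMPLIANT"]},
        },
    }),
)
events.put_targets(
    Rule="restricted-ssh-noncompliant",
    Targets=[{"Id": "sns", "Arn": "arn:aws:sns:us-east-1:111111111111:compliance-alerts"}],
)
```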

18
Q

790 # A company has deployed an application to an AWS account. The application consists of microservices running on AWS Lambda and Amazon Elastic Kubernetes Service (Amazon EKS). A separate team supports each microservice. The company has multiple AWS accounts and wants to give each team their own account for their microservices. A solutions architect needs to design a solution that provides service-to-service communication over HTTPS (port 443). The solution must also provide a service registry for service discovery. Which solution will meet these requirements with the LEAST administrative overhead?

A. Create an inspection VPC. Deploy an AWS Network Firewall firewall in the inspection VPC. Attach the inspection VPC to a new transit gateway. Route VPC-to-VPC traffic to the inspection VPC. Apply firewall rules to allow only HTTPS communication.
B. Create a VPC Lattice service network. Associate the microservices with the service network. Define HTTPS listeners for each service. Register the microservices' compute resources as targets. Identify the VPCs that need to communicate with the services. Associate those VPCs with the service network.
C. Create a network load balancer (NLB) with an HTTPS listener and target groups for each microservice. Create an AWS PrivateLink endpoint service for each microservice. Create a VPC interface endpoint in each VPC that needs to consume that microservice.
D. Create peering connections between VPCs that contain microservices. Create a list of prefixes for each service that requires a connection to a client. Create route tables to route traffic to the appropriate VPC. Create security groups to allow only HTTPS communication.

A

B. Create a VPC Lattice service network. Associate the microservices with the service network. Define HTTPS listeners for each service. Register the microservices' compute resources as targets. Identify the VPCs that need to communicate with the services. Associate those VPCs with the service network.

  • Uses a VPC Lattice service network for service association and service-to-service communication.
  • Defines HTTPS listeners and registers targets for each service.

Taking into account the constraints and the need for the least administrative overhead, option B provides a managed approach built on a service network. While it may involve some initial configuration, it allows controlled association and communication between the microservices and provides a service registry for discovery.

19
Q

791 # A company has a mobile game that reads most of its metadata from an Amazon RDS DB instance. As the game increased in popularity, developers noticed slowdowns related to game metadata loading times. Performance metrics indicate that simply scaling the database will not help. A solutions architect should explore all options including capabilities for snapshots, replication, and sub-millisecond response times. What should the solutions architect recommend to solve these problems?

A. Migrate the database to Amazon Aurora with Aurora Replicas.
B. Migrate the database to Amazon DynamoDB with global tables.
C. Add an Amazon ElastiCache for Redis layer in front of the database.
D. Add an Amazon ElastiCache layer for Memcached in front of the database.

A

B. Migrate the database to Amazon DynamoDB with global tables.

  • DynamoDB is designed for low latency access and can provide sub-millisecond response times.
  • Global tables offer multi-region replication for high availability.
  • DynamoDB’s architecture and features are well suited for scenarios with strict performance expectations.

Other Considerations:
A. Migrate the database to Amazon Aurora with Aurora Replicas:
- Aurora is known for its high performance, but sub-millisecond response times may not be guaranteed in all scenarios.
- Aurora Replicas provide read scalability, but may not meet the sub-millisecond requirement.
C. Add an Amazon ElastiCache for Redis layer in front of the database:
- ElastiCache for Redis is an in-memory caching solution.
- While it may improve read performance, it may not guarantee sub-millisecond response times for all use cases.
D. Add an Amazon ElastiCache for Memcached layer in front of the database:
- Similar to Redis, ElastiCache for Memcached is a caching solution.
- Caching may improve read performance, but may not guarantee sub-millisecond response times.

20
Q

792 # A company uses AWS Organizations for its multi-account AWS setup. The enterprise security organizational unit (OU) needs to share approved Amazon Machine Images (AMIs) with the development OU. AMIs are created by using encrypted AWS Key Management Service (AWS KMS) snapshots. What solution will meet these requirements? (Choose two.)

A. Add the development team's OU Amazon Resource Name (ARN) to the launch permissions list for the AMIs.
B. Add the organization's root Amazon Resource Name (ARN) to the launch permissions list for the AMIs.
C. Update the key policy to allow the development team’s OU to use the AWS KMS keys that are used to decrypt snapshots.
D. Add the development team account's Amazon Resource Name (ARN) to the launch permissions list for the AMIs.
E. Recreate the AWS KMS key. Add a key policy to allow the organization's root Amazon Resource Name (ARN) to use the AWS KMS key.

A

A. Add the development team's OU Amazon Resource Name (ARN) to the launch permissions list for the AMIs.
C. Update the key policy to allow the development team’s OU to use the AWS KMS keys that are used to decrypt snapshots.

  • Option A: Add the development team's OU Amazon Resource Name (ARN) to the launch permissions list for the AMIs.
  • Explanation: This option controls who can launch the AMIs. By adding the development team's OU to the launch permissions, you give that OU the ability to use the AMIs.
  • Fits the requirement: share the AMIs.

Option C: Update the key policy to allow the development team's OU to use the AWS KMS keys that are used to decrypt the snapshots.
- Explanation: This option addresses decryption permissions. For the development team's OU to launch AMIs backed by encrypted snapshots, the KMS key policy must allow the OU to use the keys, so adjusting the key policy is the right approach.
- Fits the requirement: share the encrypted snapshots.

21
Q

793 # A data analysis company has 80 offices that are distributed worldwide. Each office hosts 1 PB of data and has between 1 and 2 Gbps of Internet bandwidth. The company needs to perform a one-time migration of a large amount of data from its offices to Amazon S3. The company must complete the migration within 4 weeks. Which solution will meet these requirements in the MOST cost-effective way?

A. Establish a new 10 Gbps AWS Direct Connect connection to each office. Transfer the data to Amazon S3.
B. Use multiple AWS Snowball Edge storage-optimized devices to store and transfer data to Amazon S3.
C. Use an AWS snowmobile to store and transfer the data to Amazon S3.
D. Configure an AWS Storage Gateway Volume Gateway to transfer data to Amazon S3.

A

B. Use multiple AWS Snowball Edge storage-optimized devices to store and transfer data to Amazon S3.

  • Considerations: This option can be cost-effective and efficient, especially when dealing with large data sets. It takes advantage of physical transport, reducing the impact on Internet bandwidth.

Other Options:
- Option C: Use an AWS Snowmobile:
- Explanation: AWS Snowmobile is a high-capacity data transfer service that involves a secure shipping container. It is designed for massive data migrations.
- Considerations: While Snowmobile is efficient for extremely large data volumes, it is designed for migrating massive data sets from a single location, so it would be impractical and overkill for 80 distributed offices with 1 PB of data each (see the arithmetic below).
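Rough arithmetic (assuming the full 2 Gbps is available around the clock) shows why online transfer cannot meet the 4-week deadline and why Snowball Edge devices are needed at each office:

```python
# Per-office transfer time at the best case of 2 Gbps (assumption: the link
# is fully dedicated to the migration).
data_bits = 1e15 * 8          # 1 PB per office, in bits
link_bps = 2e9                # 2 Gbps
days = data_bits / link_bps / 86400
print(f"~{days:.0f} days per office")   # ~46 days, longer than the 4-week deadline

# A Snowball Edge Storage Optimized device provides roughly 80 TB of usable
# storage, so each office needs on the order of 13 devices for 1 PB.
print(f"devices per office: {1000 / 80:.1f}")
```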

22
Q

794 # A company has an Amazon Elastic File System (Amazon EFS) file system that contains a set of reference data. The company has applications on Amazon EC2 instances that need to read the data set. However, applications should not be able to change the data set. The company wants to use IAM access control to prevent applications from modifying or deleting the data set. What solution will meet these requirements?

A. Mount the EFS file system in read-only mode from within the EC2 instances.
B. Create a resource policy for the EFS file system that denies the elasticfilesystem:ClientWrite action to IAM roles that are attached to EC2 instances.
C. Create an identity policy for the EFS file system that denies the elasticfilesystem:ClientWrite action on the EFS file system.
D. Create an EFS access point for each application. Use Portable Operating System Interface (POSIX) file permissions to allow read-only access to files in the root directory.

A

C. Create an identity policy for the EFS file system that denies the elasticfilesystem:ClientWrite action on the EFS file system.

  • Option C: Create an identity policy for the EFS file system that denies the elasticfilesystem:ClientWrite action on the EFS file system:
  • This option is also aligned with IAM access control, denying actions using identity policies.
  • Option C is a valid option to control modifications through IAM.

Other Options:
- Option B: Create a resource policy for the EFS file system that denies the elasticfilesystem:ClientWrite action to IAM roles that are associated with EC2 instances:
- This option involves IAM roles and resource policies, aligning with the IAM access control requirement.
- Option B is a valid option to use IAM access control to prevent modifications.
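A minimal sketch of the identity policy from option C, attached to the EC2 instance role; the role name and file system ARN are placeholders.

```python
import json

import boto3

iam = boto3.client("iam")

# Deny write access to the reference data set for the application role.
deny_write = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "elasticfilesystem:ClientWrite",
            "Resource": "arn:aws:elasticfilesystem:us-east-1:111111111111:file-system/fs-0123456789abcdef0",
        }
    ],
}

iam.put_role_policy(
    RoleName="app-instance-role",        # hypothetical role attached to the EC2 instances
    PolicyName="deny-efs-client-write",
    PolicyDocument=json.dumps(deny_write),
)
```

Note that EFS enforces these IAM actions only when the file system is mounted with IAM authorization enabled (for example, the amazon-efs-utils mount helper with the iam option).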

23
Q

795 # A company has hired a third-party vendor to perform work on the company’s AWS account. The provider uses an automated tool that is hosted in an AWS account that the provider owns. The provider does not have IAM access to the company’s AWS account. The company must grant the provider access to the company’s AWS account. Which solution will MOST securely meet these requirements?

A. Create an IAM role in the company account to delegate access to the provider IAM role. Attach the appropriate IAM policies to the role for the permissions the provider requires.
B. Create an IAM user in the company account with a password that meets the password complexity requirements. Attach the appropriate IAM policies to the user for the permissions the provider requires.
C. Create an IAM group in the company account. Add the automated tool's IAM user from the provider account to the group. Attach the appropriate IAM policies to the group for the permissions that the provider requires.
D. Create an IAM user in the company account that has a permissions boundary that allows the provider account. Attach the appropriate IAM policies to the user for the permissions the provider requires.

A

A. Create an IAM role in the company account to delegate access to the provider IAM role. Attach the appropriate IAM policies to the role for the permissions the provider requires.

  • Explanation: This option involves creating a cross-account IAM role to delegate access to the provider IAM role. The role will have policies attached for the required permissions.
  • Security: This is a secure approach because it follows the principle of least privilege and uses cross-account roles for access (see the sketch below).
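A minimal boto3 sketch of the cross-account role from option A; the vendor account ID, external ID, and attached policy are placeholders (the vendor's tool would then call sts:AssumeRole on this role).

```python
import json

import boto3

iam = boto3.client("iam")

# Trust policy: only the vendor's account (and an agreed external ID)
# may assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::999999999999:root"},  # vendor account
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": "vendor-tool-1234"}},
        }
    ],
}

iam.create_role(
    RoleName="VendorAutomationRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach only the permissions the vendor tool actually needs (placeholder).
iam.attach_role_policy(
    RoleName="VendorAutomationRole",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)
```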

Other Options:
- Option B: Create an IAM user in the company account with a password that meets the password complexity requirements:
- Explanation: This option involves creating a standalone IAM user in the company account with the policies attached for the required permissions.
- Security: Using a long-lived IAM user with a password introduces security risks; it is generally recommended to use roles and temporary credentials instead.

  • Option C: Create an IAM group in the company account. Add the automated tool IAM user from the provider account to the group:
  • Explanation: This option involves grouping the provider IAM user into the enterprise IAM group and attaching policies to the group for permissions.
  • Security: While IAM groups are a good practice, directly adding external IAM users (from another account) to a group in the company account is less secure and may not be a best practice.
  • Option D: Create an IAM user in the company account that has a permissions boundary that allows the provider account.
  • Explanation: This option involves creating an IAM user with a permissions boundary that allows the provider account. Policies are attached to the user for the required permissions.
  • Security: This approach uses a permissions boundary for control, but directly creating an IAM user is not as secure as using cross-account roles.
24
Q

796 # A company wants to run its experimental workloads in the AWS cloud. The company has a budget for cloud spending. The company’s CFO is concerned about the responsibility of each department’s cloud spending. The CFO wants to be notified when the spending threshold reaches 60% of the budget. What solution will meet these requirements?

A. Use cost allocation tags on AWS resources to label owners. Create usage budgets in AWS Budgets. Add an alert threshold to receive notification when spending exceeds 60% of the budget.
B. Use AWS Cost Explorer forecasts to determine resource owners. Use AWS Cost Anomaly Detection to create alert threshold notifications when spending exceeds 60% of budget.
C. Use cost allocation tags on AWS resources to tag owners. Use the AWS Support API in AWS Trusted Advisor to create alert threshold notifications when spending exceeds 60% of budget.
D. Use AWS Cost Explorer forecasts to determine resource owners. Create usage budgets in AWS Budgets. Add an alert threshold to be notified when spending exceeds 60% of budget.

A

A. Use cost allocation tags on AWS resources to label owners. Create usage budgets in AWS Budgets. Add an alert threshold to receive notification when spending exceeds 60% of the budget.

  • Explanation: This option suggests using cost allocation tags to tag owners, creating usage budgets in AWS Budgets, and setting an alert threshold for notification when spending exceeds 60% of the budget (see the sketch below).
  • Pros: Uses cost allocation tags to identify resource owners, and AWS Budgets is designed specifically for budgeting and cost tracking.
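A minimal boto3 sketch of the budget and 60% alert; the account ID, budget amount, and recipient address are placeholders.

```python
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="111111111111",  # hypothetical account
    Budget={
        "BudgetName": "experimental-workloads",
        "BudgetType": "COST",
        "TimeUnit": "MONTHLY",
        "BudgetLimit": {"Amount": "10000", "Unit": "USD"},  # assumed budget
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 60.0,            # alert at 60% of the budget
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "cfo@example.com"}],
        }
    ],
)
```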
25
Q

797 # A company wants to deploy an internal web application on AWS. The web application should only be accessible from the company office. The company needs to download security patches for the web application from the Internet. The company has created a VPC and configured an AWS site-to-site VPN connection to the company office. A solutions architect must design a secure architecture for the web application. What solution will meet these requirements?

A. Deploy the web application to Amazon EC2 instances on public subnets behind a public application load balancer (ALB). Connect an Internet gateway to the VPC. Set the ALB security group input source to 0.0.0.0/0.
B. Deploy the web application on Amazon EC2 instances in private subnets behind an internal application load balancer (ALB). Deploy NAT gateways on public subnets. Attach an Internet gateway to the VPC. Set the inbound source of the ALB’s security group to the company’s office network CIDR block.
C. Deploy the web application to Amazon EC2 instances on public subnets behind an internal application load balancer (ALB). Implement NAT gateways on private subnets. Connect an Internet gateway to the VPC. Set the outbound destination of the ALB security group to the CIDR block of the company’s office network.
D. Deploy the web application to Amazon EC2 instances in private subnets behind a public application load balancer (ALB). Connect an Internet gateway to the VPC. Set the ALB security group output destination to 0.0.0.0/0.

A

B. Deploy the web application on Amazon EC2 instances in private subnets behind an internal application load balancer (ALB). Deploy NAT gateways on public subnets. Attach an Internet gateway to the VPC. Set the inbound source of the ALB’s security group to the company’s office network CIDR block.

  • Explanation: This option deploys the web application on private subnets behind an internal ALB, with NAT gateways on public subnets for downloading security patches from the Internet. It allows inbound traffic only from the company's office network CIDR block.
  • Pros: Restricts incoming traffic to the company’s office network.
26
Q

798 # A company maintains its accounting records in a custom application that runs on Amazon EC2 instances. The company needs to migrate data to an AWS managed service for development and maintenance of application data. The solution should require minimal operational support and provide immutable, cryptographically verifiable records of data changes. Which solution will meet these requirements in the MOST cost-effective way?

A. Copy the application logs to an Amazon Redshift cluster.
B. Copy the application logs to an Amazon Neptune cluster.
C. Copy the application logs to an Amazon Timestream database.
D. Copy the records from the application into an Amazon Quantum Ledger database (Amazon QLDB) ledger.

A

D. Copy the records from the application into an Amazon Quantum Ledger database (Amazon QLDB) ledger.

  • Explanation: Amazon QLDB is designed for ledger-style applications, providing a transparent, immutable, and cryptographically verifiable record of transactions. It is suitable for use cases where an immutable and transparent record of all changes is needed.
  • Pros: Designed specifically for immutable records and cryptographic verification.
27
Q

799 # A company’s marketing data is loaded from multiple sources into an Amazon S3 bucket. A series of data preparation jobs aggregate the data for reporting. Data preparation jobs must be run at regular intervals in parallel. Some jobs must be run in a specific order later. The company wants to eliminate the operational overhead of job error handling, retry logic, and state management. What solution will meet these requirements?

A. Use an AWS Lambda function to process the data as soon as the data is uploaded to the S3 bucket. Invokes other Lambda functions at regularly scheduled intervals.
B. Use Amazon Athena to process the data. Use Amazon EventBridge Scheduler to invoke Athena on a regular interval.
C. Use AWS Glue DataBrew to process the data. Use an AWS Step Functions state machine to run DataBrew data preparation jobs.
D. Use AWS Data Pipeline to process the data. Schedule the data pipeline to process the data once at midnight.

A

C. Use AWS Glue DataBrew to process the data. Use an AWS Step Functions state machine to run DataBrew data preparation jobs.

It provides detailed control over the order in which jobs run and integrates with Step Functions for workflow orchestration and management.

  • Explanation: AWS Glue DataBrew can be used for data preparation, and AWS Step Functions can orchestrate jobs that must run in a specific order. Step Functions also handles error handling, retry logic, and state management.
  • Pros: Detailed control over job ordering, with built-in orchestration capabilities.
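A hedged sketch of how Step Functions can orchestrate DataBrew jobs as described above: two jobs run in parallel, then a dependent job runs afterward. Job names, the role ARN, and the account ID are placeholders, and the databrew:startJobRun.sync integration is used so each state waits for its job run to finish.

```python
import json

import boto3

sfn = boto3.client("stepfunctions")

# Two DataBrew jobs run in parallel, then a third job runs after both finish.
definition = {
    "StartAt": "PrepareInParallel",
    "States": {
        "PrepareInParallel": {
            "Type": "Parallel",
            "Branches": [
                {
                    "StartAt": "CleanClicks",
                    "States": {
                        "CleanClicks": {
                            "Type": "Task",
                            "Resource": "arn:aws:states:::databrew:startJobRun.sync",
                            "Parameters": {"Name": "clean-clickstream"},
                            "End": True,
                        }
                    },
                },
                {
                    "StartAt": "CleanCampaigns",
                    "States": {
                        "CleanCampaigns": {
                            "Type": "Task",
                            "Resource": "arn:aws:states:::databrew:startJobRun.sync",
                            "Parameters": {"Name": "clean-campaigns"},
                            "End": True,
                        }
                    },
                },
            ],
            "Next": "AggregateForReporting",
        },
        "AggregateForReporting": {
            "Type": "Task",
            "Resource": "arn:aws:states:::databrew:startJobRun.sync",
            "Parameters": {"Name": "aggregate-marketing"},
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="marketing-data-prep",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111111111111:role/StepFunctionsDataBrewRole",  # hypothetical
)
```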
28
Q

800 # A solutions architect is designing a payment processing application that runs on AWS Lambda in private subnets across multiple availability zones. The app uses multiple Lambda functions and processes millions of transactions every day. The architecture should ensure that the application does not process duplicate payments. What solution will meet these requirements?

A. Use Lambda to retrieve all payments due. Post payments due to an Amazon S3 bucket. Configure the S3 bucket with an event notification to invoke another Lambda function to process payments due.
B. Use Lambda to retrieve all payments due. Post the payments due to an Amazon Simple Queue Service (Amazon SQS) queue. Set up another Lambda function to poll the SQS queue and process payments due.
C. Use Lambda to retrieve all due payments. Publish the payments to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Configure another Lambda function to poll the FIFO queue and process payments due.
D. Use Lambda to retrieve all payments due. Store payments due in an Amazon DynamoDB table. Configure streams on the DynamoDB table to invoke another Lambda function to process payments due.

A

C. Use Lambda to retrieve all due payments. Publish the payments to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Configure another Lambda function to poll the FIFO queue and process payments due.

  • Explanation: Similar to Option B, but uses an SQS FIFO queue, which provides ordering and exactly-once processing.
  • Pros: Ensures message ordering and exactly-once processing.

Considering the requirement to ensure that the application does not process duplicate payments, Option C (Amazon SQS FIFO queue) is the most appropriate option. It takes advantage of the reliability, ordering, and exactly-once processing features of an SQS FIFO queue, which align with the need to process payments without duplicates.

NOTE: Option B, with a standard SQS queue, has the potential for duplicate message delivery (at-least-once delivery) if deduplication is not handled in the application.
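A minimal sketch of publishing to the FIFO queue; the queue name, group key, and deduplication key are placeholders.

```python
import json

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111111111111/payments.fifo"

payment = {"payment_id": "pay-0001", "account_id": "acct-42", "amount": "19.99"}

# The deduplication ID makes retries of the same payment idempotent within
# the 5-minute deduplication window; the group ID preserves per-account order.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps(payment),
    MessageGroupId=payment["account_id"],
    MessageDeduplicationId=payment["payment_id"],
)
```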

29
Q

801 # A company runs multiple workloads in its on-premises data center. The company’s data center cannot scale fast enough to meet the company’s growing business needs. The company wants to collect usage and configuration data about on-premises servers and workloads to plan a migration to AWS. What solution will meet these requirements?

A. Set the home AWS Region in AWS Migration Hub. Use AWS Systems Manager to collect data about on-premises servers.
B. Set the home AWS Region in AWS Migration Hub. Use AWS Application Discovery Service to collect data about on-premises servers.
C. Use the AWS Schema Conversion Tool (AWS SCT) to create the relevant templates. Use AWS Trusted Advisor to collect data about on-premises servers.
D. Use the AWS Schema Conversion Tool (AWS SCT) to create the relevant templates. Use the AWS Database Migration Service (AWS DMS) to collect data about on-premises servers.

A

B. Set the home AWS Region in AWS Migration Hub. Use AWS Application Discovery Service to collect data about on-premises servers.

AWS ADS is specifically designed to discover detailed information about servers, applications, and dependencies, providing a complete view of the on-premises environment.

30
Q

802 # A company has an organization in AWS Organizations that has all features enabled. The company requires that all API calls and logins to any existing or new AWS account be audited. The company needs a managed solution to avoid additional work and minimize costs. The business also needs to know when any AWS account does not meet the AWS Foundational Security Best Practices (FSBP) standard. Which solution will meet these requirements with the LEAST operational overhead?

A. Deploy an AWS Control Tower environment in the Organization Management account. Enable AWS Security Hub and AWS Control Tower Account Factory in your environment.
B. Deploy an AWS Control Tower environment in a dedicated Organization Member account. Enable AWS Security Hub and AWS Control Tower Account Factory in your environment.
C. Use AWS Managed Services (AMS) Accelerate to build a multi-account landing zone (MALZ). Submit an RFC for self-service provisioning of Amazon GuardDuty in the MALZ.
D. Use AWS Managed Services (AMS) Accelerate to build a multi-account landing zone (MALZ). Submit an RFC for self-service provisioning of AWS Security Hub in the MALZ.

A

A. Deploy an AWS Control Tower environment in the Organization Management account. Enable AWS Security Hub and AWS Control Tower Account Factory in your environment.

Explanation: Deploy AWS Control Tower to the Organization Management Account, enable AWS Security Hub and AWS Control Tower Account Factory. Pros: Centralized deployment to the Organization Management account provides a more efficient way to manage and govern multiple accounts. Simplifies operations and reduces the overhead of implementing and managing the Control Tower in each separate account.

31
Q

803 # A company has stored 10 TB of log files in Apache Parquet format in an Amazon S3 bucket. From time to time, the company needs to use SQL to analyze log files. Which solution will meet these requirements in the MOST cost-effective way?

A. Create an Amazon Aurora MySQL database. Migrate S3 bucket data to Aurora using AWS Database Migration Service (AWS DMS). Issue SQL statements to the Aurora database.
B. Create an Amazon Redshift cluster. Use Redshift Spectrum to execute SQL statements directly on data in your S3 bucket.
C. Create an AWS Glue crawler to store and retrieve table metadata from the S3 bucket. Use Amazon Athena to run SQL statements directly on data in your S3 bucket.
D. Create an Amazon EMR cluster. Use Apache Spark SQL to execute SQL statements directly on data in the S3 bucket.

A

C. Create an AWS Glue crawler to store and retrieve table metadata from the S3 bucket. Use Amazon Athena to run SQL statements directly on data in your S3 bucket.

  • AWS Glue Crawler: AWS Glue can discover and store metadata about log files using a crawler. The crawler automatically identifies the schema and structure of the data in the S3 bucket, making it easy to query.
  • Amazon Athena: Athena is a serverless query service that allows you to run SQL queries directly on data in Amazon S3. It supports querying data in various formats, including Apache Parquet. Since Athena is serverless, you only pay for the queries you run, making it a cost-effective solution.

Other Options: Option A (using Amazon Aurora MySQL with AWS DMS) involves unnecessary data migration and may result in increased costs and complexity. Option B (using Amazon Redshift Spectrum) introduces the overhead of managing a Redshift cluster, which might be overkill for occasional SQL analysis. Option D (Using Amazon EMR with Apache Spark SQL) involves setting up and managing an EMR cluster, which may be more complex and expensive than necessary for occasional log file queries.
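
For illustration only (not part of the exam item), a minimal boto3 sketch of the Glue/Athena approach might look like the following; the database, table, and result-bucket names are placeholder assumptions:

import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# SQL runs directly against the Parquet files catalogued by the Glue crawler.
query = "SELECT status_code, COUNT(*) AS hits FROM logs_db.parquet_logs GROUP BY status_code"

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "logs_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes; Athena bills per data scanned, not per cluster-hour.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(rows)
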

32
Q

804 # An enterprise needs a solution to prevent AWS CloudFormation stacks from deploying AWS Identity and Access Management (IAM) resources that include an inline policy or “*” in the declaration. The solution should also prohibit the deployment of Amazon EC2 instances with public IP addresses. The company has AWS Control Tower enabled in its organization in AWS organizations. What solution will meet these requirements?

A. Use proactive controls in AWS Control Tower to block the deployment of EC2 instances with public IP addresses and inline policies with elevated or “*” access.
B. Use AWS Control Tower detective controls to block the deployment of EC2 instances with public IP addresses and inline policies with elevated or “*” access.
C. Use AWS Config to create rules for EC2 and IAM compliance. Configure rules to run an AWS Systems Manager Session Manager automation to delete a resource when it is not supported.
D. Use a service control policy (SCP) to block actions for the EC2 instances and IAM resources if the actions lead to noncompliance.

A

D. Use a service control policy (SCP) to block actions for the EC2 instances and IAM resources if the actions lead to noncompliance.

  • Service Control Policies (SCP): SCPs are used to set fine-grained permissions for entities in an AWS organization. They allow you to set controls over what actions are allowed or denied on your accounts. In this scenario, an SCP can be created to deny specific actions related to EC2 instances and IAM resources that have inline policies with elevated or “*” access.
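
As a hedged illustration of how such a guardrail could be expressed, the sketch below creates and attaches an SCP with boto3; the policy statements, OU ID, and names are assumptions rather than the exam's exact policy:

import json
import boto3

org = boto3.client("organizations")

scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Deny creation of inline IAM policies
            "Effect": "Deny",
            "Action": ["iam:PutUserPolicy", "iam:PutRolePolicy", "iam:PutGroupPolicy"],
            "Resource": "*",
        },
        {   # Deny launching EC2 instances that request a public IP address
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:network-interface/*",
            "Condition": {"Bool": {"ec2:AssociatePublicIpAddress": "true"}},
        },
    ],
}

policy = org.create_policy(
    Name="deny-inline-policies-and-public-ips",
    Description="Guardrail for IAM inline policies and public EC2 instances",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-examp-12345678",  # placeholder OU ID
)
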
33
Q

805 # A company’s web application that is hosted on the AWS cloud has recently increased in popularity. The web application currently exists on a single Amazon EC2 instance on a single public subnet. The web application has not been able to meet the demand of increased web traffic. The business needs a solution that provides high availability and scalability to meet growing user demand without rewriting the web application. What combination of steps will meet these requirements? (Choose two.)

A. Replace the EC2 instance with a larger compute optimized instance.
B. Configure Amazon EC2 auto-scaling with multiple availability zones on private subnets.
C. Configure a NAT gateway on a public subnet to handle web requests.
D. Replace the EC2 instance with a larger memory-optimized instance.
E. Configure an application load balancer in a public subnet to distribute web traffic.

A

B. Configure Amazon EC2 auto-scaling with multiple availability zones on private subnets.
E. Configure an application load balancer in a public subnet to distribute web traffic.

  • Amazon EC2 Auto Scaling (Option B): By configuring Auto Scaling with multiple availability zones, you ensure that your web application can automatically adjust the number of instances to handle different levels of demand. This improves availability and scalability.
  • Application Load Balancer (Option E): An application load balancer (ALB) on a public subnet can distribute incoming web traffic across multiple EC2 instances. ALB is designed for high availability and can efficiently handle traffic distribution, improving the overall performance of the web application.
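
A minimal boto3 sketch of this combination (illustrative only; the launch template name, subnet IDs, and target group ARN are placeholders) could look like:

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-app-asg",
    LaunchTemplate={"LaunchTemplateName": "web-app-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    # Private subnets in multiple Availability Zones
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222,subnet-ccc333",
    # Target group belonging to the internet-facing ALB in the public subnets
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web-app-tg/abc123"
    ],
    HealthCheckType="ELB",
    HealthCheckGracePeriod=120,
)
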
34
Q

806 # A company has AWS Lambda functions that use environment variables. The company does not want its developers to see environment variables in plain text. What solution will meet these requirements?

A. Deploy code to Amazon EC2 instances instead of using Lambda functions.
B. Configure SSL encryption on Lambda functions to use AWS CloudHSM to store and encrypt environment variables.
C. Create a certificate in AWS Certificate Manager (ACM). Configure Lambda functions to use the certificate to encrypt environment variables.
D. Create an AWS Key Management Service (AWS KMS) key. Enable encryption helpers on the Lambda functions to use the KMS key to store and encrypt environment variables.

A

D. Create an AWS Key Management Service (AWS KMS) key. Enable encryption helpers on the Lambda functions to use the KMS key to store and encrypt environment variables.

  • AWS Key Management Service (KMS) provides a secure and scalable way to manage keys. You can create a customer managed key (CMK) in AWS KMS to encrypt and decrypt environment variables used in Lambda functions.
  • By enabling encryption helpers in Lambda functions, you can have Lambda automatically encrypt environment variables using the KMS key. This ensures that environment variables are stored and transmitted securely.
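
For illustration, a Lambda handler that decrypts such an environment variable might look like the sketch below; the variable name DB_PASSWORD is a placeholder, and the encryption-context key follows the pattern used by the console's decryption snippet:

import base64
import os
import boto3

kms = boto3.client("kms")

def lambda_handler(event, context):
    encrypted = os.environ["DB_PASSWORD"]  # base64-encoded ciphertext, not plain text
    plaintext = kms.decrypt(
        CiphertextBlob=base64.b64decode(encrypted),
        # With encryption helpers, the function name is used as the encryption context
        EncryptionContext={"LambdaFunctionName": context.function_name},
    )["Plaintext"].decode("utf-8")

    # ... use the decrypted value to connect to downstream resources ...
    return {"statusCode": 200}
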
35
Q

807 # An analytics company uses Amazon VPC to run its multi-tier services. The company wants to use RESTful APIs to offer a web analysis service to millions of users. Users must be verified by using an authentication service to access the APIs. Which solution will meet these requirements with the GREATEST operational efficiency?

A. Configure an Amazon Cognito user pool for user authentication. Implement Amazon API Gateway REST APIs with a Cognito authorizer.
B. Configure an Amazon Cognito identity pool for user authentication. Implement Amazon API Gateway HTTP APIs with a Cognito authorizer.
C. Configure an AWS Lambda function to handle user authentication. Deploy Amazon API Gateway REST APIs with a Lambda authorizer.
D. Configure an IAM user to be responsible for user authentication. Deploy Amazon API Gateway HTTP APIs with an IAM authorizer.

A

A. Configure an Amazon Cognito user pool for user authentication. Implement Amazon API Gateway REST APIs with a Cognito authorizer.

  • Amazon Cognito User Pools: Amazon Cognito provides a fully managed service for user identity and authentication that easily scales to millions of users. Setting up a Cognito user pool allows you to manage user authentication efficiently. It supports features such as multi-factor authentication and user management.
  • Amazon API Gateway REST APIs: The REST APIs in Amazon API Gateway are well suited for creating APIs that follow RESTful principles. A REST API in API Gateway can be configured to use a Cognito user pool as an authorizer, providing a secure and scalable way to verify users before they can access the APIs.
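
A hedged boto3 sketch of wiring a Cognito user pool authorizer to a REST API is shown below; the REST API ID, resource ID, and user pool ARN are placeholders:

import boto3

apigw = boto3.client("apigateway")

authorizer = apigw.create_authorizer(
    restApiId="a1b2c3d4e5",
    name="cognito-user-pool-authorizer",
    type="COGNITO_USER_POOLS",
    providerARNs=["arn:aws:cognito-idp:us-east-1:111122223333:userpool/us-east-1_EXAMPLE"],
    identitySource="method.request.header.Authorization",
)

# Require the authorizer on a method; callers must send a valid Cognito JWT.
apigw.put_method(
    restApiId="a1b2c3d4e5",
    resourceId="res123",
    httpMethod="GET",
    authorizationType="COGNITO_USER_POOLS",
    authorizerId=authorizer["id"],
)
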
36
Q

808 # A company has a mobile application for customers. Application data is sensitive and must be encrypted at rest. The company uses AWS Key Management Service (AWS KMS). The company needs a solution that prevents accidental deletion of KMS keys. The solution should use Amazon Simple Notification Service (Amazon SNS) to send an email notification to administrators when a user attempts to delete a KMS key. Which solution will meet these requirements with the LEAST operational overhead?

A. Create an Amazon EventBridge rule that reacts when a user tries to delete a KMS key. Configure an AWS Config rule that overrides any deletion of a KMS key. Add the AWS Config rule as a target of the EventBridge rule. Create an SNS topic that notifies administrators.
B. Create an AWS Lambda function that has custom logic to prevent deletion of KMS keys. Create an Amazon CloudWatch alarm that is triggered when a user attempts to delete a KMS key. Create an Amazon EventBridge rule that invokes the Lambda function when the DeleteKey operation is performed. Create an SNS topic. Configure the EventBridge rule to publish an SNS message notifying administrators.
C. Create an Amazon EventBridge rule that reacts when the KMS DeleteKey operation is performed. Configure the rule to start an AWS Systems Manager Automation runbook. Configure the runbook to cancel the deletion of the KMS key. Create an SNS topic. Configure the EventBridge rule to publish an SNS message notifying administrators.
D. Create an AWS CloudTrail trail. Configure the trail to deliver the logs to a new Amazon CloudWatch log group. Create a CloudWatch alarm based on the metric filter for the CloudWatch log group. Configure the alarm to use Amazon SNS to notify the administrators when the KMS DeleteKey operation is performed.

A

D. Create an AWS CloudTrail trail. Configure the trail to deliver the logs to a new Amazon CloudWatch log group. Create a CloudWatch alarm based on the metric filter for the CloudWatch log group. Configure the alarm to use Amazon SNS to notify the administrators when the KMS DeleteKey operation is performed.

  • AWS CloudTrail Trail: Create a CloudTrail trail to capture events such as KMS key deletion.
  • Amazon CloudWatch Logs: Configure the trail to deliver its logs to a CloudWatch log group.
  • CloudWatch Metric Filter: Create a metric filter on the log group to identify events related to KMS key deletion.
  • CloudWatch Alarm: Create a CloudWatch alarm based on the metric filter to notify administrators using Amazon SNS when the KMS DeleteKey operation is performed.

Explanation:
- Option D is recommended because it relies on AWS CloudTrail to capture events, which is common practice for auditing AWS API calls.
- Uses Amazon CloudWatch logs and metric filters to identify specific events (for example, KMS key deletion).
- CloudWatch alarms are used to trigger notifications via Amazon SNS when the defined event occurs.

While all options aim to prevent accidental deletion and notify administrators, Option D stands out as a more optimized and AWS-native solution, leveraging CloudTrail, CloudWatch, and SNS for monitoring and alerting.
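
For illustration only, the sketch below sets up the metric filter and alarm with boto3; the log group name, namespace, and SNS topic ARN are placeholders, and note that key deletion surfaces in CloudTrail as ScheduleKeyDeletion rather than a literal DeleteKey call:

import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

logs.put_metric_filter(
    logGroupName="/aws/cloudtrail/management-events",
    filterName="kms-key-deletion",
    # ScheduleKeyDeletion is the CloudTrail event name behind KMS key deletion
    filterPattern='{ ($.eventSource = "kms.amazonaws.com") && ($.eventName = "ScheduleKeyDeletion") }',
    metricTransformations=[{
        "metricName": "KMSKeyDeletionAttempts",
        "metricNamespace": "Security",
        "metricValue": "1",
    }],
)

cloudwatch.put_metric_alarm(
    AlarmName="kms-key-deletion-attempt",
    Namespace="Security",
    MetricName="KMSKeyDeletionAttempts",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:kms-admin-alerts"],
)
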

37
Q

809 # A company wants to analyze and generate reports to track the usage of its mobile application. The app is popular and has a global user base. The company uses a custom reporting program to analyze application usage. The program generates several reports during the last week of each month. The program takes less than 10 minutes to produce each report. The company rarely uses the program to generate reports outside of the last week of each month. The company wants to generate reports in the shortest time possible when the reports are requested. Which solution will meet these requirements in the MOST cost-effective way?

A. Run the program using Amazon EC2 on-demand instances. Create an Amazon EventBridge rule to start EC2 instances when reporting is requested. Run EC2 instances continuously during the last week of each month.
B. Run the program in AWS Lambda. Create an Amazon EventBridge rule to run a Lambda function when reports are requested.
C. Run the program on Amazon Elastic Container Service (Amazon ECS). Schedule Amazon ECS to run when reports are requested.
D. Run the program using Amazon EC2 Spot Instances. Create an Amazon EventBridge rule to start EC2 instances when reporting is requested. Run EC2 instances continuously during the last week of each month.

A

B. Run the program in AWS Lambda. Create an Amazon EventBridge rule to run a Lambda function when reports are requested.

  • Advantages:
  • Serverless execution: AWS Lambda allows you to execute code without provisioning or managing servers. It automatically scales based on demand.
  • Cost efficiency: You pay only for the compute time consumed during execution of the function.
  • Fast execution: Lambda functions can start quickly and, with proper design, can complete tasks in a short period of time.
  • Event-driven: Integrated with Amazon EventBridge, Lambda can be triggered by events, such as report requests.
  • Considerations:
  • Lambda has an execution time limit (maximum of 15 minutes). Ensure that each report can be generated within this time period.

Explanation:
- AWS Lambda is well suited for short-lived and sporadic tasks, making it an ideal choice for occasional reporting requirements.
- With EventBridge, you can trigger Lambda functions based on events, ensuring that the reporting process starts quickly when needed.
- This option is cost-effective as you only pay for the actual compute time used during reporting, without the need to keep instances running continuously.
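
A minimal boto3 sketch of the event wiring (illustrative; the event source, function name, and ARNs are placeholders) might be:

import json
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

events.put_rule(
    Name="report-requested",
    EventPattern=json.dumps({"source": ["custom.reporting"], "detail-type": ["ReportRequested"]}),
    State="ENABLED",
)

# Allow EventBridge to invoke the function, then register it as the rule target.
lambda_client.add_permission(
    FunctionName="monthly-report-generator",
    StatementId="allow-eventbridge-report-requested",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn="arn:aws:events:us-east-1:111122223333:rule/report-requested",
)

events.put_targets(
    Rule="report-requested",
    Targets=[{"Id": "report-lambda",
              "Arn": "arn:aws:lambda:us-east-1:111122223333:function:monthly-report-generator"}],
)
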

38
Q

810 # A company is designing a tightly coupled high-performance computing (HPC) environment in the AWS cloud. The enterprise needs to include features that optimize the HPC environment for networking and storage. What combination of solutions will meet these requirements? (Choose two.)

A. Create an accelerator in AWS Global Accelerator. Configure custom routing for the accelerator.
B. Create an Amazon FSx for Lustre file system. Configure the file system with scratch storage.
C. Create an Amazon CloudFront distribution. Set the viewer protocol policy to be HTTP and HTTPS.
D. Launch Amazon EC2 instances. Attach an elastic fabric adapter (EFA) to the instances.
E. Create an AWS Elastic Beanstalk deployment to manage the environment.

A

B. Create an Amazon FSx for Lustre file system. Configure the file system with scratch storage.
D. Launch Amazon EC2 instances. Attach an elastic fabric adapter (EFA) to the instances.

Option B (Amazon FSx for Lustre file system):
  • Advantages:
  • High-performance file system: Amazon FSx for Lustre provides a high-performance file system optimized for HPC workloads.
  • Scratch storage: Supports scratch storage, which is important for HPC environments to handle temporary data.
  • Considerations:
  • Scratch storage is ephemeral, so it is suitable for temporary data, and you may need additional storage solutions for persistent data.

Option D (Amazon EC2 instances with Elastic Fabric Adapter - EFA):
  • Advantages:
  • High-performance networking: Elastic Fabric Adapter (EFA) improves networking capabilities, providing lower-latency communication between instances in an HPC cluster.
  • Tightly coupled communication: EFA enables tightly coupled communication between nodes in an HPC cluster, making it suitable for parallel computing workloads.
  • Considerations:
  • Ensure your HPC applications and software support EFA for optimal performance.
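
As an illustrative sketch (the instance type, AMI, subnet, and security group are placeholders), launching EFA-enabled instances in a cluster placement group with boto3 could look like:

import boto3

ec2 = boto3.client("ec2")

ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5n.18xlarge",     # an EFA-capable instance type
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "hpc-cluster"},
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": "subnet-aaa111",
        "InterfaceType": "efa",      # attach an Elastic Fabric Adapter
        "Groups": ["sg-0123456789abcdef0"],
    }],
)
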
39
Q

811 # A company needs a solution to prevent photos with unwanted content from being uploaded to the company’s web application. The solution should not include training a machine learning (ML) model. What solution will meet these requirements?

A. Create and deploy a model using Amazon SageMaker Autopilot. Create a real-time endpoint that the web application invokes when new photos are uploaded.
B. Create an AWS Lambda function that uses Amazon Rekognition to detect unwanted content. Create a Lambda function URL that the web application invokes when new photos are uploaded.
C. Create an Amazon CloudFront function that uses Amazon Comprehend to detect unwanted content. Associate the function with the web application.
D. Create an AWS Lambda function that uses Amazon Rekognition Video to detect unwanted content. Create a Lambda function URL that the web application invokes when new photos are uploaded.

A

B. Create an AWS Lambda function that uses Amazon Rekognition to detect unwanted content. Create a Lambda function URL that the web application invokes when new photos are uploaded.

  • Advantages:
  • Pre-built ML model: Amazon Rekognition provides pre-trained models for image analysis, including content moderation to detect unwanted content.
  • Serverless Execution: AWS Lambda allows you to run code without managing servers, making it a scalable and cost-effective solution.
  • Considerations:
  • You need to handle the response of the Lambda function in the web application based on the content moderation results.

Explanation:
- Option B takes advantage of Amazon Rekognition’s capabilities to analyze images for unwanted content. By creating an AWS Lambda function that uses Rekognition, you can easily integrate this content moderation process into your web application workflow without needing to train a custom machine learning model.
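
A hedged sketch of such a Lambda function is shown below; the request format (bucket and key in the body) and the 80 percent confidence threshold are assumptions:

import json
import boto3

rekognition = boto3.client("rekognition")

def lambda_handler(event, context):
    body = json.loads(event.get("body", "{}"))
    labels = rekognition.detect_moderation_labels(
        Image={"S3Object": {"Bucket": body["bucket"], "Name": body["key"]}},
        MinConfidence=80,
    )["ModerationLabels"]

    # Any returned label means unwanted content was detected above the threshold.
    return {
        "statusCode": 200,
        "body": json.dumps({"allowed": len(labels) == 0,
                            "labels": [label["Name"] for label in labels]}),
    }
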

40
Q

812 # A company uses AWS to run its e-commerce platform. The platform is critical to the company’s operations and has a high volume of traffic and transactions. The company sets up a multi-factor authentication (MFA) device to protect the root user credentials for its AWS account. The company wants to ensure that it will not lose access to the root user account if the MFA device is lost. What solution will meet these requirements?

A. Set up a backup administrator account that the company can use to log in if the company loses the MFA device.
B. Add multiple MFA devices for the root user account to handle the disaster scenario.
C. Create a new administrator account when the company cannot access the root account.
D. Attach the administrator policy to another IAM user when the enterprise cannot access the root account.

A

B. Add multiple MFA devices for the root user account to handle the disaster scenario.

  • Advantages:
  • Redundancy: Adding multiple MFA devices provides redundancy, reducing the risk of losing access if a device is lost.
  • Root User Security: The root user is a powerful account, and securing it with MFA is a recommended best practice.
  • Considerations:
  • Device Management: The company needs to manage multiple MFA devices securely.

Explanation:
- Option B is the most effective solution to address the enterprise requirement to ensure access to the root user account in the event the MFA device is lost. By setting up multiple MFA devices for the root user, the company establishes redundancy, and any of the configured devices can be used for authentication.

41
Q

813 # A social media company is creating a rewards program website for its users. The company awards points to users when they create and upload videos to the website. Users redeem their points for gifts or discounts from the company’s affiliate partners. A unique ID identifies users. Partners refer to this ID to verify the user’s eligibility to receive rewards. Partners want to receive notifications of user IDs through an HTTP endpoint when the company gives points to users. Hundreds of suppliers are interested in becoming affiliate partners every day. The company wants to design an architecture that gives the website the ability to quickly add partners in a scalable way. Which solution will meet these requirements with the LEAST implementation effort?

A. Create an Amazon Timestream database to maintain a list of affiliate partners. Implement an AWS Lambda function to read the list. Configure the Lambda function to send user IDs to each partner when the company gives points to users.
B. Create an Amazon Simple Notification Service (Amazon SNS) topic. Choose an endpoint protocol. Subscribe the partners to the topic. Publish user IDs to the topic when the company gives points to users.
C. Create an AWS Step Functions state machine. Create a task for each affiliate partner. Invoke state machine with user ID as input when company gives points to users.
D. Create a data stream in Amazon Kinesis Data Streams. Implement producer and consumer applications. Store a list of affiliate partners in the data stream. Send user ID when the company gives points to users.

A

B. Create an Amazon Simple Notification Service (Amazon SNS) topic. Choose an endpoint protocol. Subscribe the partners to the topic. Publish user IDs to the topic when the company gives points to users.

  • Advantages:
  • Scalability: Amazon SNS is designed for high throughput and can easily scale to accommodate hundreds of affiliate partners.
  • Ease of integration: Partners can subscribe to the SNS topic, and the company can publish messages on the topic, simplifying the integration process.
  • Flexibility: Supports multiple endpoint protocols, including HTTP, which aligns with partners’ requirement to receive notifications over an HTTP endpoint.
  • Considerations:
  • Security: Ensure communication between the business and partners is secure, especially when using HTTP endpoints.

Explanation:
- Option B takes advantage of Amazon SNS, which is a fully managed publish/subscribe service. This solution provides an efficient way for the company to notify multiple partners about user IDs when points are given. Partners can subscribe to the SNS topic using their preferred endpoint protocols, including HTTP, making it a scalable and simple solution.
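
For illustration, a minimal boto3 sketch of the topic, a partner subscription, and a publish call follows; the topic name, endpoint URL, and message format are placeholders, and each HTTP/HTTPS subscriber must confirm its subscription:

import boto3

sns = boto3.client("sns")

topic_arn = sns.create_topic(Name="rewards-user-ids")["TopicArn"]

# Onboarding a new affiliate partner is a single subscribe call.
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="https",
    Endpoint="https://partner.example.com/rewards-webhook",
)

# Publish the user ID whenever the company awards points.
sns.publish(TopicArn=topic_arn, Message='{"userId": "u-12345", "points": 50}')
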

42
Q

814 # An e-commerce company runs its application on AWS. The application uses an Amazon Aurora PostgreSQL cluster in Multi-AZ mode for the underlying database. During a recent promotional campaign, the application experienced heavy read and write load. Users experienced timeout issues when trying to access the app. A solutions architect needs to make the application architecture more scalable and highly available. Which solution will meet these requirements with the LEAST downtime?

A. Create an Amazon EventBridge rule that has the Aurora cluster as a source. Create an AWS Lambda function to log Aurora cluster state change events. Add the Lambda function as a target for the EventBridge rule. Add additional reader nodes for failover.
B. Modify the Aurora cluster and enable the Zero Downtime Reboot (ZDR) feature. Use database activity streams in the cluster to track the health of the cluster.
C. Add additional reader instances to the Aurora cluster. Create an Amazon RDS Proxy target group for the Aurora cluster.
D. Create an Amazon ElastiCache for Redis cache. Replicate data from the Aurora cluster to Redis using the AWS Database Migration Service (AWS DMS) with a write approach.

A

C. Add additional reader instances to the Aurora cluster. Create an Amazon RDS Proxy target group for the Aurora cluster.

  • Advantages:
  • Scalability: Adding additional reader instances to the Aurora cluster enables horizontal scaling of read capacity, addressing the heavy read load.
  • High availability: Aurora in Multi-AZ mode provides automatic failover for the primary instance, improving availability.
  • Amazon RDS Proxy: RDS Proxy helps manage database connections, improving application scalability and reducing downtime during failovers.
  • Considerations:
  • Cost: While scaling the Aurora cluster horizontally with additional reader instances may incur additional costs, it provides a scalable and highly available solution.

Explanation:
- Option C is a suitable option to improve scalability and availability. By adding additional reader instances, the application can distribute the reading load efficiently. Creating an Amazon RDS Proxy target group further improves the management of database connections, enabling better scalability and reducing downtime during failovers.

43
Q

Dup Number But New
814#A company needs to extract ingredient names from recipe records that are stored as text files in an Amazon S3 bucket. A web application will use the ingredient names to query an Amazon DynamoDB table and determine a nutrition score. The application can handle logs and non-food errors. The company does not have any employees who have machine learning skills to develop this solution. Which solution will meet these requirements in the MOST cost-effective way?

A. Use S3 event notifications to invoke an AWS Lambda function when PutObject requests occur. Schedule the Lambda function to parse the object and extract ingredient names using Amazon Comprehend. Store the Amazon Comprehend output in the DynamoDB table.

B. Use an Amazon EventBridge rule to invoke an AWS Lambda function when PutObject requests occur. Schedule the Lambda function to parse the object using Amazon Forecast to extract ingredient names. Store the forecast output in the DynamoDB table.

C. Use S3 event notifications to invoke an AWS Lambda function when PutObject requests occur. Use Amazon Polly to create audio recordings of the recipe records. Save the audio files to the S3 bucket. Use Amazon Simple Notification Service (Amazon SNS) to send a URL as a message to employees. Instruct employees to listen to the audio files and calculate the nutrition score. Store the ingredient names in the DynamoDB table.

D. Use an Amazon EventBridge rule to invoke an AWS Lambda function when a PutObject request occurs. Schedule the Lambda function to parse the object and extract ingredient names using Amazon SageMaker. Store the inference output from the SageMaker endpoint in the DynamoDB table.

A

A. Use S3 event notifications to invoke an AWS Lambda function when PutObject requests occur. Schedule the Lambda function to parse the object and extract ingredient names using Amazon Comprehend. Store the Amazon Comprehend output in the DynamoDB table.

This option uses S3 event notifications to trigger a Lambda function when new recipe records are uploaded to the S3 bucket. The Lambda function parses the text using Amazon Comprehend to extract ingredient names. Amazon Comprehend is a natural language processing (NLP) service that can identify entities such as food ingredients. This solution is cost-effective as it only uses AWS Lambda and Amazon Comprehend, both of which offer a pay-as-you-go pricing model.

Taking into account cost-effectiveness and compliance with requirements, option A is the most appropriate solution. It leverages AWS Lambda and Amazon Comprehend, offering an efficient and accurate method to extract ingredient names from recipe records while minimizing costs.
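
A minimal sketch of such a Lambda function (illustrative only; the table name, partition key, and truncation length are assumptions) might be:

import boto3

s3 = boto3.client("s3")
comprehend = boto3.client("comprehend")
table = boto3.resource("dynamodb").Table("RecipeIngredients")

def lambda_handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        text = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

        # Comprehend returns generic entities; non-food entities are filtered downstream.
        entities = comprehend.detect_entities(Text=text[:5000], LanguageCode="en")["Entities"]
        ingredients = [entity["Text"] for entity in entities]

        table.put_item(Item={"recipe_id": key, "ingredients": ingredients})
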

44
Q

815#A company needs to create an AWS Lambda function that will run in a VPC in the company’s primary AWS account. The Lambda function needs to access files that the company stores on an Amazon Elastic File System (Amazon EFS) file system. The EFS file system is located in a secondary AWS account. As the company adds files to the file system, the solution must scale to meet demand. Which solution will meet these requirements in the MOST cost-effective way?

A. Create a new EFS file system on the main account. Use AWS DataSync to copy the contents of the original EFS file system to the new EFS file system.

B. Create a VPC peering connection between the VPCs that are in the primary account and the secondary account.

C. Create a second Lambda function in the secondary account that has a mount configured for the file system. Use the primary account’s Lambda function to invoke the secondary account’s Lambda function.

D. Move the contents of the file system to a Lambda layer. Configure Lambda layer permissions to allow the company’s secondary account to use the Lambda layer.

A

B. Create a VPC peering connection between the VPCs that are in the primary account and the secondary account.

VPC peering allows communication between VPCs in different AWS accounts using private IP addresses. By creating a VPC peering connection between the VPCs in the primary and secondary accounts, the Lambda function in the primary account can directly access files stored in the EFS file system in the secondary account. This solution eliminates the need for data duplication and synchronization, making it cost-effective and efficient to access files across accounts.

Taking into account cost-effectiveness and compliance with requirements, option B is the most suitable solution. It leverages VPC peering to allow direct access to the EFS file system in the secondary account from the Lambda function in the primary account, eliminating the need for data duplication or complex invocations across accounts. This solution is efficient, scalable, and cost-effective for accessing files across AWS accounts.
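
As a hedged illustration, creating the Lambda function in the primary account with an EFS mount could look like the boto3 sketch below; it assumes an EFS access point in the secondary account that is reachable over the peering connection, and all IDs and ARNs are placeholders:

import boto3

lambda_client = boto3.client("lambda")

lambda_client.create_function(
    FunctionName="process-shared-files",
    Runtime="python3.12",
    Handler="app.lambda_handler",
    Role="arn:aws:iam::111122223333:role/lambda-efs-role",
    Code={"S3Bucket": "example-deploy-bucket", "S3Key": "process-shared-files.zip"},
    VpcConfig={
        "SubnetIds": ["subnet-aaa111", "subnet-bbb222"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
    FileSystemConfigs=[{
        # Access point in the secondary account, reached over VPC peering (NFS, port 2049)
        "Arn": "arn:aws:elasticfilesystem:us-east-1:444455556666:access-point/fsap-0123456789abcdef0",
        "LocalMountPath": "/mnt/shared",
    }],
)
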

45
Q

816#A financial company needs to handle highly confidential data. The company will store the data in an Amazon S3 bucket. The company needs to ensure that data is encrypted in transit and at rest. The company must manage encryption keys outside of the AWS cloud. What solution will meet these requirements?

A. Encrypt the data in the S3 bucket with server-side encryption (SSE) that uses a customer-managed key from the AWS Key Management Service (AWS KMS).

B. Encrypt the data in the S3 bucket with server-side encryption (SSE) that uses a key managed by AWS Key Management Service (AWS KMS).

C. Encrypt the data in the S3 bucket with the default server-side encryption (SSE).

D. Encrypt the data at the company’s data center before storing it in the S3 bucket.

A

D. Encrypt the data at the company’s data center before storing it in the S3 bucket.

Option D is the closest to meeting the stated requirements and matches the client-side encryption approach described in the AWS documentation. By encrypting the data in the company’s data center before uploading it to S3, the company ensures that the data is encrypted before it leaves its environment and that the encryption keys are managed outside of the AWS cloud.

https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingClientSideEncryption.html
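
For illustration only, a client-side encryption sketch follows (it uses the third-party cryptography package; the key handling and names are placeholders, and in practice the key would come from the company's own key management system or HSM):

import boto3
from cryptography.fernet import Fernet

# The key would normally be supplied by the company's on-premises key manager.
key = Fernet.generate_key()
cipher = Fernet(key)

with open("customer-records.csv", "rb") as f:
    ciphertext = cipher.encrypt(f.read())

# Upload over TLS (encryption in transit) with the payload already encrypted.
boto3.client("s3").put_object(
    Bucket="example-confidential-bucket",
    Key="customer-records.csv.enc",
    Body=ciphertext,
)
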

Other Options:
A. Encrypt the data in the S3 bucket with server-side encryption (SSE) that uses a customer-managed key from the AWS Key Management Service (AWS KMS). Reassessment: This option focuses on server-side encryption (SSE) with a customer-managed key (CMK) stored in AWS KMS. Encrypts data at rest in the S3 bucket using company-managed keys. However, it does not directly address client-side encryption, where data is encrypted locally before transmission to S3. While this option ensures encryption at rest, it does not use client-side encryption as described in the provided documentation.

46
Q

817#A company wants to run its payment application on AWS. The application receives payment notifications from mobile devices. Payment notifications require basic validation before they are sent for further processing. The backend processing application runs for a long time and requires computing and memory to be adjusted. The company does not want to manage the infrastructure. Which solution will meet these requirements with the LEAST operational overhead?

A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Integrate the queue with an Amazon EventBridge rule to receive payment notifications from mobile devices. Configure the rule to validate payment notifications and send the notifications to the backend application. Deploy the backend application to Amazon Elastic Kubernetes Service (Amazon EKS) Anywhere. Create a standalone cluster.

B. Create an Amazon API Gateway API. Integrate the API with an AWS Step Functions state machine to receive payment notifications from mobile devices. Invoke the state machine to validate payment notifications and send the notifications to the backend application. Deploy the backend application to Amazon Elastic Kubernetes Service (Amazon EKS). Set up an EKS cluster with self-managed nodes.

C. Create an Amazon Simple Queue Service (Amazon SQS) queue. Integrate the queue with an Amazon EventBridge rule to receive payment notifications from mobile devices. Configure the rule to validate payment notifications and send the notifications to the backend application. Deploy the backend application to Amazon EC2 Spot Instances. Set up a spot fleet with a predetermined allocation strategy.

D. Create an Amazon API Gateway API. Integrate the API with AWS Lambda to receive payment notifications from mobile devices. Invoke a Lambda function to validate payment notifications and send the notifications to the backend application. Deploy the backend application to Amazon Elastic Container Service (Amazon ECS). Configure Amazon ECS with an AWS Fargate launch type.

A

D. Create an Amazon API Gateway API. Integrate the API with AWS Lambda to receive payment notifications from mobile devices. Invoke a Lambda function to validate payment notifications and send the notifications to the backend application. Deploy the backend application to Amazon Elastic Container Service (Amazon ECS). Configure Amazon ECS with an AWS Fargate launch type.

D. Amazon API Gateway with AWS Lambda and Amazon ECS with Fargate:
● Amazon API Gateway: Receives payment notifications.
● AWS Lambda: Used for basic validation of payment notifications.
● Amazon ECS with Fargate: Offers serverless container orchestration, eliminating the need to manage infrastructure.
● Operational Overhead: This option involves the least operational overhead as it leverages fully managed services like AWS Lambda and Fargate, where AWS manages the underlying infrastructure.
Given the requirement for the least operational overhead, Option D is the most suitable. It leverages fully managed services (AWS Lambda and Amazon ECS with Fargate) for handling payment notifications and running the backend application, minimizing the operational burden on the company.


47
Q

818#A solutions architect is designing a user authentication solution for a company. The solution should invoke two-factor authentication for users who log in from inconsistent geographic locations, IP addresses, or devices. The solution must also be able to scale to accommodate millions of users. What solution will meet these requirements?

A. Configure Amazon Cognito user pools for user authentication. Enable the risk-based adaptive authentication feature with multi-factor authentication (MFA).

B. Configure Amazon Cognito identity pools for user authentication. Enable multi-factor authentication (MFA).

C. Configure AWS Identity and Access Management (IAM) users for user authentication. Attach an IAM policy that allows the AllowManageOwnUserMFA action.

D. Configure AWS IAM Identity Center authentication (AWS Single Sign-On) for user authentication. Configure permission sets to require multi-factor authentication (MFA).

A

A. Configure Amazon Cognito user pools for user authentication with risk-based adaptive authentication and MFA:
● Amazon Cognito User Pools: Provides user authentication and management service.
● Risk-based Adaptive Authentication: Allows you to define authentication rules based on user behavior, such as inconsistent geographical locations, IP addresses, or devices.
● Multi-factor Authentication (MFA): Enhances security by requiring users to provide two or more verification factors.
● Scalability: Amazon Cognito is designed to scale to accommodate millions of users.
● Explanation: This option aligns well with the requirements as it leverages Amazon Cognito’s risk-based adaptive authentication feature to detect suspicious activities based on user behavior and trigger MFA when necessary. Additionally, Amazon Cognito is highly scalable and suitable for accommodating millions of users.

48
Q

819#A company has an Amazon S3 data lake. The company needs a solution that transforms data from the data lake and loads it into a data warehouse every day. The data warehouse must have massively parallel processing (MPP) capabilities. Next, data analysts need to create and train machine learning (ML) models by using SQL commands on the data. The solution should use serverless AWS services whenever possible. What solution will meet these requirements?

A. Run an Amazon EMR daily job to transform the data and load it into Amazon Redshift. Use Amazon Redshift ML to create and train ML models.

B. Run an Amazon EMR daily job to transform the data and load it to Amazon Aurora Serverless. Use Amazon Aurora ML to create and train ML models.

C. Run an AWS Glue daily job to transform the data and load it into Amazon Redshift Serverless. Use Amazon Redshift ML to create and train ML models.

D. Run an AWS Glue daily job to transform the data and load it into Amazon Athena tables. Use Amazon Athena ML to create and train ML models.

A

C. Run an AWS Glue daily job to transform the data and load it into Amazon Redshift Serverless. Use Amazon Redshift ML to create and train ML models.

Amazon Redshift is the only AWS data warehouse with a serverless option and massively parallel processing. Option C uses a serverless AWS Glue job to transform the data, loads it into Amazon Redshift Serverless (a serverless data warehouse), and uses Amazon Redshift ML to create and train ML models with SQL commands.

Other Options:

Options A and B rely on Amazon EMR, which is not serverless; you have to provision an EMR cluster before you can run jobs. (AWS Glue runs similar jobs without provisioning, and EMR Serverless exists as a separate offering.) Option A also loads the data into a provisioned Amazon Redshift cluster rather than Amazon Redshift Serverless. Option B can be eliminated because Aurora is not a data warehouse, and Option D because Athena is a query service, not an MPP data warehouse.
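
As an illustrative sketch of how analysts could then train a model with SQL only (the workgroup, schema, columns, IAM role, and bucket are placeholder assumptions), the Redshift Data API can submit a Redshift ML CREATE MODEL statement:

import boto3

redshift_data = boto3.client("redshift-data")

create_model_sql = """
CREATE MODEL churn_model
FROM (SELECT age, plan, monthly_spend, churned FROM analytics.customers)
TARGET churned
FUNCTION predict_churn
IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftMLRole'
SETTINGS (S3_BUCKET 'example-redshift-ml-artifacts');
"""

# WorkgroupName points the Data API at a Redshift Serverless workgroup.
redshift_data.execute_statement(
    WorkgroupName="analytics-serverless",
    Database="dev",
    Sql=create_model_sql,
)
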

49
Q

820#A company runs containers in a Kubernetes environment in the company’s on-premises data center. The company wants to use Amazon Elastic Kubernetes Service (Amazon EKS) and other AWS managed services. Data must remain locally in the company’s data center and cannot be stored on any remote site or cloud to maintain compliance. What solution will meet these requirements?

A. Deploy AWS Local Zones in the company’s data center.

B. Use an AWS Snowmobile in the company data center.

C. Install an AWS Outposts rack in the company data center.

D. Install an AWS Snowball Edge Storage Optimized node in the data center.

A

C. Install an AWS Outposts rack in the company data center.

● AWS Outposts: AWS Outposts brings native AWS services, infrastructure, and operating models to virtually any data center, co-location space, or on-premises facility. With AWS Outposts, companies can run AWS infrastructure and services locally on premises and use the same APIs, control plane, tools, and hardware on-premises as in the AWS cloud.
● By installing an AWS Outposts rack in the company’s data center, the company can leverage Amazon EKS (Elastic Kubernetes Service) and other AWS managed services while ensuring that all data remains within the local data center, meeting compliance requirements.
● AWS Outposts provides a consistent hybrid experience with seamless integration with AWS services, allowing the company to run containerized workloads in the local Kubernetes environment alongside AWS services without data leaving the local premises.

50
Q

821#A social media company has workloads that collect and process data. Workloads store data on local NFS storage. The data warehouse cannot scale fast enough to meet the company’s expanding business needs. The company wants to migrate the current data warehouse to AWS. Which solution will meet these requirements in the MOST cost-effective way?

A. Configure an AWS Storage Gateway Volume Gateway. Use an Amazon S3 lifecycle policy to transition data to the appropriate storage class.

B. Set up an AWS Storage Gateway, Amazon S3 File Gateway. Use an Amazon S3 lifecycle policy to transition data to the appropriate storage class.

C. Use the Amazon Elastic File System (Amazon EFS) Standard-Infrequent Access (Standard-IA) storage class. Activate the infrequent access lifecycle policy.

D. Use the Amazon Elastic File System (Amazon EFS) One Zone-Infrequent Access (One Zone-IA) storage class. Activate the infrequent access lifecycle policy.

A

B. Set up an AWS Storage Gateway, Amazon S3 File Gateway. Use an Amazon S3 lifecycle policy to transition data to the appropriate storage class.

● File Gateway allows applications to store files as objects in Amazon S3 while accessing them through a Network File System (NFS) interface.
● Similar to Option A, this solution involves using Amazon S3 Lifecycle policies to transition data to the appropriate storage class.
● Cost-effectiveness: This option could be more cost-effective compared to Option A, as it eliminates the need for managing EBS snapshots and associated costs.

Considering cost-effectiveness and the ability to meet the requirements, Option B (AWS Storage Gateway Amazon S3 File Gateway with Amazon S3 Lifecycle Policy) seems to be the most cost-effective solution. It leverages Amazon S3’s scalability and cost-effectiveness while using Storage Gateway to seamlessly integrate with the company’s existing NFS storage.
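
For illustration, a lifecycle configuration like the boto3 sketch below could be applied to the bucket behind the File Gateway; the bucket name and transition days are placeholders:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-file-gateway-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-down-aging-data",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},   # apply to the whole bucket
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER_IR"},
            ],
        }]
    },
)
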

Other Options:
A. AWS Storage Gateway Volume Gateway with Amazon S3 Lifecycle Policy:
● With Volume Gateway, on-premises applications can use block storage in the form of volumes that are stored as Amazon EBS snapshots.
● This option allows the company to store data in on-premises NFS storage and synchronize it with Amazon S3 using Storage Gateway. Amazon S3 Lifecycle policies can be used to transition the data to the appropriate storage class, such as S3 Standard-IA or S3 Intelligent-Tiering.
● Cost-effectiveness: This option may incur additional costs for maintaining the Storage Gateway Volume Gateway and EBS snapshots, which might not be the most cost-effective solution depending on the volume of data and frequency of access.

51
Q

822#A company uses high-concurrency AWS Lambda functions to process an increasing number of messages in a message queue during marketing events. Lambda functions use CPU-intensive code to process messages. The company wants to reduce computing costs and maintain service latency for its customers. What solution will meet these requirements?

A. Configure reserved concurrency for Lambda functions. Decrease the memory allocated to Lambda functions.

B. Configure reserved concurrency for Lambda functions. Increase memory according to AWS Compute Optimizer recommendations.

C. Configure provisioned concurrency for Lambda functions. Decrease the memory allocated to Lambda functions.

D. Configure provisioned concurrency for Lambda functions. Increase memory according to AWS Compute Optimizer recommendations.

A

D. Configure provisioned concurrency for Lambda functions. Increase memory according to AWS Compute Optimizer recommendations.

● Provisioned concurrency can help maintain low latency by pre-warming Lambda functions.
● Increasing memory might improve performance for CPU-intensive tasks.
● AWS Compute Optimizer recommendations can guide in optimizing resources for cost and performance.
● This option combines the benefits of provisioned concurrency for low latency and AWS Compute Optimizer recommendations for cost optimization and performance improvement.
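
A hedged boto3 sketch of this combination follows; the function name, memory size, and concurrency value are placeholders rather than recommendations:

import boto3

lambda_client = boto3.client("lambda")

# More memory also means more CPU for the CPU-intensive message processing.
lambda_client.update_function_configuration(
    FunctionName="queue-processor",
    MemorySize=2048,
)

# Publish a version and keep execution environments pre-warmed for it.
version = lambda_client.publish_version(FunctionName="queue-processor")["Version"]
lambda_client.put_provisioned_concurrency_config(
    FunctionName="queue-processor",
    Qualifier=version,
    ProvisionedConcurrentExecutions=50,
)
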

Other Options:
B. Configure reserved concurrency for the Lambda functions. Increase the memory according to AWS Compute Optimizer recommendations:
● Reserved concurrency helps control costs by limiting the number of concurrent executions.
● Increasing memory might improve performance for CPU-intensive tasks if the Lambda functions are memory-bound.
● AWS Compute Optimizer provides recommendations for optimizing resources based on utilization metrics.
● This option addresses both cost optimization and potential performance improvements based on recommendations.

52
Q

823#A company runs its workloads on Amazon Elastic Container Service (Amazon ECS). Container images that use the ECS task definition should be scanned for common vulnerabilities and exposures (CVE). You also need to scan any new container images that are created. Which solution will meet these requirements with the LEAST changes to workloads?

A. Use Amazon Elastic Container Registry (Amazon ECR) as a private image repository to store the container images. Specify scan-on-push filters for ECR basic scanning.

B. Store the container images in an Amazon S3 bucket. Use Amazon Macie to scan the images. Use an S3 event notification to start a Macie scan for each event with an event type of s3:ObjectCreated:Put.

C. Deploy the workloads to Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon Elastic Container Registry (Amazon ECR) as a private image repository. Specify scan-on-push filters for ECR enhanced scanning.

D. Store the container images in an Amazon S3 bucket that has versioning enabled. Configure an S3 event notification for s3:ObjectCreated:* events to invoke an AWS Lambda function. Configure the Lambda function to start an Amazon Inspector scan.

A

A. Use Amazon Elastic Container Registry (Amazon ECR) as a private image repository to store the container images. Specify scan-on-push filters for ECR basic scanning.

● Amazon ECR supports scanning container images for vulnerabilities using its built-in scan on push feature.
● With scan on push filters, every new image pushed to the repository triggers a scan for vulnerabilities.
● This option requires minimal changes to the existing ECS setup as it leverages Amazon ECR, which is commonly used with ECS for storing container images.
● It directly integrates with the container image repository without additional services.

NOTE: the other options might be technically feasible, but each adds more changes.
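
For illustration, enabling basic scan-on-push with boto3 could look like the sketch below; the repository names are placeholders:

import boto3

ecr = boto3.client("ecr")

# New repository with scan-on-push enabled from the start.
ecr.create_repository(
    repositoryName="web-app",
    imageScanningConfiguration={"scanOnPush": True},
)

# Enable it on an existing repository without changing the ECS workloads.
ecr.put_image_scanning_configuration(
    repositoryName="worker-app",
    imageScanningConfiguration={"scanOnPush": True},
)
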

53
Q

824#A company uses an AWS batch job to run its sales process at the end of the day. The company needs a serverless solution that invokes a third-party reporting application when the AWS Batch job is successful. The reporting application has an HTTP API interface that uses username and password authentication. What solution will meet these requirements?

A. Configure an Amazon EventBridge rule to match incoming AWS Batch job SUCCEEDED events. Configure the third-party API as an EventBridge API destination with a username and password. Set the API destination as the EventBridge rule target.

B. Configure Amazon EventBridge Scheduler to match incoming AWS Batch job events. Configure an AWS Lambda function to invoke the third-party API using a username and password. Set the Lambda function as the target of the EventBridge rule.

C. Configure an AWS Batch job to publish SUCCEEDED job events to an Amazon API Gateway REST API. Configure an HTTP proxy integration in the API Gateway REST API to invoke the third-party API using a username and password.

D. Configure an AWS Batch job to publish SUCCEEDED job events to an Amazon API Gateway REST API. Set up a proxy integration on the API Gateway REST API to an AWS Lambda function. Configure the Lambda function to invoke the third-party API using a username and password.

A

A. Configure an Amazon EventBridge rule to match incoming AWS Batch job SUCCEEDED events. Configure the third-party API as an EventBridge API destination with a username and password. Set the API destination as the EventBridge rule target.

● This option aligns well with using EventBridge to trigger actions based on AWS Batch job state changes.
● By configuring an EventBridge rule to match AWS Batch job SUCCEEDED events and sending them to an API destination (the third-party API with username and password authentication), you can achieve the desired outcome efficiently.
● EventBridge provides seamless integration with various AWS services, including AWS Batch and API Gateway, making it a suitable choice for event-driven architectures.
● Overall, this option meets the requirements effectively.

Other Options:
B. Configure EventBridge Scheduler with an AWS Lambda function:
● While EventBridge Scheduler allows triggering events at specific times or intervals, it may not be the best fit for triggering actions based on job completion events like AWS Batch job success.
● Using a Lambda function as a target for EventBridge Scheduler adds unnecessary complexity and may not align well with the event-driven nature of the requirement.
● Therefore, this option is less suitable compared to Option A.
C. Configure AWS Batch job to publish events to an Amazon API Gateway REST API:
● This option involves publishing AWS Batch job success events to an API Gateway REST API.
● While it’s feasible, it introduces additional complexity by requiring setup and management of API Gateway resources.
● Directly invoking the third-party API from CloudWatch Events (Option A) is more straightforward and aligns better with the requirements.
D. Configure AWS Batch job to publish events to an Amazon API Gateway REST API with a proxy integration to AWS Lambda:
● This option adds an extra layer of indirection by invoking an AWS Lambda function through API Gateway.
● While it offers flexibility, it increases complexity without significant benefits over directly invoking the third-party API from CloudWatch Events.
● Therefore, it’s less preferable compared to Option A.
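
As an illustrative sketch of Option A (connection, API destination, rule, and target via boto3), the snippet below uses placeholder endpoint URLs, credentials, ARNs, and names; real credentials would normally come from a secret store:

import json
import boto3

events = boto3.client("events")

connection = events.create_connection(
    Name="reporting-api-basic-auth",
    AuthorizationType="BASIC",
    AuthParameters={"BasicAuthParameters": {"Username": "report-user", "Password": "example-secret"}},
)

destination = events.create_api_destination(
    Name="reporting-api",
    ConnectionArn=connection["ConnectionArn"],
    InvocationEndpoint="https://reports.example.com/api/runs",
    HttpMethod="POST",
)

events.put_rule(
    Name="batch-job-succeeded",
    EventPattern=json.dumps({
        "source": ["aws.batch"],
        "detail-type": ["Batch Job State Change"],
        "detail": {"status": ["SUCCEEDED"]},
    }),
    State="ENABLED",
)

# The target role must allow events:InvokeApiDestination on this destination.
events.put_targets(
    Rule="batch-job-succeeded",
    Targets=[{
        "Id": "reporting-api-destination",
        "Arn": destination["ApiDestinationArn"],
        "RoleArn": "arn:aws:iam::111122223333:role/eventbridge-invoke-api-destination",
    }],
)
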

54
Q

825#A company collects and processes data from a vendor. The provider stores its data in an Amazon RDS for MySQL database in the vendor’s own AWS account. The company’s VPC does not have an Internet gateway, an AWS Direct Connect connection, or an AWS Site-to-Site VPN connection. The company needs to access the data that is in the vendor database. What solution will meet this requirement?

A. Instruct the provider to enroll in the AWS Hosted Connection Direct Connect program. Use VPC peering to connect the company VPC and the provider VPC.

B. Configure a client VPN connection between the company’s VPC and the provider’s VPC. Use VPC peering to connect your company’s VPC and your provider’s VPC.

C. Instruct the vendor to create a network load balancer (NLB). Place the NLB in front of the Amazon RDS for MySQL database. Use AWS PrivateLink to integrate your company’s VPC and the vendor’s VPC.

D. Use AWS Transit Gateway to integrate the enterprise VPC and the provider VPC. Use VPC peering to connect your company’s VPC and your provider’s VPC.

A

C. Instruct the vendor to create a network load balancer (NLB). Place the NLB in front of the Amazon RDS for MySQL database. Use AWS PrivateLink to integrate your company’s VPC and the vendor’s VPC.

● This solution involves the vendor setting up a Network Load Balancer (NLB) in front of the RDS database and using AWS PrivateLink to integrate the company’s VPC and the vendor’s VPC.
● AWS PrivateLink provides private connectivity between VPCs without requiring internet gateways, VPN connections, or Direct Connect.
● By using PrivateLink, the company can securely access resources in the vendor’s VPC without exposing them to the public internet.
● Overall, this solution provides secure and private connectivity between the VPCs without the need for complex networking setups, making it a strong contender for meeting the requirements.

Other Options:
A. AWS Hosted Connection Direct Connect Program with VPC peering:
● This solution involves the vendor signing up for the AWS Hosted Connection Direct Connect Program, which establishes a dedicated connection between the company’s VPC and the vendor’s VPC.
● VPC peering is then used to connect the two VPCs, allowing traffic to flow securely between them.
● While this solution provides a direct and secure connection between the VPCs, it requires coordination with the vendor to set up the direct connect connection, which might introduce additional complexity and dependencies.
● Overall, this solution can be effective but might involve more coordination and setup effort.
B. Client VPN connection with VPC peering:
● This solution involves setting up a client VPN connection between the company’s VPC and the vendor’s VPC, allowing secure access to resources in the vendor’s VPC.
● VPC peering is then used to establish connectivity between the two VPCs.
● While this solution provides secure access to the vendor’s resources, setting up and managing a client VPN connection might introduce additional overhead and complexity.
● Moreover, client VPN connections are typically used for remote access scenarios, and using them for inter-VPC communication might not be the most straightforward approach.
● Overall, this solution might be less optimal due to the additional complexity and overhead of managing a client VPN connection.

D. AWS Transit Gateway with VPC peering:
● This solution involves using AWS Transit Gateway to integrate the company’s VPC and the vendor’s VPC, allowing for centralized management and routing of traffic between multiple VPCs.
● VPC peering is then used to establish connectivity between the company’s VPC and the vendor’s VPC.
● While AWS Transit Gateway provides centralized management and routing capabilities, it might introduce additional complexity and overhead, especially if the setup is not already in place.
● Additionally, since the company’s VPC does not have internet access, Transit Gateway might not be the most straightforward solution for this scenario.
● Overall, while Transit Gateway offers scalability and flexibility, it might be overkill for the specific requirement of accessing a single RDS database in the vendor’s VPC.

Considering the requirements and the constraints specified (no internet gateway, Direct Connect, or VPN connection), Option C (using a Network Load Balancer with AWS PrivateLink) appears to be the most suitable solution. It provides secure and private connectivity between the VPCs without the need for complex networking setups and dependencies on external services.

55
Q

826#A company wants to set up Amazon Managed Grafana as its visualization tool. The company wants to visualize the data in its Amazon RDS database as one data source. The company needs a secure solution that does not expose data over the Internet. What solution will meet these requirements?

A. Create an Amazon Managed Grafana workspace without a VPC. Create a public endpoint for the RDS database. Configure the public endpoint as a data source in Amazon Managed Grafana.

B. Create an Amazon Managed Grafana workspace in a VPC. Create a private endpoint for the RDS database. Configure the private endpoint as a data source in Amazon Managed Grafana.

C. Create an Amazon Managed Grafana workspace without a VPC. Create an AWS PrivateLink endpoint to establish a connection between Amazon Managed Grafana and Amazon RDS. Configure Amazon RDS as a data source in Amazon Managed Grafana.

D. Create an Amazon Managed Grafana workspace in a VPC. Create a public endpoint for the RDS database. Configure the public endpoint as a data source in Amazon Managed Grafana.

A

It appears that B or C could both work.

B. Create an Amazon Managed Grafana workspace in a VPC. Create a private endpoint for the RDS database. Configure the private endpoint as a data source in Amazon Managed Grafana.

● This solution involves creating an Amazon Managed Grafana workspace in a VPC and configuring it with a private endpoint, ensuring that it is not accessible over the public internet.
● The RDS database also has a private endpoint within the same VPC, ensuring that data transfer between Grafana and RDS remains within the AWS network and does not traverse the public internet.
● By using private endpoints and keeping the communication within the VPC, this option provides a more secure solution compared to Option A.
● Overall, this option aligns well with the requirement for a secure solution that does not expose the data over the internet.

Other Options:
C. Public Amazon Managed Grafana workspace with AWS PrivateLink to RDS:
● This solution involves creating an Amazon Managed Grafana workspace without a VPC but establishing a connection between Grafana and RDS using AWS PrivateLink.
● AWS PrivateLink allows private connectivity between services across different VPCs or accounts without exposing the data over the internet.
● While this option could leverage AWS PrivateLink for secure communication between Grafana and RDS, it is designed for other uses: AWS PrivateLink provides private connectivity between virtual private clouds (VPCs), supported AWS services, and your on-premises networks without exposing your traffic to the public internet. Interface VPC endpoints, powered by PrivateLink, connect you to services hosted by AWS Partners and supported solutions available in AWS Marketplace.

56
Q

827#A company hosts a data lake on Amazon S3. The data lake ingests data in Apache Parquet format from various data sources. The company uses multiple transformation steps to prepare the ingested data. The steps include filtering out anomalies, normalizing the data to standard date and time values, and generating aggregates for analyses. The company must store the transformed data in S3 buckets that are accessed by data analysts. The company needs a pre-built solution for data transformation that requires no code. The solution must provide data lineage and data profiling. The company needs to share data transformation steps with employees throughout the company. What solution will meet these requirements?

A. Set up an AWS Glue Studio visual canvas to transform the data. Share transformation steps with employees using AWS Glue jobs.

B. Configure Amazon EMR Serverless to transform data. Share transformation steps with employees using EMR serverless jobs.

C. Configure AWS Glue DataBrew to transform the data. Share the transformation steps with employees using DataBrew recipes.

D. Create Amazon Athena tables for the data. Write Athena SQL queries to transform data. Share Athena SQL queries with employees.

A

C. Configure AWS Glue DataBrew to transform the data. Share the transformation steps with employees using DataBrew recipes.

● AWS Glue DataBrew is a visual data preparation tool that allows users to clean and transform data without writing code.
● DataBrew provides an intuitive interface for creating data transformation recipes, making it accessible to users without coding expertise.
● Users can easily share transformation steps (recipes) with employees by using DataBrew’s collaboration features.
● DataBrew also offers data lineage and data profiling capabilities, ensuring visibility into data transformations.
● Overall, this option aligns well with the requirements, providing a code-free solution for data transformation, along with data lineage, data profiling, and easy sharing of transformation steps.

While Option A provides a graphical interface and could provide the solution without coding, it still requires work by the employees and isn’t as “prebuilt” as DataBrew.
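
For illustration, a minimal boto3 sketch of creating and publishing a DataBrew recipe is shown below. The recipe name, step operation, and parameter names are hypothetical; the exact values come from the DataBrew recipe action reference.

import boto3

databrew = boto3.client("databrew")

# Illustrative recipe with a single step; the operation name and parameters are
# placeholders and must match the DataBrew recipe action reference.
databrew.create_recipe(
    Name="normalize-ingest-dates",
    Steps=[
        {
            "Action": {
                "Operation": "DATE_FORMAT",  # hypothetical step
                "Parameters": {"sourceColumn": "event_ts", "dateTimeFormat": "yyyy-MM-dd"},
            }
        }
    ],
)

# Publishing a version makes the recipe shareable with other DataBrew users
databrew.publish_recipe(Name="normalize-ingest-dates", Description="Shared transformation steps")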

57
Q

828#A solutions architect runs a web application on multiple Amazon EC2 instances that reside in individual target groups behind an application load balancer (ALB). Users can reach the application through a public website. The solutions architect wants to allow engineers to use a development version of the website to access a specific EC2 development instance to test new features for the application. The solutions architect wants to use a hosted zone on Amazon Route 53 to give engineers access to the development instance. The solution should automatically route to the development instance, even if the development instance is replaced. What solution will meet these requirements?

A. Create an A record for the development website that has the value set to the ALB. Create a listener rule on the ALB that forwards requests for the development website to the target group that contains the development instance.

B. Recreate the development instance with a public IP address. Create an A record for the development website that has the value set to the public IP address of the development instance.

C. Create an A record for the development website that has the value set to the ALB. Create a listener rule in the ALB to redirect requests from the development website to the public IP address of the development instance.

D. Place all instances in the same target group. Create an A record for the development website. Set the value to the ALB. Create a listener rule in the ALB that forwards requests for the development website to the target group.

A

A. Create an A record for the development website that has the value set to the ALB. Create a listener rule on the ALB that forwards requests for the development website to the target group that contains the development instance.

● This option sets up a DNS record pointing to the ALB, ensuring that requests for the development website are routed to the load balancer.
● By creating a listener rule on the ALB, requests for the development website can be forwarded to the target group containing the development instance.
● This setup allows for automatic routing to the development instance even if it is replaced, as long as the instance is registered in the target group.

Other Options:
C. Create an A Record for the development website pointing to the ALB, with a listener rule to redirect requests to the public IP of the development instance:
● Similar to Option A, this option uses an A Record to point to the ALB and creates a listener rule to handle requests for the development website.
● However, instead of forwarding requests to the target group, it redirects them to the public IP of the development instance.
● While this setup may work, it introduces complexity and potential performance overhead due to the redirection, and it may not provide seamless failover if the development instance is replaced.
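
A hedged boto3 sketch of option A follows. The hosted zone ID, hostname, ALB DNS name and canonical zone ID, listener ARN, and target group ARN are all placeholders.

import boto3

route53 = boto3.client("route53")
elbv2 = boto3.client("elbv2")

# Alias A record for the development hostname pointing at the ALB
route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",  # your hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "dev.example.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "ZALBEXAMPLE",  # ALB canonical zone ID, from describe_load_balancers
                    "DNSName": "my-alb-123.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)

# Forward requests whose Host header is the dev hostname to the dev target group
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/my-alb/abc/def",
    Priority=10,
    Conditions=[{"Field": "host-header", "HostHeaderConfig": {"Values": ["dev.example.com"]}}],
    Actions=[{"Type": "forward", "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/dev-tg/123"}],
)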

58
Q

829#A company runs a container application in a Kubernetes cluster in the company’s data center. The application uses Advanced Message Queuing Protocol (AMQP) to communicate with a message queue. The data center cannot scale fast enough to meet the company’s growing business needs. The company wants to migrate workloads to AWS. Which solution will meet these requirements with the LEAST operational overhead?

A. Migrate the container application to Amazon Elastic Container Service (Amazon ECS). Use Amazon Simple Queue Service (Amazon SQS) to retrieve messages.

B. Migrate the container application to Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon MQ to retrieve messages.

C. Use highly available Amazon EC2 instances to run the application. Use Amazon MQ to retrieve messages.

D. Use AWS Lambda functions to run the application. Use Amazon Simple Queue Service (Amazon SQS) to retrieve messages.

A

B. Migrate the container application to Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon MQ to retrieve messages.

● Amazon MQ supports industry-standard messaging protocols, including AMQP, making it a suitable option for applications that require AMQP support.
● By using Amazon EKS for container orchestration and Amazon MQ for message queuing, the company can meet the requirements while minimizing operational overhead.

NOTE: SQS does not support AMQP.
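
For context, a minimal Python sketch (using the pika library) of publishing over AMQP to an Amazon MQ for RabbitMQ broker is shown below; the broker endpoint, credentials, and queue name are placeholders.

import pika

# Hypothetical Amazon MQ for RabbitMQ broker endpoint and credentials
url = "amqps://app_user:app_password@b-1234abcd.mq.us-east-1.amazonaws.com:5671"

connection = pika.BlockingConnection(pika.URLParameters(url))
channel = connection.channel()
channel.queue_declare(queue="orders", durable=True)
channel.basic_publish(exchange="", routing_key="orders", body=b"order created")
connection.close()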

59
Q

830#An online gaming company hosts its platform on Amazon EC2 instances behind Network Load Balancers (NLB) in multiple AWS Regions. NLBs can route requests to targets over the Internet. The company wants to improve the customer gaming experience by reducing end-to-end loading time for its global customer base. What solution will meet these requirements?

A. Create application load balancers (ALBs) in each region to replace existing NLBs. Register existing EC2 instances as targets for ALBs in each region.

B. Configure Amazon Route 53 to route traffic with equal weight to the NLBs in each region.

C. Create additional NLB and EC2 instances in other regions where the company has a large customer base.

D. Create a standard accelerator in AWS Global Accelerator. Configure the existing NLBs as destination endpoints.

A

D. Create a standard accelerator in AWS Global Accelerator. Configure the existing NLBs as destination endpoints.

● AWS Global Accelerator is a networking service that improves the availability and performance of your applications with local and global traffic load balancing, as well as health checks.
● By configuring the existing NLBs as target endpoints in Global Accelerator, traffic can be intelligently routed over AWS’s global network to the closest entry point to the AWS network, reducing the end-to-end load time for customers globally.
● This solution provides a centralized approach to optimize global traffic flow without requiring changes to the existing infrastructure setup.
Considering the requirement to reduce end-to-end load time for the global customer base, option D, utilizing AWS Global Accelerator, would be the most effective solution. It provides a centralized and efficient way to optimize traffic routing globally, leading to improved customer experience with reduced latency.
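
A minimal boto3 sketch of option D is shown below; the accelerator name, endpoint Region, and NLB ARN are placeholders.

import boto3

# The Global Accelerator API is served from us-west-2 regardless of where the endpoints run
ga = boto3.client("globalaccelerator", region_name="us-west-2")

acc = ga.create_accelerator(Name="gaming-platform", IpAddressType="IPV4", Enabled=True)
listener = ga.create_listener(
    AcceleratorArn=acc["Accelerator"]["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# Register one endpoint group per Region, pointing at the existing NLB (ARN is a placeholder)
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="eu-west-1",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/net/game-nlb/abc123",
        "Weight": 128,
    }],
)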

60
Q

831#A company has an on-premises application that uses SFTP to collect financial data from multiple vendors. The company is migrating to the AWS cloud. The company has created an application that uses Amazon S3 APIs to upload files from vendors. Some vendors run their systems on legacy applications that do not support S3 APIs. Vendors want to continue using SFTP-based applications to upload data. The company wants to use managed services for the needs of vendors using legacy applications. Which solution will meet these requirements with the LEAST operational overhead?

A. Create an instance of AWS Database Migration Service (AWS DMS) to replicate storage data from vendors using legacy applications to Amazon S3. Provide vendors with credentials to access the AWS DMS instance.

B. Create an AWS Transfer Family endpoint for vendors that use legacy applications.

C. Configure an Amazon EC2 instance to run an SFTP server. Instruct vendors using legacy applications to use the SFTP server to upload data.

D. Configure an Amazon S3 file gateway for vendors that use legacy applications to upload files to an SMB file share.

A

B. Create an AWS Transfer Family endpoint for vendors that use legacy applications.

● AWS Transfer Family provides fully managed SFTP, FTPS, and FTP servers for easy migration of file transfer workloads to AWS.
● With AWS Transfer Family, you can create SFTP endpoints that allow vendors with legacy applications to upload data securely to Amazon S3 using their existing SFTP clients.
● This solution eliminates the need for managing infrastructure or servers, as AWS handles the underlying infrastructure, scaling, and maintenance.
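
As a rough illustration, the following boto3 sketch creates an SFTP-only Transfer Family server backed by S3 and a service-managed vendor user; the role ARN, bucket path, and public key are placeholders.

import boto3

transfer = boto3.client("transfer")

# Managed SFTP endpoint backed by Amazon S3 (service-managed users in this sketch)
server = transfer.create_server(
    Protocols=["SFTP"],
    Domain="S3",
    EndpointType="PUBLIC",
    IdentityProviderType="SERVICE_MANAGED",
)

# Hypothetical vendor user; the role must allow access to the target bucket/prefix
transfer.create_user(
    ServerId=server["ServerId"],
    UserName="vendor-a",
    Role="arn:aws:iam::111122223333:role/vendor-a-transfer-role",
    HomeDirectory="/finance-uploads/vendor-a",
    SshPublicKeyBody="ssh-rsa AAAA... vendor-a-key",
)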

61
Q

832#A marketing team wants to build a campaign for an upcoming multi-sports event. The team has news reports for the last five years in PDF format. The team needs a solution to extract information about content and sentiment from news reports. The solution must use Amazon Textract to process news reports. Which solution will meet these requirements with the LEAST operating overhead?

A. Provide the extracted information to Amazon Athena for analysis. Store the extracted information and analysis in an Amazon S3 bucket.

B. Store the extracted knowledge in an Amazon DynamoDB table. Use Amazon SageMaker to create a sentiment model.

C. Provide the extracted insights to Amazon Comprehend for analysis. Save the analysis to an Amazon S3 bucket.

D. Store the extracted insights in an Amazon S3 bucket. Use Amazon QuickSight to visualize and analyze data.

A

C. Provide the extracted insights to Amazon Comprehend for analysis. Save the analysis to an Amazon S3 bucket.

● Amazon Comprehend is a fully managed natural language processing (NLP) service that can perform sentiment analysis on text data.
● Sending the extracted insights directly to Comprehend for sentiment analysis reduces operational overhead as Comprehend handles the analysis.
● Saving the analysis results to S3 allows for further storage and downstream processing if needed.
● This approach minimizes the need for additional setup or management, as Comprehend is fully managed by AWS.
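
A simplified Python sketch of this pipeline is shown below; the bucket and key are placeholders, and pagination and error handling are omitted for brevity.

import time
import boto3

textract = boto3.client("textract")
comprehend = boto3.client("comprehend")

# Hypothetical bucket/key for one PDF news report
job = textract.start_document_text_detection(
    DocumentLocation={"S3Object": {"Bucket": "news-reports", "Name": "2023/report.pdf"}}
)

# Simplified polling loop; production code should handle pagination and failures
while True:
    result = textract.get_document_text_detection(JobId=job["JobId"])
    if result["JobStatus"] in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(5)

text = " ".join(b["Text"] for b in result["Blocks"] if b["BlockType"] == "LINE")

# DetectSentiment accepts a limited amount of text per call, so the sketch truncates
sentiment = comprehend.detect_sentiment(Text=text[:4500], LanguageCode="en")
print(sentiment["Sentiment"], sentiment["SentimentScore"])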

62
Q

833#A company’s application runs on Amazon EC2 instances that are located in multiple availability zones. The application needs to ingest real-time data from third-party applications. The company needs a data ingestion solution that places the ingested raw data into an Amazon S3 bucket. What solution will meet these requirements?

A. Create Amazon Kinesis data streams for data ingestion. Create Amazon Kinesis Data Firehose delivery streams to consume Kinesis data streams. Specify the S3 bucket as the destination for delivery streams.

B. Create database migration tasks in the AWS Database Migration Service (AWS DMS). Specify the EC2 instance replication instances as the source endpoints. Specify the S3 bucket as the destination endpoint. Set the migration type to migrate existing data and replicate ongoing changes.

C. Create and configure AWS DataSync agents on the EC2 instances. Configure DataSync tasks to transfer data from EC2 instances to the S3 bucket.

D. Create an AWS Direct Connect connection to the application for data ingestion. Create Amazon Kinesis Data Firehose delivery streams to consume direct PUT operations from your application. Specify the S3 bucket as the destination for delivery streams.

A

A. Create Amazon Kinesis data streams for data ingestion. Create Amazon Kinesis Data Firehose delivery streams to consume Kinesis data streams. Specify the S3 bucket as the destination for delivery streams.

● Kinesis Data Streams is well-suited for real-time data ingestion scenarios, allowing applications to ingest and process large streams of data in real time.
● Data Firehose can then deliver the processed data to S3, providing scalability and reliability for data delivery.
● This solution is suitable for handling continuous streams of data from third-party applications in real time.

Option A is the most suitable: Kinesis Data Streams handles real-time ingestion from the third-party applications, and Kinesis Data Firehose delivers the raw records to the S3 bucket as a fully managed pipeline, so no custom delivery code or servers are needed.
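
For illustration, a minimal boto3 producer sketch is shown below; the stream name and event payload are placeholders, and the Kinesis Data Firehose delivery stream that writes to S3 is assumed to be configured separately with this stream as its source.

import json
import boto3

kinesis = boto3.client("kinesis")

# Hypothetical stream name and event; Firehose consumes this stream and delivers raw records to S3
event = {"source": "third-party-app", "order_id": "1234", "amount": 42.5}
kinesis.put_record(
    StreamName="raw-ingest-stream",
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["order_id"],
)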

63
Q

834#A company application is receiving data from multiple data sources. Data size varies and is expected to increase over time. The current maximum size is 700 KB. The volume and size of data continues to grow as more data sources are added. The company decides to use Amazon DynamoDB as the primary database for the application. A solutions architect needs to identify a solution that handles large data sizes. Which solution will meet these requirements in the MOST operationally efficient manner?

A. Create an AWS Lambda function to filter data that exceeds DynamoDB item size limits. Store larger data in an Amazon DocumentDB database (with MongoDB support).

B. Store the large data as objects in an Amazon S3 bucket. In a DynamoDB table, create an item that has an attribute that points to the S3 URL of the data.

C. Split all incoming large data into a collection of items that have the same partition key. Write data to a DynamoDB table in a single operation using the BatchWriteItem API operation.

D. Create an AWS Lambda function that uses gzip compression to compress large objects as they are written to a DynamoDB table.

A

B. Store the large data as objects in an Amazon S3 bucket. In a DynamoDB table, create an item that has an attribute that points to the S3 URL of the data.

● This approach leverages the scalability and cost-effectiveness of Amazon S3 for storing large objects.
● DynamoDB stores metadata or pointers to the objects in S3, allowing efficient retrieval when needed.
● It’s a commonly used pattern for handling large payloads in DynamoDB, providing a scalable and efficient solution.

Storing large data in Amazon S3 and referencing them in DynamoDB allows leveraging the scalability and cost-effectiveness of S3 for storing large objects while keeping DynamoDB lightweight and efficient for metadata and quick lookups. This approach simplifies data management, retrieval, and scalability, making it a practical and efficient solution for handling large data volumes in DynamoDB.
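
A minimal Python sketch of this pointer pattern is shown below; the table name, bucket name, and attribute names are placeholders.

import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Payloads")  # hypothetical table name

def put_large_item(item_id: str, payload: bytes) -> None:
    # Store the large payload in S3 and keep only a pointer in DynamoDB
    key = f"payloads/{item_id}"
    s3.put_object(Bucket="app-large-payloads", Key=key, Body=payload)
    table.put_item(Item={"pk": item_id, "s3_url": f"s3://app-large-payloads/{key}"})

def get_large_item(item_id: str) -> bytes:
    # Read the pointer from DynamoDB, then fetch the object from S3
    pointer = table.get_item(Key={"pk": item_id})["Item"]["s3_url"]
    bucket, _, key = pointer.removeprefix("s3://").partition("/")
    return s3.get_object(Bucket=bucket, Key=key)["Body"].read()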

64
Q

835#A company is migrating a legacy application from an on-premises data center to AWS. The application is based on hundreds of cron jobs that run between 1 and 20 minutes at different recurring times throughout the day. The company wants a solution to schedule and run cron jobs on AWS with minimal refactoring. The solution must support running cron jobs in response to an event in the future. What solution will meet these requirements?

A. Create a container image for cron jobs. Use Amazon EventBridge Scheduler to create a recurring schedule. Run the cron job tasks as AWS Lambda functions.

B. Create a container image for cron jobs. Use AWS Batch on Amazon Elastic Container Service (Amazon ECS) with a scheduling policy to run cron jobs.

C. Create a container image for the cron jobs. Use Amazon EventBridge Scheduler to create a recurring schedule. Run the cron job tasks on AWS Fargate.

D. Create a container image for cron jobs. Create a workflow in AWS Step Functions that uses a wait state to run cron jobs at a specific time. Use the RunTask action to run cron job tasks in AWS Fargate.

A

C. Create a container image for the cron jobs. Use Amazon EventBridge Scheduler to create a recurring schedule. Run the cron job tasks on AWS Fargate.

● This option aligns with the recommended approach in the provided resource. It suggests using Fargate to run the containerized cron jobs triggered by EventBridge Scheduler. Fargate provides serverless compute for containers, allowing for easy scaling and management without the need to provision or manage servers.

Based on the provided information and the recommended approach in the resource, option C appears to be the most suitable solution. It leverages Amazon EventBridge Scheduler for scheduling and AWS Fargate for running the containerized cron jobs, providing a scalable and efficient solution with minimal operational overhead.
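
A hedged boto3 sketch of option C follows; the schedule name, cron expression, cluster/task-definition/role ARNs, and subnet ID are placeholders, and the parameter shapes follow the EventBridge Scheduler API.

import boto3

scheduler = boto3.client("scheduler")

# Recurring schedule that runs one containerized cron job as a Fargate task
scheduler.create_schedule(
    Name="nightly-report-job",
    ScheduleExpression="cron(30 2 * * ? *)",  # same cron expression the on-premises job used
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        "Arn": "arn:aws:ecs:us-east-1:111122223333:cluster/cron-cluster",
        "RoleArn": "arn:aws:iam::111122223333:role/scheduler-ecs-run-task",
        "EcsParameters": {
            "TaskDefinitionArn": "arn:aws:ecs:us-east-1:111122223333:task-definition/report-job:3",
            "LaunchType": "FARGATE",
            "NetworkConfiguration": {
                "awsvpcConfiguration": {"Subnets": ["subnet-aaaa1111"], "AssignPublicIp": "DISABLED"}
            },
        },
    },
)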

65
Q

836#A company uses Salesforce. The company needs to load existing data and ongoing data changes from Salesforce to Amazon Redshift for analysis. The company does not want data to be transmitted over the public Internet. Which solution will meet these requirements with the LEAST development effort?

A. Establish a VPN connection from the VPC to Salesforce. Use AWS Glue DataBrew to transfer data.

B. Establish an AWS Direct Connect connection from the VPC to Salesforce. Use AWS Glue DataBrew to transfer data.

C. Create an AWS PrivateLink connection in the VPC to Salesforce. Use Amazon AppFlow to transfer data.

D. Create a VPC peering connection to Salesforce. Use Amazon AppFlow to transfer data.

A

C. Create an AWS PrivateLink connection in the VPC to Salesforce. Use Amazon AppFlow to transfer data.

AWS PrivateLink Connection: AWS PrivateLink allows you to securely connect your VPC to supported AWS services and Salesforce privately, without using the public internet. This ensures data transfer occurs over private connections, enhancing security and compliance.
Amazon AppFlow: Amazon AppFlow is a fully managed integration service that enables you to securely transfer data between AWS services and SaaS applications like Salesforce. It provides pre-built connectors for Salesforce, simplifying the data transfer process without the need for custom development.
Least Development Effort: Option C offers the least development effort because it leverages the capabilities of AWS PrivateLink and Amazon AppFlow, which are managed services. You do not need to build or maintain custom VPN connections (Option A and B) or manage VPC peering connections (Option D). Instead, you can quickly set up the PrivateLink connection and configure data transfer using AppFlow’s user-friendly interface, reducing development time and effort.
Therefore, Option C is the most efficient solution with the least development effort while meeting the company’s requirement to securely transfer data from Salesforce to Amazon Redshift without using the public internet.

66
Q

837#A company recently migrated its application to AWS. The application runs on Amazon EC2 Linux instances in an auto-scaling group across multiple availability zones. The application stores data on an Amazon Elastic File System (Amazon EFS) file system that uses EFS Standard-Infrequent Access storage. The application indexes company files. The index is stored in an Amazon RDS database. The company needs to optimize storage costs with some changes to applications and services. Which solution will meet these requirements in the MOST cost-effective way?

A. Create an Amazon S3 bucket that uses an S3 Intelligent-Tiering lifecycle policy. Copy all files to the S3 bucket. Update the application to use the Amazon S3 API to store and retrieve files.

B. Deploy Amazon FSx file shares for Windows File Server. Update the application to use the CIFS protocol to store and retrieve files.

C. Deploy Amazon FSx for OpenZFS file system shares. Update the application to use the new mount point to store and retrieve files.

D. Create an Amazon S3 bucket that uses S3 Glacier Flexible Retrieval. Copy all files to S3 bucket. Update the application to use the Amazon S3 API to store and retrieve files as standard fetches.

A

A. Create an Amazon S3 bucket that uses an S3 Intelligent-Tiering lifecycle policy. Copy all files to the S3 bucket. Update the application to use the Amazon S3 API to store and retrieve files.

  • Amazon S3 Intelligent-Tiering: This option leverages S3 Intelligent-Tiering, which automatically moves objects between two access tiers: frequent access and infrequent access, based on their access patterns. This can help optimize storage costs by moving less frequently accessed data to the infrequent access tier.
  • Application Update: The application needs to be updated to use the Amazon S3 API for storing and retrieving files instead of Amazon EFS. This requires development effort to modify the application’s file storage logic.
  • Cost-Effectiveness: This option can be cost-effective as it leverages S3 Intelligent-Tiering, which automatically adjusts storage costs based on access patterns. However, it requires effort to migrate data from Amazon EFS to S3 and update the application.

Among the options, Option A (using Amazon S3 Intelligent-Tiering) appears to be the most cost-effective solution as it leverages S3 Intelligent-Tiering’s automatic tiering based on access patterns, potentially reducing storage costs without significant application changes. However, the specific choice depends on factors such as the application’s compatibility with S3 APIs and the effort involved in migrating data and updating the application logic.
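
As a small illustration of option A, the sketch below writes a file directly into the S3 Intelligent-Tiering storage class; the bucket, key, and local path are placeholders.

import boto3

s3 = boto3.client("s3")

# Writing directly into Intelligent-Tiering lets S3 move each object between
# access tiers based on how often the indexer actually reads it.
with open("/data/files/document-001.pdf", "rb") as f:
    s3.put_object(
        Bucket="company-file-index",
        Key="files/document-001.pdf",
        Body=f,
        StorageClass="INTELLIGENT_TIERING",
    )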

67
Q

838#A robotics company is designing a solution for medical surgery. The robots will use advanced sensors, cameras and AI algorithms to sense their surroundings and complete surgeries. The company needs a public load balancer in the AWS cloud that ensures seamless communication with backend services. The load balancer must be able to route traffic based on query strings to different target groups. Traffic must also be encrypted. What solution will meet these requirements?

A. Use a network load balancer with an attached AWS Certificate Manager (ACM) certificate. Use routing based on query parameters.

B. Use a gateway load balancer. Import a generated certificate into AWS Identity and Access Management (IAM). Attach the certificate to the load balancer. Use HTTP path-based routing.

C. Use an application load balancer with a certificate attached from AWS Certificate Manager (ACM). Use query parameters-based routing.

D. Use a network load balancer. Import a generated certificate into AWS Identity and Access Management (IAM). Attach the certificate to the load balancer. Use routing based on query parameters.

A

C. Use an application load balancer with a certificate attached from AWS Certificate Manager (ACM). Use query parameters-based routing.

● Application Load Balancer (ALB) is designed to route traffic at the application layer (Layer 7) of the OSI model. It supports advanced routing features such as HTTP and HTTPS traffic routing based on various attributes, including HTTP headers, URL paths, and query parameters.
● ACM Certificate can be attached to ALB to ensure that traffic to the load balancer is encrypted.
● ALB supports query parameter-based routing, allowing you to route traffic based on specific parameters within the HTTP request. This aligns with the requirement for routing traffic based on query strings.

Option C (Use an Application Load Balancer with query parameter-based routing and ACM certificate) aligns with the requirements of ensuring seamless communication with backend services and routing traffic based on query strings. ALB’s support for query parameter-based routing makes it suitable for the scenario described, providing flexibility and ease of configuration for routing traffic based on specific criteria.
https://exampleloadbalancer.com/advanced_request_routing_queryparam_overview.html
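
A minimal boto3 sketch of a query-parameter routing rule is shown below; the listener ARN, target group ARN, and the query key/value are placeholders, and the ACM certificate would be attached when the HTTPS listener itself is created.

import boto3

elbv2 = boto3.client("elbv2")

# Routes requests such as https://api.example.com/surgery?procedure=imaging
# to a dedicated target group for that workload.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/robotics-alb/abc/def",
    Priority=20,
    Conditions=[{
        "Field": "query-string",
        "QueryStringConfig": {"Values": [{"Key": "procedure", "Value": "imaging"}]},
    }],
    Actions=[{"Type": "forward", "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/imaging-tg/123"}],
)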

68
Q

839#A company has an application that runs on a single Amazon EC2 instance. The application uses a MySQL database running on the same EC2 instance. The business needs a highly available and automatically scalable solution to handle increased traffic. What solution will meet these requirements?

A. Deploy the application to EC2 instances running in an auto-scaling group behind an application load balancer. Create an Amazon Redshift cluster that has multiple MySQL-compatible nodes.

B. Deploy the application to EC2 instances that are configured as a target group behind an application load balancer. Create an Amazon RDS for MySQL cluster that has multiple instances.

C. Deploy the application to EC2 instances that run in an auto-scaling group behind an application load balancer. Create an Amazon Aurora serverless MySQL cluster for the database layer.

D. Deploy the application to EC2 instances that are configured as a target group behind an application load balancer. Create an Amazon ElastiCache for Redis cluster that uses the MySQL connector.

A

C. Deploy the application to EC2 instances that run in an auto-scaling group behind an application load balancer. Create an Amazon Aurora serverless MySQL cluster for the database layer.

● High Availability: Amazon Aurora automatically replicates data across multiple Availability Zones, providing built-in high availability. This ensures that the database remains accessible even in the event of an AZ failure.
● Scalability: Amazon Aurora Serverless automatically adjusts database capacity based on application demand, scaling compute and storage capacity up or down. This provides seamless scalability without the need for manual intervention.

69
Q

840#A company is planning to migrate data to an Amazon S3 bucket. Data must be encrypted at rest within the S3 bucket. The encryption key should be automatically rotated every year. Which solution will meet these requirements with the LEAST operational overhead?

A. Migrate data to the S3 bucket. Use server-side encryption with Amazon S3 Managed Keys (SSE-S3). Use the built-in key rotation behavior of SSE-S3 encryption keys.

B. Create an AWS Key Management Service (AWS KMS) customer-managed key. Enable automatic key rotation. Set the default S3 bucket encryption behavior to use the customer-managed KMS key. Migrate data to S3 bucket.

C. Create an AWS Key Management Service (AWS KMS) customer-managed key. Set the default S3 bucket encryption behavior to use the customer-managed KMS key. Migrate data to the S3 bucket. Manually rotate the KMS key every year.

D. Use customer-provided key material to encrypt the data. Migrate the data to the S3 bucket. Create an AWS Key Management Service (AWS KMS) key without key material. Import the customer key material into the KMS key. Enable automatic key rotation.

A

A. Migrate data to the S3 bucket. Use server-side encryption with Amazon S3 Managed Keys (SSE-S3). Use the built-in key rotation behavior of SSE-S3 encryption keys.

● Encryption at Rest: Server-side encryption with S3 managed keys (SSE-S3) automatically encrypts objects at rest in S3 using strong encryption.
● Automatic Key Rotation: SSE-S3 keys are managed by AWS, and key rotation is handled automatically without any additional configuration or operational overhead.
● Operational Overhead: This option has the least operational overhead as key rotation is automatically managed by AWS.

Based on the evaluation, Option A (SSE-S3 with automatic key rotation) meets the requirements with the least operational overhead, as it leverages AWS-managed keys with automatic key rotation, eliminating the need for manual key management tasks.
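
For illustration, the boto3 call below sets SSE-S3 as the bucket's default encryption; the bucket name is a placeholder.

import boto3

s3 = boto3.client("s3")

# Default bucket encryption with S3 managed keys (SSE-S3); key rotation is handled by AWS
s3.put_bucket_encryption(
    Bucket="migrated-financial-data",  # hypothetical bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)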

70
Q

841#A company is migrating applications from an on-premises Microsoft Active Directory that the company manages to AWS. The company deploys the applications to multiple AWS accounts. The company uses AWS organizations to centrally manage accounts. The company’s security team needs a single sign-on solution across all of the company’s AWS accounts. The company must continue to manage the users and groups that are in the on-premises Active Directory. What solution will meet these requirements?

A. Create an Active Directory Enterprise Edition in AWS Directory Service for Microsoft Active Directory. Configure Active Directory to be the identity source for AWS IAM Identity Center.

B. Enable AWS IAM Identity Center. Set up a two-way forest trust relationship to connect the company’s self-managed Active Directory with IAM Identity Center by using AWS Directory Service for Microsoft Active Directory.

C. Use the AWS directory service and create a two-way trust relationship with the company’s self-managed Active Directory.

D. Implement an identity provider (IdP) on Amazon EC2. Link the IdP as an identity source within the AWS IAM Identity Center.

A

B. Enable AWS IAM Identity Center. Set up a two-way forest trust relationship to connect the company’s self-managed Active Directory with IAM Identity Center by using AWS Directory Service for Microsoft Active Directory.

● This option involves establishing a trust relationship between the company’s on-premises Active Directory and AWS IAM Identity Center using AWS Directory Service for Microsoft AD.
● It allows for single sign-on across AWS accounts by leveraging the existing on-premises Active Directory for user authentication.
● With a two-way trust relationship, users and groups managed in the on-premises Active Directory can be used to access AWS resources without needing to duplicate user management efforts.

Based on the requirements and evaluation, Option B (establishing a two-way trust relationship between the company’s on-premises Active Directory and AWS IAM Identity Center) appears to be the most suitable solution. It allows for single sign-on across AWS accounts while leveraging the existing user management capabilities of the on-premises Active Directory.

Other Options:
A. Create an Enterprise Edition Active Directory in AWS Directory Service for Microsoft Active Directory. Configure the Active Directory to be the identity source for AWS IAM Identity Center.
● This option involves deploying an AWS Managed Microsoft AD in AWS Directory Service and configuring it as the identity source for IAM Identity Center.
● While this setup can provide integration between AWS IAM and the AWS Managed Microsoft AD, it does not directly integrate with the company’s on-premises Active Directory.
● Users and groups from the on-premises Active Directory would need to be synchronized or manually managed in the AWS Managed Microsoft AD, which could add complexity and overhead.

C. Use AWS Directory Service and create a two-way trust relationship with the company’s self-managed Active Directory.
● Similar to option B, this option involves establishing a trust relationship between the company’s on-premises Active Directory and AWS Directory Service.
● However, AWS Directory Service does not inherently provide IAM Identity Center capabilities. Additional configuration would be needed to integrate with IAM for single sign-on across AWS accounts.
D. Deploy an identity provider (IdP) on Amazon EC2. Link the IdP as an identity source within AWS IAM Identity Center.
● This option involves deploying and managing an identity provider (IdP) on Amazon EC2, which adds operational overhead.
● It also requires manual configuration to link the IdP as an identity source within AWS IAM Identity Center.
● While it’s technically feasible, it may not be the most efficient or scalable solution compared to utilizing AWS Directory Service or IAM Identity Center directly.

71
Q

842#A company is planning to deploy its application to an Amazon Aurora PostgreSQL Serverless v2 cluster. The application will receive large amounts of traffic. The company wants to optimize the cluster’s storage performance as the load on the application increases. Which solution will meet these requirements in the MOST cost-effective way?

A. Configure the cluster to use the standard Aurora storage configuration.

B. Set the cluster storage type to Provisioned IOPS.

C. Configure the cluster storage type to General Purpose.

D. Configure the cluster to use the Aurora I/O optimized storage configuration.

A

C. Configure the cluster storage type to General Purpose.

Based on the requirements for optimizing storage performance while maintaining cost-effectiveness, Option C (configuring the cluster storage type as General Purpose) seems to be the most suitable choice. It offers a good balance between performance and cost, making it well-suited for handling varying levels of traffic without incurring excessive expenses.

Configure the cluster storage type as General Purpose.
● General Purpose storage provides a baseline level of performance with the ability to burst to higher levels when needed.
● It offers a good balance of performance and cost, making it suitable for workloads with varying levels of activity.
● This option can provide adequate performance for the application while optimizing costs, especially if the workload experiences periodic spikes in traffic.

Other Options:
A. Configure the cluster to use the Aurora Standard storage configuration.
● The Aurora Standard storage configuration provides a balance of performance and cost.
● It dynamically adjusts storage capacity based on the workload’s needs.
● However, it may not provide the highest level of performance during peak traffic periods, as it prioritizes cost-effectiveness over performance.

B. Configure the cluster storage type as Provisioned IOPS.
● Provisioned IOPS (input/output operations per second) allows you to specify a consistent level of I/O performance by provisioning a specific amount of IOPS.
● While this option ensures predictable performance, it may not be the most cost-effective solution, especially if the application workload varies significantly over time.
● Provisioned IOPS typically incurs higher costs compared to other storage types.

D. Configure the cluster to use the Aurora I/O-Optimized storage configuration.
● Aurora I/O-Optimized storage is designed to deliver high levels of I/O performance for demanding workloads.
● It is optimized for applications with high throughput and low latency requirements.
● While this option may provide the best performance, it may also come with higher costs compared to other storage configurations.

72
Q

843#A financial services company running on AWS has designed its security controls to meet industry standards. Industry standards include the National Institute of Standards and Technology (NIST) and the Payment Card Industry Data Security Standard (PCI DSS). The company’s external auditors need evidence that the designed controls have been implemented and are working correctly. The company has hundreds of AWS accounts in a single organization in AWS Organizations. The company needs to monitor the current status of controls across all accounts. What solution will meet these requirements?

A. Designate an account as the Amazon Inspector delegated administrator account for your organization’s management account. Integrate Inspector with organizations to discover and scan resources across all AWS accounts. Enable inspector industry standards for NIST and PCI DSS.

B. Designate an account as the Amazon GuardDuty delegated administrator account from the organization management account. In the designated GuardDuty administrator account, enable GuardDuty to protect all member accounts. Enable GuardDuty industry standards for NIST and PCI DSS.

C. Configure an AWS CloudTrail organization trail in the organization management account. Designate one account as the compliance account. Enable CloudTrail security standards for NIST and PCI DSS in the compliance account.

D. Designate one account as the AWS Security Hub delegated administrator account from the organization management account. In the designated Security Hub administrator account, enable Security Hub for all member accounts. Enable Security Hub standards for NIST and PCI DSS.

A

D. Designate one account as the AWS Security Hub delegated administrator account from the organization management account. In the designated Security Hub administrator account, enable Security Hub for all member accounts. Enable Security Hub standards for NIST and PCI DSS.

D. AWS Security Hub (Correct):
● Explanation: This option designates one AWS account as the AWS Security Hub delegated administrator account and enables Security Hub for all member accounts within the organization. NIST and PCI DSS standards are enabled for compliance checks.
● Why it’s correct: AWS Security Hub is specifically designed for centralized security monitoring and compliance checks across AWS environments. It aggregates findings from various security services and third-party tools, providing a comprehensive view of security alerts and compliance status. By enabling Security Hub standards for NIST and PCI DSS, the company can ensure continuous evaluation of its security posture against these industry standards across all AWS accounts within the organization.
In summary, option D (AWS Security Hub) is the correct choice for meeting the company’s requirements of monitoring security controls across multiple AWS accounts while ensuring compliance with industry standards like NIST and PCI DSS. It offers centralized monitoring, comprehensive security insights, and continuous compliance checks, making it the most suitable solution for the scenario provided.

Other Options:
A. Amazon Inspector (Incorrect):
● Explanation: This option involves designating one AWS account as the Amazon Inspector delegated administrator account and integrating it with AWS Organizations. Inspector would be used to discover and scan resources across all AWS accounts, with NIST and PCI DSS industry standards enabled for security assessments.
● Why it’s incorrect: While Amazon Inspector can perform security assessments, it’s primarily focused on host and application-level vulnerabilities. While it can provide valuable insights into specific vulnerabilities, it may not offer the comprehensive monitoring and compliance checks required across the entire AWS environment. Additionally, Inspector is not optimized for continuous monitoring and compliance reporting across multiple accounts.
B. Amazon GuardDuty (Incorrect):
● Explanation: This option designates one AWS account as the GuardDuty delegated administrator account, enabling GuardDuty to protect all member accounts within the organization. NIST and PCI DSS industry standards are enabled for threat detection.
● Why it’s incorrect: Amazon GuardDuty is a threat detection service rather than a comprehensive compliance monitoring tool. While it can detect suspicious activity and potential threats, it may not provide the extensive compliance checks needed to ensure adherence to industry standards like NIST and PCI DSS. GuardDuty is more focused on detecting malicious activity rather than ensuring compliance with specific security standards.
C. AWS CloudTrail (Incorrect):
● Explanation: This option involves configuring an AWS CloudTrail organization trail in the Organizations management account and designating one account as the compliance account. CloudTrail security standards for NIST and PCI DSS are enabled in the compliance account.
● Why it’s incorrect: While AWS CloudTrail is essential for auditing and logging AWS API activity, it’s primarily focused on providing an audit trail rather than actively monitoring and ensuring compliance. CloudTrail can capture API activity and changes made to AWS resources, but it may not offer the comprehensive compliance checks and real-time monitoring capabilities needed to ensure adherence to industry standards across multiple accounts.
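
A hedged boto3 sketch of option D follows. The account ID is a placeholder, the two steps would run with different credentials (management account first, then the delegated administrator account), and the standards ARNs are illustrative for us-east-1; describe_standards() lists the exact ARNs.

import boto3

# Run from the Organizations management account: delegate Security Hub administration
boto3.client("securityhub").enable_organization_admin_account(AdminAccountId="222233334444")

# Run from the delegated administrator account: auto-enable member accounts and standards
hub = boto3.client("securityhub")
hub.update_organization_configuration(AutoEnable=True)

# Standards ARNs below are illustrative; confirm them with hub.describe_standards()
hub.batch_enable_standards(StandardsSubscriptionRequests=[
    {"StandardsArn": "arn:aws:securityhub:us-east-1::standards/pci-dss/v/3.2.1"},
    {"StandardsArn": "arn:aws:securityhub:us-east-1::standards/nist-800-53/v/5.0.0"},
])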

73
Q

844#A company uses an Amazon S3 bucket as its data lake storage platform. The S3 bucket contains a large amount of data that is accessed randomly by multiple computers and hundreds of applications. The company wants to reduce S3 storage costs and provide immediate availability for frequently accessed objects. What is the most operationally efficient solution that meets these requirements?

A. Create an S3 lifecycle rule to transition objects to the S3 intelligent tiering storage class.

B. Store objects in Amazon S3 Glacier. Use S3 Select to provide applications with access to data.

C. Use the S3 storage class analysis data to create S3 lifecycle rules to automatically transition objects to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class.

D. Transition objects to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create an AWS Lambda function to transition objects to the S3 Standard storage class when an application accesses them.

A

A. Create an S3 lifecycle rule to transition objects to the S3 intelligent tiering storage class.

● S3 Intelligent-Tiering automatically moves objects between two access tiers: frequent access and infrequent access.
● Objects that are frequently accessed remain in the frequent access tier, providing immediate availability.
● Objects that are infrequently accessed are moved to the infrequent access tier, reducing storage costs.
● This option provides a balance between cost optimization and immediate availability for frequently accessed objects without requiring manual management of storage classes.

Based on the requirements for reducing storage costs and providing immediate availability for frequently accessed objects in a data lake scenario, Option A (Create an S3 Lifecycle rule to transition objects to the S3 Intelligent-Tiering storage class) appears to be the most operationally efficient solution. It automatically manages the storage tiers based on access patterns, optimizing costs while ensuring immediate availability for frequently accessed data without the need for manual intervention or complex setups.

Other Options:
B. Store objects in Amazon S3 Glacier. Use S3 Select to provide applications with access to the data.
● Storing objects in Amazon S3 Glacier offers the lowest storage costs among S3 storage classes.
● However, retrieving data from Glacier can have retrieval latency, which may not meet the requirement for immediate availability for frequently accessed objects.
● Using S3 Select allows applications to retrieve specific data from objects stored in Glacier without needing to restore the entire object.
● While this option offers cost savings, it may not provide the required immediate availability for frequently accessed objects.
C. Use data from S3 storage class analysis to create S3 Lifecycle rules to automatically transition objects to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class.
● S3 storage class analysis provides insights into the access patterns of objects, helping identify objects that are candidates for transition to the infrequent access storage class.
● Automatically transitioning objects to S3 Standard-IA reduces storage costs for infrequently accessed data while maintaining availability.
● This option efficiently optimizes storage costs based on access patterns without manual intervention.
D. Transition objects to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create an AWS Lambda function to transition objects to the S3 Standard storage class when they are accessed by an application.
● Transitioning objects to S3 Standard-IA reduces storage costs for infrequently accessed data.
● Using an AWS Lambda function to transition objects back to the S3 Standard storage class when accessed by an application ensures immediate availability for frequently accessed objects.
● However, this approach adds complexity with the need to manage and maintain the Lambda function, and it may not be as operationally efficient as other options.
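
A minimal boto3 sketch of option A is shown below; the bucket name and rule ID are placeholders.

import boto3

s3 = boto3.client("s3")

# Lifecycle rule that moves existing data-lake objects into Intelligent-Tiering
s3.put_bucket_lifecycle_configuration(
    Bucket="company-data-lake",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "move-to-intelligent-tiering",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object
            "Transitions": [{"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}],
        }]
    },
)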

74
Q

845#A company has 5 TB of data sets. The data sets consist of 1 million user profiles and 10 million connections. User profiles have many-to-many relationship connections. The company needs an efficient way to find mutual connections of up to five levels. What solution will meet these requirements?

A. Use an Amazon S3 bucket to store the data sets. Use Amazon Athena to perform SQL JOIN queries and find connections.

B. Use Amazon Neptune to store the data sets with edges and vertices. Query the data to find connections.

C. Use an Amazon S3 bucket to store the data sets. Use Amazon QuickSight to view connections.

D. Use Amazon RDS to store your data sets with multiple tables. Perform SQL JOIN queries to find connections.

A

B. Use Amazon Neptune to store the data sets with edges and vertices. Query the data to find connections.

● Amazon Neptune is a fully managed graph database service that is optimized for storing and querying graph data.
● Graph databases like Neptune are specifically designed to handle complex relationships such as many-to-many relationships and multi-level connections efficiently.
● With Neptune, you can use graph traversal algorithms to find mutual connections up to five levels with high performance.
● This solution is well-suited for the requirements of efficiently querying complex relationship data.

Based on the requirements for efficiently finding mutual connections up to five levels in a dataset with many-to-many relationships, Option B (Use Amazon Neptune to store the datasets with edges and vertices) is the most suitable solution. Neptune is specifically designed for handling graph data and complex relationship queries efficiently, making it well-suited for this scenario.

Other Options:
A. Use an Amazon S3 bucket to store the datasets. Use Amazon Athena to perform SQL JOIN queries to find connections.
● Amazon Athena allows you to run SQL queries directly on data stored in Amazon S3, making it suitable for querying large datasets.
● However, performing complex JOIN operations on large datasets might not be the most efficient approach, especially when dealing with many-to-many relationships and multiple levels of connections.
● While Amazon Athena is capable of handling SQL JOINs, the performance may not be optimal for complex relationship queries involving multiple levels.

C. Use an Amazon S3 bucket to store the datasets. Use Amazon QuickSight to visualize connections.
● Amazon QuickSight is a business intelligence (BI) service that allows you to visualize and analyze data.
● While QuickSight can visualize connections, it is not designed for performing complex relationship queries like finding mutual connections up to five levels.
● This solution may provide visualization capabilities but lacks the necessary querying capabilities for the given requirements.
D. Use Amazon RDS to store the datasets with multiple tables. Perform SQL JOIN queries to find connections.
● Amazon RDS is a managed relational database service that supports SQL JOIN queries.
● Similar to Option A, while RDS supports JOIN operations, it may not be the most efficient approach for querying complex relationship data with many-to-many relationships and multiple levels of connections.
● RDS might struggle with performance when dealing with large datasets and complex queries.
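
For illustration, a rough Gremlin sketch (using the gremlinpython client; method casing may differ between client versions) of a mutual-connection query is shown below. The Neptune endpoint, vertex IDs, and the 'connected' edge label are placeholders, not part of the question.

from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.graph_traversal import __

# Hypothetical Neptune endpoint; vertices are user profiles, edges use a placeholder label
conn = DriverRemoteConnection(
    "wss://my-neptune.cluster-abc123.us-east-1.neptune.amazonaws.com:8182/gremlin", "g"
)
g = traversal().withRemote(conn)

# Profiles reachable from user-1 within five hops that are also directly connected to user-2
mutual = (
    g.V("user-1")
    .repeat(__.both("connected")).times(5).emit()
    .where(__.both("connected").hasId("user-2"))
    .dedup()
    .toList()
)
print(mutual)
conn.close()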

75
Q

846#A company needs a secure connection between its on-premises environment and AWS. This connection does not need a lot of bandwidth and will handle a small amount of traffic. The connection should be set up quickly. What is the MOST cost effective method to establish this type of connection?

A. Deploy a client VPN.

B. Implement AWS Direct Connect.

C. Deploy a bastion host on Amazon EC2.

D. Implement an AWS site-to-site VPN connection.

A

D. Implement an AWS site-to-site VPN connection.

● A Site-to-Site VPN runs over the existing internet connection, can be set up quickly, and provides an encrypted tunnel between the on-premises network and the VPC at a low hourly cost, which fits a low-bandwidth, low-traffic requirement.
● AWS Direct Connect provides dedicated bandwidth but takes weeks to provision and is considerably more expensive, which is unnecessary here.
● A Client VPN is intended for individual end-user devices rather than connecting an entire on-premises network, and a bastion host on Amazon EC2 is a jump server, not a site-to-site connectivity solution.

76
Q

847#A company has a local SFTP file transfer solution. The company is migrating to the AWS cloud to scale the file transfer solution and optimize costs by using Amazon S3. Company employees will use their on-premises Microsoft Active Directory (AD) credentials to access the new solution. The company wants to maintain the current authentication and file access mechanisms. Which solution will meet these requirements with the LEAST operational overhead?

A. Set up an S3 file gateway. Create SMB file shares on the file gateway that use the existing Active Directory to authenticate.

B. Configure an auto-scaling group with Amazon EC2 instances to run an SFTP solution. Configure the group to scale at 60% CPU utilization.

C. Create an AWS Transfer Family server with SFTP endpoints. Choose the AWS Directory Service option as the identity provider. Use AD Connector to connect the on-premises Active Directory.

D. Create an AWS Transfer Family SFTP endpoint. Configure the endpoint to use the AWS Directory Service option as an identity provider to connect to the existing Active Directory.

A

C. Create an AWS Transfer Family server with SFTP endpoints. Choose the AWS Directory Service option as the identity provider. Use AD Connector to connect the on-premises Active Directory.

● This option utilizes AWS Transfer Family, which simplifies the setup of SFTP endpoints and supports integration with AWS Directory Service for Microsoft Active Directory.
● By using AD Connector, it enables seamless authentication against the on-premises Active Directory without the need to change user credentials.
● This approach minimizes operational overhead and provides a straightforward solution for migrating the file transfer solution to AWS.

Based on the requirement to seamlessly integrate with the existing on-premises Active Directory without changing user credentials and minimizing operational overhead, Option C is the most suitable choice. It leverages AWS Transfer Family with AD Connector to achieve this integration efficiently.
https://docs.aws.amazon.com/transfer/latest/userguide/directory-services-users.html#dir-services-ms-ad
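
A hedged boto3 sketch of option C follows; the directory ID, AD group SID, role ARN, and bucket path are placeholders.

import boto3

transfer = boto3.client("transfer")

# SFTP server that authenticates users against the AD Connector directory
server = transfer.create_server(
    Protocols=["SFTP"],
    Domain="S3",
    EndpointType="PUBLIC",
    IdentityProviderType="AWS_DIRECTORY_SERVICE",
    IdentityProviderDetails={"DirectoryId": "d-1234567890"},
)

# Grant an AD group (identified by its SID) access to a home directory in the bucket
transfer.create_access(
    ServerId=server["ServerId"],
    ExternalId="S-1-5-21-1111111111-2222222222-3333333333-1234",
    Role="arn:aws:iam::111122223333:role/sftp-s3-access-role",
    HomeDirectory="/company-sftp-bucket/finance",
)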

Other Options:
A. Configure an S3 File Gateway. Create SMB file shares on the file gateway that use the existing Active Directory to authenticate.
● This option involves using AWS Storage Gateway to create an S3 File Gateway, which enables accessing objects in S3 via SMB shares. Users can authenticate using their existing Active Directory credentials.
● The operational overhead is relatively low as it leverages existing Active Directory credentials for authentication.
● However, this option doesn’t directly support SFTP file transfers, which may be a requirement in some scenarios.
B. Configure an Auto Scaling group with Amazon EC2 instances to run an SFTP solution. Configure the group to scale up at 60% CPU utilization.
● This option involves setting up and managing EC2 instances to host an SFTP solution.
● It offers flexibility and control over the SFTP environment, but it comes with higher operational overhead, including managing server instances, scaling, and maintenance.
● Additionally, it may require additional configurations for integrating with Active Directory for user authentication.

D. Create an AWS Transfer Family SFTP endpoint. Configure the endpoint to use the AWS Directory Service option as the identity provider to connect to the existing Active Directory.
● Similar to Option C, this option involves using AWS Transfer Family for setting up SFTP endpoints and integrating with AWS Directory Service.
● However, it connects to an AWS-managed Active Directory (AWS Managed Microsoft AD) rather than the on-premises Active Directory, which might not align with the requirement to use existing on-premises credentials.

77
Q

848#A company is designing an event-based order processing system. Each order requires several validation steps after the order is created. An idempotent AWS Lambda function performs each validation step. Each validation step is independent of the other validation steps. The individual validation steps only need a subset of the order event information. The company wants to ensure that each validation step of the Lambda function has access to only the order event information that the function requires. The components of the order processing system must be loosely coupled to adapt to future business changes. What solution will meet these requirements?

A. Create an Amazon Simple Queue Service (Amazon SQS) queue for each validation step. Create a new Lambda function to transform the order data into the format required by each validation step and to post the messages to the appropriate SQS queues. Subscribe each validation step Lambda function to its corresponding SQS queue.

B. Create an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the validation step Lambda functions to the SNS topic. Use message body filtering to send only the necessary data to each subscribed Lambda function.

C. Create an Amazon EventBridge event bus. Create an event rule for each validation step. Configure the input transformer to send only the required data to each target validation step Lambda function.

D. Create an Amazon Simple Queue Service (Amazon SQS) queue. Create a new Lambda function to subscribe to the SQS queue and transform the order data into the format required by each validation step. Use the new Lambda function to perform synchronous invocations of the validation step of Lambda functions in parallel on separate threads.

A

C. Create an Amazon EventBridge event bus. Create an event rule for each validation step. Configure the input transformer to send only the required data to each target validation step Lambda function.

C. Amazon EventBridge Event Bus Approach:
● EventBridge allows creating event rules for each validation step and configuring input transformers to send only the required data to each target Lambda function.
● This approach provides loose coupling and enables fine-grained control over the event data sent to each validation step Lambda function.
● EventBridge’s input transformation capabilities make it suitable for this scenario, as it allows for extracting subsets of event data efficiently.

Evaluation:
Given the requirement for loosely coupled components and the need to provide each validation step Lambda function with only the necessary data, Option C, utilizing Amazon EventBridge Event Bus with input transformation, appears to be the most suitable solution. It offers the flexibility to tailor event data for each validation step efficiently while maintaining loose coupling between components.

Other Options:
A. Amazon Simple Queue Service (Amazon SQS) Approach:
● In this approach, each validation step has its own SQS queue, and a Lambda function transforms the order data and publishes messages to the appropriate SQS queues.
● This solution offers loose coupling between validation steps and allows each Lambda function to receive only the information it needs.
● However, setting up and managing multiple SQS queues and orchestrating the transformation and message publishing logic could introduce complexity.

B. Amazon Simple Notification Service (Amazon SNS) Approach:
● This approach involves using an SNS topic to which all validation step Lambda functions subscribe.
● SNS message body filtering is utilized to send only the required data to each subscribed Lambda function.
● While this approach offers loose coupling and message filtering capabilities, SNS filtering is limited compared to EventBridge’s transformation capabilities.

D. Amazon Simple Queue Service (Amazon SQS) Queue with Lambda Approach:
● This approach involves using a single SQS queue and a Lambda function to transform the order data and invoke validation step Lambda functions synchronously.
● While this solution may simplify the architecture by using a single queue, the synchronous invocation of Lambda functions may not be the most efficient approach, especially if the validation steps can be executed concurrently.
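
As a rough illustration of option C, the boto3 sketch below creates a rule on a custom event bus and attaches a Lambda target with an input transformer; the bus name, event pattern, Lambda ARN, and JSON paths are placeholders.

import json
import boto3

events = boto3.client("events")

# Rule on a custom bus that matches new-order events
events.put_rule(
    Name="order-created-address-check",
    EventBusName="order-processing",
    EventPattern=json.dumps({"source": ["orders.api"], "detail-type": ["OrderCreated"]}),
)

# The input transformer forwards only the fields this validation step needs
events.put_targets(
    Rule="order-created-address-check",
    EventBusName="order-processing",
    Targets=[{
        "Id": "address-validation",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:validate-address",
        "InputTransformer": {
            "InputPathsMap": {"orderId": "$.detail.orderId", "address": "$.detail.shippingAddress"},
            "InputTemplate": '{"orderId": <orderId>, "address": <address>}',
        },
    }],
)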

78
Q

849#A company is migrating a three-tier application to AWS. The application requires a MySQL database. In the past, app users have reported poor app performance when creating new entries. These performance issues were caused by users generating different real-time reports from the application during work hours. Which solution will improve application performance when moved to AWS?

A. Import the data into a provisioned Amazon DynamoDB table. Refactor the application to use DynamoDB for reporting.

B. Create the database on a compute-optimized Amazon EC2 instance. Ensure that compute resources exceed those of the on-premises database.

C. Create an Amazon Aurora MySQL Multi-AZ DB cluster with multiple read replicas. Configure the application to use the reader endpoint for reports.

D. Create an Amazon Aurora MySQL Multi-AZ DB cluster. Configure the application to use the cluster backup instance as the endpoint for reporting.

A

C. Create an Amazon Aurora MySQL Multi-AZ DB cluster with multiple read replicas. Configure the application to use the reader endpoint for reports.

● Amazon Aurora is a high-performance relational database engine that is fully compatible with MySQL.
● Multi-AZ deployment ensures high availability, and read replicas can offload read queries, improving overall performance.
● Using the reader endpoint for reports allows read traffic to be distributed among read replicas, reducing the load on the primary instance during report generation.
● This solution provides scalability, high availability, and improved performance for read-heavy workloads without significant application changes.

Evaluation:
Option C, creating an Amazon Aurora MySQL Multi-AZ DB cluster with multiple read replicas and configuring the application to use the reader endpoint for reports, is the most suitable solution. It offers scalability, high availability, and improved performance for read-heavy workloads without requiring significant application changes. Additionally, Aurora’s performance and reliability make it a strong candidate for supporting the application’s database needs.
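A minimal boto3 sketch of this layout, using hypothetical identifiers and a placeholder password: it creates an Aurora MySQL cluster with two instances and prints the cluster (writer) endpoint the application uses for normal traffic and the reader endpoint the reporting queries would use.

import boto3

rds = boto3.client("rds")

# Create the Aurora MySQL cluster (identifiers and credentials are placeholders).
rds.create_db_cluster(
    DBClusterIdentifier="app-aurora-cluster",
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
)

# One instance becomes the writer; the others serve as read replicas.
for i in range(2):
    rds.create_db_instance(
        DBInstanceIdentifier=f"app-aurora-instance-{i}",
        DBClusterIdentifier="app-aurora-cluster",
        DBInstanceClass="db.r6g.large",
        Engine="aurora-mysql",
    )

cluster = rds.describe_db_clusters(
    DBClusterIdentifier="app-aurora-cluster"
)["DBClusters"][0]

# The application keeps writing to the cluster endpoint and points its
# reporting queries at the reader endpoint, which load-balances across replicas.
print("writer endpoint:", cluster["Endpoint"])
print("reader endpoint:", cluster["ReaderEndpoint"])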

Other Options:
A. Import data into Amazon DynamoDB:
● DynamoDB is a fully managed NoSQL database service that can provide high performance and scalability.
● By importing data into DynamoDB and refactoring the application to use DynamoDB for reports, the application can benefit from DynamoDB’s scalability and low-latency read operations.
● However, DynamoDB may require significant application refactoring, especially if the application relies heavily on SQL queries that are not easily translated to DynamoDB’s query model.
B. Create the database on a compute-optimized Amazon EC2 instance:
● While running MySQL on a compute-optimized EC2 instance might provide better performance compared to on-premises hardware, it may not fully address the scalability and performance issues during peak usage periods.
● Scaling resources vertically (by increasing the instance size) may have limits and could become costly.
D. Create Amazon Aurora MySQL Multi-AZ DB cluster with backup instance for reports:
● Configuring the application to use the backup instance for reports may not be the most efficient approach.
● While the backup instance can serve read traffic, it may not be optimized for performance, especially during peak usage periods when the primary instance is under load.

79
Q

850#A company is extending a secure on-premises network to the AWS cloud by using an AWS Direct Connect connection. The local network does not have direct access to the Internet. An application running on the local network needs to use an Amazon S3 bucket. Which solution will meet these requirements in the MOST cost-effective way?

A. Create a public virtual interface (VIF). Route AWS traffic over the public VIF.

B. Create a VPC and a NAT gateway. Route AWS traffic from the on-premises network to the NAT gateway.

C. Create a VPC and an Amazon S3 interface endpoint. Route AWS traffic from the on-premises network to the S3 interface endpoint.

D. Create a VPC peering connection between the on-premises network and Direct Connect. Route AWS traffic through the peering connection.

A

C. Create a VPC and an Amazon S3 interface endpoint. Route AWS traffic from the on-premises network to the S3 interface endpoint.
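An interface endpoint for Amazon S3 places elastic network interfaces with private IP addresses in the VPC subnets, and those private addresses are reachable from the on-premises network over the Direct Connect private virtual interface, so S3 requests never need an internet path. A minimal boto3 sketch of creating the endpoint, with hypothetical Region, VPC, subnet, and security group IDs:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Interface endpoint (AWS PrivateLink) for S3; all IDs are placeholders.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    SubnetIds=["subnet-0123456789abcdef0", "subnet-0123456789abcdef1"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)

On-premises DNS or application configuration would then resolve S3 requests to the endpoint-specific DNS names so traffic flows over Direct Connect to the endpoint’s private IP addresses.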

80
Q

851#A company serves its website using an auto-scaling group of Amazon EC2 instances in a single AWS Region. The website does not require a database. The company is expanding, and the company’s engineering team deploys the website in a second region. The company wants to spread traffic across both regions to accommodate growth and for disaster recovery purposes. The solution should not serve traffic from a region where the website is unhealthy. What policy or resource should the company use to meet these requirements?

A. An Amazon Route 53 simple routing policy

B. An Amazon Route 53 multivalue answer routing policy

C. An application load balancer in a region with a target group that specifies the EC2 instance IDs of both regions

D. An application load balancer in a region with a target group that specifies the IP addresses of the EC2 instances in both regions

A

B. An Amazon Route 53 multivalue answer routing policy

B. Amazon Route 53 multivalue answer routing policy:
● The multivalue answer routing policy lets you create multiple records for the same DNS name, one per endpoint. Route 53 responds to DNS queries with up to eight healthy records selected at random.
● This policy supports routing traffic to multiple endpoints, which can be the website endpoints in different Regions, so traffic is spread across both Regions.
● Each record can be associated with a Route 53 health check. Route 53 returns a record only while its health check is passing, so DNS responses stop including a Region where the website is unhealthy.

The other options do not meet both requirements. A simple routing policy (A) returns a single record set and does not support health checks across multiple records, so it can neither spread traffic nor exclude an unhealthy Region. An Application Load Balancer (C and D) is a Regional resource: its target groups cannot reference instance IDs in another Region, and even with IP targets the load balancer itself remains a single Regional entry point, which does not provide Region-level distribution or disaster recovery.
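A minimal boto3 sketch of such records, assuming a hypothetical hosted zone, two placeholder endpoint IP addresses, and health checks that were created beforehand:

import boto3

route53 = boto3.client("route53")

# Two multivalue answer A records for the same name, one per Region.
# Each references a health check, so Route 53 only returns values whose
# health check is passing. All IDs and IPs are placeholders.
records = [
    ("use1", "198.51.100.10", "11111111-1111-1111-1111-111111111111"),
    ("euw1", "203.0.113.20", "22222222-2222-2222-2222-222222222222"),
]

changes = [
    {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "TTL": 60,
            "SetIdentifier": set_id,
            "MultiValueAnswer": True,
            "HealthCheckId": health_check_id,
            "ResourceRecords": [{"Value": ip}],
        },
    }
    for set_id, ip, health_check_id in records
]

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={"Changes": changes},
)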

81
Q

852#A company runs its applications on Amazon EC2 instances that are backed by the Amazon Elastic Block Store (Amazon EBS). EC2 instances run the latest version of Amazon Linux. Applications are experiencing availability issues when company employees store and retrieve files that are 25 GB or larger. The company needs a solution that does not require the company to transfer files between EC2 instances. The files must be available on many EC2 instances and in multiple availability zones. What solution will meet these requirements?

A. Migrate all files to an Amazon S3 bucket. Instruct employees to access files from the S3 bucket.

B. Take a snapshot of the existing EBS volume. Mount the snapshot as an EBS volume across the EC2 instances. Instruct employees to access files from EC2 instances.

C. Mount an Amazon Elastic File System (Amazon EFS) file system across all EC2 instances. Instruct employees to access files from EC2 instances.

D. Create an Amazon Machine Image (AMI) from the EC2 instances. Configure new EC2 instances from the AMI that use an instance store volume. Instruct employees to access files from EC2 instances.

A

C. Mount an Amazon Elastic File System (Amazon EFS) file system across all EC2 instances. Instruct employees to access files from EC2 instances.

● Amazon EFS allows concurrent access to files from multiple EC2 instances across Availability Zones and offers low-latency access to data.
● While it may incur slightly higher costs compared to Amazon S3, it provides better performance for applications requiring frequent access to large files.

While Amazon S3 offers scalability and durability, frequent access to large files may introduce latency and incur data transfer costs, and it would change how the application accesses files. Mounting an Amazon EFS file system keeps shared file-level access on every instance and in every Availability Zone without copying files between instances. Therefore, Option C is the more suitable solution when both performance and cost are considered.
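A minimal boto3 sketch of that layout, with placeholder subnet and security group IDs: one EFS file system plus a mount target in each Availability Zone, after which every instance mounts the same file system at a common path.

import boto3

efs = boto3.client("efs")

# One shared, regional file system.
fs = efs.create_file_system(
    CreationToken="shared-employee-files",
    PerformanceMode="generalPurpose",
    Tags=[{"Key": "Name", "Value": "shared-employee-files"}],
)
fs_id = fs["FileSystemId"]

# In practice, wait until the file system state is "available" before
# creating mount targets. One mount target per Availability Zone lets
# instances in any AZ mount it; the IDs below are placeholders.
for subnet_id in ["subnet-az1-placeholder", "subnet-az2-placeholder", "subnet-az3-placeholder"]:
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-efs-placeholder"],
    )

# Each EC2 instance then mounts the file system, for example with the
# amazon-efs-utils mount helper: mount -t efs <fs_id>:/ /mnt/shared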

82
Q

853#A company is running a highly sensitive application on Amazon EC2 backed by an Amazon RDS database. Compliance regulations require that all personally identifiable information (PII) be encrypted at rest. Which solution should a solutions architect recommend to meet this requirement with the LEAST amount of infrastructure changes?

A. Deploy AWS Certificate Manager to generate certificates. Use the certificates to encrypt the database volume.

B. Deploy AWS CloudHSM, generate encryption keys, and use the keys to encrypt the database volumes.

C. Configure SSL encryption using AWS Key Management Service (AWS KMS) keys to encrypt database volumes.

D. Configure Amazon Elastic Block Store (Amazon EBS) encryption and Amazon RDS encryption with AWS Key Management Service (AWS KMS) keys to encrypt instance and database volumes.

A

D. Configure Amazon Elastic Block Store (Amazon EBS) encryption and Amazon RDS encryption with AWS Key Management Service (AWS KMS) keys to encrypt instance and database volumes.

Explanation of Option D:
Amazon EBS encryption: Amazon EBS encryption allows you to encrypt the Amazon EBS volumes attached to your EC2 instances. By enabling EBS encryption, you can ensure that data stored on these volumes, including the operating system and application data, is encrypted at rest. This encryption is transparent to your application and does not require any changes to the application itself. It’s a straightforward configuration change at the volume level.
Amazon RDS encryption: Amazon RDS supports encryption of data at rest using AWS Key Management Service (AWS KMS) keys. By enabling RDS encryption, you can ensure that data stored in your RDS databases, including sensitive PII, is encrypted at rest. This encryption is also transparent to your application and does not require any changes to the database schema or application code. You simply enable encryption for your RDS instance using AWS KMS keys.
Combining Amazon EBS encryption and Amazon RDS encryption with AWS KMS keys provides a comprehensive solution for encrypting both the instance volumes (where the application runs) and the database volumes (where the data resides). This ensures that all sensitive data, including PII, is encrypted at rest, thereby meeting compliance regulations.
While options A, B, and C also involve encryption, they may require more changes to the infrastructure or introduce additional complexities:
● Option A involves using AWS Certificate Manager to generate SSL/TLS certificates for encrypting data in transit, but it does not directly address encrypting data at rest on the volumes.
● Option B involves deploying AWS CloudHSM, which is a hardware security module (HSM) service, and generating encryption keys. This option introduces additional infrastructure and management overhead.
● Option C mentions configuring SSL encryption using AWS KMS keys, but SSL/TLS protects data in transit, not data at rest on the volumes, so it does not address the encryption-at-rest requirement.

Therefore, Option D is the most appropriate choice for meeting the encryption requirements with the least amount of changes to the infrastructure.
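As a rough boto3 sketch of enabling both layers (the KMS key ARN and database identifiers are placeholders; existing unencrypted volumes and DB instances cannot be encrypted in place and would be recreated from encrypted snapshots or copies):

import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

KMS_KEY_ARN = "arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555"

# Encrypt all newly created EBS volumes in this Region by default,
# using the chosen KMS key.
ec2.enable_ebs_encryption_by_default()
ec2.modify_ebs_default_kms_key_id(KmsKeyId=KMS_KEY_ARN)

# Create the RDS instance with storage encryption enabled under the same key.
rds.create_db_instance(
    DBInstanceIdentifier="app-db-encrypted",
    DBInstanceClass="db.m6g.large",
    Engine="mysql",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
    AllocatedStorage=100,
    StorageEncrypted=True,
    KmsKeyId=KMS_KEY_ARN,
)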

83
Q

854#A company runs an AWS Lambda function on private subnets in a VPC. The subnets have a default route to the Internet through an Amazon EC2 NAT instance. The Lambda function processes the input data and saves its output as an object in Amazon S3. Intermittently, the Lambda function times out while attempting to upload the object due to busy network traffic on the NAT instance. The company wants to access Amazon S3 without going through the Internet. What solution will meet these requirements?

A. Replace the EC2 NAT instance with an AWS managed NAT gateway.

B. Increase the size of the NAT EC2 instance in the VPC to a network-optimized instance type.

C. Provision a gateway endpoint for Amazon S3 on the VPC. Update the subnet route tables accordingly.

D. Provision a transit gateway. Place the transit gateway attachments on the private subnets where the Lambda function is running.

A

C. Provision a gateway endpoint for Amazon S3 on the VPC. Update the subnet route tables accordingly.

● Gateway Endpoint for Amazon S3: Amazon VPC endpoints enable private connectivity between your VPC and supported AWS services. By provisioning a gateway endpoint for Amazon S3 in the VPC, the Lambda function can access S3 directly without traffic leaving the AWS network or traversing the internet. This ensures that the Lambda function can upload objects to S3 without encountering timeouts due to network congestion on the NAT instance.
● Private Connectivity: The S3 gateway endpoint provides a private connection to Amazon S3 from within the VPC, eliminating the need for internet gateway or NAT instances for S3 access. Traffic between the Lambda function and S3 stays within the AWS network, enhancing security and reducing latency.
● Route Table Update: After provisioning the S3 gateway endpoint, you need to update the route tables of the private subnets to route S3 traffic through the endpoint. This ensures that traffic intended for S3 is directed to the endpoint, allowing the Lambda function to communicate with S3 securely and efficiently.
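A minimal boto3 sketch of provisioning the gateway endpoint and associating it with the private subnets’ route tables; the Region, VPC ID, and route table IDs are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for S3. Associating route tables adds a managed
# prefix-list route, so S3 traffic from those subnets stays on the
# AWS network instead of going through the NAT instance.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0", "rtb-0123456789abcdef1"],
)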

Other Options:
A. Replace the EC2 NAT instance with an AWS managed NAT gateway:
AWS Managed NAT Gateway: NAT gateways are managed services provided by AWS that allow instances in private subnets to initiate outbound traffic to the internet while preventing inbound traffic from initiating a connection with them. Unlike EC2 NAT instances, NAT gateways are fully managed by AWS, providing higher availability, scalability, and better performance.
Traversing Public Internet: However, NAT gateways still route traffic through the public internet. Therefore, while replacing the EC2 NAT instance with a managed NAT gateway might improve availability and scalability, it does not address the company’s requirement to access S3 without traversing the public internet.
B. Increase the size of the EC2 NAT instance in the VPC to a network optimized instance type:
Larger NAT Instance: Increasing the size of the EC2 NAT instance might alleviate some of the performance issues caused by network saturation. By using a larger instance type, the NAT instance can handle more network traffic, potentially reducing timeouts experienced by the Lambda function.
Limitations Remain: However, even with a larger instance type, the EC2 NAT instance still routes traffic through the public internet. As a result, this solution does not address the company’s requirement to access S3 without traversing the internet.
D. Provision a transit gateway. Place transit gateway attachments in the private subnets where the Lambda function is running:
Transit Gateway: Transit Gateway is a service that simplifies network connectivity between VPCs and on-premises networks. It acts as a hub that connects multiple VPCs and VPN connections, allowing for centralized management of network routing.
Transit Gateway Attachments: While transit gateways can provide centralized routing, they do not inherently address the requirement to access S3 without traversing the public internet. In this scenario, placing transit gateway attachments in the private subnets would still result in traffic passing through the public internet when accessing S3.

In summary, options A, B, and D do not directly address the company’s requirement to access Amazon S3 without traversing the public internet, making option C the most appropriate solution for the given scenario.

84
Q

855#A news company that has reporters all over the world is hosting its broadcast system on AWS. Reporters send live broadcasts to the broadcast system. Reporters use software on their phones to send live streams via Real-Time Messaging Protocol (RTMP). A solutions architect must design a solution that gives reporters the ability to send the highest quality broadcasts. The solution should provide accelerated TCP connections back to the broadcast system. What should the solutions architect use to meet these requirements?

A. Amazon CloudFront

B. AWS Global Accelerator

C. AWS VPN Client

D. Amazon EC2 Instances and AWS Elastic IP Addresses

A

B. AWS Global Accelerator

AWS Global Accelerator is a networking service that improves the availability and performance of applications with global users. It uses the AWS global network to optimize the path from users to applications, improving the performance of TCP and UDP traffic. Global Accelerator can provide accelerated TCP connections back to the broadcast system, making it suitable for real-time streaming scenarios where low-latency and high-performance connections are crucial. This option aligns well with the requirements of the news company.
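A minimal boto3 sketch of such an accelerator, assuming the standard RTMP port (1935) and a placeholder Network Load Balancer ARN in the broadcast system’s Region; the Global Accelerator control-plane API is called in us-west-2:

import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(
    Name="rtmp-ingest",
    IpAddressType="IPV4",
    Enabled=True,
)["Accelerator"]

# TCP listener on the RTMP port; client traffic enters the AWS global
# network at the nearest edge location and rides it to the Region.
listener = ga.create_listener(
    AcceleratorArn=accelerator["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 1935, "ToPort": 1935}],
)["Listener"]

# Endpoint group in the Region hosting the broadcast system.
ga.create_endpoint_group(
    ListenerArn=listener["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[
        {
            "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/broadcast-nlb/0123456789abcdef",
            "Weight": 128,
        }
    ],
)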