SAA L2P 501-600 v24.021 Flashcards

1
Q

QUESTION 600
A company has multiple Windows file servers on premises. The company wants to migrate and
consolidate its files into an Amazon FSx for Windows File Server file system. File permissions
must be preserved to ensure that access rights do not change.
Which solutions will meet these requirements? (Choose two.)
A. Deploy AWS DataSync agents on premises. Schedule DataSync tasks to transfer the data to the
FSx for Windows File Server file system.
B. Copy the shares on each file server into Amazon S3 buckets by using the AWS CLI. Schedule
AWS DataSync tasks to transfer the data to the FSx for Windows File Server file system.
C. Remove the drives from each file server. Ship the drives to AWS for import into Amazon S3.
Schedule AWS DataSync tasks to transfer the data to the FSx for Windows File Server file
system.
D. Order an AWS Snowcone device. Connect the device to the on-premises network. Launch AWS
DataSync agents on the device. Schedule DataSync tasks to transfer the data to the FSx for
Windows File Server file system.
E. Order an AWS Snowball Edge Storage Optimized device. Connect the device to the on-premises
network. Copy data to the device by using the AWS CLI. Ship the device back to AWS for import
into Amazon S3. Schedule AWS DataSync tasks to transfer the data to the FSx for Windows File
Server file system.

A

A. Deploy AWS DataSync agents on premises. Schedule DataSync tasks to transfer the data to the
FSx for Windows File Server file system.
D. Order an AWS Snowcone device. Connect the device to the on-premises network. Launch AWS
DataSync agents on the device. Schedule DataSync tasks to transfer the data to the FSx for
Windows File Server file system.

Explanation:
A - This option involves deploying DataSync agents on your on-premises file servers and using
DataSync to transfer the data directly to the FSx for Windows File Server. DataSync ensures that
file permissions are preserved during the migration process.
D - This option involves using an AWS Snowcone device, a portable data transfer device. You
would connect the Snowcone device to your on-premises network, launch DataSync agents on
the device, and schedule DataSync tasks to transfer the data to FSx for Windows File Server.
DataSync handles the migration process while preserving file permissions.
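
As an illustration only (not part of the original answer), a boto3 sketch of option A might look like the following. The hostnames, credentials, and ARNs are placeholders, and the SecurityDescriptorCopyFlags task option is what tells DataSync to copy the Windows ACLs so permissions are preserved:

import boto3

datasync = boto3.client("datasync")

# Source: the on-premises SMB share, reached through the on-premises DataSync agent
src = datasync.create_location_smb(
    ServerHostname="fileserver01.corp.example.com",   # placeholder hostname
    Subdirectory="/share",
    User="svc-datasync", Password="********", Domain="CORP",
    AgentArns=["arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0"],  # placeholder
)["LocationArn"]

# Destination: the FSx for Windows File Server file system
dst = datasync.create_location_fsx_windows(
    FsxFilesystemArn="arn:aws:fsx:us-east-1:111122223333:file-system/fs-0123456789abcdef0",  # placeholder
    SecurityGroupArns=["arn:aws:ec2:us-east-1:111122223333:security-group/sg-0123456789abcdef0"],
    User="Admin", Password="********", Domain="CORP",
)["LocationArn"]

# Scheduled task; copying owner, DACL, and SACL preserves the Windows file permissions
datasync.create_task(
    SourceLocationArn=src,
    DestinationLocationArn=dst,
    Name="migrate-file-shares",
    Schedule={"ScheduleExpression": "cron(0 2 * * ? *)"},   # nightly at 02:00 UTC
    Options={"SecurityDescriptorCopyFlags": "OWNER_DACL_SACL"},
)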

2
Q

QUESTION 599
A company needs to minimize the cost of its 1 Gbps AWS Direct Connect connection. The
company’s average connection utilization is less than 10%. A solutions architect must
recommend a solution that will reduce the cost without compromising security.
Which solution will meet these requirements?
A. Set up a new 1 Gbps Direct Connect connection. Share the connection with another AWS
account.
B. Set up a new 200 Mbps Direct Connect connection in the AWS Management Console.
C. Contact an AWS Direct Connect Partner to order a 1 Gbps connection. Share the connection with
another AWS account.
D. Contact an AWS Direct Connect Partner to order a 200 Mbps hosted connection for an existing
AWS account.

A

D. Contact an AWS Direct Connect Partner to order a 200 Mbps hosted connection for an existing
AWS account.

Explanation:
For Dedicated Connections, 1 Gbps, 10 Gbps, and 100 Gbps ports are available. For Hosted
Connections, connection speeds of 50 Mbps, 100 Mbps, 200 Mbps, 300 Mbps, 400 Mbps, 500
Mbps, 1 Gbps, 2 Gbps, 5 Gbps and 10 Gbps may be ordered from approved AWS Direct
Connect Partners.

3
Q

QUESTION 598
A company uses Amazon S3 to store high-resolution pictures in an S3 bucket. To minimize
application changes, the company stores the pictures as the latest version of an S3 object. The
company needs to retain only the two most recent versions of the pictures.
The company wants to reduce costs. The company has identified the S3 bucket as a large
expense.
Which solution will reduce the S3 costs with the LEAST operational overhead?
A. Use S3 Lifecycle to delete expired object versions and retain the two most recent versions.
B. Use an AWS Lambda function to check for older versions and delete all but the two most recent
versions.
C. Use S3 Batch Operations to delete noncurrent object versions and retain only the two most recent
versions.
D. Deactivate versioning on the S3 bucket and retain the two most recent versions.

A

A. Use S3 Lifecycle to delete expired object versions and retain the two most recent versions.

Explanation:
S3 Lifecycle policies allow you to define rules that automatically transition or expire objects based
on their age or other criteria. By configuring an S3 Lifecycle policy to delete expired object
versions and retain only the two most recent versions, you can effectively manage the storage
costs while maintaining the desired retention policy. This solution is highly automated and
requires minimal operational overhead as the lifecycle management is handled by S3 itself.
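
For illustration (not part of the original answer), a minimal boto3 sketch of such a lifecycle rule could look like this. The bucket name and NoncurrentDays value are placeholders; NewerNoncurrentVersions=1 keeps one noncurrent version in addition to the current version, for two retained versions in total:

import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="picture-bucket",                      # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "keep-two-most-recent-versions",
                "Status": "Enabled",
                "Filter": {},                     # apply to every object in the bucket
                # Expire noncurrent versions, but always keep 1 newer noncurrent version
                # (plus the current version = 2 retained versions in total).
                "NoncurrentVersionExpiration": {
                    "NoncurrentDays": 1,
                    "NewerNoncurrentVersions": 1,
                },
            }
        ]
    },
)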

4
Q

QUESTION 597
A company has a service that reads and writes large amounts of data from an Amazon S3 bucket
in the same AWS Region. The service is deployed on Amazon EC2 instances within the private
subnet of a VPC. The service communicates with Amazon S3 over a NAT gateway in the public
subnet. However, the company wants a solution that will reduce the data output costs.
Which solution will meet these requirements MOST cost-effectively?
A. Provision a dedicated EC2 NAT instance in the public subnet. Configure the route table for the
private subnet to use the elastic network interface of this instance as the destination for all S3
traffic.
B. Provision a dedicated EC2 NAT instance in the private subnet. Configure the route table for the
public subnet to use the elastic network interface of this instance as the destination for all S3
traffic.
C. Provision a VPC gateway endpoint. Configure the route table for the private subnet to use the
gateway endpoint as the route for all S3 traffic.
D. Provision a second NAT gateway. Configure the route table for the private subnet to use this NAT
gateway as the destination for all S3 traffic.

A

C. Provision a VPC gateway endpoint. Configure the route table for the private subnet to use the
gateway endpoint as the route for all S3 traffic.

Explanation:
A VPC gateway endpoint allows you to privately access Amazon S3 from within your VPC without
using a NAT gateway or NAT instance. By provisioning a VPC gateway endpoint for S3, the
service in the private subnet can directly communicate with S3 without incurring data transfer
costs for traffic going through a NAT gateway.
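
A minimal boto3 sketch of option C (illustrative only; the Region, VPC, and route table IDs are placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
# Gateway endpoints for S3 are free and add a route entry, so S3 traffic from the
# private subnet stays on the AWS network instead of going through the NAT gateway.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                 # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],       # the private subnet's route table
)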

5
Q

QUESTION 596
A company uses on-premises servers to host its applications. The company is running out of
storage capacity. The applications use both block storage and NFS storage. The company needs
a high-performing solution that supports local caching without re-architecting its existing
applications.
Which combination of actions should a solutions architect take to meet these requirements?
(Choose two.)
A. Mount Amazon S3 as a file system to the on-premises servers.
B. Deploy an AWS Storage Gateway file gateway to replace NFS storage.
C. Deploy AWS Snowball Edge to provision NFS mounts to on-premises servers.
D. Deploy an AWS Storage Gateway volume gateway to replace the block storage.
E. Deploy Amazon Elastic File System (Amazon EFS) volumes and mount them to on-premises
servers.

A

B. Deploy an AWS Storage Gateway file gateway to replace NFS storage.
D. Deploy an AWS Storage Gateway volume gateway to replace the block storage.

Explanation:
By combining the deployment of an AWS Storage Gateway file gateway and an AWS Storage
Gateway volume gateway, the company can address both its block storage and NFS storage
needs, while leveraging local caching capabilities for improved performance.

6
Q

QUESTION 595
A company is conducting an internal audit. The company wants to ensure that the data in an
Amazon S3 bucket that is associated with the company’s AWS Lake Formation data lake does
not contain sensitive customer or employee data. The company wants to discover personally
identifiable information (PII) or financial information, including passport numbers and credit card
numbers.

Which solution will meet these requirements?
A. Configure AWS Audit Manager on the account. Select the Payment Card Industry Data Security
Standards (PCI DSS) for auditing.
B. Configure Amazon S3 Inventory on the S3 bucket. Configure Amazon Athena to query the
inventory.
C. Configure Amazon Macie to run a data discovery job that uses managed identifiers for the
required data types.
D. Use Amazon S3 Select to run a report across the S3 bucket.

A

C. Configure Amazon Macie to run a data discovery job that uses managed identifiers for the
required data types.

Explanation:
Amazon Macie is a service that helps discover, classify, and protect sensitive data stored in
AWS. It uses machine learning algorithms and managed identifiers to detect various types of
sensitive information, including personally identifiable information (PII) and financial information.
By configuring Amazon Macie to run a data discovery job with the appropriate managed
identifiers for the required data types (such as passport numbers and credit card numbers), the
company can identify and classify any sensitive data present in the S3 bucket.
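
As a rough sketch only (not from the original answer), a one-time Macie classification job over the data lake bucket could be created with boto3 along these lines. The account ID and bucket name are placeholders, and the 'ALL' selector tells Macie to use all of its managed data identifiers, which cover passport numbers, credit card numbers, and other PII and financial data:

import boto3

macie = boto3.client("macie2")
macie.create_classification_job(
    name="lake-formation-pii-audit",                  # placeholder job name
    jobType="ONE_TIME",
    managedDataIdentifierSelector="ALL",              # use all managed identifiers (PII, financial, etc.)
    s3JobDefinition={
        "bucketDefinitions": [
            {"accountId": "111122223333", "buckets": ["company-data-lake-bucket"]}  # placeholders
        ]
    },
)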

7
Q

QUESTION 594
A company uses Amazon EC2 instances to host its internal systems. As part of a deployment
operation, an administrator tries to use the AWS CLI to terminate an EC2 instance. However, the
administrator receives a 403 (Access Denied) error message.

What is the cause of the unsuccessful request?
A. The EC2 instance has a resource-based policy with a Deny statement.
B. The principal has not been specified in the policy statement.
C. The “Action” field does not grant the actions that are required to terminate the EC2 instance.
D. The request to terminate the EC2 instance does not originate from the CIDR blocks 192.0.2.0/24
or 203.0.113.0/24.

A

D. The request to terminate the EC2 instance does not originate from the CIDR blocks 192.0.2.0/24
or 203.0.113.0/24.

Explanation:
The IAM policy shown in the original question (not reproduced on this card) allows
ec2:TerminateInstances only when the request comes from the 192.0.2.0/24 or 203.0.113.0/24
CIDR ranges (an aws:SourceIp condition). Because the administrator's CLI request originated
from a different source IP address, the condition is not satisfied and the request is denied
with a 403 error.

8
Q

QUESTION 593
A company wants to use artificial intelligence (AI) to determine the quality of its customer service
calls. The company currently manages calls in four different languages, including English. The
company will offer new languages in the future. The company does not have the resources to
regularly maintain machine learning (ML) models.
The company needs to create written sentiment analysis reports from the customer service call
recordings. The customer service call recording text must be translated into English.
Which combination of steps will meet these requirements? (Choose three.)
A. Use Amazon Comprehend to translate the audio recordings into English.
B. Use Amazon Lex to create the written sentiment analysis reports.
C. Use Amazon Polly to convert the audio recordings into text.
D. Use Amazon Transcribe to convert the audio recordings in any language into text.
E. Use Amazon Translate to translate text in any language to English.
F. Use Amazon Comprehend to create the sentiment analysis reports.

A

D. Use Amazon Transcribe to convert the audio recordings in any language into text.
E. Use Amazon Translate to translate text in any language to English.
F. Use Amazon Comprehend to create the sentiment analysis reports.

Explanation:
Amazon Transcribe will convert the audio recordings into text, Amazon Translate will translate the
text into English, and Amazon Comprehend will perform sentiment analysis on the translated text
to generate sentiment analysis reports.
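
A hedged end-to-end sketch of that pipeline in boto3 (bucket, key, and job names are placeholders; long transcripts may need to be split to stay within Translate and Comprehend text-size limits):

import boto3, time, json, urllib.request

transcribe = boto3.client("transcribe")
translate = boto3.client("translate")
comprehend = boto3.client("comprehend")

# 1. Transcribe the call recording; the language is identified automatically
transcribe.start_transcription_job(
    TranscriptionJobName="call-1234",                                   # placeholder name
    Media={"MediaFileUri": "s3://call-recordings/call-1234.wav"},       # placeholder bucket/key
    IdentifyLanguage=True,
)
while True:
    job = transcribe.get_transcription_job(TranscriptionJobName="call-1234")["TranscriptionJob"]
    if job["TranscriptionJobStatus"] in ("COMPLETED", "FAILED"):
        break
    time.sleep(10)
transcript_uri = job["Transcript"]["TranscriptFileUri"]
text = json.load(urllib.request.urlopen(transcript_uri))["results"]["transcripts"][0]["transcript"]

# 2. Translate the transcript into English
english = translate.translate_text(
    Text=text, SourceLanguageCode="auto", TargetLanguageCode="en"
)["TranslatedText"]

# 3. Run sentiment analysis on the English text for the report
sentiment = comprehend.detect_sentiment(Text=english, LanguageCode="en")
print(sentiment["Sentiment"], sentiment["SentimentScore"])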

9
Q

QUESTION 592
A company has multiple AWS accounts for development work. Some staff consistently use
oversized Amazon EC2 instances, which causes the company to exceed the yearly budget for the
development accounts. The company wants to centrally restrict the creation of AWS resources in
these accounts.

Which solution will meet these requirements with the LEAST development effort?
A. Develop AWS Systems Manager templates that use an approved EC2 creation process. Use the
approved Systems Manager templates to provision EC2 instances.
B. Use AWS Organizations to organize the accounts into organizational units (OUs). Define and
attach a service control policy (SCP) to control the usage of EC2 instance types.
C. Configure an Amazon EventBridge rule that invokes an AWS Lambda function when an EC2
instance is created. Stop disallowed EC2 instance types.
D. Set up AWS Service Catalog products for the staff to create the allowed EC2 instance types.
Ensure that staff can deploy EC2 instances only by using the Service Catalog products.

A

B. Use AWS Organizations to organize the accounts into organizational units (OUs). Define and
attach a service control policy (SCP) to control the usage of EC2 instance types.

Explanation:
AWS Organizations: AWS Organizations is a service that helps you centrally manage multiple
AWS accounts. It enables you to group accounts into organizational units (OUs) and apply
policies across those accounts.
Service Control Policies (SCPs): SCPs in AWS Organizations allow you to define fine-grained
permissions and restrictions at the account or OU level. By attaching an SCP to the development
accounts, you can control the creation and usage of EC2 instance types.
Least Development Effort: Option B requires minimal development effort as it leverages the built-
in features of AWS Organizations and SCPs. You can define the SCP to restrict the use of
oversized EC2 instance types and apply it to the appropriate OUs or accounts.
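
To illustrate (the allowed instance types and OU ID are hypothetical), an SCP that blocks launches of anything other than an approved list of instance types could be created and attached with boto3 along these lines:

import boto3, json

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOversizedInstances",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            # Deny the launch unless the requested type is in the allowed list
            "Condition": {"StringNotEquals": {"ec2:InstanceType": ["t3.micro", "t3.small"]}},
        }
    ],
}

org = boto3.client("organizations")
policy = org.create_policy(
    Name="restrict-ec2-instance-types",
    Description="Allow only small instance types in development accounts",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId="ou-abcd-12345678")  # placeholder OU ID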

10
Q

QUESTION 591
A solutions architect is designing an asynchronous application to process credit card data
validation requests for a bank. The application must be secure and be able to process each
request at least once.
Which solution will meet these requirements MOST cost-effectively?
A. Use AWS Lambda event source mapping. Set Amazon Simple Queue Service (Amazon SQS)
standard queues as the event source. Use AWS Key Management Service (SSE-KMS) for
encryption. Add the kms:Decrypt permission for the Lambda execution role.
B. Use AWS Lambda event source mapping. Use Amazon Simple Queue Service (Amazon SQS)
FIFO queues as the event source. Use SQS managed encryption keys (SSE-SQS) for encryption.
Add the encryption key invocation permission for the Lambda function.
C. Use the AWS Lambda event source mapping. Set Amazon Simple Queue Service (Amazon SQS)
FIFO queues as the event source. Use AWS KMS keys (SSE-KMS). Add the kms:Decrypt
permission for the Lambda execution role.
D. Use the AWS Lambda event source mapping. Set Amazon Simple Queue Service (Amazon SQS)
standard queues as the event source. Use AWS KMS keys (SSE-KMS) for encryption. Add the
encryption key invocation permission for the Lambda function.

A

A. Use AWS Lambda event source mapping. Set Amazon Simple Queue Service (Amazon SQS)
standard queues as the event source. Use AWS Key Management Service (SSE-KMS) for
encryption. Add the kms:Decrypt permission for the Lambda execution role.

Explanation:
https://docs.aws.amazon.com/zh_tw/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-least-privilege-policy.html
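
Per that least-privilege guidance, the Lambda execution role needs the SQS polling permissions plus kms:Decrypt on the key that encrypts the queue. A hedged boto3 sketch (the role name, queue ARN, and key ARN are placeholders):

import boto3, json

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Permissions the Lambda event source mapping needs to poll the queue
            "Effect": "Allow",
            "Action": ["sqs:ReceiveMessage", "sqs:DeleteMessage", "sqs:GetQueueAttributes"],
            "Resource": "arn:aws:sqs:us-east-1:111122223333:card-validation-queue",
        },
        {
            # Needed because the queue is encrypted with a customer managed KMS key
            "Effect": "Allow",
            "Action": "kms:Decrypt",
            "Resource": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
        },
    ],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="card-validation-lambda-role",        # placeholder role name
    PolicyName="sqs-kms-least-privilege",
    PolicyDocument=json.dumps(policy),
)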

11
Q

QUESTION 590
A gaming company uses Amazon DynamoDB to store user information such as geographic
location, player data, and leaderboards. The company needs to configure continuous backups to
an Amazon S3 bucket with a minimal amount of coding. The backups must not affect availability of the application and must not affect the read capacity units (RCUs) that are defined for the
table.
Which solution meets these requirements?
A. Use an Amazon EMR cluster. Create an Apache Hive job to back up the data to Amazon S3.
B. Export the data directly from DynamoDB to Amazon S3 with continuous backups. Turn on point-
in-time recovery for the table.
C. Configure Amazon DynamoDB Streams. Create an AWS Lambda function to consume the
stream and export the data to an Amazon S3 bucket.
D. Create an AWS Lambda function to export the data from the database tables to Amazon S3 on a
regular basis. Turn on point-in-time recovery for the table.

A

B. Export the data directly from DynamoDB to Amazon S3 with continuous backups. Turn on point-
in-time recovery for the table.

Explanation:
Continuous backups is a native feature of DynamoDB, it works at any scale without having to
manage servers or clusters and allows you to export data across AWS Regions and accounts to
any point-in-time in the last 35 days at a per-second granularity. Plus, it doesn’t affect the read
capacity or the availability of your production tables.
https://aws.amazon.com/blogs/aws/new-export-amazon-dynamodb-table-data-to-data-lake-amazon-s3/
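
Illustrative boto3 calls for option B (the table name, table ARN, and bucket are placeholders): enable point-in-time recovery, then export the table to S3 without consuming RCUs:

import boto3

dynamodb = boto3.client("dynamodb")

# Turn on point-in-time recovery (continuous backups) for the table
dynamodb.update_continuous_backups(
    TableName="player-data",                                               # placeholder
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# Export to S3; the export reads from the continuous backup, not the table,
# so the table's availability and RCUs are unaffected.
dynamodb.export_table_to_point_in_time(
    TableArn="arn:aws:dynamodb:us-east-1:111122223333:table/player-data",  # placeholder
    S3Bucket="player-data-exports",                                        # placeholder
    ExportFormat="DYNAMODB_JSON",
)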

12
Q

QUESTION 589
An ecommerce company runs an application in the AWS Cloud that is integrated with an on-
premises warehouse solution. The company uses Amazon Simple Notification Service (Amazon
SNS) to send order messages to an on-premises HTTPS endpoint so the warehouse application
can process the orders. The local data center team has detected that some of the order
messages were not received.
A solutions architect needs to retain messages that are not delivered and analyze the messages
for up to 14 days.
Which solution will meet these requirements with the LEAST development effort?
A. Configure an Amazon SNS dead letter queue that has an Amazon Kinesis Data Stream target
with a retention period of 14 days.
B. Add an Amazon Simple Queue Service (Amazon SQS) queue with a retention period of 14 days
between the application and Amazon SNS.
C. Configure an Amazon SNS dead letter queue that has an Amazon Simple Queue Service
(Amazon SQS) target with a retention period of 14 days.
D. Configure an Amazon SNS dead letter queue that has an Amazon DynamoDB target with a TTL
attribute set for a retention period of 14 days.

A

C. Configure an Amazon SNS dead letter queue that has an Amazon Simple Queue Service
(Amazon SQS) target with a retention period of 14 days.

Explanation:
The message retention period in Amazon SQS can be set between 1 minute and 14 days (the
default is 4 days). Therefore, you can configure your SQS DLQ to retain undelivered SNS
messages for 14 days. This will enable you to analyze undelivered messages with the least
development effort.
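
A boto3 sketch of option C (the queue name and subscription ARN are placeholders); note that the DLQ also needs a queue policy that allows the SNS topic to send messages to it:

import boto3, json

sqs = boto3.client("sqs")
sns = boto3.client("sns")

# DLQ that keeps undelivered order messages for 14 days (1,209,600 seconds)
queue_url = sqs.create_queue(
    QueueName="order-sns-dlq",
    Attributes={"MessageRetentionPeriod": "1209600"},
)["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Point the HTTPS subscription's dead-letter queue at the SQS queue
sns.set_subscription_attributes(
    SubscriptionArn="arn:aws:sns:us-east-1:111122223333:orders:11112222-3333-4444-5555-666677778888",  # placeholder
    AttributeName="RedrivePolicy",
    AttributeValue=json.dumps({"deadLetterTargetArn": dlq_arn}),
)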

13
Q

QUESTION 588
A 4-year-old media company is using the AWS Organizations all features feature set to organize
its AWS accounts. According to the company’s finance team, the billing information on the
member accounts must not be accessible to anyone, including the root user of the member accounts.
Which solution will meet these requirements?
A. Add all finance team users to an IAM group. Attach an AWS managed policy named Billing to the
group.
B. Attach an identity-based policy to deny access to the billing information to all users, including the
root user.
C. Create a service control policy (SCP) to deny access to the billing information. Attach the SCP to
the root organizational unit (OU).
D. Convert from the Organizations all features feature set to the Organizations consolidated billing
feature set.

A

C. Create a service control policy (SCP) to deny access to the billing information. Attach the SCP to
the root organizational unit (OU).

Explanation:
Service control policy are a type of organization policy that you can use to manage permissions in
your organization. SCPs offer central control over the maximum available permissions for all
accounts in your organization. SCPs help you to ensure your accounts stay within your
organization’s access control guidelines. SCPs are available only in an organization that has all
features enabled.
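
As a hedged sketch (the root/OU ID is a placeholder, and the aws-portal actions shown are the legacy billing action names; accounts migrated to AWS's fine-grained billing permissions use billing:* and payments:* actions instead):

import boto3, json

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyBillingAccess",
            "Effect": "Deny",
            # SCPs apply to every principal in the member accounts,
            # including each member account's root user.
            "Action": ["aws-portal:ViewBilling", "aws-portal:ViewPaymentMethods", "aws-portal:ModifyBilling"],
            "Resource": "*",
        }
    ],
}

org = boto3.client("organizations")
policy = org.create_policy(
    Name="deny-billing-access",
    Description="Block billing information in all member accounts",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId="r-abcd")  # placeholder root OU ID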

14
Q

QUESTION 587
A company seeks a storage solution for its application. The solution must be highly available and
scalable. The solution must also function as a file system, be mountable by multiple Linux
instances in AWS and on premises through native protocols, and have no minimum size
requirements. The company has set up a Site-to-Site VPN for access from its on-premises
network to its VPC.
Which storage solution meets these requirements?
A. Amazon FSx Multi-AZ deployments
B. Amazon Elastic Block Store (Amazon EBS) Multi-Attach volumes
C. Amazon Elastic File System (Amazon EFS) with multiple mount targets
D. Amazon Elastic File System (Amazon EFS) with a single mount target and multiple access points

A

C. Amazon Elastic File System (Amazon EFS) with multiple mount targets

Explanation:
Amazon EFS is a fully managed file system service that provides scalable, shared storage for
Amazon EC2 instances. It supports the Network File System version 4 (NFSv4) protocol, which is
a native protocol for Linux-based systems. EFS is designed to be highly available, durable, and
scalable.

15
Q

QUESTION 586
A company is building a three-tier application on AWS. The presentation tier will serve a static
website. The logic tier is a containerized application. This application will store data in a relational
database. The company wants to simplify deployment and to reduce operational costs.
Which solution will meet these requirements?
A. Use Amazon S3 to host static content. Use Amazon Elastic Container Service (Amazon ECS)
with AWS Fargate for compute power. Use a managed Amazon RDS cluster for the database.
B. Use Amazon CloudFront to host static content. Use Amazon Elastic Container Service (Amazon
ECS) with Amazon EC2 for compute power. Use a managed Amazon RDS cluster for the
database.
C. Use Amazon S3 to host static content. Use Amazon Elastic Kubernetes Service (Amazon EKS)
with AWS Fargate for compute power. Use a managed Amazon RDS cluster for the database.
D. Use Amazon EC2 Reserved Instances to host static content. Use Amazon Elastic Kubernetes
Service (Amazon EKS) with Amazon EC2 for compute power. Use a managed Amazon RDS
cluster for the database.

A

A. Use Amazon S3 to host static content. Use Amazon Elastic Container Service (Amazon ECS)
with AWS Fargate for compute power. Use a managed Amazon RDS cluster for the database.

Explanation:
Amazon S3 is a highly scalable and cost-effective storage service that can be used to host static
website content. It provides durability, high availability, and low latency access to the static files.
Amazon ECS with AWS Fargate eliminates the need to manage the underlying infrastructure. It
allows you to run containerized applications without provisioning or managing EC2 instances.
This reduces operational overhead and provides scalability.
By using a managed Amazon RDS cluster for the database, you can offload the management
tasks such as backups, patching, and monitoring to AWS. This reduces the operational burden
and ensures high availability and durability of the database.

16
Q

QUESTION 585
A company is looking for a solution that can store video archives in AWS from old news footage.
The company needs to minimize costs and will rarely need to restore these files. When the files
are needed, they must be available in a maximum of five minutes.
What is the MOST cost-effective solution?
A. Store the video archives in Amazon S3 Glacier and use Expedited retrievals.
B. Store the video archives in Amazon S3 Glacier and use Standard retrievals.
C. Store the video archives in Amazon S3 Standard-Infrequent Access (S3 Standard-IA).
D. Store the video archives in Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA).

A

A. Store the video archives in Amazon S3 Glacier and use Expedited retrievals.

Explanation:
By choosing Expedited retrievals in Amazon S3 Glacier, you can reduce the retrieval time to
minutes, making it suitable for scenarios where quick access is required. Expedited retrievals
come with a higher cost per retrieval compared to standard retrievals but provide faster access to
your archived data.
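
For reference (the bucket and key are placeholders), an Expedited restore of an archived object via boto3; Expedited retrievals from S3 Glacier Flexible Retrieval typically complete in 1-5 minutes:

import boto3

s3 = boto3.client("s3")
s3.restore_object(
    Bucket="news-footage-archive",                 # placeholder bucket
    Key="footage/1998/broadcast-001.mov",          # placeholder key
    RestoreRequest={
        "Days": 1,                                 # how long the restored copy stays available
        "GlacierJobParameters": {"Tier": "Expedited"},
    },
)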

17
Q

QUESTION 584
A company wants to move from many standalone AWS accounts to a consolidated, multi-account
architecture. The company plans to create many new AWS accounts for different business units.
The company needs to authenticate access to these AWS accounts by using a centralized
corporate directory service.
Which combination of actions should a solutions architect recommend to meet these
requirements? (Choose two.)
A. Create a new organization in AWS Organizations with all features turned on. Create the new
AWS accounts in the organization.
B. Set up an Amazon Cognito identity pool. Configure AWS IAM Identity Center (AWS Single Sign-
On) to accept Amazon Cognito authentication.
C. Configure a service control policy (SCP) to manage the AWS accounts. Add AWS IAM Identity
Center (AWS Single Sign-On) to AWS Directory Service.
D. Create a new organization in AWS Organizations. Configure the organization’s authentication
mechanism to use AWS Directory Service directly.
E. Set up AWS IAM Identity Center (AWS Single Sign-On) in the organization. Configure IAM
Identity Center, and integrate it with the company’s corporate directory service.

A

A. Create a new organization in AWS Organizations with all features turned on. Create the new
AWS accounts in the organization.
E. Set up AWS IAM Identity Center (AWS Single Sign-On) in the organization. Configure IAM
Identity Center, and integrate it with the company’s corporate directory service.

Explanation:
A. By creating a new organization in AWS Organizations, you can establish a consolidated multi-
account architecture. This allows you to create and manage multiple AWS accounts for different
business units under a single organization.
E. Setting up AWS IAM Identity Center (AWS Single Sign-On) within the organization enables
you to integrate it with the company’s corporate directory service. This integration allows for
centralized authentication, where users can sign in using their corporate credentials and access
the AWS accounts within the organization.
Together, these actions create a centralized, multi-account architecture that leverages AWS
Organizations for account management and AWS IAM Identity Center (AWS Single Sign-On) for
authentication and access control.

18
Q

QUESTION 583
A company containerized a Windows job that runs on .NET 6 Framework under a Windows
container. The company wants to run this job in the AWS Cloud. The job runs every 10 minutes.
The job’s runtime varies between 1 minute and 3 minutes.
Which solution will meet these requirements MOST cost-effectively?
A. Create an AWS Lambda function based on the container image of the job. Configure Amazon
EventBridge to invoke the function every 10 minutes.
B. Use AWS Batch to create a job that uses AWS Fargate resources. Configure the job scheduling
to run every 10 minutes.
C. Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate to run the job. Create a
scheduled task based on the container image of the job to run every 10 minutes.
D. Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate to run the job. Create a
standalone task based on the container image of the job. Use Windows task scheduler to run the
job every 10 minutes.

A

C. Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate to run the job. Create a
scheduled task based on the container image of the job to run every 10 minutes.

Explanation:
By leveraging AWS Fargate and ECS, you can achieve cost-effective scaling and resource
allocation for your containerized Windows job running on .NET 6 Framework in the AWS Cloud.
The serverless nature of Fargate ensures that you only pay for the actual resources consumed by
your containers, allowing for efficient cost management.
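
A hedged boto3 sketch of option C (the cluster, task definition, role, and subnet values are placeholders): an EventBridge rule fires every 10 minutes and runs the Fargate task:

import boto3

events = boto3.client("events")

events.put_rule(Name="run-dotnet-job", ScheduleExpression="rate(10 minutes)")

events.put_targets(
    Rule="run-dotnet-job",
    Targets=[
        {
            "Id": "dotnet-job-task",
            "Arn": "arn:aws:ecs:us-east-1:111122223333:cluster/jobs-cluster",           # placeholder cluster
            "RoleArn": "arn:aws:iam::111122223333:role/eventbridge-ecs-run-task-role",  # placeholder role
            "EcsParameters": {
                "TaskDefinitionArn": "arn:aws:ecs:us-east-1:111122223333:task-definition/dotnet-job:1",
                "LaunchType": "FARGATE",
                "NetworkConfiguration": {
                    "awsvpcConfiguration": {
                        "Subnets": ["subnet-0123456789abcdef0"],
                        "AssignPublicIp": "DISABLED",
                    }
                },
            },
        }
    ],
)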

19
Q

QUESTION 582
A company wants to migrate 100 GB of historical data from an on-premises location to an
Amazon S3 bucket. The company has a 100 megabits per second (Mbps) internet connection on
premises. The company needs to encrypt the data in transit to the S3 bucket. The company will
store new data directly in Amazon S3.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use the s3 sync command in the AWS CLI to move the data directly to an S3 bucket
B. Use AWS DataSync to migrate the data from the on-premises location to an S3 bucket
C. Use AWS Snowball to move the data to an S3 bucket
D. Set up an IPsec VPN from the on-premises location to AWS. Use the s3 cp command in the AWS
CLI to move the data directly to an S3 bucket

A

B. Use AWS DataSync to migrate the data from the on-premises location to an S3 bucket

Explanation:
AWS DataSync is a fully managed data transfer service that simplifies and automates the
process of moving data between on-premises storage and Amazon S3. It provides secure and
efficient data transfer with built-in encryption, ensuring that the data is encrypted in transit.
By using AWS DataSync, the company can easily migrate the 100 GB of historical data from their
on-premises location to an S3 bucket. DataSync will handle the encryption of data in transit and
ensure secure transfer.

20
Q

QUESTION 581
A company hosts a three-tier web application in the AWS Cloud. A Multi-AZ Amazon RDS for
MySQL server forms the database layer. Amazon ElastiCache forms the cache layer. The
company wants a caching strategy that adds or updates data in the cache when a customer adds
an item to the database. The data in the cache must always match the data in the database.
Which solution will meet these requirements?
A. Implement the lazy loading caching strategy
B. Implement the write-through caching strategy
C. Implement the adding TTL caching strategy
D. Implement the AWS AppConfig caching strategy

A

B. Implement the write-through caching strategy

Explanation:
In the write-through caching strategy, when a customer adds or updates an item in the database, the application first writes the data to the database and then updates the cache with the same
data. This ensures that the cache is always synchronized with the database, as every write
operation triggers an update to the cache.
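
A minimal write-through sketch in Python (the ElastiCache endpoint, table, and helper names are hypothetical; redis-py and a relational driver such as psycopg2 are assumed):

import json
import redis

cache = redis.Redis(host="my-cluster.abc123.ng.0001.use1.cache.amazonaws.com", port=6379)  # hypothetical endpoint

def add_item(db_conn, item):
    # 1. Write to the database first - it remains the source of truth.
    with db_conn.cursor() as cur:
        cur.execute("INSERT INTO items (id, body) VALUES (%s, %s)", (item["id"], json.dumps(item)))
    db_conn.commit()
    # 2. Write the same data to the cache immediately (write-through),
    #    so the cache never lags behind the database.
    cache.set(f"item:{item['id']}", json.dumps(item))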

21
Q

QUESTION 580
A business application is hosted on Amazon EC2 and uses Amazon S3 for encrypted object
storage. The chief information security officer has directed that no application traffic between the
two services should traverse the public internet.
Which capability should the solutions architect use to meet the compliance requirements?
A. AWS Key Management Service (AWS KMS)
B. VPC endpoint
C. Private subnet
D. Virtual private gateway

A

B. VPC endpoint

Explanation:
A VPC endpoint enables you to privately access AWS services without requiring internet
gateways, NAT gateways, VPN connections, or AWS Direct Connect connections. It allows you to
connect your VPC directly to supported AWS services, such as Amazon S3, over a private
connection within the AWS network.
By creating a VPC endpoint for Amazon S3, the traffic between your EC2 instances and S3 will
stay within the AWS network and won’t traverse the public internet. This provides a more secure
and compliant solution, as the data transfer remains within the private network boundaries.

22
Q

QUESTION 579
A company is making a prototype of the infrastructure for its new website by manually
provisioning the necessary infrastructure. This infrastructure includes an Auto Scaling group, an
Application Load Balancer and an Amazon RDS database. After the configuration has been
thoroughly validated, the company wants the capability to immediately deploy the infrastructure
for development and production use in two Availability Zones in an automated fashion.
What should a solutions architect recommend to meet these requirements?
A. Use AWS Systems Manager to replicate and provision the prototype infrastructure in two
Availability Zones
B. Define the infrastructure as a template by using the prototype infrastructure as a guide. Deploy
the infrastructure with AWS CloudFormation.
C. Use AWS Config to record the inventory of resources that are used in the prototype infrastructure.
Use AWS Config to deploy the prototype infrastructure into two Availability Zones.
D. Use AWS Elastic Beanstalk and configure it to use an automated reference to the prototype
infrastructure to automatically deploy new environments in two Availability Zones.

A

B. Define the infrastructure as a template by using the prototype infrastructure as a guide. Deploy
the infrastructure with AWS CloudFormation.

Explanation:
AWS CloudFormation is a service that allows you to define and provision infrastructure as code.
This means that you can create a template that describes the resources you want to create, and
then use CloudFormation to deploy those resources in an automated fashion.
In this case, the solutions architect should define the infrastructure as a template by using the
prototype infrastructure as a guide. The template should include resources for an Auto Scaling
group, an Application Load Balancer, and an Amazon RDS database. Once the template is
created, the solutions architect can use CloudFormation to deploy the infrastructure in two
Availability Zones.

23
Q

QUESTION 578
A law firm needs to share information with the public. The information includes hundreds of files
that must be publicly readable. Modifications or deletions of the files by anyone before a
designated future date are prohibited.
Which solution will meet these requirements in the MOST secure way?
A. Upload all files to an Amazon S3 bucket that is configured for static website hosting. Grant read-
only IAM permissions to any AWS principals that access the S3 bucket until the designated date.
B. Create a new Amazon S3 bucket with S3 Versioning enabled. Use S3 Object Lock with a
retention period in accordance with the designated date. Configure the S3 bucket for static
website hosting. Set an S3 bucket policy to allow read-only access to the objects.
C. Create a new Amazon S3 bucket with S3 Versioning enabled. Configure an event trigger to run
an AWS Lambda function in case of object modification or deletion. Configure the Lambda
function to replace the objects with the original versions from a private S3 bucket.
D. Upload all files to an Amazon S3 bucket that is configured for static website hosting. Select the
folder that contains the files. Use S3 Object Lock with a retention period in accordance with the
designated date. Grant read-only IAM permissions to any AWS principals that access the S3
bucket.

A

B. Create a new Amazon S3 bucket with S3 Versioning enabled. Use S3 Object Lock with a
retention period in accordance with the designated date. Configure the S3 bucket for static
website hosting. Set an S3 bucket policy to allow read-only access to the objects.

Explanation:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html
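
A hedged boto3 sketch of option B (the bucket name and retention period are placeholders); Object Lock must be enabled when the bucket is created, and COMPLIANCE mode prevents modification or deletion by anyone until the retention period ends:

import boto3, json

s3 = boto3.client("s3")

# Object Lock requires versioning and can only be enabled at bucket creation
s3.create_bucket(Bucket="law-firm-public-files", ObjectLockEnabledForBucket=True)  # placeholder name

# Default retention applied to every new object version (for an exact calendar date,
# put_object_retention with a RetainUntilDate can be used per object instead)
s3.put_object_lock_configuration(
    Bucket="law-firm-public-files",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365}},
    },
)

# Read-only public access to the objects (Block Public Access must permit this policy)
s3.put_bucket_policy(
    Bucket="law-firm-public-files",
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::law-firm-public-files/*",
        }],
    }),
)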

24
Q

QUESTION 577
A group requires permissions to list an Amazon S3 bucket and delete objects from that bucket.
An administrator has created the following IAM policy to provide access to the bucket and applied
that policy to the group. The group is not able to delete objects in the bucket. The company
follows least-privilege access rules.

Which statement should a solutions architect add to the policy to correct bucket access?

A

The policy exhibit and answer choices are not reproduced on this card. In the original
question, the existing policy grants s3:ListBucket and s3:DeleteObject on the bucket ARN
(arn:aws:s3:::bucket-name) only. The statement to add is one that allows s3:DeleteObject on
the object ARN (arn:aws:s3:::bucket-name/*), because object-level actions must be granted on
the objects (the /* resource), not on the bucket itself.
25
Q

QUESTION 576
A company is expecting rapid growth in the near future. A solutions architect needs to configure
existing users and grant permissions to new users on AWS. The solutions architect has decided
to create IAM groups. The solutions architect will add the new users to IAM groups based on
department.
Which additional action is the MOST secure way to grant permissions to the new users?
A. Apply service control policies (SCPs) to manage access permissions
B. Create IAM roles that have least privilege permission. Attach the roles to the IAM groups
C. Create an IAM policy that grants least privilege permission. Attach the policy to the IAM groups
D. Create IAM roles. Associate the roles with a permissions boundary that defines the maximum
permissions

A

C. Create an IAM policy that grants least privilege permission. Attach the policy to the IAM groups

Explanation:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups_manage_attach-policy.html
Attaching a policy to an IAM user group.

You can assign an existing IAM role to an AWS Directory Service user or group, but you cannot attach a role to an IAM group.

Roles are created for AWS resources (such as EC2 instances or Lambda functions) to assume; policies are created and attached to groups or users to grant access to resources such as S3 buckets.

Option A is wrong because SCPs are used with AWS Organizations organizational units (OUs). SCPs do not replace IAM policies and do not grant permissions on their own; to perform an action, you still need the appropriate IAM policy permissions.
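
For illustration (the policy document, group name, and bucket are placeholders), creating a least-privilege customer managed policy and attaching it to a department's IAM group with boto3:

import boto3, json

iam = boto3.client("iam")

policy_doc = {
    "Version": "2012-10-17",
    "Statement": [{
        # Example of a narrowly scoped permission for one department
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::marketing-assets", "arn:aws:s3:::marketing-assets/*"],
    }],
}

policy = iam.create_policy(PolicyName="marketing-least-privilege", PolicyDocument=json.dumps(policy_doc))
iam.attach_group_policy(GroupName="marketing", PolicyArn=policy["Policy"]["Arn"])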

26
Q

QUESTION 575
A company is designing a containerized application that will use Amazon Elastic Container
Service (Amazon ECS). The application needs to access a shared file system that is highly
durable and can recover data to another AWS Region with a recovery point objective (RPO) of 8
hours. The file system needs to provide a mount target in each Availability Zone within a Region.
A solutions architect wants to use AWS Backup to manage the replication to another Region.
Which solution will meet these requirements?
A. Amazon FSx for Windows File Server with a Multi-AZ deployment
B. Amazon FSx for NetApp ONTAP with a Multi-AZ deployment
C. Amazon Elastic File System (Amazon EFS) with the Standard storage class
D. Amazon FSx for OpenZFS

A

C. Amazon Elastic File System (Amazon EFS) with the Standard storage class

Explanation:
https://aws.amazon.com/efs/faq/
Q: What is Amazon EFS Replication?
EFS Replication can replicate your file system data to another Region or within the same Region
without requiring additional infrastructure or a custom process. Amazon EFS Replication
automatically and transparently replicates your data to a second file system in a Region or AZ of
your choice. You can use the Amazon EFS console, AWS CLI, and APIs to activate replication on
an existing file system. EFS Replication is continual and provides a recovery point objective
(RPO) and a recovery time objective (RTO) of minutes, helping you meet your compliance and
business continuity goals.

27
Q

QUESTION 574
A company has multiple VPCs across AWS Regions to support and run workloads that are
isolated from workloads in other Regions. Because of a recent application launch requirement,
the company’s VPCs must communicate with all other VPCs across all Regions.
Which solution will meet these requirements with the LEAST amount of administrative effort?
A. Use VPC peering to manage VPC communication in a single Region. Use VPC peering across
Regions to manage VPC communications.
B. Use AWS Direct Connect gateways across all Regions to connect VPCs across regions and
manage VPC communications.
C. Use AWS Transit Gateway to manage VPC communication in a single Region and Transit
Gateway peering across Regions to manage VPC communications.
D. Use AWS PrivateLink across all Regions to connect VPCs across Regions and manage VPC
communications

A

C. Use AWS Transit Gateway to manage VPC communication in a single Region and Transit
Gateway peering across Regions to manage VPC communications.

Explanation:
AWS Transit Gateway: Transit Gateway is a highly scalable service that simplifies network
connectivity between VPCs and on-premises networks. By using a Transit Gateway in a single
Region, you can centralize VPC communication management and reduce administrative effort.
Transit Gateway Peering: Transit Gateway supports peering connections across AWS Regions,
allowing you to establish connectivity between VPCs in different Regions without the need for
complex VPC peering configurations. This simplifies the management of VPC communications
across Regions.

28
Q

QUESTION 573
A company hosts a website on Amazon EC2 instances behind an Application Load Balancer
(ALB). The website serves static content. Website traffic is increasing, and the company is
concerned about a potential increase in cost.
A. Create an Amazon CloudFront distribution to cache static files at edge locations
B. Create an Amazon ElastiCache cluster. Connect the ALB to the ElastiCache cluster to serve
cached files
C. Create an AWS WAF web ACL and associate it with the ALB. Add a rule to the web ACL to cache
static files
D. Create a second ALB in an alternative AWS Region. Route user traffic to the closest Region to
minimize data transfer costs

A

A. Create an Amazon CloudFront distribution to cache static files at edge locations

Explanation:
Amazon CloudFront: CloudFront is a content delivery network (CDN) service that caches content
at edge locations worldwide. By creating a CloudFront distribution, static content from the website
can be cached at edge locations, reducing the load on the EC2 instances and improving the
overall performance.
Caching Static Files: Since the website serves static content, caching these files at CloudFront
edge locations can significantly reduce the number of requests forwarded to the EC2 instances.
This helps to lower the overall cost by offloading traffic from the instances and reducing the data
transfer costs.

29
Q

QUESTION 572
A company has a mobile chat application with a data store based in Amazon DynamoDB. Users
would like new messages to be read with as little latency as possible. A solutions architect needs
to design an optimal solution that requires minimal application changes.
Which method should the solutions architect select?
A. Configure Amazon DynamoDB Accelerator (DAX) for the new messages table. Update the code
to use the DAX endpoint.
B. Add DynamoDB read replicas to handle the increased read load. Update the application to point
to the read endpoint for the read replicas.
C. Double the number of read capacity units for the new messages table in DynamoDB. Continue to
use the existing DynamoDB endpoint.
D. Add an Amazon ElastiCache for Redis cache to the application stack. Update the application to
point to the Redis cache endpoint instead of DynamoDB.

A

A. Configure Amazon DynamoDB Accelerator (DAX) for the new messages table. Update the code
to use the DAX endpoint.

30
Q

QUESTION 571
A company is creating an application that runs on containers in a VPC. The application stores
and accesses data in an Amazon S3 bucket. During the development phase, the application will
store and access 1 TB of data in Amazon S3 each day. The company wants to minimize costs
and wants to prevent traffic from traversing the internet whenever possible.
Which solution will meet these requirements?
A. Enable S3 Intelligent-Tiering for the S3 bucket
B. Enable S3 Transfer Acceleration for the S3 bucket
C. Create a gateway VPC endpoint for Amazon S3. Associate this endpoint with all route tables in
the VPC
D. Create an interface endpoint for Amazon S3 in the VPC. Associate this endpoint with all route
tables in the VPC

A

C. Create a gateway VPC endpoint for Amazon S3. Associate this endpoint with all route tables in
the VPC

Explanation:
Prevent traffic from traversing the internet = Gateway VPC endpoint for S3.

31
Q

QUESTION 570
A company has applications hosted on Amazon EC2 instances with IPv6 addresses. The
applications must initiate communications with other external applications using the internet. However, the company’s security policy states that any external service cannot initiate a
connection to the EC2 instances.
What should a solutions architect recommend to resolve this issue?
A. Create a NAT gateway and make it the destination of the subnet’s route table
B. Create an internet gateway and make it the destination of the subnet’s route table
C. Create a virtual private gateway and make it the destination of the subnet’s route table
D. Create an egress-only internet gateway and make it the destination of the subnet’s route table

A

D. Create an egress-only internet gateway and make it the destination of the subnet’s route table

Explanation:
An egress-only internet gateway (EIGW) is specifically designed for IPv6-only VPCs and provides
outbound IPv6 internet access while blocking inbound IPv6 traffic. It satisfies the requirement of
preventing external services from initiating connections to the EC2 instances while allowing the
instances to initiate outbound communications.
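
An illustrative boto3 sketch (the VPC and route table IDs are placeholders): create the egress-only internet gateway and route all outbound IPv6 traffic through it:

import boto3

ec2 = boto3.client("ec2")

eigw = ec2.create_egress_only_internet_gateway(VpcId="vpc-0123456789abcdef0")  # placeholder VPC
eigw_id = eigw["EgressOnlyInternetGateway"]["EgressOnlyInternetGatewayId"]

# Outbound-only IPv6 default route; connections initiated from the internet
# are blocked by the egress-only internet gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",          # placeholder route table
    DestinationIpv6CidrBlock="::/0",
    EgressOnlyInternetGatewayId=eigw_id,
)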

32
Q

QUESTION 569
A company stores raw collected data in an Amazon S3 bucket. The data is used for several types
of analytics on behalf of the company’s customers. The type of analytics requested determines
the access pattern on the S3 objects.
The company cannot predict or control the access pattern. The company wants to reduce its S3
costs.
Which solution will meet these requirements?
A. Use S3 replication to transition infrequently accessed objects to S3 Standard-Infrequent Access
(S3 Standard-IA)
B. Use S3 Lifecycle rules to transition objects from S3 Standard to Standard-Infrequent Access (S3
Standard-IA)
C. Use S3 Lifecycle rules to transition objects from S3 Standard to S3 Intelligent-Tiering
D. Use S3 Inventory to identify and transition objects that have not been accessed from S3 Standard
to S3 Intelligent-Tiering

A

C. Use S3 Lifecycle rules to transition objects from S3 Standard to S3 Intelligent-Tiering

33
Q

QUESTION 568
A company is developing a microservices application that will provide a search catalog for
customers. The company must use REST APIs to present the frontend of the application to users.
The REST APIs must access the backend services that the company hosts in containers in
private VPC subnets.
Which solution will meet these requirements?
A. Design a WebSocket API by using Amazon API Gateway. Host the application in Amazon Elastic
Container Service (Amazon ECS) in a private subnet. Create a private VPC link for API Gateway
to access Amazon ECS.
B. Design a REST API by using Amazon API Gateway. Host the application in Amazon Elastic
Container Service (Amazon ECS) in a private subnet. Create a private VPC link for API Gateway
to access Amazon ECS.
C. Design a WebSocket API by using Amazon API Gateway. Host the application in Amazon Elastic
Container Service (Amazon ECS) in a private subnet. Create a security group for API Gateway to
access Amazon ECS.
D. Design a REST API by using Amazon API Gateway. Host the application in Amazon Elastic
Container Service (Amazon ECS) in a private subnet. Create a security group for API Gateway to
access Amazon ECS.

A

B. Design a REST API by using Amazon API Gateway. Host the application in Amazon Elastic
Container Service (Amazon ECS) in a private subnet. Create a private VPC link for API Gateway
to access Amazon ECS.

Explanation:
https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-private-integration.html

34
Q

QUESTION 567
A company uses AWS Organizations. A member account has purchased a Compute Savings
Plan. Because of changes in the workloads inside the member account, the account no longer
receives the full benefit of the Compute Savings Plan commitment. The company uses less than
50% of its purchased compute power.
A. Turn on discount sharing from the Billing Preferences section of the account console in the
member account that purchased the Compute Savings Plan.
B. Turn on discount sharing from the Billing Preferences section of the account console in the
company’s Organizations management account.
C. Migrate additional compute workloads from another AWS account to the account that has the
Compute Savings Plan.
D. Sell the excess Savings Plan commitment in the Reserved Instance Marketplace.

A

B. Turn on discount sharing from the Billing Preferences section of the account console in the
company’s Organizations management account.

Explanation:
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/ri-turn-off.html
Sign in to the AWS Management Console and open the AWS Billing console at
https://console.aws.amazon.com/billing/
Ensure you’re logged in to the management account of your AWS Organizations.

35
Q

QUESTION 566
A company designed a stateless two-tier application that uses Amazon EC2 in a single
Availability Zone and an Amazon RDS Multi-AZ DB instance. New company management wants
to ensure the application is highly available.
What should a solutions architect do to meet this requirement?
A. Configure the application to use Multi-AZ EC2 Auto Scaling and create an Application Load
Balancer
B. Configure the application to take snapshots of the EC2 instances and send them to a different
AWS Region
C. Configure the application to use Amazon Route 53 latency-based routing to feed requests to the
application
D. Configure Amazon Route 53 rules to handle incoming requests and create a Multi-AZ Application
Load Balancer

A

A. Configure the application to use Multi-AZ EC2 Auto Scaling and create an Application Load
Balancer

Explanation:
By combining Multi-AZ EC2 Auto Scaling and an Application Load Balancer, you achieve high
availability for the EC2 instances hosting your stateless two-tier application.

36
Q

QUESTION 565
A company is developing an application to support customer demands. The company wants to
deploy the application on multiple Amazon EC2 Nitro-based instances within the same Availability
Zone. The company also wants to give the application the ability to write to multiple block storage
volumes in multiple EC2 Nitro-based instances simultaneously to achieve higher application
availability.
Which solution will meet these requirements?

A. Use General Purpose SSD (gp3) EBS volumes with Amazon Elastic Block Store (Amazon EBS)
Multi-Attach
B. Use Throughput Optimized HDD (st1) EBS volumes with Amazon Elastic Block Store (Amazon
EBS) Multi-Attach
C. Use Provisioned IOPS SSD (io2) EBS volumes with Amazon Elastic Block Store (Amazon EBS)
Multi-Attach
D. Use General Purpose SSD (gp2) EBS volumes with Amazon Elastic Block Store (Amazon EBS)
Multi-Attach

A

C. Use Provisioned IOPS SSD (io2) EBS volumes with Amazon Elastic Block Store (Amazon EBS)
Multi-Attach

Explanation:
Multi-Attach is supported exclusively on Provisioned IOPS SSD (io1 and io2) volumes.
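
For illustration (the AZ, size, IOPS, and instance IDs are placeholders), creating a Multi-Attach io2 volume and attaching it to two Nitro-based instances in the same Availability Zone with boto3:

import boto3

ec2 = boto3.client("ec2")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=200,
    VolumeType="io2",
    Iops=10000,
    MultiAttachEnabled=True,       # Multi-Attach is only supported on io1/io2 volumes
)

# In practice, wait for the volume to reach the 'available' state before attaching
for instance_id in ["i-0123456789abcdef0", "i-0fedcba9876543210"]:   # placeholder instance IDs
    ec2.attach_volume(VolumeId=volume["VolumeId"], InstanceId=instance_id, Device="/dev/sdf")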

37
Q

QUESTION 564
A company hosts an online shopping application that stores all orders in an Amazon RDS for
PostgreSQL Single-AZ DB instance. Management wants to eliminate single points of failure and
has asked a solutions architect to recommend an approach to minimize database downtime
without requiring any changes to the application code.
Which solution meets these requirements?
A. Convert the existing database instance to a Multi-AZ deployment by modifying the database
instance and specifying the Multi-AZ option.
B. Create a new RDS Multi-AZ deployment. Take a snapshot of the current RDS instance and
restore the new Multi-AZ deployment with the snapshot.
C. Create a read-only replica of the PostgreSQL database in another Availability Zone. Use Amazon
Route 53 weighted record sets to distribute requests across the databases.
D. Place the RDS for PostgreSQL database in an Amazon EC2 Auto Scaling group with a minimum
group size of two. Use Amazon Route 53 weighted record sets to distribute requests across
instances.

A

A. Convert the existing database instance to a Multi-AZ deployment by modifying the database
instance and specifying the Multi-AZ option.

Explanation:
Compared to other solutions that involve creating new instances, restoring snapshots, or setting
up replication manually, converting to a Multi-AZ deployment is a simpler and more streamlined
approach with lower overhead.
Overall, option A offers a cost-effective and efficient way to minimize database downtime without
requiring significant changes or additional complexities.
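
Option A is a single API call; a boto3 sketch (the instance identifier is a placeholder):

import boto3

rds = boto3.client("rds")
# Converting to Multi-AZ provisions a synchronous standby in another AZ;
# failover is handled by RDS with no application code changes.
rds.modify_db_instance(
    DBInstanceIdentifier="orders-postgres",   # placeholder identifier
    MultiAZ=True,
    ApplyImmediately=True,
)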

38
Q

QUESTION 563
An IoT company is releasing a mattress that has sensors to collect data about a user’s sleep. The sensors will send data to an Amazon S3 bucket. The sensors collect approximately 2 MB of data
every night for each mattress. The company must process and summarize the data for each
mattress. The results need to be available as soon as possible. Data processing will require 1 GB
of memory and will finish within 30 seconds.
Which solution will meet these requirements MOST cost-effectively?
A. Use AWS Glue with a Scala job
B. Use Amazon EMR with an Apache Spark script
C. Use AWS Lambda with a Python script
D. Use AWS Glue with a PySpark job

A

C. Use AWS Lambda with a Python script

Explanation:
AWS Lambda charges you based on the number of invocations and the execution time of your
function. Since the data processing job is relatively small (2 MB of data), Lambda is a cost-
effective choice. You only pay for the actual usage without the need to provision and maintain
infrastructure.

39
Q

QUESTION 562
A company has an application that processes customer orders. The company hosts the
application on an Amazon EC2 instance that saves the orders to an Amazon Aurora database.
Occasionally when traffic is high the workload does not process orders fast enough.
What should a solutions architect do to write the orders reliably to the database as quickly as
possible?
A. Increase the instance size of the EC2 instance when traffic is high. Write orders to Amazon
Simple Notification Service (Amazon SNS). Subscribe the database endpoint to the SNS topic.
B. Write orders to an Amazon Simple Queue Service (Amazon SQS) queue. Use EC2 instances in
an Auto Scaling group behind an Application Load Balancer to read from the SQS queue and
process orders into the database.
C. Write orders to Amazon Simple Notification Service (Amazon SNS). Subscribe the database
endpoint to the SNS topic. Use EC2 instances in an Auto Scaling group behind an Application
Load Balancer to read from the SNS topic.
D. Write orders to an Amazon Simple Queue Service (Amazon SQS) queue when the EC2 instance
reaches CPU threshold limits. Use scheduled scaling of EC2 instances in an Auto Scaling group
behind an Application Load Balancer to read from the SQS queue and process orders into the
database.

A

B. Write orders to an Amazon Simple Queue Service (Amazon SQS) queue. Use EC2 instances in
an Auto Scaling group behind an Application Load Balancer to read from the SQS queue and
process orders into the database.

Explanation:
By decoupling the write operation from the processing operation using SQS, you ensure that the
orders are reliably stored in the queue, regardless of the processing capacity of the EC2
instances. This allows the processing to be performed at a scalable rate based on the available
EC2 instances, improving the overall reliability and speed of order processing.

40
Q

QUESTION 561
A company is developing a mobile gaming app in a single AWS Region. The app runs on multiple
Amazon EC2 instances in an Auto Scaling group. The company stores the app data in Amazon
DynamoDB. The app communicates by using TCP traffic and UDP traffic between the users and the servers. The application will be used globally. The company wants to ensure the lowest
possible latency for all users.
Which solution will meet these requirements?
A. Use AWS Global Accelerator to create an accelerator. Create an Application Load Balancer
(ALB) behind an accelerator endpoint that uses Global Accelerator integration and listening on
the TCP and UDP ports. Update the Auto Scaling group to register instances on the ALB.
B. Use AWS Global Accelerator to create an accelerator. Create a Network Load Balancer (NLB)
behind an accelerator endpoint that uses Global Accelerator integration and listening on the TCP
and UDP ports. Update the Auto Scaling group to register instances on the NLB.
C. Create an Amazon CloudFront content delivery network (CDN) endpoint. Create a Network Load
Balancer (NLB) behind the endpoint and listening on the TCP and UDP ports. Update the Auto
Scaling group to register instances on the NLB. Update CloudFront to use the NLB as the origin.
D. Create an Amazon CloudFront content delivery network (CDN) endpoint. Create an Application
Load Balancer (ALB) behind the endpoint and listening on the TCP and UDP ports. Update the
Auto Scaling group to register instances on the ALB. Update CloudFront to use the ALB as the
origin.

A

B. Use AWS Global Accelerator to create an accelerator. Create a Network Load Balancer (NLB)
behind an accelerator endpoint that uses Global Accelerator integration and listening on the TCP
and UDP ports. Update the Auto Scaling group to register instances on the NLB.

Explanation:
AWS Global Accelerator with a Network Load Balancer is the right fit because the game uses both
TCP and UDP traffic. CloudFront and Application Load Balancers handle HTTP/HTTPS only,
whereas an NLB can listen on TCP and UDP ports, and Global Accelerator routes users onto the
AWS global network at the nearest edge location to minimize latency for players worldwide.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
41
Q

QUESTION 560
A company wants to securely exchange data between its software as a service (SaaS)
application Salesforce account and Amazon S3. The company must encrypt the data at rest by
using AWS Key Management Service (AWS KMS) customer managed keys (CMKs). The
company must also encrypt the data in transit. The company has enabled API access for the
Salesforce account.
Which solution will meet these requirements?
A. Create AWS Lambda functions to transfer the data securely from Salesforce to Amazon S3.
B. Create an AWS Step Functions workflow. Define the task to transfer the data securely from
Salesforce to Amazon S3.
C. Create Amazon AppFlow flows to transfer the data securely from Salesforce to Amazon S3.
D. Create a custom connector for Salesforce to transfer the data securely from Salesforce to
Amazon S3.

A

C. Create Amazon AppFlow flows to transfer the data securely from Salesforce to Amazon S3.

Explanation:
Amazon AppFlow is a fully managed integration service that allows you to securely transfer data
between different SaaS applications and AWS services. It provides built-in encryption options and
supports encryption in transit using SSL/TLS protocols. With AppFlow, you can configure the data
transfer flow from Salesforce to Amazon S3, ensuring data encryption at rest by utilizing AWS
KMS CMKs.

42
Q

QUESTION 559
A company uses AWS Organizations to run workloads within multiple AWS accounts. A tagging
policy adds department tags to AWS resources when the company creates tags.
An accounting team needs to determine spending on Amazon EC2 consumption. The accounting team must determine which departments are responsible for the costs regardless of AWS
account. The accounting team has access to AWS Cost Explorer for all AWS accounts within the
organization and needs to access all reports from Cost Explorer.
Which solution meets these requirements in the MOST operationally efficient way?
A. From the Organizations management account billing console, activate a user-defined cost
allocation tag named department. Create one cost report in Cost Explorer grouping by tag name,
and filter by EC2.
B. From the Organizations management account billing console, activate an AWS-defined cost
allocation tag named department. Create one cost report in Cost Explorer grouping by tag name,
and filter by EC2.
C. From the Organizations member account billing console, activate a user-defined cost allocation
tag named department. Create one cost report in Cost Explorer grouping by the tag name, and
filter by EC2.
D. From the Organizations member account billing console, activate an AWS-defined cost allocation
tag named department. Create one cost report in Cost Explorer grouping by tag name, and filter
by EC2.

A

A. From the Organizations management account billing console, activate a user-defined cost
allocation tag named department. Create one cost report in Cost Explorer grouping by tag name,
and filter by EC2.

Explanation:
By activating a user-defined cost allocation tag named “department” and creating a cost report in
Cost Explorer that groups by the tag name and filters by EC2, the accounting team will be able to
track and attribute costs to specific departments across all AWS accounts within the organization.
This approach allows for consistent cost allocation and reporting regardless of the AWS account
structure.
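
For reference, the same report can be pulled programmatically from the management account with the Cost Explorer API; the tag key, dates, and service filter below are illustrative.

import boto3

ce = boto3.client("ce")  # credentials from the Organizations management account

report = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "department"}],
    Filter={"Dimensions": {"Key": "SERVICE",
                           "Values": ["Amazon Elastic Compute Cloud - Compute"]}},
)
for group in report["ResultsByTime"][0]["Groups"]:
    # Keys look like "department$Engineering"; Amount is that department's EC2 cost.
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])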

43
Q

QUESTION 558
A solutions architect is designing a REST API in Amazon API Gateway for a cash payback
service. The application requires 1 GB of memory and 2 GB of storage for its computation
resources. The application will require that the data is in a relational format.
Which additional combination of AWS services will meet these requirements with the LEAST
administrative effort? (Choose two.)
A. Amazon EC2
B. AWS Lambda
C. Amazon RDS
D. Amazon DynamoDB
E. Amazon Elastic Kubernetes Services (Amazon EKS)

A

B. AWS Lambda
C. Amazon RDS

Explanation:
The application requires data in a relational format, so DynamoDB is out and Amazon RDS is the
right data store. AWS Lambda is serverless and provides the 1 GB of memory the computation
needs, so the combination requires the least administrative effort.

44
Q

QUESTION 557
A company that uses AWS is building an application to transfer data to a product manufacturer.
The company has its own identity provider (IdP). The company wants the IdP to authenticate
application users while the users use the application to transfer data. The company must use
Applicability Statement 2 (AS2) protocol.
Which solution will meet these requirements?
A. Use AWS DataSync to transfer the data. Create an AWS Lambda function for IdP authentication.
B. Use Amazon AppFlow flows to transfer the data. Create an Amazon Elastic Container Service
(Amazon ECS) task for IdP authentication.
C. Use AWS Transfer Family to transfer the data. Create an AWS Lambda function for IdP
authentication.
D. Use AWS Storage Gateway to transfer the data. Create an Amazon Cognito identity pool for IdP
authentication.

A

C. Use AWS Transfer Family to transfer the data. Create an AWS Lambda function for IdP
authentication.

Explanation:
To authenticate your users, you can use your existing identity provider with AWS Transfer Family.
You integrate your identity provider using an AWS Lambda function, which authenticates and
authorizes your users for access to Amazon S3 or Amazon Elastic File System (Amazon EFS).
https://docs.aws.amazon.com/transfer/latest/userguide/custom-identity-provider-users.html
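
A minimal sketch of that Lambda contract, assuming a hypothetical validate_with_idp() call to the company's IdP and placeholder role and bucket values: an empty response rejects the login, and a response containing a Role authorizes the user.

import os

ROLE_ARN = os.environ["TRANSFER_USER_ROLE_ARN"]  # hypothetical environment variable

def validate_with_idp(user: str, password: str) -> bool:
    # Placeholder: call the company's identity provider here.
    return bool(user and password)

def lambda_handler(event, context):
    user = event["username"]
    if not validate_with_idp(user, event.get("password", "")):
        return {}  # empty response tells Transfer Family the authentication failed

    # A non-empty response authorizes the user and scopes their access.
    return {
        "Role": ROLE_ARN,
        "HomeDirectory": f"/my-transfer-bucket/{user}",  # hypothetical bucket/prefix
    }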

45
Q

QUESTION 556
A company runs applications on Amazon EC2 instances in one AWS Region. The company
wants to back up the EC2 instances to a second Region. The company also wants to provision
EC2 resources in the second Region and manage the EC2 instances centrally from one AWS
account.
Which solution will meet these requirements MOST cost-effectively?
A. Create a disaster recovery (DR) plan that has a similar number of EC2 instances in the second
Region. Configure data replication.
B. Create point-in-time Amazon Elastic Block Store (Amazon EBS) snapshots of the EC2 instances.
Copy the snapshots to the second Region periodically.
C. Create a backup plan by using AWS Backup. Configure cross-Region backup to the second
Region for the EC2 instances.
D. Deploy a similar number of EC2 instances in the second Region. Use AWS DataSync to transfer
the data from the source Region to the second Region.

A

C. Create a backup plan by using AWS Backup. Configure cross-Region backup to the second
Region for the EC2 instances.

Explanation:
Using AWS Backup, you can create backup plans that automate the backup process for your
EC2 instances. By configuring cross-Region backup, you can ensure that backups are replicated to the second Region, providing a disaster recovery capability. This solution is cost-effective as it
leverages AWS Backup’s built-in features and eliminates the need for manual snapshot
management or deploying and managing additional EC2 instances in the second Region.
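
A hedged sketch of such a plan with boto3 (vault names, ARNs, schedule, and the tag used to select instances are all hypothetical); the CopyActions block is what produces the cross-Region copy.

import boto3

backup = boto3.client("backup")

plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "ec2-daily-cross-region",
        "Rules": [{
            "RuleName": "daily",
            "TargetBackupVaultName": "primary-vault",
            "ScheduleExpression": "cron(0 5 * * ? *)",
            "Lifecycle": {"DeleteAfterDays": 35},
            "CopyActions": [{
                "DestinationBackupVaultArn":
                    "arn:aws:backup:us-west-2:123456789012:backup-vault:dr-vault",
                "Lifecycle": {"DeleteAfterDays": 35},
            }],
        }],
    }
)

# Assign EC2 instances to the plan by tag (tag key/value are placeholders).
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "ec2-by-tag",
        "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
        "ListOfTags": [{"ConditionType": "STRINGEQUALS",
                        "ConditionKey": "backup", "ConditionValue": "true"}],
    },
)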

46
Q

QUESTION 555
A company uses AWS Organizations. The company wants to operate some of its AWS accounts
with different budgets. The company wants to receive alerts and automatically prevent
provisioning of additional resources on AWS accounts when the allocated budget threshold is met
during a specific period.
Which combination of solutions will meet these requirements? (Choose three.)
A. Use AWS Budgets to create a budget. Set the budget amount under the Cost and Usage Reports
section of the required AWS accounts.
B. Use AWS Budgets to create a budget. Set the budget amount under the Billing dashboards of the
required AWS accounts.
C. Create an IAM user for AWS Budgets to run budget actions with the required permissions.
D. Create an IAM role for AWS Budgets to run budget actions with the required permissions.
E. Add an alert to notify the company when each account meets its budget threshold. Add a budget
action that selects the IAM identity created with the appropriate config rule to prevent provisioning
of additional resources.
F. Add an alert to notify the company when each account meets its budget threshold. Add a budget
action that selects the IAM identity created with the appropriate service control policy (SCP) to
prevent provisioning of additional resources.

A

B. Use AWS Budgets to create a budget. Set the budget amount under the Billing dashboards of the
required AWS accounts.
D. Create an IAM role for AWS Budgets to run budget actions with the required permissions.
F. Add an alert to notify the company when each account meets its budget threshold. Add a budget
action that selects the IAM identity created with the appropriate service control policy (SCP) to
prevent provisioning of additional resources.

Explanation:
https://docs.aws.amazon.com/ja_jp/awsaccountbilling/latest/aboutv2/view-billing-dashboard.html

47
Q

QUESTION 554
A company has resources across multiple AWS Regions and accounts. A newly hired solutions
architect discovers a previous employee did not provide details about the resources inventory.
The solutions architect needs to build and map the relationship details of the various workloads
across all accounts.
Which solution will meet these requirements in the MOST operationally efficient way?
A. Use AWS Systems Manager Inventory to generate a map view from the detailed view report.
B. Use AWS Step Functions to collect workload details. Build architecture diagrams of the workloads
manually.
C. Use Workload Discovery on AWS to generate architecture diagrams of the workloads.
D. Use AWS X-Ray to view the workload details. Build architecture diagrams with relationships.

A

C. Use Workload Discovery on AWS to generate architecture diagrams of the workloads.

Explanation:
Workload Discovery on AWS is a service that helps visualize and understand the architecture of
your workloads across multiple AWS accounts and Regions. It automatically discovers and maps
the relationships between resources, providing an accurate representation of the architecture.

48
Q

QUESTION 553
A company wants to implement a backup strategy for Amazon EC2 data and multiple Amazon S3
buckets. Because of regulatory requirements, the company must retain backup files for a specific
time period. The company must not alter the files for the duration of the retention period.
Which solution will meet these requirements?
A. Use AWS Backup to create a backup vault that has a vault lock in governance mode. Create the
required backup plan.
B. Use Amazon Data Lifecycle Manager to create the required automated snapshot policy.
C. Use Amazon S3 File Gateway to create the backup. Configure the appropriate S3 Lifecycle
management.
D. Use AWS Backup to create a backup vault that has a vault lock in compliance mode. Create the
required backup plan.

A

D. Use AWS Backup to create a backup vault that has a vault lock in compliance mode. Create the
required backup plan.

Explanation:
https://docs.aws.amazon.com/aws-backup/latest/devguide/vault-lock.html
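
As an illustration (the vault name and retention periods are hypothetical), compliance mode is applied by setting a vault lock with a short cooling-off window, after which the lock can no longer be removed:

import boto3

backup = boto3.client("backup")

# After the 3-day ChangeableForDays window, the lock becomes immutable and backups
# cannot be deleted or altered before MinRetentionDays has passed.
backup.put_backup_vault_lock_configuration(
    BackupVaultName="regulatory-vault",   # hypothetical vault
    MinRetentionDays=365,
    MaxRetentionDays=3650,
    ChangeableForDays=3,
)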

49
Q

QUESTION 552
A company runs a Java-based job on an Amazon EC2 instance. The job runs every hour and
takes 10 seconds to run. The job runs on a scheduled interval and consumes 1 GB of memory.
The CPU utilization of the instance is low except for short surges during which the job uses the maximum CPU available. The company wants to optimize the costs to run the job.
Which solution will meet these requirements?
A. Use AWS App2Container (A2C) to containerize the job. Run the job as an Amazon Elastic
Container Service (Amazon ECS) task on AWS Fargate with 0.5 virtual CPU (vCPU) and 1 GB of
memory.
B. Copy the code into an AWS Lambda function that has 1 GB of memory. Create an Amazon
EventBridge scheduled rule to run the code each hour.
C. Use AWS App2Container (A2C) to containerize the job. Install the container in the existing
Amazon Machine Image (AMI). Ensure that the schedule stops the container when the task
finishes.
D. Configure the existing schedule to stop the EC2 instance at the completion of the job and restart
the EC2 instance when the next job starts.

A

B. Copy the code into an AWS Lambda function that has 1 GB of memory. Create an Amazon
EventBridge scheduled rule to run the code each hour.

Explanation:
https://docs.aws.amazon.com/lambda/latest/operatorguide/computing-power.html
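
A small sketch of the schedule wiring (the function name and ARN are placeholders): an EventBridge rule fires every hour and invokes the Lambda function.

import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:hourly-job"  # hypothetical

rule = events.put_rule(Name="hourly-job-schedule",
                       ScheduleExpression="rate(1 hour)",
                       State="ENABLED")

# Allow EventBridge to invoke the function, then register it as the rule target.
lambda_client.add_permission(
    FunctionName="hourly-job",
    StatementId="allow-eventbridge",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)
events.put_targets(Rule="hourly-job-schedule", Targets=[{"Id": "1", "Arn": FUNCTION_ARN}])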

50
Q

QUESTION 551
A company is migrating its applications and databases to the AWS Cloud. The company will use
Amazon Elastic Container Service (Amazon ECS), AWS Direct Connect, and Amazon RDS.
Which activities will be managed by the company’s operational team? (Choose three.)
A. Management of the Amazon RDS infrastructure layer, operating system, and platforms
B. Creation of an Amazon RDS DB instance and configuring the scheduled maintenance window
C. Configuration of additional software components on Amazon ECS for monitoring, patch
management, log management, and host intrusion detection
D. Installation of patches for all minor and major database versions for Amazon RDS
E. Ensure the physical security of the Amazon RDS infrastructure in the data center
F. Encryption of the data that moves in transit through Direct Connect

A

B. Creation of an Amazon RDS DB instance and configuring the scheduled maintenance window
C. Configuration of additional software components on Amazon ECS for monitoring, patch
management, log management, and host intrusion detection
F. Encryption of the data that moves in transit through Direct Connect

51
Q

QUESTION 550
A company has a three-tier web application that is in a single server. The company wants to
migrate the application to the AWS Cloud. The company also wants the application to align with
the AWS Well-Architected Framework and to be consistent with AWS recommended best
practices for security, scalability, and resiliency.
Which combination of solutions will meet these requirements? (Choose three.)
A. Create a VPC across two Availability Zones with the application’s existing architecture. Host the
application with existing architecture on an Amazon EC2 instance in a private subnet in each
Availability Zone with EC2 Auto Scaling groups. Secure the EC2 instance with security groups
and network access control lists (network ACLs).
B. Set up security groups and network access control lists (network ACLs) to control access to the
database layer. Set up a single Amazon RDS database in a private subnet.
C. Create a VPC across two Availability Zones. Refactor the application to host the web tier,
application tier, and database tier. Host each tier on its own private subnet with Auto Scaling
groups for the web tier and application tier.
D. Use a single Amazon RDS database. Allow database access only from the application tier
security group.
E. Use Elastic Load Balancers in front of the web tier. Control access by using security groups
containing references to each layer’s security groups.
F. Use an Amazon RDS database Multi-AZ cluster deployment in private subnets. Allow database
access only from application tier security groups.

A

C. Create a VPC across two Availability Zones. Refactor the application to host the web tier,
application tier, and database tier. Host each tier on its own private subnet with Auto Scaling
groups for the web tier and application tier.
E. Use Elastic Load Balancers in front of the web tier. Control access by using security groups
containing references to each layer’s security groups.
F. Use an Amazon RDS database Multi-AZ cluster deployment in private subnets. Allow database
access only from application tier security groups.

52
Q

QUESTION 549
A company runs its application on an Oracle database. The company plans to quickly migrate to
AWS because of limited resources for the database, backup administration, and data center
maintenance. The application uses third-party database features that require privileged access.
Which solution will help the company migrate the database to AWS MOST cost-effectively?
A. Migrate the database to Amazon RDS for Oracle. Replace third-party features with cloud
services.
B. Migrate the database to Amazon RDS Custom for Oracle. Customize the database settings to
support third-party features.
C. Migrate the database to an Amazon EC2 Amazon Machine Image (AMI) for Oracle. Customize
the database settings to support third-party features.
D. Migrate the database to Amazon RDS for PostgreSQL by rewriting the application code to
remove dependency on Oracle APEX.

A

B. Migrate the database to Amazon RDS Custom for Oracle. Customize the database settings to
support third-party features.

Explanation:
https://aws.amazon.com/about-aws/whats-new/2021/10/amazon-rds-custom-oracle/

53
Q

QUESTION 548
A company has two VPCs named Management and Production. The Management VPC uses
VPNs through a customer gateway to connect to a single device in the data center. The
Production VPC uses a virtual private gateway with two attached AWS Direct Connect
connections. The Management and Production VPCs both use a single VPC peering connection
to allow communication between the applications.
What should a solutions architect do to mitigate any single point of failure in this architecture?
A. Add a set of VPNs between the Management and Production VPCs.
B. Add a second virtual private gateway and attach it to the Management VPC.
C. Add a second set of VPNs to the Management VPC from a second customer gateway device.
D. Add a second VPC peering connection between the Management VPC and the Production VPC.

A

C. Add a second set of VPNs to the Management VPC from a second customer gateway device.

Explanation:
Redundant VPN connections: Instead of relying on a single device in the data center, the
Management VPC should have redundant VPN connections established through multiple
customer gateways. This will ensure high availability and fault tolerance in case one of the VPN
connections or customer gateways fails.

54
Q

QUESTION 547
A company has a stateless web application that runs on AWS Lambda functions that are invoked
by Amazon API Gateway. The company wants to deploy the application across multiple AWS
Regions to provide Regional failover capabilities.

What should a solutions architect do to route traffic to multiple Regions?
A. Create Amazon Route 53 health checks for each Region. Use an active-active failover
configuration.
B. Create an Amazon CloudFront distribution with an origin for each Region. Use CloudFront health
checks to route traffic.
C. Create a transit gateway. Attach the transit gateway to the API Gateway endpoint in each Region.
Configure the transit gateway to route requests.
D. Create an Application Load Balancer in the primary Region. Set the target group to point to the
API Gateway endpoint hostnames in each Region.

A

A. Create Amazon Route 53 health checks for each Region. Use an active-active failover
configuration.

Explanation:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html

55
Q

QUESTION 546
A company stores data in PDF format in an Amazon S3 bucket. The company must follow a legal
requirement to retain all new and existing data in Amazon S3 for 7 years.
Which solution will meet these requirements with the LEAST operational overhead?
A. Turn on the S3 Versioning feature for the S3 bucket. Configure S3 Lifecycle to delete the data
after 7 years. Configure multi-factor authentication (MFA) delete for all S3 objects.
B. Turn on S3 Object Lock with governance retention mode for the S3 bucket. Set the retention
period to expire after 7 years. Recopy all existing objects to bring the existing data into
compliance.
C. Turn on S3 Object Lock with compliance retention mode for the S3 bucket. Set the retention
period to expire after 7 years. Recopy all existing objects to bring the existing data into
compliance.
D. Turn on S3 Object Lock with compliance retention mode for the S3 bucket. Set the retention
period to expire after 7 years. Use S3 Batch Operations to bring the existing data into compliance.

A

D. Turn on S3 Object Lock with compliance retention mode for the S3 bucket. Set the retention
period to expire after 7 years. Use S3 Batch Operations to bring the existing data into compliance.

Explanation:
To bring the existing objects into compliance, S3 Batch Operations can copy them in place so that
the Object Lock retention applies to every object, which is far more practical than recopying
objects manually, especially for large amounts of data. Compliance mode, unlike governance
mode, prevents any user (including the root user) from shortening the retention period or deleting
the objects.
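
For illustration, a default compliance-mode retention can be set on the bucket like this (the bucket name is hypothetical, and Object Lock must have been enabled when the bucket was created); the S3 Batch Operations copy job then brings the pre-existing objects under the same retention.

import boto3

s3 = boto3.client("s3")

# Every new object now receives a 7-year COMPLIANCE retention that cannot be shortened or removed.
s3.put_object_lock_configuration(
    Bucket="pdf-records-archive",  # hypothetical bucket
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
    },
)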

56
Q

QUESTION 545
A company is storing 700 terabytes of data on a large network-attached storage (NAS) system in
its corporate data center. The company has a hybrid environment with a 10 Gbps AWS Direct
Connect connection.
After an audit from a regulator, the company has 90 days to move the data to the cloud. The
company needs to move the data efficiently and without disruption. The company still needs to be
able to access and update the data during the transfer window.
Which solution will meet these requirements?
A. Create an AWS DataSync agent in the corporate data center. Create a data transfer task. Start
the transfer to an Amazon S3 bucket.
B. Back up the data to AWS Snowball Edge Storage Optimized devices. Ship the devices to an
AWS data center. Mount a target Amazon S3 bucket on the on-premises file system.
C. Use rsync to copy the data directly from local storage to a designated Amazon S3 bucket over the
Direct Connect connection.
D. Back up the data on tapes. Ship the tapes to an AWS data center. Mount a target Amazon S3
bucket on the on-premises file system.

A

A. Create an AWS DataSync agent in the corporate data center. Create a data transfer task. Start
the transfer to an Amazon S3 bucket.

Explanation:
By leveraging AWS DataSync in combination with AWS Direct Connect, the company can
efficiently and securely transfer its 700 terabytes of data to an Amazon S3 bucket without
disruption. The solution allows continued access and updates to the data during the transfer
window, ensuring business continuity throughout the migration process.
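
A rough sketch of the task setup once the on-premises agent and the two DataSync locations exist (both location ARNs are placeholders); repeated executions before cutover pick up files that changed since the previous run.

import boto3

datasync = boto3.client("datasync")

task = datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:123456789012:location/loc-0aaa1111bbbb2222c",
    DestinationLocationArn="arn:aws:datasync:us-east-1:123456789012:location/loc-0ddd3333eeee4444f",
    Name="nas-to-s3-migration",
    Options={"TransferMode": "CHANGED", "VerifyMode": "ONLY_FILES_TRANSFERRED"},
)

# Re-run executions during the 90-day window to transfer only what changed since the last pass.
datasync.start_task_execution(TaskArn=task["TaskArn"])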

57
Q

QUESTION 544
A company has hired a solutions architect to design a reliable architecture for its application. The
application consists of one Amazon RDS DB instance and two manually provisioned Amazon
EC2 instances that run web servers. The EC2 instances are located in a single Availability Zone.
An employee recently deleted the DB instance, and the application was unavailable for 24 hours
as a result. The company is concerned with the overall reliability of its environment.
What should the solutions architect do to maximize reliability of the application’s infrastructure?
A. Delete one EC2 instance and enable termination protection on the other EC2 instance. Update
the DB instance to be Multi-AZ, and enable deletion protection.
B. Update the DB instance to be Multi-AZ, and enable deletion protection. Place the EC2 instances
behind an Application Load Balancer, and run them in an EC2 Auto Scaling group across multiple
Availability Zones.
C. Create an additional DB instance along with an Amazon API Gateway and an AWS Lambda
function. Configure the application to invoke the Lambda function through API Gateway. Have the
Lambda function write the data to the two DB instances.
D. Place the EC2 instances in an EC2 Auto Scaling group that has multiple subnets located in
multiple Availability Zones. Use Spot Instances instead of On-Demand Instances. Set up Amazon
CloudWatch alarms to monitor the health of the instances Update the DB instance to be Multi-AZ,
and enable deletion protection.

A

B. Update the DB instance to be Multi-AZ, and enable deletion protection. Place the EC2 instances
behind an Application Load Balancer, and run them in an EC2 Auto Scaling group across multiple
Availability Zones.

Explanation:
A Multi-AZ DB instance protects against instance and Availability Zone failures, and deletion
protection prevents the kind of accidental deletion that caused the 24-hour outage. Placing the
EC2 instances behind an Application Load Balancer in an Auto Scaling group across multiple
Availability Zones removes the web tier's single point of failure.
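
The database half of that change is a single API call; the instance identifier below is hypothetical.

import boto3

rds = boto3.client("rds")

# Convert the existing instance to Multi-AZ and protect it from accidental deletion.
rds.modify_db_instance(
    DBInstanceIdentifier="orders-db",   # hypothetical identifier
    MultiAZ=True,
    DeletionProtection=True,
    ApplyImmediately=True,
)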

58
Q

QUESTION 543
A company wants to host a scalable web application on AWS. The application will be accessed
by users from different geographic regions of the world. Application users will be able to
download and upload unique data up to gigabytes in size. The development team wants a cost-
effective solution to minimize upload and download latency and maximize performance.
What should a solutions architect do to accomplish this?
A. Use Amazon S3 with Transfer Acceleration to host the application.
B. Use Amazon S3 with CacheControl headers to host the application.
C. Use Amazon EC2 with Auto Scaling and Amazon CloudFront to host the application.
D. Use Amazon EC2 with Auto Scaling and Amazon ElastiCache to host the application.

A

A. Use Amazon S3 with Transfer Acceleration to host the application.

Explanation:
https://docs.aws.amazon.com/ja_jp/AmazonS3/latest/userguide/transfer-acceleration.html
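
As a sketch (bucket and file names are hypothetical), Transfer Acceleration is enabled on the bucket once, and clients then use the accelerate endpoint so uploads and downloads enter AWS at the nearest edge location.

import boto3
from botocore.config import Config

s3 = boto3.client("s3")
s3.put_bucket_accelerate_configuration(
    Bucket="global-upload-bucket",  # hypothetical bucket
    AccelerateConfiguration={"Status": "Enabled"},
)

# Clients transfer through the accelerate endpoint for lower upload/download latency.
accel_s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
accel_s3.upload_file("dataset.bin", "global-upload-bucket", "uploads/dataset.bin")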

59
Q

QUESTION 542
A company stores several petabytes of data across multiple AWS accounts. The company uses
AWS Lake Formation to manage its data lake. The company’s data science team wants to
securely share selective data from its accounts with the company’s engineering team for
analytical purposes.
Which solution will meet these requirements with the LEAST operational overhead?
A. Copy the required data to a common account. Create an IAM access role in that account. Grant
access by specifying a permission policy that includes users from the engineering team accounts
as trusted entities.
B. Use the Lake Formation permissions Grant command in each account where the data is stored to
allow the required engineering team users to access the data.
C. Use AWS Data Exchange to privately publish the required data to the required engineering team
accounts.
D. Use Lake Formation tag-based access control to authorize and grant cross-account permissions
for the required data to the engineering team accounts.

A

D. Use Lake Formation tag-based access control to authorize and grant cross-account permissions
for the required data to the engineering team accounts.

Explanation:
By utilizing Lake Formation’s tag-based access control, you can define tags and tag-based
policies to grant selective access to the required data for the engineering team accounts. This
approach allows you to control access at a granular level without the need to copy or move the data to a common account or manage permissions individually in each account. It provides a
centralized and scalable solution for securely sharing data across accounts with minimal
operational overhead.
https://aws.amazon.com/blogs/big-data/securely-share-your-data-across-aws-accounts-using-aws-lake-formation/

60
Q

QUESTION 541
A company hosts a multi-tier web application on Amazon Linux Amazon EC2 instances behind an
Application Load Balancer. The instances run in an Auto Scaling group across multiple
Availability Zones. The company observes that the Auto Scaling group launches more On-
Demand Instances when the application’s end users access high volumes of static web content.
The company wants to optimize cost.
What should a solutions architect do to redesign the application MOST cost-effectively?
A. Update the Auto Scaling group to use Reserved Instances instead of On-Demand Instances.
B. Update the Auto Scaling group to scale by launching Spot Instances instead of On-Demand
Instances.
C. Create an Amazon CloudFront distribution to host the static web contents from an Amazon S3
bucket.
D. Create an AWS Lambda function behind an Amazon API Gateway API to host the static website
contents.

A

C. Create an Amazon CloudFront distribution to host the static web contents from an Amazon S3
bucket.

Explanation:
By leveraging Amazon CloudFront, you can cache and serve the static web content from edge
locations worldwide, reducing the load on your EC2 instances. This can help lower the number of
On-Demand Instances required to handle high volumes of static web content requests. Storing
the static content in an Amazon S3 bucket and using CloudFront as a content delivery network
(CDN) improves performance and reduces costs by reducing the load on your EC2 instances.

61
Q

QUESTION 540
A company used an Amazon RDS for MySQL DB instance during application testing. Before
terminating the DB instance at the end of the test cycle, a solutions architect created two
backups. The solutions architect created the first backup by using the mysqldump utility to create
a database dump. The solutions architect created the second backup by enabling the final DB
snapshot option on RDS termination.
The company is now planning for a new test cycle and wants to create a new DB instance from
the most recent backup. The company has chosen a MySQL-compatible edition of Amazon
Aurora to host the DB instance.
Which solutions will create the new DB instance? (Choose two.)
A. Import the RDS snapshot directly into Aurora.
B. Upload the RDS snapshot to Amazon S3. Then import the RDS snapshot into Aurora.
C. Upload the database dump to Amazon S3. Then import the database dump into Aurora.
D. Use AWS Database Migration Service (AWS DMS) to import the RDS snapshot into Aurora.
E. Upload the database dump to Amazon S3. Then use AWS Database Migration Service (AWS
DMS) to import the database dump into Aurora.

A

A. Import the RDS snapshot directly into Aurora.
C. Upload the database dump to Amazon S3. Then import the database dump into Aurora.

Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Migrating.RDSMySQL.Import.html

62
Q

QUESTION 539
A solutions architect configured a VPC that has a small range of IP addresses. The number of
Amazon EC2 instances that are in the VPC is increasing, and there is an insufficient number of IP
addresses for future workloads.
Which solution resolves this issue with the LEAST operational overhead?
A. Add an additional IPv4 CIDR block to increase the number of IP addresses and create additional
subnets in the VPC. Create new resources in the new subnets by using the new CIDR.
B. Create a second VPC with additional subnets. Use a peering connection to connect the second
VPC with the first VPC. Update the routes and create new resources in the subnets of the second
VPC.
C. Use AWS Transit Gateway to add a transit gateway and connect a second VPC with the first
VPC. Update the routes of the transit gateway and VPCs. Create new resources in the subnets of
the second VPC.
D. Create a second VPC. Create a Site-to-Site VPN connection between the first VPC and the
second VPC by using a VPN-hosted solution on Amazon EC2 and a virtual private gateway.
Update the routes between the VPCs to send traffic through the VPN. Create new resources in the
subnets of the second VPC.

A

A. Add an additional IPv4 CIDR block to increase the number of IP addresses and create additional
subnets in the VPC. Create new resources in the new subnets by using the new CIDR.

Explanation:
You assign a single CIDR IP address range as the primary CIDR block when you create a VPC
and can add up to four secondary CIDR blocks after creation of the VPC.
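
A minimal sketch with boto3 (the VPC ID, CIDR ranges, and Availability Zone are hypothetical): associate the secondary CIDR block, then create new subnets from it.

import boto3

ec2 = boto3.client("ec2")
VPC_ID = "vpc-0abc1234def567890"  # hypothetical VPC

# Attach a secondary CIDR block, then carve new subnets out of it for future workloads.
ec2.associate_vpc_cidr_block(VpcId=VPC_ID, CidrBlock="10.1.0.0/16")
ec2.create_subnet(VpcId=VPC_ID, CidrBlock="10.1.0.0/20", AvailabilityZone="us-east-1a")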

63
Q

QUESTION 538
A company wants to share accounting data with an external auditor. The data is stored in an
Amazon RDS DB instance that resides in a private subnet. The auditor has its own AWS account
and requires its own copy of the database.
What is the MOST secure way for the company to share the database with the auditor?
A. Create a read replica of the database. Configure IAM standard database authentication to grant
the auditor access.
B. Export the database contents to text files. Store the files in an Amazon S3 bucket. Create a new
IAM user for the auditor. Grant the user access to the S3 bucket.
C. Copy a snapshot of the database to an Amazon S3 bucket. Create an IAM user. Share the user’s
keys with the auditor to grant access to the object in the S3 bucket.
D. Create an encrypted snapshot of the database. Share the snapshot with the auditor. Allow access
to the AWS Key Management Service (AWS KMS) encryption key.

A

D. Create an encrypted snapshot of the database. Share the snapshot with the auditor. Allow access
to the AWS Key Management Service (AWS KMS) encryption key.

Explanation:
The most secure way for the company to share the database with the auditor is option D: Create
an encrypted snapshot of the database, share the snapshot with the auditor, and allow access to
the AWS Key Management Service (AWS KMS) encryption key.
By creating an encrypted snapshot, the company ensures that the database data is protected at
rest. Sharing the encrypted snapshot with the auditor allows them to have their own copy of the database securely.
In addition, granting access to the AWS KMS encryption key ensures that the auditor has the
necessary permissions to decrypt and access the encrypted snapshot. This allows the auditor to
restore the snapshot and access the data securely.
This approach provides both data protection and access control, ensuring that the database is
securely shared with the auditor while maintaining the confidentiality and integrity of the data.
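
A hedged sketch of the two API calls involved (the snapshot name, key ARN, and auditor account ID are placeholders): share the encrypted snapshot, then grant the auditor's account use of the customer managed KMS key so it can copy and restore the snapshot.

import boto3

rds = boto3.client("rds")
kms = boto3.client("kms")

AUDITOR_ACCOUNT = "210987654321"  # hypothetical auditor account ID

# Share the encrypted manual snapshot with the auditor's account.
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="accounting-db-snapshot",   # hypothetical snapshot
    AttributeName="restore",
    ValuesToAdd=[AUDITOR_ACCOUNT],
)

# Grant the auditor's account use of the customer managed KMS key that encrypts the snapshot.
kms.create_grant(
    KeyId="arn:aws:kms:us-east-1:123456789012:key/1111aaaa-22bb-33cc-44dd-5555eeee6666",
    GranteePrincipal=f"arn:aws:iam::{AUDITOR_ACCOUNT}:root",
    Operations=["Decrypt", "DescribeKey", "CreateGrant"],
)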

64
Q

QUESTION 537
A company operates an ecommerce website on Amazon EC2 instances behind an Application
Load Balancer (ALB) in an Auto Scaling group. The site is experiencing performance issues
related to a high request rate from illegitimate external systems with changing IP addresses. The
security team is worried about potential DDoS attacks against the website. The company must
block the illegitimate incoming requests in a way that has a minimal impact on legitimate users.
What should a solutions architect recommend?
A. Deploy Amazon Inspector and associate it with the ALB.
B. Deploy AWS WAF, associate it with the ALB, and configure a rate-limiting rule.
C. Deploy rules to the network ACLs associated with the ALB to block the incoming traffic.
D. Deploy Amazon GuardDuty and enable rate-limiting protection when configuring GuardDuty.

A

B. Deploy AWS WAF, associate it with the ALB, and configure a rate-limiting rule.

Explanation:
AWS WAF (Web Application Firewall) is a service that provides protection for web applications
against common web exploits. By associating AWS WAF with the Application Load Balancer
(ALB), you can inspect incoming traffic and define rules to allow or block requests based on
various criteria.
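
A sketch of a rate-based rule attached to the ALB (names, the 2,000-request limit, and the load balancer ARN are hypothetical); each source IP that exceeds the limit in a 5-minute window is blocked while legitimate users are unaffected.

import boto3

wafv2 = boto3.client("wafv2")

acl = wafv2.create_web_acl(
    Name="ecommerce-rate-limit",
    Scope="REGIONAL",                       # REGIONAL scope is required for an ALB
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "block-high-request-rates",
        "Priority": 0,
        # Block any single IP that exceeds 2,000 requests in a 5-minute window (tune the limit).
        "Statement": {"RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}},
        "Action": {"Block": {}},
        "VisibilityConfig": {"SampledRequestsEnabled": True,
                             "CloudWatchMetricsEnabled": True,
                             "MetricName": "rateLimit"},
    }],
    VisibilityConfig={"SampledRequestsEnabled": True,
                      "CloudWatchMetricsEnabled": True,
                      "MetricName": "ecommerceAcl"},
)

# Attach the web ACL to the Application Load Balancer (ARN is hypothetical).
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/shop/abc123",
)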

65
Q

QUESTION 536
A company moved its on-premises PostgreSQL database to an Amazon RDS for PostgreSQL DB
instance. The company successfully launched a new product. The workload on the database has
increased. The company wants to accommodate the larger workload without adding
infrastructure.
Which solution will meet these requirements MOST cost-effectively?
A. Buy reserved DB instances for the total workload. Make the Amazon RDS for PostgreSQL DB
instance larger.
B. Make the Amazon RDS for PostgreSQL DB instance a Multi-AZ DB instance.
C. Buy reserved DB instances for the total workload. Add another Amazon RDS for PostgreSQL DB instance.
D. Make the Amazon RDS for PostgreSQL DB instance an on-demand DB instance.

A

A. Buy reserved DB instances for the total workload. Make the Amazon RDS for PostgreSQL DB
instance larger.

Explanation:
“Without adding infrastructure” means scaling vertically by choosing a larger instance, and
“MOST cost-effectively” points to Reserved Instances for the now-steady workload.

66
Q

QUESTION 535
A company needs to migrate a MySQL database from its on-premises data center to AWS within
2 weeks. The database is 20 TB in size. The company wants to complete the migration with
minimal downtime.
Which solution will migrate the database MOST cost-effectively?
A. Order an AWS Snowball Edge Storage Optimized device. Use AWS Database Migration Service
(AWS DMS) with AWS Schema Conversion Tool (AWS SCT) to migrate the database with
replication of ongoing changes. Send the Snowball Edge device to AWS to finish the migration
and continue the ongoing replication.
B. Order an AWS Snowmobile vehicle. Use AWS Database Migration Service (AWS DMS) with
AWS Schema Conversion Tool (AWS SCT) to migrate the database with ongoing changes. Send
the Snowmobile vehicle back to AWS to finish the migration and continue the ongoing replication.
C. Order an AWS Snowball Edge Compute Optimized with GPU device. Use AWS Database
Migration Service (AWS DMS) with AWS Schema Conversion Tool (AWS SCT) to migrate the
database with ongoing changes. Send the Snowball device to AWS to finish the migration and
continue the ongoing replication
D. Order a 1 GB dedicated AWS Direct Connect connection to establish a connection with the data
center. Use AWS Database Migration Service (AWS DMS) with AWS Schema Conversion Tool
(AWS SCT) to migrate the database with replication of ongoing changes.

A

A. Order an AWS Snowball Edge Storage Optimized device. Use AWS Database Migration Service
(AWS DMS) with AWS Schema Conversion Tool (AWS SCT) to migrate the database with
replication of ongoing changes. Send the Snowball Edge device to AWS to finish the migration
and continue the ongoing replication.

Explanation:
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_LargeDBs.Process.html

67
Q

QUESTION 534
A company hosts its application in the AWS Cloud. The application runs on Amazon EC2
instances behind an Elastic Load Balancer in an Auto Scaling group and with an Amazon
DynamoDB table. The company wants to ensure the application can be made available in
another AWS Region with minimal downtime.
What should a solutions architect do to meet these requirements with the LEAST amount of
downtime?
A. Create an Auto Scaling group and a load balancer in the disaster recovery Region. Configure the DynamoDB table as a global table. Configure DNS failover to point to the new disaster recovery
Region’s load balancer.
B. Create an AWS CloudFormation template to create EC2 instances, load balancers, and
DynamoDB tables to be launched when needed Configure DNS failover to point to the new
disaster recovery Region’s load balancer.
C. Create an AWS CloudFormation template to create EC2 instances and a load balancer to be
launched when needed. Configure the DynamoDB table as a global table. Configure DNS failover
to point to the new disaster recovery Region’s load balancer.
D. Create an Auto Scaling group and load balancer in the disaster recovery Region. Configure the
DynamoDB table as a global table. Create an Amazon CloudWatch alarm to trigger an AWS
Lambda function that updates Amazon Route 53 pointing to the disaster recovery load balancer.

A

A. Create an Auto Scaling group and a load balancer in the disaster recovery Region. Configure the DynamoDB table as a global table. Configure DNS failover to point to the new disaster recovery
Region’s load balancer.

68
Q

QUESTION 533
A company is running its production and nonproduction environment workloads in multiple AWS
accounts. The accounts are in an organization in AWS Organizations. The company needs to
design a solution that will prevent the modification of cost usage tags.
Which solution will meet these requirements?
A. Create a custom AWS Config rule to prevent tag modification except by authorized principals.
B. Create a custom trail in AWS CloudTrail to prevent tag modification.
C. Create a service control policy (SCP) to prevent tag modification except by authorized principals.
D. Create custom Amazon CloudWatch logs to prevent tag modification.

A

C. Create a service control policy (SCP) to prevent tag modification except by authorized principals.

Explanation:
https://docs.aws.amazon.com/ja_jp/organizations/latest/userguide/orgs_manage_policies_scps_examples_tagging.html
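
Along the lines of the tagging examples in that documentation, a sketch of such an SCP created and attached with the Organizations API (the admin role ARN, policy name, and OU ID are hypothetical):

import json
import boto3

org = boto3.client("organizations")

# Deny changes to the department cost tag unless the caller is the authorized admin role.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["ec2:CreateTags", "ec2:DeleteTags"],
        "Resource": "*",
        "Condition": {
            "StringNotLike": {"aws:PrincipalArn": "arn:aws:iam::*:role/TagAdmin"},  # hypothetical role
            "ForAnyValue:StringEquals": {"aws:TagKeys": ["department"]},
        },
    }],
}

policy = org.create_policy(
    Content=json.dumps(scp),
    Description="Prevent modification of the department cost allocation tag",
    Name="deny-department-tag-changes",
    Type="SERVICE_CONTROL_POLICY",
)
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"],
                  TargetId="ou-abcd-11112222")  # hypothetical OU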

69
Q

QUESTION 532
An ecommerce company wants to use machine learning (ML) algorithms to build and train models. The company will use the models to visualize complex scenarios and to detect trends in
customer data. The architecture team wants to integrate its ML models with a reporting platform
to analyze the augmented data and use the data directly in its business intelligence dashboards.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Glue to create an ML transform to build and train models. Use Amazon OpenSearch
Service to visualize the data.
B. Use Amazon SageMaker to build and train models. Use Amazon QuickSight to visualize the data.
C. Use a pre-built ML Amazon Machine Image (AMI) from the AWS Marketplace to build and train
models. Use Amazon OpenSearch Service to visualize the data.
D. Use Amazon QuickSight to build and train models by using calculated fields. Use Amazon
QuickSight to visualize the data.

A

B. Use Amazon SageMaker to build and train models. Use Amazon QuickSight to visualize the data.

Explanation:
Amazon SageMaker is a fully managed service that provides a complete set of tools and
capabilities for building, training, and deploying ML models. It simplifies the end-to-end ML
workflow and reduces operational overhead by handling infrastructure provisioning, model
training, and deployment.
To visualize the data and integrate it into business intelligence dashboards, Amazon QuickSight
can be used. QuickSight is a cloud-native business intelligence service that allows users to easily
create interactive visualizations, reports, and dashboards from various data sources, including the
augmented data generated by the ML models.

70
Q

QUESTION 531
A company has developed a new video game as a web application. The application is in a three-
tier architecture in a VPC with Amazon RDS for MySQL in the database layer. Several players
will compete concurrently online. The game’s developers want to display a top-10 scoreboard in
near-real time and offer the ability to stop and restore the game while preserving the current
scores.
What should a solutions architect do to meet these requirements?
A. Set up an Amazon ElastiCache for Memcached cluster to cache the scores for the web
application to display.
B. Set up an Amazon ElastiCache for Redis cluster to compute and cache the scores for the web
application to display.
C. Place an Amazon CloudFront distribution in front of the web application to cache the scoreboard
in a section of the application.
D. Create a read replica on Amazon RDS for MySQL to run queries to compute the scoreboard and
serve the read traffic to the web application.

A

B. Set up an Amazon ElastiCache for Redis cluster to compute and cache the scores for the web
application to display.

Explanation:
https://aws.amazon.com/jp/blogs/news/building-a-real-time-gaming-leaderboard-with-amazon-elasticache-for-redis/
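
A leaderboard on ElastiCache for Redis is typically a sorted set; a minimal sketch with redis-py (the endpoint and key names are hypothetical):

import redis

# Connect to the ElastiCache for Redis primary endpoint (hostname is a placeholder).
r = redis.Redis(host="game-leaderboard.xxxxxx.use1.cache.amazonaws.com", port=6379)

def record_score(player: str, score: float) -> None:
    # Sorted sets keep members ordered by score, so updates are O(log N).
    r.zadd("leaderboard", {player: score})

def top_10():
    # Highest scores first, with the score values included, for the near-real-time display.
    return r.zrevrange("leaderboard", 0, 9, withscores=True)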

71
Q

QUESTION 530
A manufacturing company has machine sensors that upload .csv files to an Amazon S3 bucket.
These .csv files must be converted into images and must be made available as soon as possible for the automatic generation of graphical reports.
The images become irrelevant after 1 month, but the .csv files must be kept to train machine
learning (ML) models twice a year. The ML trainings and audits are planned weeks in advance.
Which combination of steps will meet these requirements MOST cost-effectively? (Choose two.)
A. Launch an Amazon EC2 Spot Instance that downloads the .csv files every hour, generates the
image files, and uploads the images to the S3 bucket.
B. Design an AWS Lambda function that converts the .csv files into images and stores the images in
the S3 bucket. Invoke the Lambda function when a .csv file is uploaded.
C. Create S3 Lifecycle rules for .csv files and image files in the S3 bucket. Transition the .csv files
from S3 Standard to S3 Glacier 1 day after they are uploaded. Expire the image files after 30
days.
D. Create S3 Lifecycle rules for .csv files and image files in the S3 bucket. Transition the .csv files
from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) 1 day after they are
uploaded. Expire the image files after 30 days.
E. Create S3 Lifecycle rules for .csv files and image files in the S3 bucket. Transition the .csv files
from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 1 day after they are
uploaded. Keep the image files in Reduced Redundancy Storage (RRS).

A

B. Design an AWS Lambda function that converts the .csv files into images and stores the images in
the S3 bucket. Invoke the Lambda function when a .csv file is uploaded.
C. Create S3 Lifecycle rules for .csv files and image files in the S3 bucket. Transition the .csv files
from S3 Standard to S3 Glacier 1 day after they are uploaded. Expire the image files after 30
days.

Explanation:
https://docs.aws.amazon.com/amazonglacier/latest/dev/introduction.html
https://aws.amazon.com/jp/about-aws/whats-new/2021/11/amazon-s3-glacier-storage-class-amazon-s3-glacier-flexible-retrieval/
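
A sketch of the two lifecycle rules (the bucket name and the csv/ and images/ prefixes are hypothetical): .csv files move to S3 Glacier one day after upload, and generated images expire after 30 days.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="sensor-data-bucket",  # hypothetical bucket with csv/ and images/ prefixes
    LifecycleConfiguration={
        "Rules": [
            {   # Keep the .csv training data, but move it to Glacier one day after upload.
                "ID": "csv-to-glacier",
                "Filter": {"Prefix": "csv/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 1, "StorageClass": "GLACIER"}],
            },
            {   # Generated images are irrelevant after a month, so expire them.
                "ID": "expire-images",
                "Filter": {"Prefix": "images/"},
                "Status": "Enabled",
                "Expiration": {"Days": 30},
            },
        ]
    },
)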

72
Q

QUESTION 529
The following IAM policy is attached to an IAM group. This is the only policy applied to the group.

What are the effective IAM permissions of this policy for group members?
A. Group members are permitted any Amazon EC2 action within the us-east-1 Region. Statements
after the Allow permission are not applied.
B. Group members are denied any Amazon EC2 permissions in the us-east-1 Region unless they
are logged in with multi-factor authentication (MFA).
C. Group members are allowed the ec2:StopInstances and ec2:TerminateInstances permissions for
all Regions when logged in with multi-factor authentication (MFA). Group members are permitted
any other Amazon EC2 action.
D. Group members are allowed the ec2:StopInstances and ec2:TerminateInstances permissions for
the us-east-1 Region only when logged in with multi-factor authentication (MFA). Group members
are permitted any other Amazon EC2 action within the us-east-1 Region.

A

D. Group members are allowed the ec2:StopInstances and ec2:TerminateInstances permissions for
the us-east-1 Region only when logged in with multi-factor authentication (MFA). Group members
are permitted any other Amazon EC2 action within the us-east-1 Region.

73
Q

QUESTION 528
A serverless application uses Amazon API Gateway, AWS Lambda, and Amazon DynamoDB.
The Lambda function needs permissions to read and write to the DynamoDB table.
Which solution will give the Lambda function access to the DynamoDB table MOST securely?
A. Create an IAM user with programmatic access to the Lambda function. Attach a policy to the user
that allows read and write access to the DynamoDB table. Store the access_key_id and
secret_access_key parameters as part of the Lambda environment variables. Ensure that other
AWS users do not have read and write access to the Lambda function configuration.
B. Create an IAM role that includes Lambda as a trusted service. Attach a policy to the role that
allows read and write access to the DynamoDB table. Update the configuration of the Lambda
function to use the new role as the execution role.
C. Create an IAM user with programmatic access to the Lambda function. Attach a policy to the user
that allows read and write access to the DynamoDB table. Store the access_key_id and
secret_access_key parameters in AWS Systems Manager Parameter Store as secure string
parameters. Update the Lambda function code to retrieve the secure string parameters before
connecting to the DynamoDB table.
D. Create an IAM role that includes DynamoDB as a trusted service. Attach a policy to the role that
allows read and write access from the Lambda function. Update the code of the Lambda function
to attach to the new role as an execution role.

A

B. Create an IAM role that includes Lambda as a trusted service. Attach a policy to the role that
allows read and write access to the DynamoDB table. Update the configuration of the Lambda
function to use the new role as the execution role.

Explanation:
Option B suggests creating an IAM role that includes Lambda as a trusted service, meaning the
role is specifically designed for Lambda functions. The role should have a policy attached to it
that grants the required read and write access to the DynamoDB table.
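
A minimal sketch of that setup (role, table, and function names are hypothetical): the trust policy lets Lambda assume the role, an inline policy scopes access to the one table, and the function is updated to use the role.

import json
import boto3

iam = boto3.client("iam")

# Trust policy: only the Lambda service can assume this role.
trust = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow",
                   "Principal": {"Service": "lambda.amazonaws.com"},
                   "Action": "sts:AssumeRole"}],
}
role = iam.create_role(RoleName="orders-fn-role", AssumeRolePolicyDocument=json.dumps(trust))

# Scope the permissions to read/write the single DynamoDB table (ARN is a placeholder).
table_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow",
                   "Action": ["dynamodb:GetItem", "dynamodb:Query",
                              "dynamodb:PutItem", "dynamodb:UpdateItem"],
                   "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders"}],
}
iam.put_role_policy(RoleName="orders-fn-role", PolicyName="orders-table-access",
                    PolicyDocument=json.dumps(table_policy))

# Point the function at the new execution role.
boto3.client("lambda").update_function_configuration(
    FunctionName="orders-fn", Role=role["Role"]["Arn"])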

74
Q

QUESTION 527
A solutions architect is implementing a complex Java application with a MySQL database. The
Java application must be deployed on Apache Tomcat and must be highly available.
What should the solutions architect do to meet these requirements?
A. Deploy the application in AWS Lambda. Configure an Amazon API Gateway API to connect with
the Lambda functions.
B. Deploy the application by using AWS Elastic Beanstalk. Configure a load-balanced environment
and a rolling deployment policy.
C. Migrate the database to Amazon ElastiCache. Configure the ElastiCache security group to allow
access from the application.
D. Launch an Amazon EC2 instance. Install a MySQL server on the EC2 instance. Configure the
application on the server. Create an AMI. Use the AMI to create a launch template with an Auto
Scaling group.

A

B. Deploy the application by using AWS Elastic Beanstalk. Configure a load-balanced environment
and a rolling deployment policy.

Explanation:
AWS Elastic Beanstalk provides an easy and quick way to deploy, manage, and scale
applications. It supports a variety of platforms, including Java on Apache Tomcat. With Elastic
Beanstalk, the solutions architect can upload the Java application and configure a load-balanced
Tomcat environment with a rolling deployment policy, which provides the required high availability.

75
Q

QUESTION 526
A company needs to store data from its healthcare application. The application’s data frequently
changes. A new regulation requires audit access at all levels of the stored data.
The company hosts the application on an on-premises infrastructure that is running out of storage
capacity. A solutions architect must securely migrate the existing data to AWS while satisfying the
new regulation.
Which solution will meet these requirements?
A. Use AWS DataSync to move the existing data to Amazon S3. Use AWS CloudTrail to log data
events.
B. Use AWS Snowcone to move the existing data to Amazon S3. Use AWS CloudTrail to log
management events.
C. Use Amazon S3 Transfer Acceleration to move the existing data to Amazon S3. Use AWS
CloudTrail to log data events.
D. Use AWS Storage Gateway to move the existing data to Amazon S3. Use AWS CloudTrail to log
management events.

A

A. Use AWS DataSync to move the existing data to Amazon S3. Use AWS CloudTrail to log data
events.

Explanation:
DataSync lets the company monitor and audit the data at all times; with Snowcone or Snowball,
the company loses the ability to audit the data while it is in transit to an AWS data center.
AWS DataSync is a data transfer service that simplifies and accelerates moving large amounts of
data to and from AWS, and it can securely and efficiently migrate data from on-premises storage
systems to services such as Amazon S3. In this scenario the company must migrate its healthcare
application data to AWS while complying with the new regulation for audit access, so DataSync
moves the existing data into scalable, durable storage, and AWS CloudTrail data events log all
object-level access and activity in the bucket. Together these meet the regulatory requirement for
audit access at all levels of the stored data.
https://docs.aws.amazon.com/ja_jp/datasync/latest/userguide/encryption-in-transit.html
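
For illustration, object-level (data event) logging for the destination bucket can be turned on like this; the trail and bucket names are hypothetical.

import boto3

cloudtrail = boto3.client("cloudtrail")

# Record S3 object-level (data) events for the bucket that now holds the healthcare data.
cloudtrail.put_event_selectors(
    TrailName="healthcare-audit-trail",     # hypothetical trail
    EventSelectors=[{
        "ReadWriteType": "All",
        "IncludeManagementEvents": True,
        "DataResources": [{"Type": "AWS::S3::Object",
                           "Values": ["arn:aws:s3:::healthcare-data-bucket/"]}],
    }],
)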

76
Q

QUESTION 525
A company uses high block storage capacity to run its workloads on premises. The company’s
daily peak input and output transactions per second are not more than 15,000 IOPS. The
company wants to migrate the workloads to Amazon EC2 and to provision disk performance
independent of storage capacity.
Which Amazon Elastic Block Store (Amazon EBS) volume type will meet these requirements
MOST cost-effectively?
A. GP2 volume type
B. io2 volume type
C. GP3 volume type
D. io1 volume type

A

C. GP3 volume type

Explanation:
Both gp2 and gp3 volumes support a maximum of 16,000 IOPS, but gp3 is more cost-effective and
lets you provision IOPS (and throughput) independently of volume size.
https://aws.amazon.com/blogs/storage/migrate-your-amazon-ebs-volumes-from-gp2-to-gp3-and-save-up-to-20-on-costs/
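
A sketch of provisioning such a volume (size, Availability Zone, and throughput are illustrative): with gp3, IOPS and throughput are set independently of capacity, so 15,000 IOPS does not require an oversized volume.

import boto3

ec2 = boto3.client("ec2")

# gp3 lets you dial IOPS and throughput independently of volume size,
# so a 15,000 IOPS peak fits without over-provisioning capacity.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,                 # GiB, hypothetical capacity
    VolumeType="gp3",
    Iops=15000,
    Throughput=500,           # MiB/s
)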

77
Q

QUESTION 524
A company is running a custom application on Amazon EC2 On-Demand Instances. The
application has frontend nodes that need to run 24 hours a day, 7 days a week and backend
nodes that need to run only for a short time based on workload. The number of backend nodes
varies during the day.
The company needs to scale out and scale in more instances based on workload.
Which solution will meet these requirements MOST cost-effectively?
A. Use Reserved Instances for the frontend nodes. Use AWS Fargate for the backend nodes.
B. Use Reserved Instances for the frontend nodes. Use Spot Instances for the backend nodes.
C. Use Spot Instances for the frontend nodes. Use Reserved Instances for the backend nodes.
D. Use Spot Instances for the frontend nodes. Use AWS Fargate for the backend nodes.

A

B. Use Reserved Instances for the frontend nodes. Use Spot Instances for the backend nodes.

78
Q

QUESTION 523
A solutions architect wants to use the following JSON text as an identity-based policy to grant
specific permissions:
Which IAM principals can the solutions architect attach this policy to? (Choose two.)
A. Role
B. Group
C. Organization
D. Amazon Elastic Container Service (Amazon ECS) resource
E. Amazon EC2 resource

A

A. Role
B. Group

Explanation:
An identity-based policy is attached to IAM identities: users, groups, and roles. It cannot be
attached to AWS resources such as Amazon ECS or Amazon EC2, and it is not attached to an
organization.

79
Q

QUESTION 522
A company is developing a new machine learning (ML) model solution on AWS. The models are
developed as independent microservices that fetch approximately 1 GB of model data from
Amazon S3 at startup and load the data into memory. Users access the models through an
asynchronous API. Users can send a request or a batch of requests and specify where the
results should be sent.
The company provides models to hundreds of users. The usage patterns for the models are
irregular. Some models could be unused for days or weeks. Other models could receive batches
of thousands of requests at a time.
Which design should a solutions architect recommend to meet these requirements?
A. Direct the requests from the API to a Network Load Balancer (NLB). Deploy the models as AWS
Lambda functions that are invoked by the NLB.
B. Direct the requests from the API to an Application Load Balancer (ALB). Deploy the models as
Amazon Elastic Container Service (Amazon ECS) services that read from an Amazon Simple
Queue Service (Amazon SQS) queue. Use AWS App Mesh to scale the instances of the ECS
cluster based on the SQS queue size.
C. Direct the requests from the API into an Amazon Simple Queue Service (Amazon SQS) queue.
Deploy the models as AWS Lambda functions that are invoked by SQS events. Use AWS Auto
Scaling to increase the number of vCPUs for the Lambda functions based on the SQS queue
size.
D. Direct the requests from the API into an Amazon Simple Queue Service (Amazon SQS) queue.
Deploy the models as Amazon Elastic Container Service (Amazon ECS) services that read from
the queue. Enable AWS Auto Scaling on Amazon ECS for both the cluster and copies of the
service based on the queue size.

A

D. Direct the requests from the API into an Amazon Simple Queue Service (Amazon SQS) queue.
Deploy the models as Amazon Elastic Container Service (Amazon ECS) services that read from
the queue. Enable AWS Auto Scaling on Amazon ECS for both the cluster and copies of the
service based on the queue size.

Nothing in the question calls for an Application Load Balancer, as option B suggests. SQS buffers the asynchronous requests until a worker picks them up, and ECS with Auto Scaling scales the service out and in to match the irregular, unpredictable usage pattern.

Explanation:
https://aws.amazon.com/blogs/containers/amazon-elastic-container-service-ecs-auto-scaling-using-custom-metrics/

A is wrong because the API is asynchronous, so a queue is needed.
B is wrong because AWS App Mesh is a service mesh; it is not a mechanism for auto scaling an ECS cluster.
C is wrong, not because of a Lambda size limit, but because you cannot set the number of vCPUs for a Lambda function directly; you can only configure memory, and vCPU allocation scales with it (roughly 1 vCPU per 1,769 MB).
That leaves D.
https://docs.aws.amazon.com/lambda/latest/dg/configuration-function-common.html
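
A minimal boto3 sketch of scaling the ECS service on queue depth, assuming a cluster named ml-cluster, a service named model-service, and a queue named model-requests (all placeholders):

import boto3

autoscaling = boto3.client("application-autoscaling")

resource_id = "service/ml-cluster/model-service"  # placeholder cluster/service

# Register the ECS service's desired task count as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=0,
    MaxCapacity=50,
)

# Target tracking on the SQS backlog: add tasks when the queue grows,
# remove them when it drains.
autoscaling.put_scaling_policy(
    PolicyName="scale-on-queue-depth",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 100.0,  # illustrative target for the queue-depth metric
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "model-requests"}],
            "Statistic": "Average",
        },
        "ScaleOutCooldown": 60,
        "ScaleInCooldown": 120,
    },
)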

80
Q

QUESTION 521
A company runs a highly available SFTP service. The SFTP service uses two Amazon EC2 Linux
instances that run with elastic IP addresses to accept traffic from trusted IP sources on the
internet. The SFTP service is backed by shared storage that is attached to the instances. User accounts are created and managed as Linux users in the SFTP servers.
The company wants a serverless option that provides high IOPS performance and highly
configurable security. The company also wants to maintain control over user permissions.
Which solution will meet these requirements?
A. Create an encrypted Amazon Elastic Block Store (Amazon EBS) volume. Create an AWS
Transfer Family SFTP service with a public endpoint that allows only trusted IP addresses. Attach
the EBS volume to the SFTP service endpoint. Grant users access to the SFTP service.
B. Create an encrypted Amazon Elastic File System (Amazon EFS) volume. Create an AWS
Transfer Family SFTP service with elastic IP addresses and a VPC endpoint that has internet-
facing access. Attach a security group to the endpoint that allows only trusted IP addresses.
Attach the EFS volume to the SFTP service endpoint. Grant users access to the SFTP service.
C. Create an Amazon S3 bucket with default encryption enabled. Create an AWS Transfer Family
SFTP service with a public endpoint that allows only trusted IP addresses. Attach the S3 bucket
to the SFTP service endpoint. Grant users access to the SFTP service.
D. Create an Amazon S3 bucket with default encryption enabled. Create an AWS Transfer Family
SFTP service with a VPC endpoint that has internal access in a private subnet. Attach a security
group that allows only trusted IP addresses. Attach the S3 bucket to the SFTP service endpoint.
Grant users access to the SFTP service.

A

B. Create an encrypted Amazon Elastic File System (Amazon EFS) volume. Create an AWS
Transfer Family SFTP service with elastic IP addresses and a VPC endpoint that has internet-
facing access. Attach a security group to the endpoint that allows only trusted IP addresses.
Attach the EFS volume to the SFTP service endpoint. Grant users access to the SFTP service.

Explanation:
https://aws.amazon.com/blogs/storage/use-ip-whitelisting-to-secure-your-aws-transfer-for-sftp-
servers/
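
A minimal boto3 sketch of the internet-facing, VPC-hosted Transfer Family SFTP endpoint backed by EFS (every ID is a placeholder; the security group is where the trusted-IP restriction is applied):

import boto3

transfer = boto3.client("transfer")

# VPC-hosted SFTP endpoint backed by Amazon EFS. The Elastic IPs make it
# reachable from the internet; the security group allows only trusted IPs.
server = transfer.create_server(
    Protocols=["SFTP"],
    Domain="EFS",
    EndpointType="VPC",
    IdentityProviderType="SERVICE_MANAGED",
    EndpointDetails={
        "VpcId": "vpc-0123456789abcdef0",                    # placeholder
        "SubnetIds": ["subnet-0aaa1111", "subnet-0bbb2222"],
        "AddressAllocationIds": ["eipalloc-0aaa1111", "eipalloc-0bbb2222"],
        "SecurityGroupIds": ["sg-trusted-ips-only"],
    },
)

# Users keep POSIX identities, so the company retains control over
# permissions on the EFS file system.
transfer.create_user(
    ServerId=server["ServerId"],
    UserName="alice",                                           # placeholder
    Role="arn:aws:iam::111122223333:role/transfer-efs-access",  # placeholder
    HomeDirectory="/fs-0123456789abcdef0/alice",                # placeholder
    PosixProfile={"Uid": 1001, "Gid": 1001},
)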

81
Q

QUESTION 520
A company wants to use an Amazon RDS for PostgreSQL DB cluster to simplify time-consuming
database administrative tasks for production database workloads. The company wants to ensure
that its database is highly available and will provide automatic failover support in most scenarios
in less than 40 seconds. The company wants to offload reads off of the primary instance and
keep costs as low as possible.
Which solution will meet these requirements?
A. Use an Amazon RDS Multi-AZ DB instance deployment. Create one read replica and point the
read workload to the read replica.
B. Use an Amazon RDS Multi-AZ DB cluster deployment. Create two read replicas and point the read
workload to the read replicas.
C. Use an Amazon RDS Multi-AZ DB instance deployment. Point the read workload to the
secondary instances in the Multi-AZ pair.
D. Use an Amazon RDS Multi-AZ DB cluster deployment. Point the read workload to the reader
endpoint.

A

D. Use an Amazon RDS Multi-AZ DB cluster deployment. Point the read workload to the reader
endpoint.

Explanation:
A Multi-AZ DB cluster deployment has one writer and two readable standby instances, typically fails over in under 35 seconds, and provides a reader endpoint for offloading reads. A Multi-AZ DB instance deployment (options A and C) typically takes 60-120 seconds to fail over and its standby is not readable, so option A would need a separate read replica at additional cost.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZSingleStandby.html
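
For reference, the reader endpoint can be looked up with boto3 and used for all read connections (the cluster identifier is a placeholder):

import boto3

rds = boto3.client("rds")

# A Multi-AZ DB cluster exposes a writer endpoint and a reader endpoint.
cluster = rds.describe_db_clusters(
    DBClusterIdentifier="prod-postgres-cluster"  # placeholder
)["DBClusters"][0]

writer_endpoint = cluster["Endpoint"]        # use for writes
reader_endpoint = cluster["ReaderEndpoint"]  # point the read workload here
print(writer_endpoint, reader_endpoint)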

82
Q

QUESTION 519
A company uses AWS Organizations with all features enabled and runs multiple Amazon EC2
workloads in the ap-southeast-2 Region. The company has a service control policy (SCP) that
prevents any resources from being created in any other Region. A security policy requires the
company to encrypt all data at rest.

An audit discovers that employees have created Amazon Elastic Block Store (Amazon EBS)
volumes for EC2 instances without encrypting the volumes. The company wants any new EC2
instances that any IAM user or root user launches in ap-southeast-2 to use encrypted EBS
volumes. The company wants a solution that will have minimal effect on employees who create
EBS volumes.
Which combination of steps will meet these requirements? (Choose two.)
A. In the Amazon EC2 console, select the EBS encryption account attribute and define a default
encryption key.
B. Create an IAM permission boundary. Attach the permission boundary to the root organizational
unit (OU). Define the boundary to deny the ec2:CreateVolume action when the ec2:Encrypted
condition equals false.
C. Create an SCP. Attach the SCP to the root organizational unit (OU). Define the SCP to deny the
ec2:CreateVolume action when the ec2:Encrypted condition equals false.
D. Update the IAM policies for each account to deny the ec2:CreateVolume action when the
ec2:Encrypted condition equals false.
E. In the Organizations management account, specify the Default EBS volume encryption setting.

A

C. Create an SCP. Attach the SCP to the root organizational unit (OU). Define the SCP to deny the
ec2:CreateVolume action when the ec2:Encrypted condition equals false.
E. In the Organizations management account, specify the Default EBS volume encryption setting.

Explanation:
An SCP attached to the root OU that denies the ec2:CreateVolume action when the ec2:Encrypted condition equals false prevents users and roles in every member account from creating unencrypted EBS volumes. Enabling the EBS encryption-by-default setting means new volumes are encrypted automatically, so employees do not need to change how they launch instances or create volumes.
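
A minimal boto3 sketch of both steps (the root OU ID is a placeholder; note that the EBS encryption-by-default setting applies per account, per Region):

import boto3
import json

# Step 1 (option C): SCP that denies creating unencrypted EBS volumes.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "ec2:CreateVolume",
        "Resource": "*",
        "Condition": {"Bool": {"ec2:Encrypted": "false"}},
    }],
}

org = boto3.client("organizations")
policy = org.create_policy(
    Name="DenyUnencryptedEBSVolumes",
    Description="Deny creation of unencrypted EBS volumes",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid",  # placeholder root OU ID
)

# Step 2 (option E): turn on EBS encryption by default in ap-southeast-2 so
# new volumes are encrypted without any change to how employees work.
ec2 = boto3.client("ec2", region_name="ap-southeast-2")
ec2.enable_ebs_encryption_by_default()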

83
Q

QUESTION 518
A solutions architect needs to allow team members to access Amazon S3 buckets in two different
AWS accounts: a development account and a production account. The team currently has access
to S3 buckets in the development account by using unique IAM users that are assigned to an IAM
group that has appropriate permissions in the account.
The solutions architect has created an IAM role in the production account. The role has a policy
that grants access to an S3 bucket in the production account.
Which solution will meet these requirements while complying with the principle of least privilege?
A. Attach the Administrator Access policy to the development account users.
B. Add the development account as a principal in the trust policy of the role in the production
account.
C. Turn off the S3 Block Public Access feature on the S3 bucket in the production account.
D. Create a user in the production account with unique credentials for each team member.

A

B. Add the development account as a principal in the trust policy of the role in the production
account.

Explanation:
By adding the development account as a principal in the trust policy of the IAM role in the
production account, you are allowing users from the development account to assume the role in
the production account. This allows the team members to access the S3 bucket in the production
account without granting them unnecessary privileges.
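
A sketch of what the trust relationship and the resulting cross-account access look like, with placeholder account IDs, role name, and bucket name:

import boto3
import json

# Trust policy on the role in the PRODUCTION account: the development
# account (111111111111, placeholder) is allowed to assume the role.
trust_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
        "Action": "sts:AssumeRole",
    }],
})

iam = boto3.client("iam")  # run with production-account credentials
iam.update_assume_role_policy(RoleName="prod-s3-access", PolicyDocument=trust_policy)

# A team member signed in to the development account then assumes the role
# and gets only the S3 permissions that the role's policy grants.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::222222222222:role/prod-s3-access",  # placeholder
    RoleSessionName="dev-team-member",
)["Credentials"]

s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3.list_objects_v2(Bucket="prod-bucket")  # placeholder bucket name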

84
Q

QUESTION 517
A company is storing petabytes of data in Amazon S3 Standard. The data is stored in multiple S3
buckets and is accessed with varying frequency. The company does not know access patterns for
all the data. The company needs to implement a solution for each S3 bucket to optimize the cost
of S3 usage.
Which solution will meet these requirements with the MOST operational efficiency?
A. Create an S3 Lifecycle configuration with a rule to transition the objects in the S3 bucket to S3
Intelligent-Tiering.
B. Use the S3 storage class analysis tool to determine the correct tier for each object in the S3
bucket. Move each object to the identified storage tier.
C. Create an S3 Lifecycle configuration with a rule to transition the objects in the S3 bucket to S3
Glacier Instant Retrieval.
D. Create an S3 Lifecycle configuration with a rule to transition the objects in the S3 bucket to S3
One Zone-Infrequent Access (S3 One Zone-IA).

A

A. Create an S3 Lifecycle configuration with a rule to transition the objects in the S3 bucket to S3
Intelligent-Tiering.

Explanation:
https://aws.amazon.com/s3/storage-classes/intelligent-tiering/
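
A minimal boto3 sketch of the lifecycle rule for one bucket (the bucket name is a placeholder); S3 Intelligent-Tiering then moves each object between access tiers automatically based on its observed access pattern, which suits data with unknown access patterns:

import boto3

s3 = boto3.client("s3")

# Transition all current and future objects to S3 Intelligent-Tiering.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-data-bucket",  # placeholder; repeat per bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "move-to-intelligent-tiering",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object
            "Transitions": [{"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}],
        }]
    },
)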

85
Q

QUESTION 516
A company has a business system that generates hundreds of reports each day. The business
system saves the reports to a network share in CSV format. The company needs to store this
data in the AWS Cloud in near-real time for analysis.
Which solution will meet these requirements with the LEAST administrative overhead?
A. Use AWS DataSync to transfer the files to Amazon S3. Create a scheduled task that runs at the
end of each day.
B. Create an Amazon S3 File Gateway. Update the business system to use a new network share
from the S3 File Gateway.
C. Use AWS DataSync to transfer the files to Amazon S3. Create an application that uses the
DataSync API in the automation workflow.
D. Deploy an AWS Transfer for SFTP endpoint. Create a script that checks for new files on the
network share and uploads the new files by using SFTP.

A

B. Create an Amazon S3 File Gateway. Update the business system to use a new network share
from the S3 File Gateway.

Explanation:
https://aws.amazon.com/storagegateway/file/?nc1=h_ls
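
Once the File Gateway appliance is deployed and activated, a file share is created so the business system keeps writing CSV files to a network share while the gateway uploads them to S3 in near-real time. A rough boto3 sketch, assuming the gateway is already activated (all ARNs are placeholders):

import boto3
import uuid

storagegateway = boto3.client("storagegateway")

# SMB file share on an already-activated S3 File Gateway. The business
# system writes to this share; the files land in the S3 bucket.
storagegateway.create_smb_file_share(
    ClientToken=str(uuid.uuid4()),
    GatewayARN="arn:aws:storagegateway:ap-southeast-2:111122223333:gateway/sgw-EXAMPLE",  # placeholder
    Role="arn:aws:iam::111122223333:role/file-gateway-s3-access",                         # placeholder
    LocationARN="arn:aws:s3:::report-landing-bucket",                                     # placeholder
)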

86
Q

QUESTION 515
A company is implementing a shared storage solution for a gaming application that is hosted in
the AWS Cloud. The company needs the ability to use Lustre clients to access data. The solution
must be fully managed.
Which solution meets these requirements?
A. Create an AWS DataSync task that shares the data as a mountable file system. Mount the file
system to the application server.
B. Create an AWS Storage Gateway file gateway. Create a file share that uses the required client
protocol. Connect the application server to the file share.
C. Create an Amazon Elastic File System (Amazon EFS) file system, and configure it to support
Lustre. Attach the file system to the origin server. Connect the application server to the file
system.
D. Create an Amazon FSx for Lustre file system. Attach the file system to the origin server. Connect
the application server to the file system.

A

D. Create an Amazon FSx for Lustre file system. Attach the file system to the origin server. Connect
the application server to the file system.
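
A minimal boto3 sketch of the fully managed file system (subnet and security group IDs are placeholders); application servers then mount it with the standard open-source Lustre client:

import boto3

fsx = boto3.client("fsx")

# Fully managed Lustre file system; game servers mount it with
# "mount -t lustre <dns-name>@tcp:/<mount-name> /mnt/fsx".
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                       # GiB (illustrative size)
    SubnetIds=["subnet-0123456789abcdef0"],     # placeholder
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder
    LustreConfiguration={
        "DeploymentType": "PERSISTENT_2",
        "PerUnitStorageThroughput": 125,        # MB/s per TiB
    },
)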

87
Q

QUESTION 514
A new employee has joined a company as a deployment engineer. The deployment engineer will
be using AWS CloudFormation templates to create multiple AWS resources. A solutions architect
wants the deployment engineer to perform job activities while following the principle of least
privilege.
Which combination of actions should the solutions architect take to accomplish this goal?
(Choose two.)

A. Have the deployment engineer use AWS account root user credentials for performing AWS
CloudFormation stack operations.
B. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the
PowerUsers IAM policy attached.
C. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the
AdministratorAccess IAM policy attached.
D. Create a new IAM user for the deployment engineer and add the IAM user to a group that has an
IAM policy that allows AWS CloudFormation actions only.
E. Create an IAM role for the deployment engineer to explicitly define the permissions specific to the
AWS CloudFormation stack and launch stacks using that IAM role.

A

D. Create a new IAM user for the deployment engineer and add the IAM user to a group that has an
IAM policy that allows AWS CloudFormation actions only.
E. Create an IAM role for the deployment engineer to explicitly define the permissions specific to the
AWS CloudFormation stack and launch stacks using that IAM role.

88
Q

QUESTION 513
An ecommerce company is running a multi-tier application on AWS. The front-end and backend
tiers both run on Amazon EC2, and the database runs on Amazon RDS for MySQL. The backend
tier communicates with the RDS instance. There are frequent calls to return identical datasets
from the database that are causing performance slowdowns.
Which action should be taken to improve the performance of the backend?
A. Implement Amazon SNS to store the database calls.
B. Implement Amazon ElastiCache to cache the large datasets.
C. Implement an RDS for MySQL read replica to cache database calls.
D. Implement Amazon Kinesis Data Firehose to stream the calls to the database.

A

B. Implement Amazon ElastiCache to cache the large datasets.

Explanation:
The key phrase is "identical datasets from the database": caching the frequently requested result sets in Amazon ElastiCache means repeated identical queries are served from the cache instead of hitting RDS, which removes the performance slowdown.
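
A minimal cache-aside sketch using the Python redis client against an ElastiCache for Redis endpoint (the endpoint, key format, TTL, and the database query helper are all placeholders):

import json

import redis

# ElastiCache for Redis endpoint (placeholder hostname).
cache = redis.Redis(host="my-cache.example.apse2.cache.amazonaws.com", port=6379)

def query_mysql_for_dataset(dataset_id):
    # Placeholder standing in for the real RDS for MySQL query.
    return [{"id": dataset_id, "value": "example"}]

def get_dataset(dataset_id, ttl_seconds=300):
    """Cache-aside: return the cached result if present, otherwise query RDS."""
    key = f"dataset:{dataset_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # identical request served from the cache

    rows = query_mysql_for_dataset(dataset_id)
    cache.setex(key, ttl_seconds, json.dumps(rows))
    return rows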

89
Q

QUESTION 512
A company is building a game system that needs to send unique events to separate leaderboard,
matchmaking, and authentication services concurrently. The company needs an AWS event-
driven system that guarantees the order of the events.
Which solution will meet these requirements?
A. Amazon EventBridge event bus
B. Amazon Simple Notification Service (Amazon SNS) FIFO topics
C. Amazon Simple Notification Service (Amazon SNS) standard topics
D. Amazon Simple Queue Service (Amazon SQS) FIFO queues

A

B. Amazon Simple Notification Service (Amazon SNS) FIFO topics

SQS FIFO looks like a good idea at first, but the same event has to be delivered to several services (leaderboard, matchmaking, and authentication) at the same time, which is a publish/subscribe fan-out rather than a single queue, so SNS is the better fit.

SNS FIFO topics deliver events to many subscribers concurrently while preserving the order in which they were received. Standard SNS topics also support fan-out to large numbers of subscribers, but they do not guarantee ordering and can deliver duplicates, so they do not meet the requirement.
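
A minimal boto3 sketch: one FIFO topic fanned out to a FIFO queue per service, with ordering preserved per message group (all names are placeholders, and the SQS access policy that allows SNS to deliver messages is omitted for brevity):

import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# FIFO topic: ordered fan-out to FIFO queue subscribers.
topic = sns.create_topic(
    Name="game-events.fifo",
    Attributes={"FifoTopic": "true", "ContentBasedDeduplication": "true"},
)

# One FIFO queue per downstream service (leaderboard shown; matchmaking and
# authentication would be subscribed the same way).
queue_url = sqs.create_queue(
    QueueName="leaderboard.fifo",
    Attributes={"FifoQueue": "true"},
)["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

sns.subscribe(TopicArn=topic["TopicArn"], Protocol="sqs", Endpoint=queue_arn)

# Events that share a MessageGroupId are delivered to every subscriber in order.
sns.publish(
    TopicArn=topic["TopicArn"],
    Message='{"event": "match_won", "player": "p123"}',
    MessageGroupId="player-p123",
)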

90
Q

QUESTION 511
A company hosts a frontend application that uses an Amazon API Gateway API backend that is
integrated with AWS Lambda. When the API receives requests, the Lambda function loads many
libraries. Then the Lambda function connects to an Amazon RDS database, processes the data,
and returns the data to the frontend application. The company wants to ensure that response
latency is as low as possible for all its users with the fewest number of changes to the company’s
operations.
Which solution will meet these requirements?
A. Establish a connection between the frontend application and the database to make queries faster
by bypassing the API.
B. Configure provisioned concurrency for the Lambda function that handles the requests.
C. Cache the results of the queries in Amazon S3 for faster retrieval of similar datasets.
D. Increase the size of the database to increase the number of connections Lambda can establish at
one time.

A

B. Configure provisioned concurrency for the Lambda function that handles the requests.

Explanation:
Configure provisioned concurrency for the Lambda function that handles the requests.
Provisioned concurrency keeps a configured number of execution environments initialized and ready to respond, which removes cold starts, including the time spent loading the many libraries, and keeps response latency low with minimal change to operations. Caching query results in Amazon S3 could help for repeated queries but would not address this initialization latency. Increasing the database size would not reduce latency, and having the frontend query the database directly would bypass the API, which is not a good solution either.
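
A one-call boto3 sketch (the function name, alias, and capacity are placeholders; provisioned concurrency must target a published version or alias, not $LATEST):

import boto3

lambda_client = boto3.client("lambda")

# Keep 50 execution environments initialized for the "live" alias so requests
# are not delayed by cold starts and library loading.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="api-backend",          # placeholder function name
    Qualifier="live",                    # placeholder alias
    ProvisionedConcurrentExecutions=50,  # placeholder capacity
)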

91
Q

QUESTION 510
A company is migrating an old application to AWS. The application runs a batch job every hour
and is CPU intensive. The batch job takes 15 minutes on average with an on-premises server.
The server has 64 virtual CPU (vCPU) and 512 GiB of memory.
Which solution will run the batch job within 15 minutes with the LEAST operational overhead?
A. Use AWS Lambda with functional scaling
B. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate
C. Use Amazon Lightsail with AWS Auto Scaling
D. Use AWS Batch on Amazon EC2

A

D. Use AWS Batch on Amazon EC2

Explanation:
Use AWS Batch on Amazon EC2. AWS Batch is a fully managed batch processing service that
can be used to easily run batch jobs on Amazon EC2 instances. It can scale the number of
instances to match the workload, allowing the batch job to be completed in the desired time frame
with minimal operational overhead.
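
Once a compute environment, job queue, and job definition sized like the on-premises server (64 vCPUs, 512 GiB of memory) exist, each hourly run is a single SubmitJob call; a minimal boto3 sketch with placeholder names:

import boto3

batch = boto3.client("batch")

# Submit one run of the CPU-intensive job. AWS Batch provisions EC2 capacity
# to match the job definition and releases it when the job finishes.
batch.submit_job(
    jobName="hourly-batch-run",
    jobQueue="cpu-intensive-queue",      # placeholder job queue
    jobDefinition="legacy-batch-job:1",  # placeholder job definition
)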

92
Q

QUESTION 509
A gaming company is moving its public scoreboard from a data center to the AWS Cloud. The
company uses Amazon EC2 Windows Server instances behind an Application Load Balancer to
host its dynamic application. The company needs a highly available storage solution for the
application. The application consists of static files and dynamic server-side code.
Which combination of steps should a solutions architect take to meet these requirements?
(Choose two.)

A. Store the static files on Amazon S3. Use Amazon CloudFront to cache objects at the edge.
B. Store the static files on Amazon S3. Use Amazon ElastiCache to cache objects at the edge.
C. Store the server-side code on Amazon Elastic File System (Amazon EFS). Mount the EFS
volume on each EC2 instance to share the files.
D. Store the server-side code on Amazon FSx for Windows File Server. Mount the FSx for Windows
File Server volume on each EC2 instance to share the files.
E. Store the server-side code on a General Purpose SSD (gp2) Amazon Elastic Block Store
(Amazon EBS) volume. Mount the EBS volume on each EC2 instance to share the files.

A

A. Store the static files on Amazon S3. Use Amazon CloudFront to cache objects at the edge.
D. Store the server-side code on Amazon FSx for Windows File Server. Mount the FSx for Windows
File Server volume on each EC2 instance to share the files.

Explanation:
https://www.techtarget.com/searchaws/tip/Amazon-FSx-vs-EFS-Compare-the-AWS-file-services
FSx is built for high performance and submillisecond latency using solid-state drive storage
volumes. This design enables users to select storage capacity and latency independently. Thus,
even a subterabyte file system can have 256 Mbps or higher throughput and support volumes up
to 64 TB.

93
Q

QUESTION 508
A company has implemented a self-managed DNS service on AWS. The solution consists of the
following:
- Amazon EC2 instances in different AWS Regions
- Endpoints of a standard accelerator in AWS Global Accelerator
The company wants to protect the solution against DDoS attacks.
What should a solutions architect do to meet this requirement?
A. Subscribe to AWS Shield Advanced. Add the accelerator as a resource to protect.
B. Subscribe to AWS Shield Advanced. Add the EC2 instances as resources to protect.
C. Create an AWS WAF web ACL that includes a rate-based rule. Associate the web ACL with the
accelerator.
D. Create an AWS WAF web ACL that includes a rate-based rule. Associate the web ACL with the
EC2 instances.

A

A. Subscribe to AWS Shield Advanced. Add the accelerator as a resource to protect.

Explanation:
AWS Shield is a managed service that provides protection against Distributed Denial of Service
(DDoS) attacks for applications running on AWS. AWS Shield Standard is automatically enabled
for all AWS customers at no additional cost. AWS Shield Advanced is an optional paid service.
AWS Shield Advanced provides additional protections against more sophisticated and larger
attacks for your applications running on Amazon Elastic Compute Cloud (EC2), Elastic Load
Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, and Route 53.
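
After the Shield Advanced subscription is active, the accelerator is registered as a protected resource; a minimal boto3 sketch with a placeholder accelerator ARN:

import boto3

shield = boto3.client("shield")

# Requires an active AWS Shield Advanced subscription on the account.
shield.create_protection(
    Name="global-accelerator-dns-service",
    ResourceArn="arn:aws:globalaccelerator::111122223333:accelerator/abcd1234-example",  # placeholder
)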

94
Q

QUESTION 507
A company wants to migrate an Oracle database to AWS. The database consists of a single table
that contains millions of geographic information systems (GIS) images that are high resolution
and are identified by a geographic code. When a natural disaster occurs, tens of thousands of images get updated every few minutes. Each geographic code has a single image or row that is
associated with it. The company wants a solution that is highly available and scalable during such
events.
Which solution meets these requirements MOST cost-effectively?
A. Store the images and geographic codes in a database table. Use Oracle running on an Amazon
RDS Multi-AZ DB instance.
B. Store the images in Amazon S3 buckets. Use Amazon DynamoDB with the geographic code as
the key and the image S3 URL as the value.
C. Store the images and geographic codes in an Amazon DynamoDB table. Configure DynamoDB
Accelerator (DAX) during times of high load.
D. Store the images in Amazon S3 buckets. Store geographic codes and image S3 URLs in a
database table. Use Oracle running on an Amazon RDS Multi-AZ DB instance.

A

B. Store the images in Amazon S3 buckets. Use Amazon DynamoDB with the geographic code as
the key and the image S3 URL as the value.
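
A sketch of the pattern with boto3 (bucket, table, and attribute names are placeholders): the image bytes go to S3, and DynamoDB keeps one item per geographic code that points at the current image, so both layers scale on demand during a disaster:

import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("gis-images")  # placeholder table, partition key "geo_code"

def update_image(geo_code, image_bytes):
    """Store the latest image in S3 and point the geo code's item at it."""
    key = f"images/{geo_code}.png"
    s3.put_object(Bucket="gis-image-bucket", Key=key, Body=image_bytes)  # placeholder bucket
    table.put_item(Item={
        "geo_code": geo_code,
        "image_url": f"s3://gis-image-bucket/{key}",
    })

update_image("GEO-12345", b"...image bytes...")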

95
Q

QUESTION 506
A company has migrated an application to Amazon EC2 Linux instances. One of these EC2
instances runs several 1-hour tasks on a schedule. These tasks were written by different teams
and have no common programming language. The company is concerned about performance
and scalability while these tasks run on a single instance. A solutions architect needs to
implement a solution to resolve these concerns.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Batch to run the tasks as jobs. Schedule the jobs by using Amazon EventBridge
(Amazon CloudWatch Events).
B. Convert the EC2 instance to a container. Use AWS App Runner to create the container on
demand to run the tasks as jobs.
C. Copy the tasks into AWS Lambda functions. Schedule the Lambda functions by using Amazon
EventBridge (Amazon CloudWatch Events).
D. Create an Amazon Machine Image (AMI) of the EC2 instance that runs the tasks. Create an Auto
Scaling group with the AMI to run multiple copies of the instance.

A

A. Use AWS Batch to run the tasks as jobs. Schedule the jobs by using Amazon EventBridge
(Amazon CloudWatch Events).

Explanation:
Lambda functions are short lived; the maximum timeout is 900 seconds (15 minutes), so the 1-hour tasks cannot run as Lambda functions (option C). AWS Batch runs each task as a containerized job regardless of the language it was written in and scales the underlying compute automatically, while Amazon EventBridge provides the schedule, which keeps operational overhead low.
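
A sketch of scheduling one of the tasks hourly with EventBridge targeting an AWS Batch job queue (ARNs, names, and the job definition are placeholders):

import boto3

events = boto3.client("events")

# Trigger the Batch job every hour.
events.put_rule(Name="hourly-task-a", ScheduleExpression="rate(1 hour)")

events.put_targets(
    Rule="hourly-task-a",
    Targets=[{
        "Id": "task-a",
        "Arn": "arn:aws:batch:ap-southeast-2:111122223333:job-queue/tasks",  # placeholder
        "RoleArn": "arn:aws:iam::111122223333:role/events-batch-submit",     # placeholder
        "BatchParameters": {
            "JobDefinition": "task-a-jobdef:1",  # placeholder
            "JobName": "task-a",
        },
    }],
)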

96
Q

QUESTION 505
A company needs to ingest and handle large amounts of streaming data that its application
generates. The application runs on Amazon EC2 instances and sends data to Amazon Kinesis
Data Streams, which is configured with default settings. Every other day, the application
consumes the data and writes the data to an Amazon S3 bucket for business intelligence (BI)
processing. The company observes that Amazon S3 is not receiving all the data that the
application sends to Kinesis Data Streams.
What should a solutions architect do to resolve this issue?
A. Update the Kinesis Data Streams default settings by modifying the data retention period.
B. Update the application to use the Kinesis Producer Library (KPL) to send the data to Kinesis Data
Streams.
C. Update the number of Kinesis shards to handle the throughput of the data that is sent to Kinesis
Data Streams.
D. Turn on S3 Versioning within the S3 bucket to preserve every version of every object that is
ingested in the S3 bucket.

A

A. Update the Kinesis Data Streams default settings by modifying the data retention period.

Explanation:
https://docs.aws.amazon.com/streams/latest/dev/kinesis-extended-retention.html
The question mentions the Kinesis Data Streams default settings and that the data is consumed only "every other day". The default retention period is 24 hours, so records older than that are no longer in the stream when the consumer runs, unless the retention period is increased.
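
A one-call boto3 sketch (the stream name and retention value are placeholders; retention can be raised as high as 8,760 hours):

import boto3

kinesis = boto3.client("kinesis")

# Raise retention from the 24-hour default to 72 hours so records are still
# in the stream when the consumer runs every other day.
kinesis.increase_stream_retention_period(
    StreamName="app-telemetry-stream",  # placeholder stream name
    RetentionPeriodHours=72,
)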

97
Q

QUESTION 504
A company needs a backup strategy for its three-tier stateless web application. The web
application runs on Amazon EC2 instances in an Auto Scaling group with a dynamic scaling
policy that is configured to respond to scaling events. The database tier runs on Amazon RDS for
PostgreSQL. The web application does not require temporary local storage on the EC2 instances.
The company’s recovery point objective (RPO) is 2 hours.
The backup strategy must maximize scalability and optimize resource utilization for this
environment.
Which solution will meet these requirements?
A. Take snapshots of Amazon Elastic Block Store (Amazon EBS) volumes of the EC2 instances and
database every 2 hours to meet the RPO.
B. Configure a snapshot lifecycle policy to take Amazon Elastic Block Store (Amazon EBS)
snapshots. Enable automated backups in Amazon RDS to meet the RPO.
C. Retain the latest Amazon Machine Images (AMIs) of the web and application tiers. Enable
automated backups in Amazon RDS and use point-in-time recovery to meet the RPO.
D. Take snapshots of Amazon Elastic Block Store (Amazon EBS) volumes of the EC2 instances
every 2 hours. Enable automated backups in Amazon RDS and use point-in-time recovery to
meet the RPO.

A

C. Retain the latest Amazon Machine Images (AMIs) of the web and application tiers. Enable
automated backups in Amazon RDS and use point-in-time recovery to meet the RPO.

Explanation:
Because the web application is stateless and does not require temporary local storage on the EC2 instances, no EBS snapshots are needed; retaining the latest AMIs of the web and application tiers is enough to relaunch the fleet. RDS automated backups with point-in-time recovery cover the database tier and satisfy the 2-hour RPO.
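
A sketch of the database side with boto3 (identifiers are placeholders): enable automated backups, then restore to a point in time within the 2-hour RPO when recovery is needed:

import boto3
from datetime import datetime, timedelta, timezone

rds = boto3.client("rds")

# Enable automated backups (retention in days) on the PostgreSQL instance.
rds.modify_db_instance(
    DBInstanceIdentifier="web-app-postgres",  # placeholder
    BackupRetentionPeriod=7,
    ApplyImmediately=True,
)

# Recovery: restore a new instance to a point in time within the last 2 hours.
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="web-app-postgres",
    TargetDBInstanceIdentifier="web-app-postgres-restored",
    RestoreTime=datetime.now(timezone.utc) - timedelta(hours=1),
)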

98
Q

QUESTION 503
A company needs to transfer 600 TB of data from its on-premises network-attached storage
(NAS) system to the AWS Cloud. The data transfer must be complete within 2 weeks. The data is
sensitive and must be encrypted in transit. The company’s internet connection can support an
upload speed of 100 Mbps.
Which solution meets these requirements MOST cost-effectively?
A. Use Amazon S3 multi-part upload functionality to transfer the files over HTTPS.
B. Create a VPN connection between the on-premises NAS system and the nearest AWS Region.
Transfer the data over the VPN connection.
C. Use the AWS Snow Family console to order several AWS Snowball Edge Storage Optimized
devices. Use the devices to transfer the data to Amazon S3.
D. Set up a 10 Gbps AWS Direct Connect connection between the company location and the
nearest AWS Region. Transfer the data over a VPN connection into the Region to store the data
in Amazon S3.

A

C. Use the AWS Snow Family console to order several AWS Snowball Edge Storage Optimized
devices. Use the devices to transfer the data to Amazon S3.

Explanation:
The best option is to use the AWS Snow Family console to order several AWS Snowball Edge
Storage Optimized devices and use the devices to transfer the data to Amazon S3. Snowball
Edge is a petabyte-scale data transfer device that can help transfer large amounts of data
securely and quickly. Using Snowball Edge can be the most cost-effective solution for transferring
large amounts of data over long distances and can help meet the requirement of transferring 600
TB of data within two weeks.
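
A quick back-of-the-envelope check of why the internet link cannot meet the deadline (ideal-case arithmetic that ignores protocol overhead):

# 600 TB over a 100 Mbps uplink, best case (full utilization, no overhead).
data_bits = 600e12 * 8     # 600 TB expressed in bits
uplink_bps = 100e6         # 100 Mbps

seconds = data_bits / uplink_bps
days = seconds / 86_400
print(f"{days:.0f} days")  # roughly 555 days, far beyond the 2-week window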

99
Q

QUESTION 502
A company runs a public three-tier web application in a VPC. The application runs on Amazon
EC2 instances across multiple Availability Zones. The EC2 instances that run in private subnets
need to communicate with a license server over the internet. The company needs a managed
solution that minimizes operational maintenance.
Which solution meets these requirements?
A. Provision a NAT instance in a public subnet. Modify each private subnet's route table with a
default route that points to the NAT instance.
B. Provision a NAT instance in a private subnet. Modify each private subnet’s route table with a
default route that points to the NAT instance.
C. Provision a NAT gateway in a public subnet. Modify each private subnet’s route table with a
default route that points to the NAT gateway.
D. Provision a NAT gateway in a private subnet. Modify each private subnet’s route table with a
default route that points to the NAT gateway.

A

C. Provision a NAT gateway in a public subnet. Modify each private subnet’s route table with a
default route that points to the NAT gateway.

Explanation:
Because the company needs a managed solution that minimizes operational maintenance, a NAT gateway in a public subnet is the answer. NAT instances (options A and B) must be patched, monitored, and scaled by the company, and a NAT gateway in a private subnet (option D) would have no path to the internet.
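
A minimal boto3 sketch (subnet, route table, and allocation IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")

# The NAT gateway lives in a public subnet and uses an Elastic IP.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-public-0123",    # placeholder public subnet
    AllocationId=eip["AllocationId"],
)["NatGateway"]

# Default route in each private subnet's route table points at the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-private-0123",  # placeholder; repeat per private subnet
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGatewayId"],
)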

100
Q

QUESTION 501
An IAM user made several configuration changes to AWS resources in their company’s account
during a production deployment last week. A solutions architect learned that a couple of security
group rules are not configured as desired. The solutions architect wants to confirm which IAM
user was responsible for making changes.
Which service should the solutions architect use to find the desired information?
A. Amazon GuardDuty
B. Amazon Inspector
C. AWS CloudTrail
D. AWS Config

A

C. AWS CloudTrail

Explanation:
The best option is to use AWS CloudTrail to find the desired information. AWS CloudTrail is a
service that enables governance, compliance, operational auditing, and risk auditing of AWS
account activities. CloudTrail records the API calls made in an AWS account, including the identity that made each call, whether it came from the AWS Management Console, the CLI, SDKs, or another AWS service. By using CloudTrail, the solutions architect can identify the IAM user who
made the configuration changes to the security group rules.
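
A minimal boto3 sketch of searching CloudTrail event history for recent security group changes (the event name and time window are illustrative):

import boto3
from datetime import datetime, timedelta, timezone

cloudtrail = boto3.client("cloudtrail")

# Look back over the past week's security group changes and show who made them.
response = cloudtrail.lookup_events(
    LookupAttributes=[{
        "AttributeKey": "EventName",
        "AttributeValue": "AuthorizeSecurityGroupIngress",  # one of the SG-change API calls
    }],
    StartTime=datetime.now(timezone.utc) - timedelta(days=7),
    EndTime=datetime.now(timezone.utc),
)

for event in response["Events"]:
    print(event["EventTime"], event.get("Username"), event["EventName"])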