Cert Prep: Certified Solutions Architect - Associate for AWS (SAA-C03) Flashcards

1
Q

As the new Security Engineer for your company’s AWS cloud environment, you are responsible for developing best practice guidelines. In addition to data security such as encryption, you need to develop a plan for Security Groups, Access Control Lists, as well as IAM Policies. You want to roll out best practice policies for IAM.

Which choice below is not an IAM best practice?

A. Share access keys for cross-account access.
B. Use policy conditions for extra security.
C. Delegate by using roles instead of sharing credentials.
D. Rotate credentials regularly.

A
2
Q

Which of the following data transfer solutions are free? Select three choices.

A. Data transfer between EC2, RDS, and Redshift in the same Availability Zone
B. Data transferred into and out of Elastic Load Balancers using private IP addresses
C. Data transferred into and out from an IPv6 address in a different VPC
D. Data transfer directly between S3, Glacier, DynamoDB, and EC2 in the same AWS Region

A
3
Q

You are designing a web application that needs to be highly available and handle a large amount of read traffic. You are designing an RDS database with a Multi-AZ configuration that will store transaction data including personal customer data.

You are considering using options to help offset some of the read traffic, and your client wants to discuss multiple options outside of Amazon RDS features. What other Amazon services would best offload the read traffic workload from the application's database without requiring extensive app design changes?

A. Migrate static, WORM data to public Amazon S3 buckets.
B. Implement an ElastiCache instance to cache frequently-accessed data.
C. Configure an SQS queue to manage read requests for frequently-accessed data.
D. Promote the Multi-AZ standby database to a read replica during peak hours.

A

B. Implement an ElastiCache instance to cache frequently-accessed data.

Explanation:
Amazon ElastiCache is a web service that makes it easy to deploy and run Memcached or Redis protocol-compliant server nodes in the cloud. Amazon ElastiCache improves the performance of web applications by allowing you to retrieve information from a fast, managed, in-memory system, instead of relying entirely on slower disk-based databases. Amazon ElastiCache is ideally suited as a front-end for Amazon Web Services like Amazon RDS and Amazon DynamoDB, providing extremely low latency for high-performance applications and offloading some of the request workload while these services provide long-lasting data durability.
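
To make the caching pattern concrete, here is a minimal cache-aside sketch in Python using the redis-py client against a hypothetical ElastiCache (Redis) endpoint; the endpoint hostname, key naming, and the fetch_from_rds callable are illustrative placeholders, not part of the question.

    import json
    import redis  # redis-py client; the cache endpoint below is a placeholder

    cache = redis.Redis(host="my-cache.example.use1.cache.amazonaws.com", port=6379)

    def get_customer(customer_id, fetch_from_rds):
        """Return customer data, reading from the cache before falling back to RDS.

        fetch_from_rds is any callable that runs the real database query (stand-in here).
        """
        key = f"customer:{customer_id}"
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)              # cache hit: the RDS instance is not touched
        record = fetch_from_rds(customer_id)       # cache miss: read from the database once
        cache.setex(key, 300, json.dumps(record))  # keep it for 5 minutes to absorb repeat reads
        return record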

4
Q

Which would be the most efficient way to review and reduce networking costs by deleting idle load balancers?

A. Use the Amazon Trusted Advisor Idle Load balancers check to get a report of load balancers with a RequestCount of less than 100 in the last week. Send this report to S3 to find the load balancers to delete.
B. Use the AWS SDK to create a Lambda function to find and delete load balancers with RequestCount <10 in the last week.
C. Use the AWS Management Console to query and delete load balancers on each appropriate EC2 instance with RequestCount < 100 in the last week.
D. Use the Amazon Inspector Idle Load balancers check to get a report of load balancers with a RequestCount of less than 100 in the last week. Send this report to S3 to find the load balancers to delete.

A
5
Q

A company stores its application data on site using iSCSI connections to archival disk storage located at an on-premises data center. Now management wants to identify cloud solutions to back up that data to the AWS cloud and store it at minimal expense.

The company needs to back up 200 TB of data to the cloud over the course of a month, and the speed of the backup is not a critical factor. The backups are rarely accessed, but when requested, they should be available in less than 6 hours.

What are the most cost-effective steps to archiving the data and meeting additional requirements?

A. 1) Copy the data to AWS Storage Gateway file gateways.

2) Store the data in Amazon S3 in the S3 Glacier Flexible Retrieval storage class.
B. 1) Copy the data to Amazon S3 using AWS DataSync.

2) Store the data in Amazon S3 in the S3 Glacier Deep Archive storage class.
C. 1) Back up the data using AWS Storage Gateway volume gateways.

2) Store the data in Amazon S3 in the S3 Glacier Flexible Retrieval storage class.
D. 1) Migrate the data to AWS using an AWS Snowball Edge device.

2) Store the data in Amazon S3 in the S3 Glacier Deep Archive storage class.

A
6
Q

Your latest client contacted you a week before an audit on its AWS cloud infrastructure. Your client is concerned about its lack of automated policy enforcement for data protection and the difficulties they encounter when reporting for audit and compliance.

Which service should you enable to assist this client?

A. AWS Macie
B. AWS DataSync
C. AWS GuardDuty
D. AWS Backup

A

D. AWS Backup

Explanation:
The client is in search of a solution that automates policy enforcement for data protection and compliance. With AWS Backup, the client can enable automated data protection policies and schedules that will meet the regulatory compliance requirements for its upcoming audit. Also, AWS Backup allows you to centrally manage and automate the backup of data across AWS services such as EC2, S3, EBS, RDS, EFS, FSx, and more.

The remaining choices are incorrect for the following reasons:

AWS DataSync is a data transfer service that enables you to optimize network bandwidth and accelerate data transfer between on-premises storage and AWS storage. DataSync does not provide policy enforcement for data protection.

Amazon GuardDuty is a threat detection service that continuously monitors AWS accounts and workloads for malicious activity and anomalous behavior.

Amazon FSx provides a cost-effective file storage service that makes it easy to launch, run, and scale high-performance file systems in the cloud. It does not offer the data protection needed in this scenario.

Although AWS Macie protects your data through discovery and protection of your sensitive data at scale, Macie does not provide automated data protection, compliance, and governance for your applications running in the cloud.
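
As a rough illustration of the "automated data protection policies and schedules" described above, the sketch below creates a daily backup plan and a tag-based resource selection with boto3; the plan name, vault name, schedule, IAM role ARN, and tag are assumed placeholders, not values from the question.

    import boto3

    backup = boto3.client("backup")

    # Daily backup rule with 35-day retention, stored in an existing vault (name assumed).
    plan = backup.create_backup_plan(
        BackupPlan={
            "BackupPlanName": "daily-compliance-backups",
            "Rules": [{
                "RuleName": "daily-0500-utc",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 5 * * ? *)",
                "Lifecycle": {"DeleteAfterDays": 35},
            }],
        }
    )

    # Select resources by tag so newly created EC2/EBS/RDS/EFS resources are picked up automatically.
    backup.create_backup_selection(
        BackupPlanId=plan["BackupPlanId"],
        BackupSelection={
            "SelectionName": "tagged-resources",
            "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
            "ListOfTags": [{
                "ConditionType": "STRINGEQUALS",
                "ConditionKey": "backup",
                "ConditionValue": "true",
            }],
        },
    )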

7
Q

An environmental agency is concluding a 10-year study of mining sites and needs to transfer over 200 terabytes of data to the AWS cloud for storage and analysis.

Data will be gradually collected over a period of weeks in an area with no existing network bandwidth. Given the remote location, the agency wants a transfer solution that is cost-effective while requiring minimal device shipments back and forth.

Which AWS solution will best address the agency’s data storage and migration requirements?

A. AWS Snowcone
B. AWS Snowmobile
C. AWS Snowball Compute Optimized with GPU
D. AWS Snowball Storage Optimized

A
8
Q

You are rapidly configuring a VPC for a new insurance application that needs to go live imminently to meet an upcoming compliance deadline. The insurance company must migrate a new application to this new VPC and connect it as quickly as possible to an on-premises, company-owned application that cannot migrate to the cloud.

Your immediate goal is to connect your on-premises app as quickly as possible, but speed and reliability are critical long-term requirements. The insurance company suggests implementing the quickest connection method now, and switching over to a faster, more reliable connection service within the next six months if necessary.

Which strategy would work best to satisfy their short- and long-term networking requirements?

A. AWS VPN is the best short-term and long-term solution.
B. AWS VPN is the best short-term solution, and AWS Direct Connect is the best long-term solution.
C. VPC Endpoints are the best short-term and long-term solutions.
D. VPC Endpoints are the best short-term solution, and AWS VPN is the best long-term solution.

A
9
Q

A pharmaceutical company is building an application that will use both AWS and on-premises resources. The application must comply with regulatory requirements and ensure the protection of intellectual property. One of the essential requirements is that data transferred between AWS and on-premises resources should not flow through the public internet. The company currently manages a single VPC with two private subnets in two different availability zones.

Which solution would enable connectivity between AWS and on-premises resources while maintaining a private connection?

A. Use a virtual private gateway with a customer gateway and create a site-to-site VPN connection.
B. Use AWS Transit Gateway to create a private site-to-site VPN connection.
C. Use AWS Direct Connect with a virtual private gateway and a private virtual interface (private VIF).
D. Use AWS VPN CloudHub to create a private site-to-site VPN connection.

A

C. Use AWS Direct Connect with a virtual private gateway and a private virtual interface (private VIF).

Explanation:
Several AWS services are available to help organizations connect AWS cloud resources with their on-premises infrastructure. Both AWS Direct Connect and a virtual private gateway with a site-to-site VPN connection are standard solutions to help accomplish this goal. However, the key to this question is that the team is looking for a solution where the data transferred between AWS and on-premises resources does not flow through the public internet. Because AWS Direct Connect uses a dedicated network connection and does not use the public internet to connect AWS resources to an on-premises network, this is the correct choice. Using a virtual private gateway with a customer gateway to create a site-to-site VPN connection would work, but it uses existing internet connections.

Now, let's look at the other services mentioned in the remaining choices:

Though the Transit Gateway service can help connect multiple VPCs together with an on-premises network, it alone will not establish a private connection as described in this scenario. A transit gateway can be used with either AWS Direct Connect or a virtual private gateway to connect VPCs with an on-premises network.
AWS VPN CloudHub is a service that solutions architects can use with a virtual private gateway to connect multiple customer networks located at different locations. With the virtual private gateway, the remote sites can communicate with each other and with the customer's Amazon VPCs.

For more information on options for connecting customer networks to Amazon VPCs, take a look at the Amazon Virtual Private Cloud Connectivity Options whitepaper.
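
For context on the virtual private gateway alternative discussed above, here is a minimal boto3 sketch of the components behind answer A (a virtual private gateway, a customer gateway, and a site-to-site VPN connection); the VPC ID, public IP, and ASN are placeholders, and this path still rides the public internet, unlike Direct Connect.

    import boto3

    ec2 = boto3.client("ec2")

    # Virtual private gateway attached to the VPC (VPC ID is a placeholder).
    vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]["VpnGatewayId"]
    ec2.attach_vpn_gateway(VpcId="vpc-0123456789abcdef0", VpnGatewayId=vgw)

    # Customer gateway representing the on-premises router (public IP and ASN assumed).
    cgw = ec2.create_customer_gateway(
        Type="ipsec.1", PublicIp="203.0.113.10", BgpAsn=65000
    )["CustomerGateway"]["CustomerGatewayId"]

    # Site-to-site VPN connection between the two gateways (encrypted, but over the internet).
    ec2.create_vpn_connection(
        Type="ipsec.1",
        CustomerGatewayId=cgw,
        VpnGatewayId=vgw,
        Options={"StaticRoutesOnly": False},
    )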

10
Q

You host two separate applications that utilize the same DynamoDB tables containing environmental data. The first application, which focuses on data analysis, is hosted on compute-optimized EC2 instances in a private subnet. It retrieves raw data, processes the data, and uploads the results to a second DynamoDB table. The second application is a public website hosted on general-purpose EC2 instances within a public subnet and allows researchers to view the raw and processed data online.

For security reasons, you want both applications to access the relevant DynamoDB tables within your VPC rather than sending requests over the internet. You also want to ensure that while your data analysis application can retrieve and upload data to DynamoDB, outside researchers will not be able to upload data or modify any data through the public website.

How can you ensure each application is granted the correct level of authorization? (Choose 2 answers)

A. Deploy a DynamoDB VPC endpoint in the data analysis application’s private subnet, and a DynamoDB VPC endpoint in the public website’s public subnet.
B. Deploy one DynamoDB VPC endpoint in its own subnet. Update the route tables for each application’s subnet with routes to the DynamoDB VPC endpoint.
C. Configure and implement a single VPC endpoint policy to grant access to both applications.
D. Configure and implement separate VPC endpoint policies for each application.

A
11
Q

A company is developing a mission-critical API on AWS using a Lambda function that accesses data stored in Amazon DynamoDB. Once it is in production, the API should respond in microseconds. The database configuration needs to handle high throughput and be capable of withstanding spikes in CPU consumption.

Which configuration options should the solutions architect choose to meet these requirements?

A. DynamoDB with auto scaling
B. DynamoDB provisioned capacity
C. DynamoDB with DAX burstable instances
D. DynamoDB on-demand capacity

A
12
Q

Your company is concerned with potential poor architectural practices used by your core internal application. After recently migrating to AWS, you hope to take advantage of an AWS service that recommends best practices for specific workloads.

As a Solutions Architect, which of the following services would you recommend for this use case?

A. AWS Trusted Advisor
B. AWS Well-Architected Framework
C. AWS Inspector
D. AWS Well-Architected Tool

A
13
Q

A telecommunications company is developing an AWS cloud data bridge solution to process large amounts of data in real-time from millions of IoT devices. The IoT devices communicate with the data bridge using UDP (user datagram protocol).

The company has deployed a fleet of EC2 instances to handle the incoming traffic but needs to choose the right Elastic Load Balancer to distribute traffic between the EC2 instances.

Which Amazon Elastic Load Balancer is the appropriate choice in this scenario?

A. Network Load Balancer
B. Application Load Balancer
C. Gateway Load Balancer
D. Classic Load Balancer

A
14
Q

A team of solutions architects designed an eCommerce website. The team is concerned about API calls from malicious IP addresses or anomalous behaviors. They would like an intelligent service to continuously monitor their AWS accounts and workloads and then deploy AWS Lambda functions for remediations.

How would the solutions architects protect this web presence against the threats that they are concerned about?

A. Assess their AWS account and workloads with Amazon CodeGuru
B. Deploy Amazon GuardDuty on their AWS account and workloads.
C. Monitor their AWS account and workloads with Amazon Cognito
D. Enable Amazon Inspector on their AWS account and workloads.

A
15
Q

You are designing an AWS cloud environment for a client. There are applications that will not be migrated to the cloud environment, so it will be a hybrid solution. You also need to create an EFS file system that both the cloud and on-premises environments need to access. You will use Direct Connect to facilitate the communication between the on-premises servers and the EFS file system. Which statement characterizes how Amazon will charge you for this configuration?

A. You will be charged for AWS Direct Connect and for the data transmitted between the on-premises servers and EFS.
B. This is all covered under the VPC charge so there is no additional charge.
C. You will be charged for AWS Direct Connect; there is no additional cost for on-premises access to your Amazon EFS file systems.
D. There is no charge for Direct Connect and a flat fee for EFS.

A

C. You will be charged for AWS Direct Connect; there is no additional cost for on-premises access to your Amazon EFS file systems.

Explanation:
By using an Amazon EFS file system mounted on an on-premises server, you can migrate on-premises data into the AWS Cloud hosted in an Amazon EFS file system. You can also take advantage of bursting, meaning that you can move data from your on-premises servers into Amazon EFS, analyze it on a fleet of Amazon EC2 instances in your Amazon VPC, and then store the results permanently in your file system or move the results back to your on-premises server. There is no additional cost for on-premises access to your Amazon EFS file systems. Note that you’ll be charged for the AWS Direct Connect connection to your Amazon VPC.

16
Q

You have implemented Amazon S3 multipart uploads to import on-premises files into the AWS cloud.

While the process is running smoothly, there are concerns about network utilization, especially during peak business hours, when multipart uploads require shared network bandwidth.

What is a cost-effective way to minimize network issues caused by S3 multipart uploads?

A. Transmit multipart uploads to AWS using VPC endpoints.
B. Transmit multipart uploads to AWS using AWS Direct Connect.
C. Pause multipart uploads during peak network usage.
D. Compress objects before initiating multipart uploads.

A
17
Q

You plan to develop an efficient auto scaling process for EC2 instances. A key to this will be bootstrapping for newly created instances. You want to configure new instances as quickly as possible to get them into service efficiently upon startup. What tasks can bootstrapping perform? (Choose 3 answers)

A. Increase network throughput
B. Enroll an instance into a directory service
C. Install application software
D. Apply patches and OS updates

A
18
Q

Your data engineering team has recently migrated its Hadoop infrastructure to AWS. They ask if you are aware of options for higher-speed network connectivity between their instances.

What two enhanced network options can you present to the team? (Choose 2 answers)

A. Elastic Network Adapter (ENA)
B. Dual-stack Network Adapter (DNA)
C. Intel 82599 Virtual Function (VF) interface
D. AMD Opteron Virtual Function (VF) interface

A

A. Elastic Network Adapter (ENA)
C. Intel 82599 Virtual Function (VF) interface

Explanation:
Enhanced networking uses single root I/O virtualization (SR-IOV) to provide high-performance networking capabilities on supported instance types. SR-IOV is a method of device virtualization that provides higher I/O performance and lower CPU utilization when compared to traditional virtualized network interfaces. Depending on your instance type, you will either use the Intel 82599 Virtual Function interface or the Elastic Network Adapter.
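
A hedged sketch of how you might check and enable these enhanced networking options with boto3 follows; the instance ID is a placeholder, the instance must be stopped before changing the attributes, and ENA additionally requires a supported instance type, AMI, and driver.

    import boto3

    ec2 = boto3.client("ec2")
    instance_id = "i-0123456789abcdef0"  # placeholder

    # Check whether ENA and the Intel 82599 VF (sriovNetSupport) are currently enabled.
    ena = ec2.describe_instance_attribute(InstanceId=instance_id, Attribute="enaSupport")
    sriov = ec2.describe_instance_attribute(InstanceId=instance_id, Attribute="sriovNetSupport")
    print(ena.get("EnaSupport"), sriov.get("SriovNetSupport"))

    # Enable one or the other (the instance must be stopped, and the AMI/driver must support it).
    ec2.modify_instance_attribute(InstanceId=instance_id, EnaSupport={"Value": True})
    # or, for instance types that use the Intel 82599 Virtual Function interface:
    ec2.modify_instance_attribute(InstanceId=instance_id, SriovNetSupport={"Value": "simple"})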

19
Q

You are working on a two-tier application hosted on a cluster of EC2 instances behind an Application Load Balancer. During peak times, the web server's auto-scaling group is configured to add additional servers when CPU utilization reaches 70% for the existing servers.

Due to compliance requirements, only approved Amazon Machine Images (AMIs) can be utilized in the creation of servers for the application, and all existing AMIs need to be compliant. You need to determine a way to monitor the EC2 instances for non-compliant Amazon Machine Images and be alerted when a non-compliant image is in use.

Which of the following monitoring solutions would provide the necessary visibility and alerting whenever a non-compliant AMI is in use?

A. Enable AWS Config in the region the application is hosted in. Utilize the AWS-managed rule ‘approved-amis-by-id’ to trigger an alert whenever a non-compliant AMI is in use.
B. Create a CloudWatch Event type ‘EC2 Instance State-change Notification’ in the region the application is hosted in. Create an event rule to trigger an alert whenever a non-compliant AMI is in use.
C. Enable AWS Inspector for the EC2 instances in the auto-scaling group. Utilize the AWS-managed rule ‘Approved CIS hardened AMIs’ to trigger an alert whenever a non-compliant AMI is in use.
D. Enable AWS Shield in the region the application is hosted in. Create a rule to trigger an alert whenever a non-compliant AMI is in use.

A

A. Enable AWS Config in the region the application is hosted in. Utilize the AWS-managed rule ‘approved-amis-by-id’ to trigger an alert whenever a non-compliant AMI is in use.

Explanation:
AWS Config can assist with security monitoring by alerting you to when resources such as security groups and IAM credentials have had changes to the baseline configurations. AWS Config has a managed rules set and the AWS managed rule ‘approved-amis-by-id’ can check that running instances are using approved Amazon Machine Images, or AMIs. You can specify a list of approved AMIs by ID or provide a tag to specify the list of AMI Ids.

The remaining choices are incorrect for the following reasons:

● AWS Inspector is a tool used primarily for checking the network accessibility and security vulnerabilities of your EC2 instances and the security state of the applications running on those instances, as opposed to alerting on non-compliant Amazon Machine Images.

● AWS Shield is a managed DDoS protection service provided by Amazon to protect applications running on AWS. The rules in AWS Shield are not designed to track and alert on non-compliant Amazon Machine Images.

● Amazon CloudWatch Events delivers a near real-time stream of system events that describe changes in AWS resources. The CloudWatch Event type ‘EC2 Instance State-change Notification’ will log state changes of Amazon EC2 instances, not non-compliant Amazon Machine Images.
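
As a rough sketch of enabling that managed rule with boto3 (assuming AWS Config is already recording in the region), the rule name and AMI ID below are placeholders; APPROVED_AMIS_BY_ID and its amiIds parameter are the managed rule's documented identifiers. A notification on the rule's compliance-change events would then provide the alerting described above.

    import boto3
    import json

    config = boto3.client("config")

    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": "approved-amis-by-id",
            "Source": {
                "Owner": "AWS",
                "SourceIdentifier": "APPROVED_AMIS_BY_ID",  # AWS managed rule
            },
            # Comma-separated list of approved AMI IDs (placeholder value).
            "InputParameters": json.dumps({"amiIds": "ami-0123456789abcdef0"}),
            "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]},
        }
    )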

20
Q

A new, small hotel chain has hired you to optimize an existing small, single-AZ RDS DB instance to manage reservations for their original location. They recently expanded to new locations and need to optimize their online reservation service.

Incoming requests for reservations could double or triple their existing database size in a matter of hours, depending on how well their advertising works. With how much capital they invested in new locations, they value the availability of the database far above any cost concerns. During this peak period, the RDS database will need to manage an equal number of reads and writes.

With a limited amount of time to prepare for a potential spike, what is the best single step to ensure the database remains available to schedule reservations with no loss of service?

A. Enable multi-AZ configuration.
B. Enable Amazon RDS Storage Auto Scaling.
C. Manually modify the DB instance to a larger instance class.
D. Enable read replicas.

A

C. Manually modify the DB instance to a larger instance class.

Explanation:
First, let's review the key pieces of information in this question:

They currently use a small RDS instance to manage reservations...
Incoming requests for reservations could double or triple their existing database size in a matter of hours...
The RDS database will need to manage an equal number of reads and writes.

Handling a large number of reads and writes will require scaling vertically. Read replicas are ideal for handling spikes in read requests, but will not help manage writes.

Multi-AZ configurations are a feature to enable high availability, but are not designed to handle increased read or write workloads.

Storage auto scaling could handle storage limitations, but an influx of writes would overwhelm the small instance's compute and memory limitations. It is also feasible that auto scaling would not scale fast enough, given how quickly the hotel business expects its database to double in size. Auto scaling increases database size gradually, and once the storage scales, it cannot scale again for approximately six hours.

This is why the best choice is to manually modify the instance to a larger DB instance class.
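
A minimal boto3 sketch of that single vertical-scaling step is below; the DB identifier and target instance class are placeholders, and ApplyImmediately=True applies the change right away rather than waiting for the next maintenance window.

    import boto3

    rds = boto3.client("rds")

    # Scale the instance vertically to a larger class (identifier and class are placeholders).
    rds.modify_db_instance(
        DBInstanceIdentifier="reservations-db",
        DBInstanceClass="db.r6g.2xlarge",
        ApplyImmediately=True,
    )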

21
Q

A Solutions Architect is configuring the network security for a new three-tier application. The application has a web tier of EC2 instances in a public subnet, an application tier on EC2 instances in a private subnet, and a large RDS MySQL database instance in a second private subnet behind an internal load balancer.

The web tier will allow inbound requests using the HTTPS protocol. The application tier should receive requests using the HTTPS protocol, but must communicate with public endpoints on the internet without exposing its public IP addresses.

The RDS database should specifically allow both inbound and outbound traffic requests to port 3306 from the web and application tiers, but explicitly deny all inbound and outbound traffic over all other protocols and ports.

What stateful network security resource should the Solutions Architect configure to protect the web tier?

A. Configure an AWS WAF Web ACL.
B. Configure a NAT Gateway placed in the web tier’s public subnet.
C. Deploy a Network Access Control List (NACL) with inbound and outbound rules allowing traffic from a Source/Destination of 0.0.0.0/0.
D. Deploy a security group for the web tier with Port 443 open to 0.0.0.0/0.

A

D. Deploy a security group for the web tier with Port 443 open to 0.0.0.0/0.

Explanation:
The best solution for the web tier is a security group with Port 443 open to 0.0.0.0/0. A Network ACL or NACL is not stateful, and a NAT Gateway is not necessary as the web tier is within a public subnet.

An application load balancer with HTTPS listeners can offload encryption/decryption for an application, but it does not act as a firewall to protect a resource.

An AWS WAF Web ACL is overkill for this tier: it is stateless and also has a significant cost compared to a security group.
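
For reference, a minimal boto3 sketch of that stateful web-tier security group follows; the VPC ID and group name are placeholders.

    import boto3

    ec2 = boto3.client("ec2")

    sg = ec2.create_security_group(
        GroupName="web-tier-sg",                  # placeholder name
        Description="Allow HTTPS from anywhere",
        VpcId="vpc-0123456789abcdef0",            # placeholder VPC
    )

    # Inbound HTTPS only; because the group is stateful, return traffic is allowed automatically.
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )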

22
Q

After weeks of testing, an organization is launching the first publicly available version of its online service, with plans to release version two in six months. They will host a scalable web application on Amazon EC2 instances associated with an auto scaling group behind a Network Load Balancer.

Version 1.0 of the application must maintain high availability at all times, but version 2.0 will require a different instance family to provide optimal performance for new app features. After extensive market seeding, version 1.0 of the application has built a strong user base, so they expect workloads to be consistent when they launch and steadily grow over time.

Which choice below is a durable and cost-effective solution in this scenario?

A. Use an EC2 Instance Savings Plan
B. Use Standard Reserved Instances
C. Use a Compute Savings Plan
D. Use Spot Instances

A
23
Q

An IT department currently manages Windows-based file storage for user directories and department file shares. Due to increased costs and resources required to maintain this file storage, the company plans to migrate its files to the cloud and use Amazon FSx for Windows Server. The team is looking for the appropriate configuration options that will minimize their costs for this storage service.

Which of the following FSx for Windows configuration options are cost-effective choices the team can make in this scenario? (Choose 2 answers)

A. Choose the HDD storage type when creating the file system.
B. Enable data deduplication for the file system.
C. Choose the SSD storage type when creating the file system.
D. Disable data deduplication for the file system.

A
24
Q

A team is deploying AWS resources, including EC2 and RDS database instances, into a VPC’s public subnet after recovering from a system failure. The team attempts to establish connections using HTTPS protocol to these new instances from other subnets within the VPC, and from other peered VPCs within the same region, but receives numerous 500 error messages.

The team needs to quickly identify the cause or causes of the connection problem that prevents connecting to the new subnet.

What AWS solution should they use to identify the cause of the network problem?

A. Amazon Route 53 Application Recovery Controller (ARC)
B. Amazon Route 53 Resolver
C. VPC Reachability Analyzer
D. VPC Network Access Analyzer

A
25
Q

You are migrating on-premises legal files to the AWS Cloud in Amazon S3 buckets. The corporate audit team will review all legal files within the next year, but until that review is completed, you need to ensure that the legal files are not updated or deleted in the next 18 months.

There are millions of objects contained in the buckets that need review, and you are concerned you will need to spend an excessive amount of time protecting each object.

What steps will ensure the files can be uploaded most efficiently but have the required protection for the specific time period of 18 months? (Choose 2 answers)

A. Set a default retention period of 18 months on all related S3 buckets.
B. Set a retention period of 18 months on all relevant object versions via a batch operation.
C. Enable object locks on all relevant S3 buckets with a retention mode of compliance mode.
D. Set a default legal hold on all related S3 buckets.

A
26
Q

Your company has recently acquired several small start-up tech companies within the last year. In an effort to consolidate your resources, you are gradually migrating all digital files to your parent company's AWS accounts, and storing a large number of files within an S3 bucket.

You are uploading millions of files and want to save costs, but you have not had the opportunity to review many of the files and documents to understand which files will be accessed frequently or infrequently.

What would be the best way to quickly upload the objects to S3 and ensure the best storage class from a cost perspective?

A. Upload all files to the Amazon S3 Standard-IA storage class and immediately set up all objects to be processed with Storage Class Analysis.
B. Upload all files to the Amazon S3 Intelligent Tiering storage class and review costs related to the frequency of access over time.
C. Upload all the files to the Amazon S3 Standard storage class and review costs for access frequency over time.
D. Upload all the files to the Amazon S3 Standard-IA storage class and review costs for access frequency over time.

A
27
Q

An IT department manages a content management system (CMS) running on an Amazon EC2 instance mounted to an Elastic File System (EFS). The CMS throughput demands are high compared to the amount of data stored on the file system.

What is the appropriate EFS configuration in this case?

A. Choose Bursting Throughput mode for the file system.
B. Start with the General Purpose performance mode and update the file system to Max I/O if it reaches its I/O limit.
C. Start with Bursting Throughput mode and update the file system to Max I/O if it reaches its I/O limit.
D. Choose Provisioned Throughput mode for the file system.

A


28
Q

A company's container applications are managed with Kubernetes and hosted on Windows virtual servers. The company wants to migrate these applications to the AWS cloud, and needs a solution that supports Kubernetes pods hosted on Windows servers.

The solution must manage the Kubernetes API servers and the etcd cluster. The company's development team would prefer that AWS manage the host instances and containers as much as possible, but is willing to manage them both if necessary.

Which AWS service offers the best options for the developer's preferences and the company's essential requirement for their container application?

A. Amazon Elastic Compute Cloud (EC2)
B. Amazon Elastic Kubernetes Service (EKS) on AWS Fargate
C. Amazon Elastic Kubernetes Service (EKS) with self-managed node groups
D. Amazon Elastic Kubernetes Service (EKS) with EKS-managed node groups

A
29
Q

A company maintains an on-premises data center and performs daily data backups to on-disk and tape storage to comply with regulatory requirements. The IT department responsible for this project plans to continue maintaining the primary data on site and is looking for an AWS cloud solution for data backup that will work well with their current archiving process.

Which of the following AWS storage services should the team choose to manage its data backup requirements?

A. AWS Backup
B. AWS Tape Gateway
C. AWS Volume Gateway
D. AWS File Gateway

A
30
Q

You are responsible for setting up a new Amazon EFS file system. The organization's security policies mandate that the file system store all data in an encrypted form. The organization does not need to control key rotation or policies regarding access to the KMS key.

What steps should you take to ensure the data is encrypted at rest in this scenario?
A. When mounting the EFS file system to an EC2 instance, use the default AWS-managed KMS key to encrypt the data.
B. When creating the EFS file system, enable encryption using a customer-managed KMS key.
C. When creating the EFS file system, enable encryption using the default AWS-managed KMS key for Amazon EFS.
D. When mounting the EFS file system to an EC2 instance, use a customer-managed KMS key to encrypt the data.

A
31
Q

You are placed in charge of your company’s cloud storage and need to deploy empty EBS volumes. You are concerned about an initial performance hit when the new volumes are first accessed.

What steps should you take to ensure peak performance when the empty EBS volumes are first accessed?

A. Enable fast snapshot restore
B. Create a RAID 0 array
C. Do nothing - empty EBS volumes do not require initialization
D. Force the immediate initialization of the entire volume

A
32
Q

While building your environment on AWS you decide to use Key Management Service to help you manage encryption of data volumes. As part of your architecture you design a disaster recovery environment in a second region.

What should you anticipate in your architecture regarding the use of KMS in this environment?

A. KMS is not highly available by default; you have to make sure you span KMS across at least two availability zones to avoid single points of failure.
B. KMS is a global service; your architecture must account for regularly migrating encryption keys across regions to allow the disaster recovery environment to decrypt volumes.
C. KMS is highly available within the region; to make it span across multiple regions you have to connect primary and DR environments with a Direct Connect line.
D. KMS keys can operate on a multi-region scope, but AWS recommends region-specific keys for most cases.

A
33
Q

The IT department at a pharmaceutical company plans to reduce the size of one of its data centers and needs to migrate some of the data stored on a network file system to the Amazon cloud.

After the team migrates the files to the cloud, scientists and on-premises applications still need access to these resources as if they were still on site. The team is looking for an automated service that they can use to transfer the assets to the cloud and then continue accessing the files from on-premises after migration.

Which combination of AWS services is the appropriate choice to migrate data from an on-premises network file system and continue to access these files in the cloud seamlessly from on-premises?
A. Use AWS Batch to migrate the data and AWS Direct Connect to enable on-premises access to files in the AWS cloud.
B. Use AWS DataSync to migrate the data and AWS Storage Gateway (File Gateway) to enable on-premises access to files in the AWS cloud.
C. Use AWS Backup to migrate the data and AWS Storage Gateway (File Gateway) to enable on-premises access to files in the AWS cloud.
D. Use AWS Storage Gateway to migrate the data and AWS Direct Connect to enable on-premises access to files in the AWS cloud.

A
34
Q

You are working on a project that involves several AWS resources that will be protected by cryptographic keys. You decided to create these keys using AWS Key Management Services (KMS) and you will need to evaluate the security cost across resources and projects.

How will you easily categorize the security keys’ cost?

A. To each key, add a tag and specify the tag key and tag value. Aggregate the costs by tags.
B. To each key, add a description and specify the reason for creating this key. Aggregate the costs by descriptions.
C. Create asymmetric keys and you will be able to aggregate the costs by resources and projects.
D. After creating the keys, use AWS Organizations to obtain costs across resources and projects.

A
35
Q

You are the AWS account owner for a small IT company, with a team of developers assigned to your account as IAM users within existing IAM groups.

New associate-level developers manage resources in the Dev/Test environments, and these resources are quickly launched, used, and then deleted to save on resource costs. The new developers have read-only permissions in the production environment.

There is a complex existing set of buckets intended to separate Development and Test resources from Production resources, but you know this policy of separation between environments is not followed at all times. Your company needs to prevent new developers from accessing production environment files placed in an incorrect S3 bucket, because these production-level objects are accidentally deleted along with other Dev/Test S3 objects.

The ideal solution will prevent existing objects from being accidentally deleted and automatically minimize the problem in the future.

What steps are the most efficient to continuously enforce the tagging best practices and apply the principle of least privilege within Amazon S3? (Choose 2 answers)

A. Assign IAM policies to the Dev/Test IAM group that authorize S3 object operations based on object tags.
B. Update all existing object tags to correctly reflect their environment using Amazon S3 batch operations.
C. Implement an object tagging policy using AWS Config's Auto Remediation feature.
D. Create an AWS Lambda function to check object tags for each new Amazon S3 object. An incorrect tag would trigger an additional Lambda function to fix the tag.

A
36
Q

You are deploying a two-tiered web application with web servers hosted on Amazon EC2 in a public subnet of your VPC and your database tier hosted on RDS instances isolated in a private subnet.

Your requirements call for the web tier to be highly available. Which services listed will be needed to make the web tier highly available? (Choose 3 answers)

A. EC2 Auto Scaling
B. Elastic Load Balancer
C. Route 53
D. Multi-AZ for RDS

A
37
Q

A startup company currently stores all documents in S3. At the beginning of last year, they created a bonus policy, but after a long year of creating and storing further documentation, it seems to be lost in their S3 bucket.

Which of the following services could most easily help you find the bonus policy document?

A. AWS Kendra
B. AWS Rekognition
C. Amazon Comprehend
D. Amazon S3 Search

A

C. Amazon Comprehend

Explanation:
Amazon Comprehend can find documents about a particular subject using topic modeling, scan a set of documents to determine the topics discussed, and find the documents associated with each topic.

The remaining choices are incorrect for the following reasons:

S3 Search is not an AWS service.
AWS Kendra searches unstructured data and can be used with S3, but it requires a greater setup process than Comprehend.
AWS Rekognition is used for image and video analysis.
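
A hedged sketch of the Comprehend topic modeling approach described above: start_topics_detection_job is the real API, while the S3 URIs and IAM role ARN below are placeholders for your own document bucket and data-access role.

    import boto3

    comprehend = boto3.client("comprehend")

    comprehend.start_topics_detection_job(
        JobName="find-policy-documents",
        NumberOfTopics=20,
        InputDataConfig={
            "S3Uri": "s3://example-docs-bucket/",         # placeholder bucket of documents
            "InputFormat": "ONE_DOC_PER_FILE",
        },
        OutputDataConfig={
            "S3Uri": "s3://example-docs-bucket/topics/",  # topic results are written here
        },
        DataAccessRoleArn="arn:aws:iam::123456789012:role/ComprehendS3AccessRole",
    )
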
38
Q

A company is configuring its new AWS Organization and has implemented an allow list strategy. Now the company needs to grant special permissions to a single AWS account in the Development organizational unit (OU).

All AWS users within this single AWS account need to be granted full access to Amazon EC2. Other AWS accounts within the Development OU will not have full access to Amazon EC2. Certain accounts within the Development OU will have partial access to EC2 as needed.

The IT Security department has applied a service control policy (SCP) to the organization's root that allows AmazonEC2FullAccess.

What choice below includes all the necessary steps to grant full EC2 access only to AWS users in this single AWS account?

A. Apply an SCP granting AmazonEC2FullAccess to the Development OU and the specific AWS account. Apply the AmazonEC2FullAccess IAM policy to all IAM users in the account.
B. Apply the AmazonEC2FullAccess IAM policy to all IAM users in the account.
C. Apply an SCP granting AmazonEC2FullAccess to the Development OU and the specific AWS account. Apply a separate SCP denying EC2 access to all other AWS accounts within the Development OU.
D. Apply an SCP granting AmazonEC2FullAccess to the Development OU and the specific AWS account.

A

A. Apply an SCP granting AmazonEC2FullAccess to the Development OU and the specific AWS account. Apply the AmazonEC2FullAccess IAM policy to all IAM users in the account.

Explanation:
Inheritance for service control policies behaves like a filter through which permissions flow to all parts of the tree below. To allow an AWS service API at the member account level, you must allow that API at every level between the member account and the root of your organization. You must attach SCPs to every level from your organization’s root to the member account that allows the given AWS service API (such as EC2 Full Access or S3 Full Access). An allow list strategy has you remove the FullAWSAccess SCP that is attached by default to every OU and account. This means that no APIs are permitted anywhere unless you explicitly allow them. To allow a service API to operate in an AWS account, you must create your own SCPs and attach them to the account and every OU above it, up to and including the root. Every SCP in the hierarchy, starting at the root, must explicitly allow the APIs that you want to be usable in the OUs and accounts below it.
Users and roles in accounts must still be granted permissions using AWS Identity and Access Management (IAM) permission policies attached to them or to groups. The SCPs only determine what permissions are available to be granted by such policies. The user can’t perform any actions that the applicable SCPs don’t allow.
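
To illustrate the allow-list mechanics described above, here is a minimal boto3 sketch that creates an SCP allowing the EC2 APIs and attaches it to the Development OU and to the member account; the OU and account IDs are placeholders, and the same allowance must exist at every level up to the root. IAM users in the account still need the AmazonEC2FullAccess IAM policy attached, since the SCP only defines which permissions are available to be granted.

    import boto3
    import json

    org = boto3.client("organizations")

    allow_ec2 = {
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": "ec2:*", "Resource": "*"}],
    }

    policy_id = org.create_policy(
        Name="AllowEC2FullAccess",
        Description="Allow-list SCP permitting the EC2 APIs",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(allow_ec2),
    )["Policy"]["PolicySummary"]["Id"]

    # Attach at every level between the root and the member account (IDs are placeholders).
    for target in ["ou-abcd-11111111", "111122223333"]:
        org.attach_policy(PolicyId=policy_id, TargetId=target)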

39
Q

Multiple AWS accounts within a company's AWS Organization are managing separate websites on EC2 instances behind Application Load Balancers, with static content and user-generated content stored in S3 buckets behind CloudFront web distributions.

The engineering team wants to protect these vulnerable resources from common web attacks, such as SQL injection, cross-site scripting, and DDoS attacks. Currently, each AWS account allows different types of traffic using AWS Web Application Firewall (WAF). At the same time, they want to use an approach that will allow them to protect new EC2 instances and CloudFront distributions that will be added in the future.

What would be an effective and efficient approach to meet this requirement?

A. Create a set of AWS Web Application Firewall (WAF) rules for account managers for each relevant AWS account to deploy and associate a web ACL to every EC2 instance and S3 bucket.
B. Associate AWS Shield Advanced with every Application Load Balancer and CloudFront distribution.
C. Create a service control policy (SCP) to deny all IAM users’ organizational units (OUs) access to AWS WAF. Allow only AWS account root users to modify or create firewall rules with AWS WAF.
D. Tag web application resources such as EC2 instances and CloudFront distributions with resource tags based on their security requirements. Using Firewall Manager, add appropriate AWS WAF rules for each resource tag.

A

D. Tag web application resources such as EC2 instances and CloudFront distributions with resource tags based on their security requirements. Using Firewall Manager, add appropriate AWS WAF rules for each resource tag.

Explanation:
AWS Firewall Manager simplifies your administration and maintenance tasks across multiple accounts and resources for AWS WAF, AWS Shield Advanced, Amazon VPC security groups, and AWS Network Firewall. With Firewall Manager, you set up your AWS WAF firewall rules, Shield Advanced protections, Amazon VPC security groups, and Network Firewall firewalls just once. The service automatically applies the rules and protections across your accounts and resources, even as you add new resources. A prerequisite to using AWS Firewall Manager is to use AWS Organizations, with all features enabled.

Using Firewall Manager, you define the WAF rules in a single place and assign those rules to resources containing a specific tag or resources of a specific type, like CloudFront distributions. Firewall Manager is particularly useful when you want to protect your entire organization rather than a small number of specific accounts and resources, or if you frequently add new resources that you want to protect. Firewall Manager also provides centralized monitoring of DDoS attacks across your organization.

40
Q

Your team manager requires all EBS volumes and snapshots to be encrypted, so the solutions architect enables EBS encryption by default for the team's AWS accounts.

Now a team member is using an unencrypted EBS snapshot provided by another team to create a new EBS volume.

What does the solutions architect need to do to ensure that the new EBS volume is encrypted?
A. Use Amazon Data Lifecycle Manager to create a new encrypted EBS volume from the unencrypted snapshot.
B. No additional action is necessary. The volume from the unencrypted snapshot will automatically be encrypted. Create a new EBS volume from a copy of the unencrypted snapshot.
C. Encrypt the unencrypted snapshot with the default AWS KMS, and then create a new EBS volume from the encrypted snapshot.
D. Manually enable encryption when creating the volume. Otherwise, the volume will not be encrypted.

A
41
Q

A user is running a critical batch process that runs for 1 hour and 50 minutes every day at a fixed time.

Which option is the right instance type and purchase option in this case, assuming the user performs the same task for the next twelve months?

A. An Instance-store backed instance with spot instance pricing
B. An EBS-backed scheduled reserved instance with partial instance pricing
C. An EBS-backed instance with standard reserved upfront instance pricing.
D. An EBS-backed instance with on-demand instance pricing.

A

D. An EBS-backed instance with on-demand instance pricing.

Explanation:
For Amazon Web Services, a reserved instance (standard or convertible) helps the user save money when the same instance will run for a long period. Generally, if an instance is used around 30-40% of the year or more, a reserved instance is recommended. Here, because the instance runs for only 1 hour and 50 minutes daily, or less than 8 percent of the year, a reserved instance is not recommended, as it will be costlier.

Even at its highest potential savings, you are still paying roughly 25 percent of the annual cost for a reserved instance that you use less than 2 hours a day (or less than 8 percent of each year), so you are not saving money.

Spot Instances are not ideal because the process is critical, and must run for a fixed length of time at a fixed time of day. Spot instances would stop and start based on fluctuations in instance pricing, leaving this process potentially unfinished.

The user should use on-demand with EBS in this case. While it has the highest cost, it also has the greatest flexibility to ensure that a critical process like this is always completed.
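
The back-of-the-envelope arithmetic below makes the "less than 8 percent of the year" point concrete; the hourly rate and the reserved-instance discount are assumed placeholder numbers, not real AWS prices.

    # Rough cost comparison for a 1 h 50 min daily job (placeholder prices, not AWS rates).
    hours_per_day = 1 + 50 / 60
    on_demand_rate = 0.10                                 # assumed $/hour for the instance type

    on_demand_annual = hours_per_day * 365 * on_demand_rate
    reserved_annual = 24 * 365 * on_demand_rate * 0.60    # assume a ~40% discount, billed every hour
    utilization = hours_per_day / 24

    print(f"Utilization: {utilization:.1%} of the year")
    print(f"On-demand:   ${on_demand_annual:,.2f} per year")
    print(f"Reserved:    ${reserved_annual:,.2f} per year, paid even when idle")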

42
Q

An application hosted on EC2 instances has no usage (almost zero load) for the majority of the day. The application workload lasts for roughly five hours a day, at the same time each day. The day-to-day workload is consistent, and will be required for the foreseeable future.

Which solution below is the most efficient and cost-effective in this scenario?
A. Use on-demand T2 burstable instance configured for unlimited mode
B. Replace the EC2 instance with Lambda functions
C. Set instance start and stop times using the AWS Instance Scheduler
D. Use on-demand capacity reservations to save costs for running the instance all day.

A
43
Q

A group of companies has invested in building an app on AWS that offers a series of games and relaxation tools for their employees. The architect needs to authorize these employees to access this new application using their corporate credentials. They want their employees to have at most one hour of access each time they log in.

How can the company IT team enable this request?

A. Use AWS IoT to provide access to this AWS application and have a one-hour access limit on the credentials.
B. Use AWS for Games to provide temporary access to corporate employees.
C. Use AWS Cloud9 to provide access to the leisure service and set login duration to an hour.
D. Use AWS IAM Identity Center (formerly AWS SSO) to provide access to this AWS application and have a one-hour access limit on the credentials.

A
44
Q

A company manages DynamoDB databases to store online transaction-related data such as sales, returns, and inventory changes.

As sales continue to grow, the company is more concerned with backing up database tables across multiple regions as well as across multiple AWS accounts. By replicating backups across business locations and employee accounts, the company believes any issues can be resolved faster and more easily.

How can a solutions architect effectively address the company’s requirements?

A. Enable continuous backups with point-in-time recovery
B. Enable On-Demand backup and restore using DynamoDB backups
C. Enable On-Demand backup and restore using AWS Backup
D. Deploy a DynamoDB Accelerator (DAX) cluster

A
45
Q

A genomics company stores petabytes of data for scientists in its R&D department to run various scientific computations. The company is planning to move this data from an on-premises data center to AWS, and they need to select an AWS storage service that will allow them to access this shared data from multiple EC2 instances. The EC2 instances will run a suite of existing scientific tools that expect the data to be in POSIX files and need random read/write access to each file’s data.

Which storage solution is the appropriate choice to store this data?

A. ElastiCache
B. Simple Storage Service (S3)
C. Elastic Block Storage (EBS)
D. Elastic File System (EFS)

A

D. Elastic File System (EFS)

Explanation:
Let’s take a look at Amazon’s storage solutions and see which would be the right choice for shared access between multiple EC2 instances:

Elastic Block Storage (EBS): EBS provides block-level storage for your EC2 instances for persistent and durable data storage. EBS is an appropriate choice for storing frequently changing data or if you have specific IOPS requirements. You can attach one or more EBS volumes to an EC2 instance; however, multiple EC2 instances cannot share EBS storage.
Simple Storage Service (S3): Amazon S3 is a highly available, highly durable object-based storage service that is cost-effective and accessible. Multiple EC2 instances can access this storage, but it might not be the best choice if applications running on the EC2 instance need access to a mounted file system. You cannot mount S3 storage to an EC2 instance. Also, since the applications accessing the files are expecting POSIX files, S3 would not be the right choice.
ElastiCache: Amazon ElastiCache is a database service, not a storage service like the other options. ElastiCache provides an in-memory cache used by distributed applications to share data. ElastiCache is not associated with EC2 instances for shared storage as required in this scenario.
Elastic File System (EFS): Amazon EFS is file-level storage optimized for low-latency access that appears to users like a file manager interface. EFS uses standard file system semantics such as locking files, renaming files, and updating files, and uses a hierarchical structure. You can mount EFS storage to multiple EC2 instances to enable concurrent access to the file system. Also, EFS would support POSIX files as required by the scientific tools the team is using.

From these storage solutions, only EFS will provide the shared storage we are looking for in this scenario.
https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html
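
A hedged boto3 sketch of provisioning the shared EFS file system described above follows; the subnet and security group IDs are placeholders, and each EC2 instance would then mount the file system over NFS (for example with the amazon-efs-utils mount helper).

    import boto3

    efs = boto3.client("efs")

    fs = efs.create_file_system(
        CreationToken="genomics-shared-data",   # idempotency token (placeholder)
        PerformanceMode="generalPurpose",
        Encrypted=True,
    )

    # One mount target per Availability Zone so every instance has a local NFS endpoint.
    for subnet in ["subnet-0aaa1111", "subnet-0bbb2222"]:          # placeholder subnets
        efs.create_mount_target(
            FileSystemId=fs["FileSystemId"],
            SubnetId=subnet,
            SecurityGroups=["sg-0123456789abcdef0"],               # must allow NFS (TCP 2049)
        )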

46
Q

You are in charge of a web application which is running on EC2 instances. Your site occasionally experiences spikes in traffic that cause your EC2 instances' resources to become overwhelmed. During these spikes, the application may freeze up and lose recently-submitted requests from users.

You have implemented Auto Scaling to deploy additional EC2 instances to handle spikes, but the new instances are not deploying fast enough to prevent the existing application servers from freezing.

Which of the following is likely to provide the cheapest solution to avoid losing recently submitted requests, assuming that you cannot find a pattern to when these spikes are occurring?

A. Deploy additional EC2 spot instances when needed.
B. Deploy additional EC2 reserved instances when needed.
C. Use Amazon SQS to delete acknowledged messages and redeliver failed messages.
D. Set up another Availability Zone with the same resources and use that when the spikes occur.

A

C. Use Amazon SQS to delete acknowledged messages and redeliver failed messages.

Explanation:
The use of an SQS queue allows submitted requests to be retained as messages in the SQS queue until the application resumes normal operation and can process the requests. Using Amazon SQS to delete acknowledged messages and redeliver failed messages decouples the application components.

Using EC2 resources, whether you use reserved or spot instances, is not cost-effective owing to the infrequency of the spikes in traffic.

SQS queues are preferable to in-memory caches because in-memory storage will operate at all times and can be fairly expensive to address an issue that only comes up during spikes.
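
A minimal sketch of that decoupling pattern with boto3: the web tier enqueues each submitted request, and worker instances delete messages only after successful processing, so nothing is lost while the fleet scales out. The queue name and the handle_request function are placeholders.

    import boto3

    sqs = boto3.client("sqs")
    queue_url = sqs.create_queue(QueueName="submitted-requests")["QueueUrl"]  # placeholder name

    # Producer (web tier): enqueue the request instead of processing it in-line.
    sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 123}')

    # Consumer (worker): process, then delete; unacknowledged messages are redelivered.
    def handle_request(body):          # placeholder for the real processing logic
        print("processing", body)

    messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in messages.get("Messages", []):
        handle_request(msg["Body"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])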

47
Q

A corporate tax firm stores prepared tax documents in Amazon S3 for each of its customers, and wants to make sure documents will only be accessed by the intended customer using a static CloudFront page that retrieves data from S3.

To do so, the tax firm limits document access to a specific, secure IP address provided by each customer.

Which access control method should a solutions architect use to meet this security requirement?

A. CloudFront Origin Access Identities (OAI)
B. CloudFront Signed URLs
C. S3 Object Access Control Lists (ACLs)
D. S3 pre-signed URLs

A
48
Q

A company is using EC2 instances to run their website behind an Application Load Balancer (ALB).

The engineering team wants to terminate encrypted connections at the Load Balancer, using the Secure Sockets Layer (SSL) protocol, without needing to manage SSL connections at the EC2 instances. The company does not have an SSL certificate available yet.

What should you use to meet this requirement and follow AWS security best practices? (Choose 2 answers)

A. Assign an IAM SSL certificate imported to the load balancer.
B. Deploy another EC2 instance in front of the existing EC2 instances. Install and run HAProxy software on the instance.
C. Use an HTTPS listener in the Application Load Balancer.
D. Assign an SSL certificate issued by AWS Certificate Manager (ACM) to the load balancer.

A
49
Q

Your company adopted an open-source based monitoring strategy using Prometheus for container monitoring. The company is undertaking a large initiative to migrate fully to AWS by the end of the fiscal year.

Which of the following AWS services may best help manage the operational complexity of scaling the ingestion, storage, alerting, and querying of metrics while being compatible with an open-source cloud-native project?

A. AWS CloudWatch
B. Amazon Managed Service for Prometheus
C. Amazon Managed Service for Grafana
D. AWS EKS with CloudWatch for metrics

A

B. Amazon Managed Service for Prometheus

Explanation:
Amazon Managed Service for Prometheus is a serverless monitoring service for metrics compatible with open-source Prometheus, making it easier for you to securely monitor and alert on container environments. You should use Amazon Managed Service for Prometheus if you have adopted an open source-based monitoring strategy, have already deployed or plan to adopt Prometheus for container monitoring, and prefer a fully managed experience where AWS provides enhanced security, scalability, and availability.

The remaining choices are not the most effective solution for this problem.
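As a rough illustration, a workspace for the managed Prometheus service can be created with boto3; the alias below is a placeholder.

```python
import boto3

# Sketch: create a workspace to which existing Prometheus servers can
# remote-write their metrics; the alias is a placeholder.
amp = boto3.client("amp", region_name="us-east-1")

workspace = amp.create_workspace(alias="container-metrics")
print(workspace["workspaceId"], workspace["arn"])
```

Existing Prometheus servers would then point their remote_write configuration at the workspace endpoint and sign requests with AWS SigV4.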

50
Q

A customer is using a NAT Gateway to allow a cluster of EC2 instances on a private subnet in their VPC to access an S3 bucket in the same region. After a recent uptick in usage, the customer noticed that data transfer charges rose beyond what they expected. The customer has requested that you find a solution that minimizes data transfer costs without exposing the EC2 instances to the Internet directly. Which option best meets the requirements?

A. Create a DX connection between the S3 bucket and the private subnet
B. Use CloudFront to cache frequently accessed data
C. Create a VPC Endpoint for the S3 bucket and update the routing table for the private subnet to route traffic to the S3 bucket to the VPC Endpoint
D. Use a NAT Instance instead of the NAT Gateway and update the routing table for the private subnet to route traffic to the S3 bucket to the NAT Instance

A

C. Create a VPC Endpoint for the S3 bucket and update the routing table for the private subnet to route traffic to the S3 bucket to the VPC Endpoint

Explanation:
A VPC endpoint enables you to establish a private connection between a VPC and other AWS resources. Transfers between S3 and AWS resources in the same region are free. Therefore, in this scenario using a VPC Endpoint would save on data transfer costs when compared to a NAT Gateway. A NAT instance would have similar transfer costs to a NAT Gateway. Caching data using CloudFront would not reduce the transfer costs as dramatically as using a VPC Endpoint, and depending on the type of data being transferred may have no or limited impact on costs. A Direct Connect (DX) connection would not be useful in connecting a private VPC subnet to an S3 bucket.
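A minimal boto3 sketch of creating the gateway endpoint follows; the VPC and route table IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Sketch only: VPC and route table IDs are placeholders.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    # Associating the private subnet's route table adds a prefix-list
    # route so S3-bound traffic bypasses the NAT gateway.
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```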

51
Q

A user is using an EC2 key pair to connect to an EC2 Linux instance backed by an Elastic Block Storage (EBS) volume. The same EC2 key pair was created and downloaded when the EC2 instance was deployed.

The user has lost the EC2 key pair private key and is not able to connect to the EC2 instance via SSH anymore.

What steps should the user follow to regain access to the EC2 instance with the least operational overhead? (Choose 2 answers)

A. Stop the instance, detach the root volume and attach it to another EC2 instance as a data volume.
B. Stop the instance, detach the root volume and attach it to another EC2 instance as the root volume.
C. Create a new EC2 key pair and assign it to the EC2 instance using the AWS Management Console.
D. Modify the authorized_keys file with a new public key, move the volume back to the original instance, and restart the instance.

A
52
Q

A DevOps team manages an EBS-backed Amazon EC2 instance that hosts a staging environment for a web-based application. The group does not have a backup for the staging environment and would like to set up a process to automatically back it up. If needed, the team should be able to launch a new instance to restore the staging environment from a backup, with each associated EBS volume automatically attached to the instance. The system should be backed up each day and retain the last five backups.

Which of the following solutions could the DevOps team use to automate their EC2 backups and allow system recovery as described?

A. Use the EBS snapshot service to automate Amazon Machine Image (AMI) lifecycles.
B. Use AWS Backup to automate Amazon Machine Image (AMI) lifecycles.
C. Use Amazon Data Lifecycle Manager to automate Amazon Machine Image (AMI) lifecycles.
D. Use AWS File Gateway to create and manage snapshots for EBS boot device volumes and data volumes.

A

C. Use Amazon Data Lifecycle Manager to automate Amazon Machine Image (AMI) lifecycles.

Explanation:
In this problem scenario, we have an instance that uses EBS for its root device volume with one or more EBS data volumes. The problem states that we are looking for a way to automatically back up the system so that we can restore it completely from a recent snapshot.

The primary approach to backing up EBS storage is using EBS snapshots. EBS snapshots allow you to back up EBS data and store the snapshots in S3. Each snapshot saves only the incremental changes that have occurred since the last snapshot. This approach helps reduce storage costs by not duplicating data with each backup. However, the snapshot service alone does not have a way to build policies to create, retain, and delete snapshots automatically. Also, these snapshots are not in the form of an Amazon Machine Image (AMI) snapshot, as is needed in this case.

Luckily, there is a service that you can use to build policies to help create and manage EBS snapshots and AMIs automatically; the service is called Amazon Data Lifecycle Manager (DLM). In this case, you can use DLM to automate AMI lifecycles. For example, you can use DLM to create an EBS-backed AMI and then make additional snapshots regularly. Additionally, you can reduce storage needs by setting a policy to keep only a certain number of snapshots. EBS-backed AMIs automatically include a snapshot for each associated EBS volume.

The other services that appear in the remaining choices are not services that administrators would use to back up EBS volumes or automate AMI lifecycles.
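A hedged boto3 sketch of such a DLM AMI lifecycle policy is shown below; the role ARN, target tag, and schedule are placeholders.

```python
import boto3

dlm = boto3.client("dlm", region_name="us-east-1")

# Sketch: the role ARN and target tag are placeholders. The policy creates
# an AMI of tagged instances daily and retains the last five images.
dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::111122223333:role/AWSDataLifecycleManagerDefaultRoleForAMIManagement",
    Description="Daily AMI backup, keep last 5",
    State="ENABLED",
    PolicyDetails={
        "PolicyType": "IMAGE_MANAGEMENT",
        "ResourceTypes": ["INSTANCE"],
        "TargetTags": [{"Key": "Backup", "Value": "staging"}],
        "Schedules": [
            {
                "Name": "DailyAMI",
                "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
                "RetainRule": {"Count": 5},
            }
        ],
    },
)
```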

53
Q

A cloud engineer is tasked with building a custom data identifier that will discover sensitive data in Amazon S3 buckets and classify these objects according to the type of data discovered. The engineer is searching for a service that will help the team complete this task with little configuration.

How can the engineer discover the sensitive data and classify them as quickly as possible?

A
54
Q

A cloud engineer is tasked with building a custom data identifier that will discover sensitive data in Amazon S3 buckets and classify these objects according to the type of data discovered. The engineer is searching for a service that will help the team complete this task with little configuration. How can the engineer discover the sensitive data and classify it as quickly as possible?

A. Enable Amazon Detective on the AWS account containing these buckets, define detection criteria and define finding severity settings
B. Enable Amazon Macie on the AWS account containing these buckets, define detection criteria and define finding severity settings
C. Enable Amazon Cognito on the AWS account containing these buckets, define detection criteria, and define finding severity settings.
D. Enable Amazon Inspector on the AWS account containing these buckets, define detection criteria and define finding severity settings.

A

B. Enable Amazon Macie on the AWS account containing these buckets, define detection criteria and define finding severity settings

Explanation:
Amazon Macie is a data security and data privacy service that leverages machine learning and pattern recognition to discover and protect sensitive data. In this scenario, Amazon Macie will continuously evaluate the Amazon S3 buckets on the account, discover data containing PII, and take remediation actions to protect it in line with HIPAA compliance.

The remaining choices are incorrect for the following reasons:

Amazon Detective makes it easy to analyze, investigate and determine the root cause of security assessment findings or suspicious activities. Analysis and investigation data are also presented in forms of graphs, continuously refreshed. Amazon Detective does not locate or discover sensitive data in Amazon S3 buckets.

Amazon Cognito is a service for user sign-up/sign-in and access management for web and mobile applications. Amazon Cognito also enables you to authenticate users with external identity providers such as Amazon, Apple, Google, or Meta. It does not use machine learning to detect sensitive information such as PII.

Amazon Inspector is a vulnerability management service; it scans AWS workloads for software vulnerabilities and unintended network exposure. Amazon Inspector is a security assessment service. It does not use machine learning to detect PII in Amazon S3 or take actions to protect sensitive information.

Amazon GuardDuty is a threat detection service that continuously monitors AWS accounts for malicious activities and anomalous behaviors. Amazon GuardDuty leverages machine learning to identify threats and classify them. It does not apply machine learning to buckets that you select or alert you when sensitive information is discovered.
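To illustrate the correct answer, here is a hedged boto3 sketch of enabling Macie, defining a custom data identifier, and running a one-time classification job; the identifier name, regex, account ID, and bucket name are placeholders.

```python
import boto3

macie = boto3.client("macie2", region_name="us-east-1")

# Enable Macie for the account (no effect if already enabled).
macie.enable_macie()

# Custom data identifier: the name and regex are illustrative placeholders.
cdi = macie.create_custom_data_identifier(
    name="internal-customer-id",
    regex=r"CUST-\d{8}",
    description="Matches internal customer IDs",
)

# One-time classification job over the target buckets (IDs are placeholders).
macie.create_classification_job(
    jobType="ONE_TIME",
    name="classify-sensitive-objects",
    customDataIdentifierIds=[cdi["customDataIdentifierId"]],
    s3JobDefinition={
        "bucketDefinitions": [
            {"accountId": "111122223333", "buckets": ["customer-data-bucket"]}
        ]
    },
)
```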

55
Q

You've been assigned a new client that hosts a stateless proprietary application on four EC2 reserved instances in an existing AWS cloud environment. Additionally, they have two reserved instances reading from a queue. Examining historical performance data, you determine that a large traffic spike occurs during their fiscal year processing in late June. What changes can you make to your EC2 instances to maintain the application's resiliency, improve performance, and reduce cost? (Choose 2 answers)

A. Register the Reserved instances with a Load Balancer for the queuing.
B. Assign Spot Instances to interact with the queue for cost savings.
C. Configure an Amazon EC2 Auto Scaling group of on-demand instances to address the June spike.
D. Add Spot instances for the expected traffic spike in June.

A

A. Register the Reserved instances with a Load Balancer for the queuing.

Explanation:
Reserved instances are the best value for steady traffic over an extended period. Licensing agreements for reserved instances can be 1 or 3 years. On-demand instances are perfect for handling short-term traffic spikes. Spot instances are the best value for non-critical applications that can afford to be stopped. Trusted Advisor can provide valuable information on your instance utilization relative to cost savings.

56
Q

An online gaming platform is evaluating an in-memory solution to store the session states of its users. In order to support backup and restore, a requirement for the datastore is to take point-in-time snapshots and create replicas. Which of these meets the stated requirements for distributed session management?

A. Amazon CloudFront
B. Amazon Aurora
C. Amazon Elasticache for Redis
D. Amazon Elasticache for Memcached

A
57
Q

A company manages DynamoDB databases to store online transaction-related data such as sales, returns, and inventory changes. As sales continue to grow, the company is more concerned with backing up database tables across multiple regions as well as across multiple AWS accounts. By replicating backups across business locations and employee accounts, the company believes any issues can be resolved faster and more easily. How can a solutions architect effectively address the company's requirements?

A. Enable On-Demand backup and restore using DynamoDB backups
B. Enable On-Demand backup and restore using AWS Backup
C. Enable continuous backups with point-in-time recovery
D. Deploy a DynamoDB Accelerator (DAX) cluster

A

B. Enable On-Demand backup and restore using AWS Backup

Explanation:
Amazon DynamoDB can help you meet regulatory compliance and business continuity requirements through enhanced backup features in AWS Backup. AWS Backup is a fully managed data protection service that makes it easy to centralize and automate backups across AWS services, in the cloud and on-premises.

Enhanced backup features available through AWS Backup include:

Scheduled backups - You can set up regularly scheduled backups of your DynamoDB tables using backup plans.

Cross-account and cross-Region copying - You can automatically copy your backups to another backup vault in a different AWS Region or account, which allows you to support your data protection requirements.
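A hedged boto3 sketch of a backup plan with a cross-Region copy action is shown below; the vault names, account ID, and schedule are placeholders.

```python
import boto3

backup = boto3.client("backup", region_name="us-east-1")

# Sketch: vault names, account ID, and schedule are placeholders.
backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "dynamodb-daily",
        "Rules": [
            {
                "RuleName": "daily-with-cross-region-copy",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 5 ? * * *)",  # daily at 05:00 UTC
                "CopyActions": [
                    {
                        # Copy each recovery point to a vault in another
                        # Region (or a vault shared from another account).
                        "DestinationBackupVaultArn": "arn:aws:backup:us-west-2:111122223333:backup-vault:dr-vault"
                    }
                ],
            }
        ],
    }
)
```

The DynamoDB tables would then be assigned to the plan with a backup selection (for example, by tag).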

58
Q

A team is deploying AWS resources, including EC2 and RDS database instances, into a VPC's public subnet after recovering from a system failure. The team attempts to establish connections using the HTTPS protocol to these new instances from other subnets within the VPC, and from other peered VPCs within the same region, but receives numerous 500 error messages. The team needs to quickly identify the cause or causes of the connection problem that prevents connecting to the new subnet. What AWS solution should they use to identify the cause of the network problem?

A. Amazon Route 53 Resolver
B. VPC Reachability Analyzer
C. VPC Network Access Analyzer
D. Amazon Route 53 Application Recovery Controller (ARC)

A

B. VPC Reachability Analyzer

Explanation:
The correct solution, in this case, is the VPC Reachability Analyzer.

VPC Reachability Analyzer is a configuration analysis tool that enables you to perform connectivity testing between a source resource and a destination resource in your virtual private clouds (VPCs). When the destination is reachable, Reachability Analyzer produces hop-by-hop details of the virtual network path between the source and the destination.

The other choices are useful in other situations:

Route 53 ARC provides continual readiness checks to help make sure, on an ongoing basis, that your applications are scaled to handle failover traffic and configured so you can route around failures. Route 53 ARC helps you centrally coordinate failovers within an AWS Region or across multiple Regions. It provides extremely reliable routing control so you can recover applications by rerouting traffic, for example, across Availability Zones or Regions. To do this, you partition your applications into redundant failure-containment units, or replicas, called cells. The boundary of a cell can be an Availability Zone or a Region, or even a smaller unit within an Availability Zone.
The Route 53 Resolver can contain endpoints that you configure to answer DNS queries to and from your on-premises environment. You also can integrate DNS resolution between Resolver and DNS resolvers on your network by configuring forwarding rules. Your network can include any network that is reachable from your VPC.
Network Access Analyzer is a feature that identifies unintended network access to your resources on AWS. You can use Network Access Analyzer to specify your network access requirements and to identify potential network paths that do not meet your specified requirements.
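For reference, a minimal boto3 sketch of running Reachability Analyzer between two resources follows; the source and destination instance IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Sketch: source and destination instance IDs are placeholders.
path = ec2.create_network_insights_path(
    Source="i-0aaaabbbbccccdddd",        # an instance in a peered VPC
    Destination="i-0eeeeffff00001111",   # the new instance in the public subnet
    Protocol="tcp",
    DestinationPort=443,                 # HTTPS
)

# Run the analysis; when the path is unreachable, the result identifies the
# blocking component (security group, NACL, route table, and so on).
analysis = ec2.start_network_insights_analysis(
    NetworkInsightsPathId=path["NetworkInsightsPath"]["NetworkInsightsPathId"]
)
print(analysis["NetworkInsightsAnalysis"]["NetworkInsightsAnalysisId"])
```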
59
Q

Your company needs a solution to graph process data from industrial IoT devices and various supporting systems for customers operating in seafood processing. The requirements for the graphing solution are: it must provide graphs and visualization; there cannot be any complex IT integrations; you should be able to track metrics from both AWS and external sources; and it must support multi-source, multi-account, and multi-region dashboards. Which of the following AWS services would best fulfill these requirements?

A. AWS Managed Grafana
B. AWS Managed Prometheus
C. Amazon Quicksight
D. AWS CloudWatch

A

A. AWS Managed Grafana

Explanation:
Amazon Managed Grafana is a fully managed service for open-source Grafana developed in collaboration with Grafana Labs. Grafana is a popular open-source analytics platform that enables you to query, visualize, alert, and understand your metrics no matter where they are stored.

60
Q

You are rapidly configuring a VPC for a new insurance application that needs to go live imminently to meet an upcoming compliance deadline. The insurance company must migrate a new application to this new VPC and connect it as quickly as possible to an on-premises, company-owned application that cannot migrate to the cloud. Your immediate goal is to connect your on-premises app as quickly as possible, but speed and reliability are critical long-term requirements. The insurance company suggests implementing the quickest connection method now, and then switching over to a faster, more reliable connection service within the next six months if necessary. Which strategy would work best to satisfy their short- and long-term networking requirements?

A. AWS VPN is the best short-term solution, and AWS Direct Connect is the best long-term solution.
B. VPC Endpoints are the best short-term and long-term solutions.
C. AWS Direct Connect is the best short-term solution, and AWS VPN is the best long-term solution.
D. VPC Endpoints are the best short-term solution, and AWS VPN is the best long-term solution.

A

A. AWS VPN is the best short-term solution, and AWS Direct Connect is the best long-term solution.

Explanation:
A VPN connection is the fastest way to complete the connection between on-premises computing and your VPC. However, VPN is not as reliable or as fast as Direct Connect. VPN would satisfy the requirement to establish the connection as quickly as possible. You can subsequently request a Direct Connect connection that is not subject to inconsistencies of the internet and will be faster and more reliable than AWS VPN. Direct Connect can ultimately replace the VPN connection and all requirements will be satisfied.
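A hedged boto3 sketch of provisioning the short-term Site-to-Site VPN is shown below; the VPC ID, BGP ASN, and on-premises gateway IP are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Sketch of the short-term Site-to-Site VPN; IDs, ASN, and the on-premises
# gateway IP are placeholders.
cgw = ec2.create_customer_gateway(BgpAsn=65000, PublicIp="203.0.113.10", Type="ipsec.1")
vgw = ec2.create_vpn_gateway(Type="ipsec.1")
ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
                       VpcId="vpc-0123456789abcdef0")

ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
    Options={"StaticRoutesOnly": True},
)
# Direct Connect is ordered separately (a physical cross-connect plus a
# virtual interface) and can later replace or back this VPN connection.
```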

61
Q

A solutions architect is working on a project to migrate an application hosted on-premises to the AWS cloud. The application will run on an Amazon EC2 instance and use Amazon EBS for storage. The team is looking for a solution to transfer the on-premises application data stored on a block device on-premises to AWS. Which of the following AWS services could the solutions architect use to migrate the application data?

A. AWS Volume Gateway
B. AWS File Gateway
C. AWS Tape Gateway
D. AWS DataSync

A

A. AWS Volume Gateway

Explanation:
AWS Storage Gateway is an Amazonservice that allows you to create a gateway between your on-premises storage systems and Amazon’sS3 and Glacier. There are three storage gateway services you should know about:

AWS File Gateway: File Gateway is a service you use to extend your on-premises file storage tothe AWS cloud without needing to update existing applications that use these files.  With file gateway, you can map on-premises drives to S3 so thatusers and applications can access these files as they usuallywould.  Once the files are in S3, you can access them in the cloud from any applications deployed there or continue using these files from on-premises.  File gateway provides access to files in S3 based on SMB or NSF protocols.
AWS Volume Gateway:  The Volume Gateway can provideprimary storage or backup storage for your on-premises iSCSI block storage devices.  There are two configuration modes when you use volume gateway: cache mode and storedmode.  With cache mode, the system keeps the primary copy of your data in S3, and a local on-premises cacheprovides low latency access to the most frequently accessed data.  In stored mode, the primary copy of the data is on-premises, and the system periodically saves a backup to S3.Stored volume gateways create EBS snapshots stored on S3 and are billed as Amazon EBS snapshots.
AWS Tape Gateway:  The tape gateway, also known as the Virtual Tape Library, allows you to back up your on-premises data to S3 from your corporate data center.  With tape gateway, you canleverage the Glacier storage classes for data archiving at a lower cost than S3.  The tape gatewayis essentially a cloud-based tape backup solution that replacesphysical components with virtual ones.

In theproblem statementfor thisquestion, we are looking for a storage service that will help transfer data stored on a block storage device on-premises to the AWS cloud. The best solution for this problem is to use the stored volume gateway. With thisapproach, the systemwill create a snapshot of the on-premises volume, save it in S3, andthe solutions architect can use the snapshotto create an EBS volume to associate with the EC2 instance.

62
Q

You are working on a project that involves several AWS resources that will be protected by cryptographic keys. You decided to create these keys using AWS Key Management Service (KMS) and you will need to evaluate the security cost across resources and projects. How will you easily categorize the security keys' cost?

A. To each key, add a tag and specify the tag key and tag value. Aggregate the costs by tags.
B. After creating the keys, use AWS Organizations to obtain costs across resources and projects.
C. Create asymmetric keys and you will be able to aggregate the costs by resources and projects.
D. To each key, add a description and specify the reason for creating this key. Aggregate the costs by descriptions.

A

A. To each key, add a tag and specify the tag key and tag value. Aggregate the costs by tags.

Explanation:
When you add tags to your AWS resources, AWS can generate a cost allocation report with reports and costs aggregated by tag. To each key created with AWS KMS, you add a tag to it; then, you can evaluate the security cost across resources and projects using the attached tags.

The remaining choices are incorrect for the following reasons:

Creating symmetric or asymmetric keys does not automatically generate reports or cost aggregated by resources or projects.

Although one of the features of AWS Organizations is consolidated billing, this does not provide you with a report on security cost across projects or resources.

Unlike tagging, you cannot use resource descriptions to aggregate security cost across resources or projects.
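A minimal boto3 sketch of tagging KMS keys follows; the tag keys and values are placeholders for your own project taxonomy.

```python
import boto3

kms = boto3.client("kms", region_name="us-east-1")

# Sketch: tag keys/values are placeholders.
key = kms.create_key(
    Description="Data key for project Alpha",
    Tags=[{"TagKey": "project", "TagValue": "alpha"},
          {"TagKey": "cost-center", "TagValue": "1234"}],
)

# Tags can also be added to an existing key.
kms.tag_resource(
    KeyId=key["KeyMetadata"]["KeyId"],
    Tags=[{"TagKey": "environment", "TagValue": "production"}],
)
```

The tag keys then need to be activated as cost allocation tags in the Billing console before they appear in the cost allocation report.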

63
Q

A company's container applications are managed with Kubernetes and hosted on Windows virtual servers. The company wants to migrate these applications to the AWS cloud, and needs a solution that supports Kubernetes pods hosted on Windows servers. The solution must manage the Kubernetes API servers and the etcd cluster. The company's development team would prefer that AWS manage the host instances and containers as much as possible, but is willing to manage them both if necessary. Which AWS service offers the best options for the developer's preferences and the company's essential requirement for their container application?

A. Amazon Elastic Compute Cloud (EC2)
B. Amazon Elastic Kubernetes Service (EKS) with EKS-managed node groups.
C. Amazon Elastic Kubernetes Service (EKS) with self-managed node groups
D. Amazon Elastic Kubernetes Service (EKS) on AWS Fargate

A

C. Amazon Elastic Kubernetes Service (EKS) with self-managed node groups

Explanation:
The key requirement in this question is Windows nodes: knowing which service options support them and which do not.

In this question, the company wants AWS to manage as much as possible.

AWS Fargate, which provides the highest level of management, does not support Windows nodes for EKS. Amazon ECS supports Windows, but the Fargate EKS option does not.
Amazon EKS with EKS-managed nodes does not support Windows nodes.
Amazon EKS with self-managed nodes DOES support Windows nodes
Amazon EC2 supports Windows, but is an IaaS service.

Therefore the best option is EKS with self-managed nodes.

64
Q

A company is developing a new application hosted across several Amazon EC2 instances. The company's security policy requires that the system configuration encrypts all data stored on Amazon EBS volumes and in snapshots. Also, the company wants to control key rotation and policies regarding who can access encrypted data. Which type of KMS key is the appropriate choice for encrypting the EBS volumes and snapshots?

A. Customer-managed KMS key
B. AWS-managed KMS key
C. AWS-owned KMS key
D. Third-party-owned KMS key

A

A. Customer-managed KMS key

Explanation:
When you enable encryption for an EBS volume, you have two types of KMS keys to choose from:

Customer-managed KMS key
AWS-managed KMS key

In this scenario, the customer has specific requirements about how often the key is rotated. Therefore the best choice would be to use a customer-managed KMS key. If the customer did not have specific key rotation requirements or no need to update key management policies, then an AWS-managed KMS key would be a better choice.
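A minimal boto3 sketch of creating a customer-managed key with rotation enabled and using it to encrypt a new EBS volume follows; the Availability Zone and volume size are placeholders.

```python
import boto3

kms = boto3.client("kms", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

# Customer-managed key: the key policy and rotation are under your control.
key = kms.create_key(Description="EBS encryption key for app volumes")
kms.enable_key_rotation(KeyId=key["KeyMetadata"]["KeyId"])  # automatic rotation

# Encrypt a new volume (and, implicitly, snapshots taken from it) with that
# key; the AZ and size are placeholders.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,
    VolumeType="gp3",
    Encrypted=True,
    KmsKeyId=key["KeyMetadata"]["Arn"],
)
```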

Additional Resources

AWS Key Management Service Concepts
65
Q

A company needs to maintain system access logs for a minimum of 2 years due to financial compliance policies. Logs are rarely accessed, and access must be requested at least 12 hours in advance. What is the most cost-effective method to store logs that satisfies the compliance requirement and delivers logs as requested in a timely manner?

A. Store the logs in CloudWatch logs and configure a two-year retention period.
B. Store the logs in Amazon S3 in the Intelligent-Tiering storage class. Configure a lifecycle policy to delete the logs after two years.
C. Store the logs in Amazon S3 in the S3 Glacier Deep Archive storage class. Configure a lifecycle policy that deletes the logs after two years.
D. Store the logs in Amazon S3 in the S3 One Zone-IA storage class. Configure a lifecycle policy to delete the logs after two years.

A

C. Store the logs in Amazon S3 in the S3 Glacier Deep Archive storage class. Configure a lifecycle policy that deletes the logs after two years.

Explanation:
Amazon S3 Glacier Deep Archive storage class is the cheapest storage class in AWS, and the information can be retrieved within a default time of 12 hours.

The other options offer some cost savings compared to the Amazon S3 Standard storage class, but Intelligent-Tiering would not cycle the files to an archive storage class for roughly 120 days. While the One Zone-IA storage class would provide some cost savings, it offers less durable storage, so the likelihood of losing logs is higher.
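For reference, a boto3 sketch of this approach is shown below; the bucket name, key prefix, and body are placeholders. Logs are written directly to Deep Archive and a lifecycle rule expires them after two years.

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
bucket = "example-access-logs"  # placeholder bucket name

# Write log objects directly into the Deep Archive storage class.
s3.put_object(
    Bucket=bucket,
    Key="logs/2024/05/app.log.gz",
    Body=b"...",
    StorageClass="DEEP_ARCHIVE",
)

# Expire (delete) the logs once the two-year retention requirement is met.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-after-two-years",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Expiration": {"Days": 730},
            }
        ]
    },
)
```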

66
Q

While building your environment on AWS, you decide to use Key Management Service to help you manage encryption of data volumes. As part of your architecture, you design a disaster recovery environment in a second region. What should you anticipate in your architecture regarding the use of KMS in this environment?

A. KMS is not highly available by default; you have to make sure you span KMS across at least two availability zones to avoid single points of failure.
B. KMS keys can operate on a multi-region scope, but AWS recommends region-specific keys for most cases.
C. KMS is a global service, your architecture must account for regularly migrating encryption keys across regions to allow disaster recovery environment to decrypt volumes.
D. KMS is highly available within the region; to make it span across multiple regions you have to connect primary and DR environments with a Direct Connect line.

A

B. KMS keys can operate on a multi-region scope, but AWS recommends region-specific keys for most cases.

Explanation:
KMS has been a regional service for many years, and while this can complicate the design of multi-region architectures, there are security benefits to the regional limitations.

KMS has a multi-region key option, but region-specific keys cannot be converted to multi-region AND there are significantly more security issues to consider with using a multi-region key. The benefit it offers customers also expands the risk if the key is compromised or deleted, so something to keep in mind. AWS mentions it here:

You cannot convert an existing single-Region key to a multi-Region key. This design ensures that all data protected with existing single-Region keys maintain the same data residency and data sovereignty properties.

For most data security needs, the Regional isolation and fault tolerance of Regional resources make standard AWS KMS single-Region keys a best-fit solution. However, when you need to encrypt or sign data in client-side applications across multiple Regions, multi-Region keys might be the solution.

67
Q

You must decide how to best maintain your application's availability and ensure optimum service as the level of customer activity changes. Reviewing performance logs for the past several days, you've noticed that users experience connectivity issues once the CPU utilization rate increases beyond 70 percent. To maintain optimum performance, you need to ensure CPU utilization remains at 50 percent. What choice below will best address the issue and help maintain optimum performance? (Choose 2 answers)

A. Create a Target Tracking EC2 auto scaling policy tracking CPU utilization with a 50 percent target value.
B. Create a CloudWatch Alarm to trigger once CPU utilization passes 50 percent.
C. Create a Step Scaling policy using AWS EC2 Auto Scaling.
D. Enable CloudWatch monitoring for your EC2 auto scaling group with detailed monitoring.

A

A. Create a Target Tracking EC2 auto scaling policy tracking CPU utilization with a 50 percent target value.

D. Enable CloudWatch monitoring for your EC2 auto scaling group with detailed monitoring.

Explanation:
With target tracking scaling policies, you select a scaling metric and set a target value. Amazon EC2 Auto Scaling creates and manages the CloudWatch alarms that trigger the scaling policy and calculates the scaling adjustment based on the metric and the target value. The scaling policy adds or removes capacity as required to keep the metric at, or close to, the specified target value. In addition to keeping the metric close to the target value, a target tracking scaling policy also adjusts to the changes in the metric due to a changing load pattern.

To ensure a faster response to changes in the metric value, AWS recommends that you scale on metrics with a 1-minute frequency. Scaling on metrics with a 5-minute frequency can result in slower response times and scaling on stale metric data.

To get this level of data for Amazon EC2 metrics, you must specifically enable detailed monitoring. By default, Amazon EC2 instances are enabled for basic monitoring, which means metric data for instances is available at 5-minute frequency.
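A minimal boto3 sketch of both answers follows; the Auto Scaling group name is a placeholder, and instance-level detailed monitoring is assumed to be turned on in the launch template.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
asg_name = "web-asg"  # placeholder Auto Scaling group name

# Target tracking policy: keep average CPU at (or near) 50 percent.
autoscaling.put_scaling_policy(
    AutoScalingGroupName=asg_name,
    PolicyName="cpu-50-target",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)

# Publish group metrics at 1-minute granularity; per-instance detailed
# monitoring is enabled in the launch template (Monitoring={"Enabled": True}).
autoscaling.enable_metrics_collection(AutoScalingGroupName=asg_name, Granularity="1Minute")
```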

68
Q

You host two separate applications that utilize the same DynamoDB tables containing environmental data. The first application, which focuses on data analysis, is hosted on compute-optimized EC2 instances in a private subnet. It retrieves raw data, processes the data, and uploads the results to a second DynamoDB table. The second application is a public website hosted on general-purpose EC2 instances within a public subnet and allows researchers to view the raw and processed data online. For security reasons, you want both applications to access the relevant DynamoDB tables within your VPC rather than sending requests over the internet. You also want to ensure that while your data analysis application can retrieve and upload data to DynamoDB, outside researchers will not be able to upload data or modify any data through the public website. How can you ensure each application is granted the correct level of authorization? (Choose 2 answers)

A. Deploy a DynamoDB VPC endpoint in the data analysis application’s private subnet, and a DynamoDB VPC endpoint in the public website’s public subnet.
B. Deploy one DynamoDB VPC endpoint in its own subnet. Update the route tables for each application’s subnet with routes to the DynamoDB VPC endpoint.
C. Configure and implement a single VPC endpoint policy to grant access to both applications.
D. Configure and implement separate VPC endpoint policies for each application.

A

A. Deploy a DynamoDB VPC endpoint in the data analysis application's private subnet, and a DynamoDB VPC endpoint in the public website's public subnet.

D. Configure and implement separate VPC endpoint policies for each application.

Explanation:
DynamoDB VPC endpoints are Gateway endpoints. You can configure multiple gateway endpoints in a single VPC for the same AWS service, and route different resources to different gateways with different policies based on the specific permissions granted to those resources.
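A hedged boto3 sketch of the two gateway endpoints with different policies follows; the VPC ID, route table IDs, account ID, and table ARN are placeholders.

```python
import boto3
import json

ec2 = boto3.client("ec2", region_name="us-east-1")
TABLE_ARN = "arn:aws:dynamodb:us-east-1:111122223333:table/EnvData*"  # placeholder

# Read/write endpoint routed from the data analysis private subnet.
analysis_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:Scan",
                   "dynamodb:PutItem", "dynamodb:UpdateItem"],
        "Resource": TABLE_ARN,
    }],
}
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0aaaaaaaaaaaaaaaa"],   # private subnet's route table
    PolicyDocument=json.dumps(analysis_policy),
)

# Read-only endpoint routed from the public website subnet.
website_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:Scan"],
        "Resource": TABLE_ARN,
    }],
}
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0bbbbbbbbbbbbbbbb"],   # public subnet's route table
    PolicyDocument=json.dumps(website_policy),
)
```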

69
Q

An IT department is using Amazon EFS to share files between a fleet of EC2 instances. The organization's security policy requires encrypted data for this file system both at rest and in transit. You enabled encryption at rest when the file system was created, but need a way to enforce encryption in transit whenever mounting the filesystem. What action is the most effective solution to enforce encryption in transit whenever a user mounts this EFS file system to an EC2 instance?

A. Create a script that administrators use when mounting the EFS filesystem to an EC2 instance that always uses Transport Layer Security (TLS).
B. Update the EFS filesystem configuration to specify that you must use Transport Layer Security (TLS) whenever mounting the filesystem.
C. Update the Amazon Machine Image used for the EC2 instance to only use Transport Layer Security to mount to EFS filesystems.
D. Use IAM (Identity and Access Management) to control file system data access with an EFS file system policy that includes the condition aws:SecureTransport set to true.

A

D. Use IAM (Identity and Access Management) to control file system data access with an EFS file system policy that includes the condition aws:SecureTransport set to true.

Explanation:
The key to setting up encryption in transit with an EFS filesystem is to mount the filesystem using Transport Layer Security (TLS). There is no setting for an EFS filesystem configuration that will let you enforce TLS for a mount operation, nor is there a way to do this with an AMI.

Of the two remaining choices, you could create a script that administrators use to always mount the filesystem using TLS; however, a better choice is to use IAM to establish an EFS filesystem policy that includes the condition aws:SecureTransport set to true. Using a filesystem policy is the best choice because this will cause AWS to raise an error if a user tries to mount the filesystem without specifying TLS.
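A hedged boto3 sketch of such a file system policy follows, written in the documented "deny when aws:SecureTransport is false" form of the same control; the account ID and file system ID are placeholders.

```python
import boto3
import json

efs = boto3.client("efs", region_name="us-east-1")

# Deny any access that does not use TLS; the file system ID is a placeholder.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedTransport",
        "Effect": "Deny",
        "Principal": {"AWS": "*"},
        "Action": "*",
        "Resource": "arn:aws:elasticfilesystem:us-east-1:111122223333:file-system/fs-0123456789abcdef0",
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}

efs.put_file_system_policy(
    FileSystemId="fs-0123456789abcdef0",
    Policy=json.dumps(policy),
)
```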

70
Q

A company stores its application data onsite using iSCSI connections to archival disk storage located at an on-premises data center. Now management wants to identify cloud solutions to back up that data to the AWS cloud and store it at minimal expense. The company needs to back up 200 TB of data to the cloud over the course of a month, and the speed of the backup is not a critical factor. The backups are rarely accessed, but when requested, they should be available in less than 6 hours. What are the most cost-effective steps to archiving the data and meeting the additional requirements?

A. 1) Copy the data to Amazon S3 using AWS DataSync.

2) Store the data in Amazon S3 in the S3 Glacier Deep Archive storage class.

B. 1) Back up the data using AWS Storage Gateway volume gateways.

2) Store the data in Amazon S3 in the S3 Glacier Flexible Retrieval storage class.

C. 1) Copy the data to AWS Storage Gateway file gateways.

2) Store the data in Amazon S3 in the S3 Glacier Flexible Retrieval storage class.

D. 1) Migrate the data to AWS using an AWS Snowball Edge device.

2) Store the data in Amazon S3 in the S3 Glacier Deep Archive storage class.

A

B. 1) Back up the data using AWS Storage Gateway volume gateways.

2) Store the data in Amazon S3 in the S3 Glacier Flexible Retrieval storage class.

Explanation:
Review the question and consider the key points:

A company stores its application data onsite using iSCSI connections to archival disk storage located at an on-premises data center. Now management wants to identify cloud solutions to back up that data to the cloud and store it at minimal expense.

The company needs to back up 200 TB of data to the cloud over the course of a month, and the speed of the backup is not a critical factor. The backups are rarely accessed, but when requested, they should be available in less than 6 hours.

What are the most cost-effective steps to archiving the data and meeting the additional requirements?

So why is Storage Gateway the better option?

It is cheaper - compare uploading 200 TB per month on each service using the AWS Pricing Calculator. DataSync costs more than double what it does on Storage Gateway.
Speed is not an issue - the main advantage of DataSync is that it is faster, so if speed is not an issue, you do not need DataSync.
While the amount of data may be ideal for Snowball devices, this option is not as cost-effective. If this were a one-time data upload, or perhaps more sensitive data that should be migrated as securely as possible, then a Snowball device would be a better choice.

Why use S3 Glacier and not S3 Glacier Deep Archive?

You need to retrieve the backed-up data in less than 6 hours. This is possible with S3 Glacier (Flexible Retrieval) storage, but not possible with S3 Glacier Deep Archive. See more information on storage classes here.
71
Q

You have implemented Amazon S3 multipart uploads to import on-premises files into the AWS cloud. While the process is running smoothly, there are concerns about network utilization, especially during peak business hours, when multipart uploads require shared network bandwidth. What is a cost-effective way to minimize network issues caused by S3 multipart uploads?

A. Transmit multipart uploads to AWS using AWS Direct Connect.
B. Pause multipart uploads during peak network usage.
C. Compress objects before initiating multipart uploads.
D. Transmit multipart uploads to AWS using VPC endpoints.

A

B. Pause multipart uploads during peak network usage.

Explanation:
The ability to pause multipart uploads offers great flexibility in its usage. A pause could be initiated for a number of reasons, but pausing to relieve network traffic is a prime use case. If a multipart upload aborts, it is good practice to have a lifecycle policy, which will delete incomplete uploads after a predetermined number of days. This will reduce storage costs.

The other choices would not address network bandwidth issues in a cost-effective way:

Using AWS Direct Connect would increase network expenses.
VPC endpoints can be used for communication between AWS resources deployed in a VPC and Amazon S3. This does not assist in this case because objects are uploaded from an on-premises environment.
Compressing objects before initiating multi-part uploads would only have a marginal effect. Multipart uploads already divide large objects into smaller, separate parts, so modest changes in object size from compressing an object would have a minor overall impact.
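To illustrate the lifecycle clean-up mentioned above, here is a boto3 sketch; the bucket name and the 7-day window are placeholders.

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Clean up parts of multipart uploads that were paused or abandoned and
# never completed; the bucket name and 7-day window are placeholders.
s3.put_bucket_lifecycle_configuration(
    Bucket="onprem-import-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "abort-stale-multipart-uploads",
                "Filter": {"Prefix": ""},
                "Status": "Enabled",
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ]
    },
)
```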
72
Q

You are migrating on-premises legal files to the AWS Cloud in Amazon S3 buckets. The corporate audit team will review all legal files within the next year, but until that review is completed, you need to ensure that the legal files are not updated or deleted for the next 18 months. There are millions of objects contained in the buckets that need review, and you are concerned you will need to spend an excessive amount of time protecting each object. What steps will ensure the files can be uploaded most efficiently but have the required protection for the specific time period of 18 months? (Choose 2 answers)

A. Enable object locks on all relevant S3 buckets with a retention mode of compliance mode.
B. Set a default retention period of 18 months on all related S3 buckets.
C. Set a retention period of 18 months on all relevant object versions via a batch operation.
D. Set a default legal hold on all related S3 buckets

A

A. Enable object locks on all relevant S3 buckets with a retention mode of compliance mode.
B. Set a default retention period of 18 months on all related S3 buckets.

Explanation:
The key points of this question are:

the difference between compliance mode and governance mode - compliance mode prevents objects from being updated or deleted by anyone, while governance mode allows users with special permissions to override or remove the retention protection.
Default Retention periods can be set at bucket level and then applied to all objects uploaded into the bucket. Otherwise, they have to be set at an object version level.
Legal holds have no specific time frame associated with them, and cannot be configured at a bucket level. They must be configured at an object version level.
Object lock settings cannot be configured via batch operations.
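A boto3 sketch of the bucket-level setup is shown below; the bucket name is a placeholder, and 18 months is approximated as 548 days.

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
bucket = "legal-files-archive"  # placeholder name

# Object Lock must be enabled when the bucket is created.
s3.create_bucket(Bucket=bucket, ObjectLockEnabledForBucket=True)

# Default retention: COMPLIANCE mode for ~18 months (548 days), applied
# automatically to every object version uploaded into the bucket.
s3.put_object_lock_configuration(
    Bucket=bucket,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 548}},
    },
)
```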
73
Q

A corporate tax firm stores prepared tax documents in Amazon S3 for each of its customers, and wants to make sure documents will only be accessed by the intended customer using a static CloudFront page that retrieves data from S3. To do so, the tax firm limits document access to a specific, secure IP address provided by each customer. Which access control method should a solutions architect use to meet this security requirement?

A. S3 pre-signed URLs
B. CloudFront Signed URLs
C. CloudFront Origin Access Identities (OAI)
D. S3 Object Access Control Lists (ACLs)

A

B. CloudFront Signed URLs

Explanation:
CloudFront Signed URLs, specifically custom signed URLs, allow AWS users to limit access to specific content stored on Amazon CloudFront. No other option provides this type of control.
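For reference, a hedged sketch of generating such a custom signed URL with botocore's CloudFrontSigner follows; the key pair ID, document URL, customer IP, and private key path are placeholders, and the cryptography package is assumed to be installed.

```python
from datetime import datetime, timedelta

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

KEY_PAIR_ID = "K2JCJMDEHXQW5F"                                            # placeholder key ID
DOC_URL = "https://d111111abcdef8.cloudfront.net/returns/client-42.pdf"   # placeholder URL

def rsa_signer(message: bytes) -> bytes:
    # Sign with the private key matching the CloudFront public key.
    with open("cf_private_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)

# Custom policy: expire in 24 hours and only honor the customer's IP address.
policy = signer.build_policy(
    DOC_URL,
    date_less_than=datetime.utcnow() + timedelta(hours=24),
    ip_address="198.51.100.24/32",   # the customer-provided secure IP
)
signed_url = signer.generate_presigned_url(DOC_URL, policy=policy)
print(signed_url)
```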

74
Q

You are leading a team in the design of an AWS environment for a client. This will be a hybrid environment requiring communication between the on-premises application and the cloud environment, as well as private and public subnets that need internet access. The team has chosen to use NAT Gateways to provide internet access to the instances in the private subnets. What configuration steps must be taken to enable internet access for instances in the private subnets via the NAT Gateway? (Choose 2 answers)

A. Create a route in the route table to direct traffic from the private subnet to the Internet Gateway.
B. Create a route in the route table to direct traffic from the private subnet to the NAT Gateway.
C. Disable source/destination check on the NAT Gateway.
D. Allocate an Elastic IP address and associate it with the NAT Gateway.

A

B. Create a route in the route table to direct traffic from the private subnet to the NAT Gateway.

D. Allocate an Elastic IP address and associate it with the NAT Gateway.

Explanation:

You can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the Internet or other AWS services, but prevent the Internet from initiating a connection with those instances. To create a NAT gateway, you must specify the public subnet in which the NAT gateway will reside. You must also specify an Elastic IP address to associate with the NAT gateway when you create it. After you’ve created a NAT gateway, you must update the route table associated with one or more of your private subnets to point Internet-bound traffic to the NAT gateway. Disabling source/destination checks is a feature of NAT Instances not NAT Gateways.
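A minimal boto3 sketch of the two required steps follows; all resource IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# All IDs below are placeholders.
eip = ec2.allocate_address(Domain="vpc")

# The NAT gateway itself lives in a PUBLIC subnet.
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0123456789abcdef0",       # public subnet
    AllocationId=eip["AllocationId"],
)

# The PRIVATE subnet's route table sends internet-bound traffic to it.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",      # private subnet's route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)
```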

75
Q

You are placed in charge of your company's cloud storage and need to deploy empty EBS volumes. You are concerned about an initial performance hit when the new volumes are first accessed. What steps should you take to ensure peak performance when the empty EBS volumes are first accessed?

A. Do nothing - empty EBS volumes do not require initialization
B. Force the immediate initialization of the entire volume
C. Enable fast snapshot restore
D. Creating a RAID 0 array

A

A. Do nothing - empty EBS volumes do not require initialization

Explanation:
Initializing volumes (formerly known as pre-warming) has changed from its prior functionality. Formerly, you would have to initialize (pre-warm) a newly created volume from scratch. This is no longer necessary for empty volumes. Volumes created from snapshots, however, still need to be initialized by reading the blocks that contain data.

76
Q

A company uses Amazon DynamoDB for a serverless application with Amazon DynamoDB Accelerator (DAX) for caching. The company will be doubling its customer base and wants to ensure the database solution can scale and handle the increased read traffic. The development team has been monitoring the cache hit rates using Amazon CloudWatch and noticed that hit rates are high when the ratio of read to write traffic is high. Which action should the solutions architect recommend in this case to ensure the application can perform well with increased traffic?

A. Use larger DAX cluster nodes
B. Enable autoscaling on the DynamoDB table
C. Add read replicas to the DAX cluster
D. Create global secondary indexes on the DynamoDB table

A

C. Add read replicas to the DAX cluster

Explanation:
Adding read replicas to the DAX cluster can help improve the throughput.

77
Q

A company has created a new security auditor position as it restructures its IT Security team. The security auditor will have administrative access to employee information, and the capability to track employee actions in AWS. The company is configuring the permissions for the Security Auditor position in IAM, but wants to make sure these admin permissions are only assigned to the auditor's IAM user identity, and cannot be accidentally assigned to other IAM groups or users. Which type of policy should a Solution Architect use to properly assign the IAM permission to the Security Auditor's identity in AWS?

A. Assign permissions using an AWS-Managed IAM Policy.
B. Assign permissions using a Customer-Managed IAM Policy.
C. Assign permissions using an Inline Policy.
D. Assign permissions using IAM Identity Federation.

A

C. Assign permissions using an Inline Policy.

Explanation:
The best choice in this situation is an Inline Policy because this policy type is part of the IAM identity itself, and cannot be accidentally reassigned to other users, roles, or groups. AWS-Managed and Customer-Managed policies can be reassigned, and IAM Identity Federation would only work if the user were signing in through a separate Identity Provider (IdP) like Facebook, Twitter, or Google.
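A minimal boto3 sketch of embedding an inline policy in a single IAM user follows; the user name, policy name, and permitted actions are illustrative placeholders.

```python
import boto3
import json

iam = boto3.client("iam")

# Illustrative audit permissions only; the user name and actions are placeholders.
audit_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["cloudtrail:LookupEvents", "iam:Get*", "iam:List*"],
        "Resource": "*",
    }],
}

# An inline policy is embedded directly in this one IAM user and cannot be
# attached to any other user, group, or role.
iam.put_user_policy(
    UserName="security-auditor",
    PolicyName="security-auditor-inline",
    PolicyDocument=json.dumps(audit_policy),
)
```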

78
Q

You're building a greenfield application hosted with the following configuration: a web server on a fleet of public-facing EC2 instances, and a Microsoft SQL database on a secure Amazon EC2 instance. You need to develop a solution for the database server that ensures the server is not accessible directly from the internet but can be reached when needed for patching and upgrades. Which of the following infrastructure security solutions would ensure these requirements are fulfilled?

A. Launch the database server in a private VPC, use AWS Outposts to provide secure internet access to the database when needed.
B. Launch the database server in a private subnet, use a NAT gateway for secure internet access to the database when needed.
C. Launch the database server behind an AWS CloudFront distribution, use a CloudFront signed url to expose the database for internet access securely when needed.
D. Launch the database server behind an Application Load Balancer, Create an ALB Listener to expose the database for internet access securely when needed.

A

B. Launch the database server in a private subnet, use a NAT gateway for secure internet access to the database when needed.

Explanation:
When you launch an instance, you launch it into a subnet in your VPC. You can use subnets to isolate the tiers of your application like web, application, and database servers within a single VPC. You can use private subnets for your instances if they should not be accessed directly from the internet. You can also use a NAT gateway for internet access from an instance in a private subnet as needed.

The remaining choices are incorrect for the following reasons:

● AWS Outposts is used to build and run applications on premises using the same programming interfaces as in AWS Regions, not for DB security.

● You can use the AWS CloudFront service to set up a distribution that will allow you to use edge servers to cache content down to globally distributed end users to allow for better performance of static data. AWS CloudFront does not provide protection for database servers.

● Application Load Balancers automatically distribute your incoming traffic across multiple targets, such as EC2 instances, containers, and IP addresses, in one or more Availability Zones. It does not deal with security.

79
Q

Your company is concerned with potential poor architectural practices used by your core internal application. After recently migrating to AWS, you hope to take advantage of an AWS service that recommends best practices for specific workloads. As a Solutions Architect, which of the following services would you recommend for this use case?

A. AWS Well-Architected Tool
B. AWS Well-Architected Framework
C. AWS Trusted Advisor
D. AWS Inspector

A

A. AWS Well-Architected Tool

Explanation:
The AWS Well-Architected Tool is designed to help you review the state of your applications and workloads, and it provides a central place for architectural best practices and guidance. The AWS Well-Architected Tool is based on the AWS Well-Architected Framework, which was developed to help cloud architects build secure, high-performing, resilient, and efficient application infrastructures. The Framework has been used in thousands of workload reviews by AWS solutions architects. It provides a consistent approach for evaluating your cloud architecture and implementing designs that will scale with your application needs over time.

The remaining tools help more with vulnerabilities or cost savings rather than best practices, except for the Well-Architected Framework, which is not an AWS service.

80
Q

A telecommunications company is developing an AWS cloud data bridge solution to process large amounts of data in real time from millions of IoT devices. The IoT devices communicate with the data bridge using UDP (User Datagram Protocol). The company has deployed a fleet of EC2 instances to handle the incoming traffic but needs to choose the right Elastic Load Balancer to distribute traffic between the EC2 instances. Which Amazon Elastic Load Balancer is the appropriate choice in this scenario?

A. Network Load Balancer
B. Application Load Balancer
C. Classic Load Balancer
D. Gateway Load Balancer

A

A. Network Load Balancer

Explanation:
Amazon offers several load balancers, and the one you choose depends on your application and how you plan to distribute the traffic across your resources. Here is a summary of each load balancer:

Application Load Balancer (ALB) - An application load balancer operates at layer 7 of the Open Systems Interconnection (OSI) model and distributes HTTP and HTTPS traffic. The ALB can make routing decisions based on application-specific information included inside each HTTP/HTTPS request.
Network Load Balancer - A network load balancer operates at layer 4 of the Open Systems Interconnection (OSI) model and distributes traffic based on the TCP and UDP protocols. The network load balancer can process millions of requests per second and still maintain ultra-low latencies.
Classic Load Balancer - This is a legacy load balancer you would use if your application uses the EC2-Classic network. If you are using a VPC, Amazon recommends using either the Application Load Balancer or the Network Load Balancer instead of a Classic Load Balancer. This load balancer supports the following protocols: TCP, SSL/TLS, HTTP, and HTTPS.
Gateway Load Balancer - You use a Gateway Load Balancer to distribute traffic between third-party virtual appliances.

The correct answer in this scenario is the Network Load Balancer because the company needs the incoming traffic to be distributed based on UDP traffic at layer 4.
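A hedged boto3 sketch of a Network Load Balancer with a UDP listener is shown below; the subnet and VPC IDs, names, and port are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Sketch: subnet/VPC IDs, names, and the UDP port are placeholders.
nlb = elbv2.create_load_balancer(
    Name="iot-ingest-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
)

tg = elbv2.create_target_group(
    Name="iot-udp-targets",
    Protocol="UDP",
    Port=2055,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)

# Forward UDP traffic on the chosen port to the EC2 fleet.
elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="UDP",
    Port=2055,
    DefaultActions=[{"Type": "forward",
                     "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"]}],
)
```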

Additional resources:

AWS Partner Network (APN) Blog
81
Q

An international automobile manufacturer is building a new application to track on-time part delivery toits distributed factories. The database for the application needs to be available for reads and writes globally. It should also be able to scale without downtime or performance impact. Since the application is new, the data schema and data access patterns are not yet well defined. Which of these Amazon database solutions best meets these application requirements?

A. Multi-Region Amazon Redshift Cluster
B. Amazon DynamoDB Global Table
C. Amazon RDS Read Replicas
D. Amazon Neptune Multi-AZ Deployment

A

B. Amazon DynamoDB Global Table

Explanation:
A DynamoDB Global Table is the most appropriate database for this use case because it provides highly scalable cross-region reads and writes. Also, because the database does not yet have a fixed schema, a NoSQL data store would have better performance than a relational database.

Here are a few reasons the remaining choices are not the correct choice:

Amazon RDS Read Replicas: Setting up read replicas is a good choice when the database services a read-heavy workload; that is not the situation described in this problem statement. Also, RDS is a relational database service, and this might not be the best choice since the data schema for this project is not well defined. A relational database is a better choice when the schema is well-defined and not expected to change significantly.
Amazon Redshift is a fast, fully-managed, petabyte-scale data warehouse. Redshift is designed for high-performance data analysis and is capable of storing and processing petabytes of data. A multi-region Redshift cluster is not the appropriate service for this use case because the application does not need a data warehouse.
Amazon Neptune is a graph database service that customers use when building applications that use highly connected data sets.  The application in this problem does not describe a need to work with a highly connected data set.
82
Q

CloudAcademy is running an internal application on an Amazon EC2 instance in a VPC. You can connect to the application using its private IPv6 address. As a Solutions Architect, you need to design a solution that will allow traffic to be quickly directed to a standby instance if the application fails and becomes unreachable. Which approach will meet these requirements the most efficiently?

A. Deploy an Elastic Load Balancer configured with a listener for the private IP address and register the primary instance with the load balancer. Upon failure, de-register the instance and register the secondary instance.
B. Attach an elastic network interface (ENI) to the instance configured with the private IP address. Move the ENI to the standby instance if the primary instance becomes unreachable.
C. Associate an Elastic IP address with the primary instance. Disassociate the Elastic IP from the primary instance upon failure and associate it with a secondary instance using Lambda
D. Use the Route53 Latency Routing Policy to route to the standby instance when latency is high

A

B. Attach an elastic network interface (ENI) to the instance configured with the private IP address. Move the ENI to the standby instance if the primary instance becomes unreachable.

Explanation:
A secondary ENI can be added to an instance. While primary ENIs cannot be detached from an instance, secondary ENIs can be detached and attached to a different instance.

83
Q

A user is using an EC2 key pair to connect to an EC2 Linux instance backed by an Elastic Block Storage (EBS) volume. The same EC2 key pair was created and downloaded when the EC2 instance was deployed. The user has lost the EC2 key pair private key and is not able to connect to the EC2 instance via SSH anymore. What steps should the user follow to regain access to the EC2 instance with the least operational overhead? (Choose 2 answers)

A. Stop the instance, detach the root volume and attach it to another EC2 instance as a data volume.
B. Stop the instance, detach the root volume and attach it to another EC2 instance as the root volume.
C. Modify the authorized_keys file with a new public key, move the volume back to the original instance, and restart the instance.
D. Stop the instance, attach a new Elastic Block Storage (EBS) volume as root volume, containing a new public key.

A

A. Stop the instance, detach the root volume and attach it to another EC2 instance as a data volume.

C. Modify the authorized_keys file with a new public key, move the volume back to the original instance, and restart the instance.

Explanation:
A key pair, consisting of a private key and a public key, is a set of security credentials that you use to prove your identity when connecting to an instance. Amazon EC2 stores the public key, and you store the private key. You use the private key, instead of a password, to securely access your instances. When you launch an instance, you are prompted for a key pair. If you plan to connect to the instance using SSH, you must specify a key pair. You can choose an existing key pair or create a new one.
If you lose the private key for an EBS-backed instance, you can regain access to your instance. You must stop the instance, detach its root volume and attach it to another instance as a data volume, modify the authorized_keys file with a new public key, move the volume back to the original instance, and restart the instance. This procedure is not supported for instances with instance store-backed root volumes.
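The API side of that procedure could look roughly like the sketch below (instance IDs and device names are hypothetical; the authorized_keys edit itself happens on the rescue instance over SSH, not through the API):

import boto3

ec2 = boto3.client("ec2")

LOCKED_INSTANCE = "i-0aaa111122223333a"   # hypothetical locked-out instance
RESCUE_INSTANCE = "i-0bbb444455556666b"   # hypothetical rescue instance

# 1. Stop the locked-out instance (EBS-backed, so the root volume persists)
ec2.stop_instances(InstanceIds=[LOCKED_INSTANCE])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[LOCKED_INSTANCE])

# 2. Detach its root volume (assumes the root volume is the first block device mapping)
instance = ec2.describe_instances(InstanceIds=[LOCKED_INSTANCE])["Reservations"][0]["Instances"][0]
root_volume = instance["BlockDeviceMappings"][0]["Ebs"]["VolumeId"]
ec2.detach_volume(VolumeId=root_volume)
ec2.get_waiter("volume_available").wait(VolumeIds=[root_volume])

# 3. Attach it to the rescue instance as a data volume
ec2.attach_volume(VolumeId=root_volume, InstanceId=RESCUE_INSTANCE, Device="/dev/sdf")

# 4. On the rescue instance: mount the volume and add the new public key to
#    /home/ec2-user/.ssh/authorized_keys (done over SSH, not via the API).

# 5. Move the volume back as the original instance's root device and start it again
ec2.detach_volume(VolumeId=root_volume)
ec2.get_waiter("volume_available").wait(VolumeIds=[root_volume])
ec2.attach_volume(VolumeId=root_volume, InstanceId=LOCKED_INSTANCE, Device="/dev/xvda")  # use the original root device name
ec2.start_instances(InstanceIds=[LOCKED_INSTANCE])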

84
Q

An application team plans to deploy an application that processes video files in a parallel workflow to a fleet of Amazon EC2 instances. The solutions architect plans to use a shared EFS filesystem to access and update the video files. When creating the filesystem, the solutions architect needs to choose the appropriate performance mode to enable the highest possible throughput and operations per second. Which of the following Amazon EFS configuration steps are the most appropriate for this application?

A. Create an Elastic File System (EFS) and enable the Max I/O performance mode.
B. Create an Elastic File System (EFS) and enable the General Purpose performance mode.
C. Create an Elastic File System (EFS) and enable the Provisioned Throughput performance mode.
D. Create an Elastic File System (EFS) with the default performance mode, and change it later to Provisioned Throughput if needed.

A

A. Create an Elastic File System (EFS) and enable the Max I/O performance mode.

Explanation:
When we discuss EFS performance, there are two types of EFS configuration settings we need to look at:

Performance mode: There are two performance modes: General Purpose (default) and Max I/O.  You choose the performance mode when creating the file system, and once the file system is created, you cannot modify the performance mode.  If you need to change the performance mode, you need to create a new file system and migrate your files to the new system.  There are no additional costs associated with either performance mode.  Max I/O is a good choice for workloads when there is a lot of parallel file system work needed or if you need more than 7,000 operations per second.
Throughput mode:  There are two throughput modes: Bursting Throughput (default) and Provisioned Throughput.  Unlike performance modes, there is a cost difference between these throughput modes, and you can change the throughput for the file system after creation.  With bursting throughput mode, the amount of throughput scales as your file system grows; the more data you store, the more throughput is available to you.  With provisioned throughput mode, you can specify the throughput irrespective of the amount of storage the file system uses.  Provisioned throughput is a good choice when you may need more throughput than you would receive based on the file system's size.

The correct answer is to choose the Max I/O performance mode for the file system. Max I/O is the best choice because the scenario describes a highly parallel application that requires the highest possible throughput and operations per second.

Provisioned throughput would not be the best answer because the problem does not state that throughput needs are high compared to the amount of data stored on the file system.

Several choices mention taking the default file system settings and then changing the performance mode later if needed. Once a file system is created, you cannot change the performance mode without creating a new file system. If you are unsure which performance mode is right for a given workload, you can monitor the file system with Amazon CloudWatch and then create a new file system with a new performance mode.
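As a minimal sketch (the creation token below is hypothetical), the performance mode is simply a parameter set at creation time:

import boto3

efs = boto3.client("efs")

# Performance mode is fixed at creation and cannot be changed later
response = efs.create_file_system(
    CreationToken="video-processing-fs",   # hypothetical token
    PerformanceMode="maxIO",               # vs. the default "generalPurpose"
    ThroughputMode="bursting",             # throughput mode CAN be changed after creation
    Encrypted=True,
)
print(response["FileSystemId"])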

85
Q

The average traffic to your online business has quadrupled in the last quarter and you are closely monitoring your database tier consisting of RDS instances. You have configured CloudWatch alarms to monitor custom RDS metrics for read request latency and write throughput, but want to ensure these alarms are as responsive as possible. However, you have noticed a lag between when the alarm is triggered and when you receive an SNS notification about RDS performance. How can you increase the responsiveness of CloudWatch alarms for these existing RDS metrics?

A. Create CloudWatch Log metric filters for default metrics
B. Enable RDS Performance Insights
C. Configure Route 53 CloudWatch Alarm Health Checks
D. Enable RDS Enhanced Monitoring

A

C. Configure Route 53 CloudWatch Alarm Health Checks

Explanation:
Most of these choices improve the responsiveness of CloudWatch in some way, but the focus of this question is improving the responsiveness of CloudWatch alarms you have already implemented. This means you are not interested in additional metric information outside of the custom metrics you’ve already created in CloudWatch, and any feature that increases responsiveness for other metrics will not be beneficial.

Creating CloudWatch Logs metric filters for default RDS metrics will not help. Even though metric filters can provide 1-second granularity, it is unlikely the logs will indicate the overall latency and throughput of reads and writes, because these performance metrics are not related to specific API calls but rather to general database performance.

Performance Insights allows you to review and analyze your RDS database performance overall, but it is not ideal for identifying a performance issue in real time.

Enabling Enhanced Monitoring does provide increased insight into RDS database performance, with increased granularity of 5 seconds instead of 60 seconds, or potentially even 1-second granularity through CloudWatch Logs streams. However, Enhanced Monitoring provides operating system metrics and other specific performance metrics, and it would not apply to the custom metrics you have created in CloudWatch.

In contrast, creating a Route 53 CloudWatch alarm health check will monitor the transmission of data to CloudWatch, and can effectively alert you before the CloudWatch alarm is even triggered.

How does this alarm health check work? According to AWS:

When you create a health check that is based on a CloudWatch alarm, Route 53 monitors the data stream for the corresponding alarm instead of monitoring the alarm state. If the data stream indicates that the state of the alarm is OK, the health check is considered healthy. If the data stream indicates that the state is Alarm, the health check is considered unhealthy. If the data stream doesn’t provide enough information to determine the state of the alarm, the health check status depends on the setting for Health check status: healthy, unhealthy, or last known status.
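A minimal sketch of creating such a health check with boto3, assuming a hypothetical alarm name and Region:

import boto3
import uuid

route53 = boto3.client("route53")

response = route53.create_health_check(
    CallerReference=str(uuid.uuid4()),
    HealthCheckConfig={
        "Type": "CLOUDWATCH_METRIC",
        "AlarmIdentifier": {
            "Region": "us-east-1",              # hypothetical alarm Region
            "Name": "rds-read-latency-alarm",   # hypothetical alarm name
        },
        # Status to report when the data stream cannot determine the alarm state
        "InsufficientDataHealthStatus": "LastKnownStatus",
    },
)
print(response["HealthCheck"]["Id"])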

86
Q

You are responsible for setting up a new Amazon EFS file system. The organization’s security policies mandate that the file system store all data in an encrypted form. The organization does not need to control key rotation or policies regarding access to the KMS key. What steps should you take to ensure the data is encrypted at rest in this scenario?

A. When creating the EFS filesystem enable encryption using the default AWS-managed KMS key for Amazon EFS.
B. When creating the EFS filesystem enable encryption using a customer-managed KMS key.
C. When mounting the EFS filesystem to an EC2 instance, use the default AWS-managed KMS key to encrypt the data.
D. When mounting the EFS filesystem to an EC2 instance, use a customer-managed KMS key to encrypt the data.

A

A. When creating the EFS filesystem enable encryption using the default AWS-managed KMS key for Amazon EFS.

Explanation:
First, let’s eliminate the choices suggesting mounting the EFS filesystem using either the AWS-managed KMS key or a customer-managed KMS key. These answers are incorrect because EFS encryption is not related to mounting the filesystem to an EC2 instance.

You enable encryption for an EFS filesystem when you create the filesystem. When setting up EFS encryption you have the choice to use an AWS-managed key or a customer-managed key. In this scenario, the customer does not need to manage the cryptographic key rotation or any policies associated with managing the key. Therefore the best choice would be to use the AWS-managed key. If the customer had specific key rotation requirements or needed to update the policies associated with key management, then a customer-managed key would be a better choice.
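As a minimal sketch (the creation token is hypothetical), omitting KmsKeyId while enabling encryption means Amazon EFS uses its default AWS-managed key:

import boto3

efs = boto3.client("efs")

# No KmsKeyId supplied, so the default AWS-managed key for Amazon EFS is used
response = efs.create_file_system(
    CreationToken="encrypted-fs",   # hypothetical token
    Encrypted=True,
)
print(response["Encrypted"], response.get("KmsKeyId"))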

87
Q

A robotics company is building an application that uses an Apache Cassandra cluster. The cluster runs on Amazon EC2 instances that will be deployed into a placement group. A SysOps administrator must design for maximum resiliency of the compute layer. Which deployment strategy meets this requirement?

A. Deploy a partition placement group across a single Availability Zone.
B. Deploy a partition placement group across multiple Availability Zones
C. Deploy a cluster placement group across a single Availability Zone
D. Deploy a cluster placement group across multiple Availability Zones

A

B. Deploy a partition placement group across multiple Availability Zones

Explanation:
There are three types of placement groups: cluster, partition, and spread. Cluster placement groups provide the closest physical placement of instances. Instances in the same cluster placement group have a higher throughput limit for each flow for TCP/IP traffic and are placed in the same high-bisection bandwidth segment of the network. Cluster placement groups are Single-AZ deployments only.

When resiliency is most important, partition and spread placement groups provide options of Single-AZ or Multi-AZ deployments while ensuring that individual nodes run on separate racks. Partition placement groups divide the cluster into three partitions that are physically separated in the Availability Zone. Spread placement groups place each instance on its own rack inside the Availability Zone. Service quotas can be a factor in the selection of a placement group.

The remaining choices are incorrect for the following reasons:

Partition placement groups help reduce the likelihood of correlated hardware failures for an application. A partition placement group can place partitions in multiple Availability Zones in the same AWS Region. By limiting deployment to a single Availability Zone, the SysOps administrator has not implemented the most resilient solution.
A cluster placement group is a logical grouping of instances within a single Availability Zone. This solution helps workloads achieve the low-latency network performance necessary for tightly coupled node-to-node communication. This solution does not provide the most resiliency because a localized hardware failure could impact multiple EC2 instances.
A cluster placement group is a logical grouping of instances within a single Availability Zone. A cluster placement group cannot extend across multiple Availability Zones.
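A rough sketch of the correct deployment with boto3 (the group name, AMI, and subnet IDs are hypothetical; the two subnets sit in different Availability Zones):

import boto3

ec2 = boto3.client("ec2")

# Partition placement group; its partitions can span multiple AZs in the Region
ec2.create_placement_group(
    GroupName="cassandra-pg",   # hypothetical name
    Strategy="partition",
    PartitionCount=3,
)

# Launch instances into the group across two AZs
for subnet in ["subnet-0aaa111", "subnet-0bbb222"]:      # hypothetical subnets
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",                 # hypothetical AMI
        InstanceType="r5.large",
        MinCount=3,
        MaxCount=3,
        SubnetId=subnet,
        Placement={"GroupName": "cassandra-pg"},
    )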
88
Q

An IT department currently manages Windows-based file storage for user directories and department file shares. Due to increased costs and resources required to maintain this file storage, the company plans to migrate its files to the cloud and use Amazon FSx for Windows Server. The team is looking for the appropriate configuration options that will minimize their costs for this storage service. Which of the following FSx for Windows configuration options are cost-effective choices the team can make in this scenario? (Choose 2 answers)

A. Choose the HDD storage type when creating the file system.
B. Enable data deduplication for the file system.
C. Choose the SSD storage type when creating the file system.
D. Select a large custom throughput capacity relative to the storage capacity.

A

A. Choose the HDD storage type when creating the file system.

Explanation:
When you create an Amazon FSx for Windows file system, several configuration selections impact your costs:

Storage Capacity, Storage Type (HDD or SSD)
Throughput Capacity
Deployment type (single-AZ or multi-AZ)
Enabling backups

In addition to managing costs by making the appropriate configuration choices, you can also choose to enable data deduplication for the file system. This step can reduce your storage costs by 30-80% by reducing the amount of data you store in your file system.

In this problem scenario, the IT team needs storage to support user directories and department file shares. The primary choice that will drive the cost here is the type of storage for the file system. The HDD storage type is the lowest cost and most appropriate for the workload needed to support home directories; SSD is more expensive and provides more performance than is required for this workload. Finally, the team can enable data deduplication to reduce costs for the configuration further.

The problem and choice list do not mention the need for a Multi-AZ deployment or backups. However, both of these options would increase costs.
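A minimal creation sketch with boto3, using hypothetical subnet and directory IDs (HDD storage requires at least 2,000 GiB and a SINGLE_AZ_2 or MULTI_AZ_1 deployment); data deduplication is then enabled from the file system’s remote PowerShell management endpoint rather than through this API:

import boto3

fsx = boto3.client("fsx")

response = fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageType="HDD",                        # lowest-cost storage type
    StorageCapacity=2048,                     # GiB
    SubnetIds=["subnet-0aaa111"],             # hypothetical subnet
    WindowsConfiguration={
        "DeploymentType": "SINGLE_AZ_2",
        "ThroughputCapacity": 32,             # size throughput to actual need; higher costs more
        "ActiveDirectoryId": "d-1234567890",  # hypothetical AWS Managed Microsoft AD
    },
)
print(response["FileSystem"]["FileSystemId"])

# Deduplication is enabled afterwards over the remote PowerShell endpoint
# (e.g., the Enable-FSxDedup command), not via this create_file_system call.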

89
Q

A solutions architect is responsible for configuring an Amazon EC2 instance to host a custom application that processes application logs for a dashboard to monitor system health. The solutions architect must choose a data volume for the EC2 instance that will provide high throughput, optimized for a large sequential I/O workload. For this application, performance is more important than cost. Which Amazon storage service should the solutions architect choose for the data volume for this application?

A. Amazon EBS Throughput Optimized HDD (st1)
B. Amazon EBS General Purpose SSD (gp2)
C. Amazon EBS Cold HDD (sc1)
D. Amazon EBS Provisioned IOPS SSD (io2)

A

A. Amazon EBS Throughput Optimized HDD (st1)

Explanation:
There are four volume types available with Amazon EBS:

General Purpose SSD (gp2): Use a general-purpose volume for workloads where IOPS is more critical than throughput, and the cost is more important than performance.
Provisioned IOPS SSD (io2): Use Provisioned IOPS volumes for workloads when you are more concerned with IOPS performance and your workload is primarily smaller random I/O operations.
Throughput Optimized HDD (st1): Choose Throughput Optimized volumes when you are primarily concerned with throughput and your storage pattern aligns with larger sequential I/O operations.
Cold HDD (sc1): Use Cold HDD volumes when storage is infrequently accessed and when minimizing storage costs is more important than performance.

In the problem statement for this question, we are most concerned with high throughput performance optimized for a large sequential I/O workload. Therefore, choosing an Amazon EBS Throughput Optimized HDD (st1) volume would be the best choice.
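As a minimal sketch (the Availability Zone, size, and instance ID are hypothetical), the volume type is simply specified at creation:

import boto3

ec2 = boto3.client("ec2")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",   # hypothetical AZ
    Size=500,                        # GiB
    VolumeType="st1",                # Throughput Optimized HDD
)

# Attach it to the log-processing instance as a data volume
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",   # hypothetical instance
    Device="/dev/sdf",
)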

90
Q

An environmental agency is concluding a 10-year study of mining sites and needs to transfer over 200 terabytes of data to the AWS cloud for storage and analysis. Data will be gradually collected over a period of weeks in an area with no existing network bandwidth. Given the remote location, the agency wants a transfer solution that is cost-effective while requiring minimal device shipments back and forth. Which AWS solution will best address the agency’s data storage and migration requirements?

A. AWS Snowcone
B. AWS Snowball Storage Optimized
C. AWS Snowball Compute Optimized
D. AWS Snowball Compute Optimized with GPU

A

B. AWS Snowball Storage Optimized

Explanation:
Consider Snowball Edge if you need to run computing in rugged, austere, mobile, or disconnected (or intermittently connected) environments. Also consider it for large-scale data transfers and migrations when bandwidth is not available for use of a high-speed online transfer service, such as AWS DataSync.

Snowball Edge Storage Optimized is the optimal data transfer choice if you need to securely and quickly transfer terabytes to petabytes of data to AWS. You can use Snowball Edge Storage Optimized if you have a large backlog of data to transfer or if you frequently collect data that needs to be transferred to AWS and your storage is in an area where high-bandwidth internet connections are not available or cost-prohibitive.

A Snowcone is far too small while a Snowmobile is far too large. Storage-optimized Snowball devices offer more than twice the storage capacity of compute-optimized.

91
Q

An organization plans to implement a security policy that requires all Amazon EBS volumes and snapshots to be encrypted. You are responsible for managing resources within the AWS account and need to find a way to ensure that encryption is enabled automatically. What step should you take to make sure encryption is enabled for all EBS volumes and snapshots?

A. Enable EBS encryption by default for the AWS account.
B. Use Amazon Data Lifecycle Manager to enable EBS encryption by default for all EBS volumes and snapshots.
C. Configure an Amazon CloudTrail monitor to trigger a Lambda function that will enable encryption if an EBS volume is created without encryption enabled.
D. Create an AWS Identity and Access Management (IAM) identity-based policy that specifies that users can only create encrypted EBS volumes.

A

A. Enable EBS encryption by default for the AWS account.

Explanation:
The best way to ensure that your EBS volumes are encrypted is to enable EBS encryption by default for the AWS account. When you enable encryption by default, there is no action you need to take to encrypt the volume at creation time; the system will automatically take care of it, even when creating a volume from an unencrypted snapshot. Though you can explicitly enable encryption during the creation process, it is not required.
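Because encryption by default is a per-Region setting, a quick sketch with boto3 (the Region list below is hypothetical) would enable and verify it in each Region you use:

import boto3

for region in ["us-east-1", "eu-west-1"]:          # hypothetical Region list
    ec2 = boto3.client("ec2", region_name=region)
    ec2.enable_ebs_encryption_by_default()
    status = ec2.get_ebs_encryption_by_default()
    print(region, status["EbsEncryptionByDefault"])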

Here are a few points about the remaining choices:

Amazon Data Lifecycle Manager is a valuable service you can use to automate management tasks for EBS volumes and snapshots; however, you would not use it to enable encryption by default for all EBS volumes.
In theory, you could use AWS CloudTrail to monitor your EBS resources for security compliance.  However, you would also need to combine this with other services such as Amazon CloudWatch Events to trigger an action.  Also, this is a more complicated solution than is necessary.
There are no IAM identity-based policy settings that would allow you to specify that users can only create encrypted EBS volumes.  These settings do exist for EFS filesystems, but not EBS.

Additional Resources

Encryption by default
92
Q

You’re a Solutions Architect for a large bank that recently migrated to AWS. Because you are storing confidential information that must be PCI-compliant, your main banking application is scheduled for an audit before the go-live date when all customers will begin to use the AWS-based app. Which of the following services would be most helpful in reducing manual auditing efforts to prepare for stakeholder review of your new cloud-based banking application?

A. AWS Audit Manager
B. AWS GuardDuty
C. AWS Trusted Advisor
D. AWS Inspector

A

A. AWS Audit Manager

Explanation:
AWS Audit Manager helps you continuously audit your AWS usage to simplify how you assess risk and compliance with regulations and industry standards. Audit Manager automates evidence collection to reduce the “all hands on deck” manual effort that often happens for audits and enables you to scale your audit capability in the cloud as your business grows. With Audit Manager, it is easy to assess whether your policies, procedures, and activities – also known as controls – are operating effectively. When it is time for an audit, AWS Audit Manager helps you manage stakeholder reviews of your controls and enables you to build audit-ready reports with much less manual effort.

93
Q

You plan to develop an efficient auto scaling process for EC2 instances. A key to this will be bootstrapping for newly created instances. You want to configure new instances as quickly as possible to get them into service efficiently upon startup. What tasks can bootstrapping perform? (Choose 3 answers)

A. Bid on spot instances
B. Apply patches and OS updates
C. Enroll an instance into a directory service
D. Install application software

A

B. Apply patches and OS updates
C. Enroll an instance into a directory service
D. Install application software

Explanation:
When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives.
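As a rough sketch (the AMI ID is hypothetical), a shell script can be passed as user data when launching the instance; boto3 handles base64-encoding the UserData string for RunInstances:

import boto3

ec2 = boto3.client("ec2")

# A shell script passed as user data runs at first boot (as root on Amazon Linux)
USER_DATA = """#!/bin/bash
yum update -y                      # apply patches and OS updates
yum install -y httpd               # install application software
systemctl enable --now httpd
# directory-service enrollment commands could also run here
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical Amazon Linux AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=USER_DATA,
)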

94
Q

You are the AWS account owner for a small IT company, with a team of developers assigned to your account as IAM users within existing IAM groups. New associate-level developers manage resources in the Dev/Test environments, and these resources are quickly launched, used, and then deleted to save on resource costs. The new developers have read-only permissions in the production environment. There is a complex existing set of buckets intended to separate Development and Test resources from Production resources, but you know this policy of separation between environments is not followed at all times. Your company needs to prevent new developers from accessing production environment files placed in an incorrect S3 bucket, because these production-level objects are accidentally deleted along with other Dev/Test S3 objects. The ideal solution will prevent existing objects from being accidentally deleted and automatically minimize the problem in the future. What steps are the most efficient to continuously enforce the tagging best practices and apply the principle of least privilege within Amazon S3? (Choose 2 answers)

A. Assign IAM policies to the Dev/Test IAM group that authorize S3 object operations based on object tags.
B. Implement an object tagging policy using AWS Config’s Auto Remediation feature.
C. Update all existing object tags to correctly reflect their environment using Amazon S3 batch operations.
D. Create an AWS Lambda function to check object tags for each new Amazon S3 object. An incorrect tag would trigger an additional Lambda function to fix the tag.

A

A. Assign IAM policies to the Dev/Test IAM group that authorize S3 object operations based on object tags.
B. Implement an object tagging policy using AWS Config’s Auto Remediation feature.

Explanation:
One of the key benefits of object tagging is that it “enables fine-grained control of permissions.” You also have the Auto Remediation feature within AWS Config, which not only alerts you to non-compliant resources but also automatically brings the resources into compliance based on the permissions and actions you configure.
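As a minimal illustration of the tag-based permission pattern (the group, bucket, and tag names are hypothetical, and only s3:GetObject is shown), an IAM group policy can condition object access on an object’s Environment tag:

import boto3
import json

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::shared-bucket/*",          # hypothetical bucket
        "Condition": {
            # Grant access only to objects tagged Environment=DevTest
            "StringEquals": {"s3:ExistingObjectTag/Environment": "DevTest"}
        },
    }],
}

iam.put_group_policy(
    GroupName="DevTestDevelopers",          # hypothetical IAM group
    PolicyName="DevTestObjectTagAccess",
    PolicyDocument=json.dumps(policy),
)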

The other choices are not ideal. Batch operations are designed for updating millions to billions of objects at a time, and setting up a batch operation requires several steps. Going through these manual steps to ensure that very temporary resources are properly tagged is not efficient. Each batch operation is also a one-time fix that would only address the issue for existing resources and would continue to allow non-compliant resources to be created.

The Lambda functions are not an efficient option because checking the tags for each S3 object would result in a large number of function invocations, and it would require manually configuring the Lambda functions and ensuring that they work as designed. It would essentially be a manual, self-managed version of the capability already available through AWS Config Auto Remediation.

95
Q

A web application requires 8 Amazon EC2 instances to meet its service level agreement (SLA) to respond to 99% of the application requests in less than 2 ms. The application load is relatively consistent and does not experience large fluctuations in traffic. The application development team is looking for the right combination of AWS services to help manage this set of servers and maximize resiliency. For example, the team would like to monitor and replace unhealthy instances automatically to maintain eight servers at all times. What services should a solutions architect recommend to maximize resiliency and automatically maintain a fixed number of servers?

A. Deploy 4 servers to 2 availability zones with an application load balancer distributing requests across the 8 servers. Set up an auto-scaling group that uses the load balancer health checks and set the same minimum, maximum, and desired capacity.
B. Deploy 8 servers to a single availability zone with an application load balancer distributing requests across the 8 servers. Set up an auto-scaling group that uses the load balancer health checks with a simple scaling policy.
C. Deploy 4 servers to 2 availability zones with an application load balancer distributing requests across the 8 servers. Set up load balancer health checks to replace unhealthy instances.
D. Deploy 8 servers to a single availability zone with an application load balancer distributing requests across the 8 servers. Set up a CloudWatch alarm to automatically replace unhealthy instances.

A

A. Deploy 4 servers to 2 availability zones with an application load balancer distributing requests across the 8 servers. Set up an auto-scaling group that uses the load balancer health checks and set the same minimum, maximum, and desired capacity.

Explanation:
The critical point in this question is that you need to identify a solution that will automatically maintain a fixed number of servers. Added to this, you need to decide whether to use 1 or 2 availability zones.

Let’s start with the number of availability zones. If you want your application to be highly available and resilient, distributing your resource across multiple availability zones is the best choice. If one availability zone becomes unavailable, the servers in the remaining availability zone can support your application load. We can disregard any solution that includes using a single availability zone and only consider those that use two availability zones.

The last thing we need to do is determine how to set up an AWS service to maintain a fixed number of servers. It turns out you can use AWS Auto Scaling for more than scaling out your compute resources. You can also use auto scaling to build resilient systems by configuring it to maintain a fixed number of resources. When used in this way, the auto scaling service can monitor your resources and replace unhealthy instances. The choice that suggests using the load balancer to replace unhealthy instances is incorrect because a load balancer cannot replace unhealthy instances.
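A rough sketch of that configuration with boto3 (the names, subnets, and target group ARN are hypothetical); setting minimum, maximum, and desired capacity to 8 with ELB health checks keeps exactly eight healthy instances behind the load balancer:

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-fixed-8",                              # hypothetical name
    LaunchTemplate={"LaunchTemplateName": "web-app", "Version": "$Latest"},
    MinSize=8,
    MaxSize=8,
    DesiredCapacity=8,
    VPCZoneIdentifier="subnet-0aaa111,subnet-0bbb222",               # subnets in two AZs
    TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/abc123"],
    HealthCheckType="ELB",          # replace instances that fail the ALB health checks
    HealthCheckGracePeriod=300,
)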

Additional Resources:

Maintaining Fixed Number of Instances
96
Q

Which of the following architecture design choices will effectively reduce costs? (Choose three answers.)

A. Use a VPC Endpoint to deliver static S3 content to the internet instead of a NAT Gateway.
B. Use Amazon CloudFront as your CDN rather than creating a custom CDN.
C. Transfer on-premise Windows File Server files to S3 using Storage Gateway rather than migrating a Windows File Server to AWS using FSx.
D. Keep business-critical EC2 audit logs in local, on-premise storage rather than in S3.

A

A. Use a VPC Endpoint to deliver static S3 content to the internet instead of a NAT Gateway.
B. Use Amazon CloudFront as your CDN rather than creating a custom CDN.
C. Transfer on-premise Windows File Server files to S3 using Storage Gateway rather than migrating a Windows File Server to AWS using FSx.

Explanation:
All of the above are helpful ways to reduce spending except storing the logs locally rather than in S3. Because the logs are business-critical and relevant to EC2 instances on the AWS network, it makes the most sense to also analyze and store the audit logs within the AWS environment; otherwise, additional data transfer costs may be incurred.

97
Q

Your company has recently acquired several small start-up tech companies within the last year. In an effort to consolidate your resources, you are gradually migrating all digital files to your parent company’s AWS accounts and storing a large number of files within an S3 bucket. You are uploading millions of files and want to save costs, but you have not had the opportunity to review many of the files and documents to understand which files will be accessed frequently or infrequently. What would be the best way to quickly upload the objects to S3 and ensure the best storage class from a cost perspective?

A. Upload all files to the Amazon S3 Standard storage class and immediately set up all objects to be processed with Storage Class Analysis.
B. Upload all files to the Amazon S3 Intelligent Tiering storage class and review costs related to the frequency of access over time.
C. Upload all files to the Amazon S3 Standard-IA storage class and immediately set up all objects to be processed with Storage Class Analysis.
D. Upload all the files to the Amazon S3 Standard storage class and review costs for access frequency over time.

A

B. Upload all files to the Amazon S3 Intelligent Tiering storage class and review costs related to the frequency of access over time

Explanation:
There are essentially three types of answers here - choices that use no automation, choices that use the incorrect type of automation given the situation, and a choice that uses the correct type of automation.

First, the choices that use little to no automation in this case would be the least recommended decision. Uploading millions of files to either Standard or Standard-IA and then waiting to review costs and access patterns could be very costly.

Second, the choice to use storage class analysis could work eventually, if you have the time to wait for analytics to be gathered (which could still be as costly as the choice above) and the time to then review the analytics, and then sift through the files to migrate them to the correct storage class. This is a better choice, but not the best choice.

Finally, the correct answer would be to use Intelligent-Tiering, which continuously monitors access frequency and shifts objects between a frequent-access and an infrequent-access tier as access patterns change. This happens automatically and starts immediately, so it is the best choice of the options provided.
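As a minimal sketch (the bucket and key names are hypothetical), each object can be written directly to the Intelligent-Tiering storage class at upload time:

import boto3

s3 = boto3.client("s3")

with open("contract-001.pdf", "rb") as f:
    s3.put_object(
        Bucket="acquired-company-archive",        # hypothetical bucket
        Key="startup-a/contract-001.pdf",         # hypothetical key
        Body=f,
        StorageClass="INTELLIGENT_TIERING",
    )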

98
Q

As a member of the data management team, you are reviewing which Amazon EFS storage classes to use for the company’s various data types. You have a high volume of individual customer receipts that are compiled and replicated to larger inventory sales log files twice a day. The original receipts are rarely accessed and are deleted after 30 days. If the receipts are lost or deleted by mistake before 30 days, the sales information can be retrieved from the sales logs. Which EFS storage class would be most effective for storing these customer receipts?

A. EFS Standard
B. EFS Standard-Infrequent Access (IA)
C. EFS One Zone
D. EFS One Zone-IA

A

D. EFS One Zone-IA

Explanation:
The EFS One Zone–IA storage class reduces storage costs for files that are not accessed every day. We recommend EFS One Zone–IA storage if you need your full dataset to be readily accessible and want to automatically save on storage costs for files that are less frequently accessed. One Zone-IA storage is compatible with all Amazon EFS features, and is available in all AWS Regions where Amazon EFS is available.
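A rough sketch with boto3 (the creation token and Availability Zone are hypothetical): pinning the file system to one AZ makes it a One Zone file system, and a lifecycle policy transitions rarely read receipts into One Zone-IA:

import boto3
import time

efs = boto3.client("efs")

fs = efs.create_file_system(
    CreationToken="receipts-fs",          # hypothetical token
    AvailabilityZoneName="us-east-1a",    # One Zone file system
    Encrypted=True,
)

# Wait until the file system is available before configuring it
while efs.describe_file_systems(FileSystemId=fs["FileSystemId"])["FileSystems"][0]["LifeCycleState"] != "available":
    time.sleep(5)

# Lifecycle management moves files that go unread into the IA storage class
efs.put_lifecycle_configuration(
    FileSystemId=fs["FileSystemId"],
    LifecyclePolicies=[{"TransitionToIA": "AFTER_7_DAYS"}],
)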

99
Q

Your application provides real-time analytics on commodity trading. You want to encrypt network communications between clients and your app. Your major concern is to promptly renew expiring SSL/TLS certificates. Which service will help you protect data in transit?

A. AWS DataSync
B. AWS Certificate Manager
C. AWS Glue DataBrew
D. AWS Data Exchange

A

B. AWS Certificate Manager

Explanation:
For this application, you need SSL/TLS certificates to encrypt data in transit and establish the identity of websites over the internet. With AWS Certificate Manager, you can easily provision SSL/TLS certificates and manage their renewal and deployment. This is your solution.
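As a minimal sketch (the domain names are hypothetical), requesting a DNS-validated certificate lets ACM renew it automatically as long as the validation record stays in place:

import boto3

acm = boto3.client("acm")

response = acm.request_certificate(
    DomainName="trading.example.com",                     # hypothetical domain
    ValidationMethod="DNS",                               # enables managed renewal
    SubjectAlternativeNames=["www.trading.example.com"],  # hypothetical SAN
)
print(response["CertificateArn"])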

The remaining choices are incorrect for the following reasons:

AWS DataSync is a data transfer service that enables you to optimize network bandwidth and accelerate data transfer between on-premises storage and AWS storage. DataSync does not manage SSL/TLS certificates.

AWS Glue DataBrew is a visual data preparation tool that cleans and normalizes data for analytics and machine learning without writing any code; it does not manage SSL/TLS certificates.

AWS Data Exchange allows you to easily exchange data in a secure and compliant way, but it does not manage SSL/TLS certificates either.