More Test Questions - 2 Flashcards

1
Q

A development team needs to host a website that will be accessed by other teams. The website contents consist of HTML, CSS, client-side JavaScript, and images. A Solutions Architect has been asked to recommend a solution for hosting the website. Which solution is the MOST cost-effective?

1: Containerize the website and host it in AWS Fargate
2: Create an Amazon S3 bucket and host the website there
3: Deploy a web server on an Amazon EC2 instance to host the website
4: Configure an Application Load Balancer with an AWS Lambda target

A

1: Containerize the website and host it in AWS Fargate

2: Create an Amazon S3 bucket and host the website there

3: Deploy a web server on an Amazon EC2 instance to host the website
4: Configure an Application Load Balancer with an AWS Lambda target
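For context on the S3 option: S3 serves static content like this (HTML, CSS, client-side JavaScript, images) directly from a bucket with website hosting enabled, with no servers to run or pay for. A minimal sketch of the website configuration (the bucket name is hypothetical; the dict mirrors the shape accepted by S3's `put_bucket_website` call):

```python
# Shape of the website configuration passed to S3's put_bucket_website
# (s3.put_bucket_website(Bucket=..., WebsiteConfiguration=...)).
website_configuration = {
    "IndexDocument": {"Suffix": "index.html"},  # served for directory-style requests
    "ErrorDocument": {"Key": "error.html"},     # served when a key is missing
}

def website_endpoint(bucket: str, region: str) -> str:
    """S3 website endpoint URL (the dash format used by most Regions)."""
    return f"http://{bucket}.s3-website-{region}.amazonaws.com"

# Hypothetical bucket name, for illustration only:
print(website_endpoint("example-team-site", "us-east-1"))
# -> http://example-team-site.s3-website-us-east-1.amazonaws.com
```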

2
Q

A company requires a solution to allow customers to customize images that are stored in an online catalog. The image customization parameters will be sent in requests to Amazon API Gateway. The customized image will then be generated on-demand and can be accessed online. The solutions architect requires a highly available solution. Which solution will be MOST cost-effective?

1: Use Amazon EC2 instances to manipulate the original images into the requested customization. Store the original and manipulated images in Amazon S3. Configure an Elastic Load Balancer in front of the EC2 instances
2: Use AWS Lambda to manipulate the original images to the requested customization. Store the original and manipulated images in Amazon S3. Configure an Amazon CloudFront distribution with the S3 bucket as the origin
3: Use AWS Lambda to manipulate the original images to the requested customization. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Elastic Load Balancer in front of the Amazon EC2 instances
4: Use Amazon EC2 instances to manipulate the original images into the requested customization. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Amazon CloudFront distribution with the S3 bucket as the origin

A

1: Use Amazon EC2 instances to manipulate the original images into the requested customization. Store the original and manipulated images in Amazon S3. Configure an Elastic Load Balancer in front of the EC2 instances

2: Use AWS Lambda to manipulate the original images to the requested customization. Store the original and manipulated images in Amazon S3. Configure an Amazon CloudFront distribution with the S3 bucket as the origin

3: Use AWS Lambda to manipulate the original images to the requested customization. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Elastic Load Balancer in front of the Amazon EC2 instances
4: Use Amazon EC2 instances to manipulate the original images into the requested customization. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Amazon CloudFront distribution with the S3 bucket as the origin

3
Q

A solutions architect is finalizing the architecture for a distributed database that will run across multiple Amazon EC2 instances. Data will be replicated across all instances so the loss of an instance will not cause loss of data. The database requires block storage with low latency and throughput that supports up to several million transactions per second per server. Which storage solution should the solutions architect use?

1: Amazon EBS
2: Amazon EC2 instance store
3: Amazon EFS
4: Amazon S3

A

1: Amazon EBS

2: Amazon EC2 instance store

3: Amazon EFS
4: Amazon S3

4
Q

A website runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The website’s DNS records are hosted in Amazon Route 53 with the domain name pointing to the ALB. A solution is required for displaying a static error page if the website becomes unavailable. Which configuration should a solutions architect use to meet these requirements with the LEAST operational overhead?

1: Create a Route 53 alias record for an Amazon CloudFront distribution and specify the ALB as the origin. Create custom error pages for the distribution
2: Create a Route 53 active-passive failover configuration. Create a static website using an Amazon S3 bucket that hosts a static error page. Configure the static website as the passive record for failover
3: Create a Route 53 weighted routing policy. Create a static website using an Amazon S3 bucket that hosts a static error page. Configure the record for the S3 static website with a weighting of zero. When an issue occurs, increase the weighting
4: Set up a Route 53 active-active configuration with the ALB and an Amazon EC2 instance hosting a static error page as endpoints. Route 53 will only send requests to the instance if the health checks fail for the ALB

A

1: Create a Route 53 alias record for an Amazon CloudFront distribution and specify the ALB as the origin. Create custom error pages for the distribution

2: Create a Route 53 active-passive failover configuration. Create a static website using an Amazon S3 bucket that hosts a static error page. Configure the static website as the passive record for failover
3: Create a Route 53 weighted routing policy. Create a static website using an Amazon S3 bucket that hosts a static error page. Configure the record for the S3 static website with a weighting of zero. When an issue occurs, increase the weighting
4: Set up a Route 53 active-active configuration with the ALB and an Amazon EC2 instance hosting a static error page as endpoints. Route 53 will only send requests to the instance if the health checks fail for the ALB
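For reference on the CloudFront-based option: a distribution can substitute a static page when the origin fails, using custom error responses. A sketch of that fragment of a distribution config (values are illustrative; the shape follows CloudFront's `CustomErrorResponses` structure):

```python
# CustomErrorResponses fragment of a CloudFront distribution config: when the
# ALB origin returns 503, serve a static /error.html instead. Values are
# illustrative.
custom_error_responses = {
    "Quantity": 1,
    "Items": [
        {
            "ErrorCode": 503,                   # origin unavailable
            "ResponsePagePath": "/error.html",  # static page fetched via the origin path
            "ResponseCode": "503",              # status code sent to the viewer (a string in this API)
            "ErrorCachingMinTTL": 30,           # seconds CloudFront caches the error response
        }
    ],
}
```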

5
Q

A company is deploying a new web application that will run on Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. The application requires a shared storage solution that offers strong consistency as the content will be regularly updated. Which solution requires the LEAST amount of effort?

1: Create an Amazon S3 bucket to store the web content and use Amazon CloudFront to deliver the content
2: Create an Amazon Elastic File System (Amazon EFS) file system and mount it on the individual Amazon EC2 instances
3: Create a shared Amazon Elastic Block Store (Amazon EBS) volume and mount it on the individual Amazon EC2 instances
4: Create a volume gateway using AWS Storage Gateway to host the data and mount it to the Auto Scaling group

A

1: Create an Amazon S3 bucket to store the web content and use Amazon CloudFront to deliver the content

2: Create an Amazon Elastic File System (Amazon EFS) file system and mount it on the individual Amazon EC2 instances

3: Create a shared Amazon Elastic Block Store (Amazon EBS) volume and mount it on the individual Amazon EC2 instances
4: Create a volume gateway using AWS Storage Gateway to host the data and mount it to the Auto Scaling group

6
Q

A website runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The website has a mix of dynamic and static content. Customers around the world are reporting performance issues with the website. Which set of actions will improve website performance for users worldwide?

1: Create an Amazon CloudFront distribution and configure the ALB as an origin. Then update the Amazon Route 53 record to point to the CloudFront distribution
2: Create a latency-based Amazon Route 53 record for the ALB. Then launch new EC2 instances with larger instance sizes and register the instances with the ALB
3: Launch new EC2 instances hosting the same web application in different Regions closer to the users. Use an AWS Transit Gateway to connect customers to the closest Region
4: Migrate the website to an Amazon S3 bucket in the Regions closest to the users. Then create an Amazon Route 53 geolocation record to point to the S3 buckets

A

1: Create an Amazon CloudFront distribution and configure the ALB as an origin. Then update the Amazon Route 53 record to point to the CloudFront distribution

2: Create a latency-based Amazon Route 53 record for the ALB. Then launch new EC2 instances with larger instance sizes and register the instances with the ALB
3: Launch new EC2 instances hosting the same web application in different Regions closer to the users. Use an AWS Transit Gateway to connect customers to the closest Region
4: Migrate the website to an Amazon S3 bucket in the Regions closest to the users. Then create an Amazon Route 53 geolocation record to point to the S3 buckets
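To make the CloudFront option concrete: the ALB becomes a custom origin of the distribution, and dynamic responses can pass through uncached while static content is cached at edge locations worldwide. A sketch (the ALB DNS name is hypothetical; the shapes follow CloudFront's distribution config):

```python
# An ALB registered as a CloudFront custom origin. The DNS name is hypothetical.
alb_origin = {
    "Id": "alb-origin",
    "DomainName": "my-alb-1234567890.us-east-1.elb.amazonaws.com",
    "CustomOriginConfig": {
        "HTTPPort": 80,
        "HTTPSPort": 443,
        "OriginProtocolPolicy": "https-only",  # CloudFront -> ALB over HTTPS
    },
}

# A MinTTL of 0 lets dynamic (non-cacheable) responses pass straight through,
# while cacheable static responses are still held at the edge.
default_cache_behavior = {
    "TargetOriginId": "alb-origin",
    "ViewerProtocolPolicy": "redirect-to-https",
    "MinTTL": 0,
}
```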

7
Q

A web application has recently been launched on AWS. The architecture includes two tiers: a web layer and a database layer. It has been identified that the web server layer may be vulnerable to cross-site scripting (XSS) attacks. What should a solutions architect do to remediate the vulnerability?

1: Create a Classic Load Balancer. Put the web layer behind the load balancer and enable AWS WAF
2: Create a Network Load Balancer. Put the web layer behind the load balancer and enable AWS WAF
3: Create an Application Load Balancer. Put the web layer behind the load balancer and enable AWS WAF
4: Create an Application Load Balancer. Put the web layer behind the load balancer and use AWS Shield Standard

A

1: Create a Classic Load Balancer. Put the web layer behind the load balancer and enable AWS WAF
2: Create a Network Load Balancer. Put the web layer behind the load balancer and enable AWS WAF

3: Create an Application Load Balancer. Put the web layer behind the load balancer and enable AWS WAF

4: Create an Application Load Balancer. Put the web layer behind the load balancer and use AWS Shield Standard

8
Q

A static website currently runs in a company’s on-premises data center. The company plans to migrate the website to AWS. The website must load quickly for global users and the solution must also be cost-effective. What should a solutions architect do to accomplish this?

1: Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage content. Replicate the S3 bucket to multiple AWS Regions
2: Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage content. Configure Amazon CloudFront with the S3 bucket as the origin
3: Copy the website content to an Amazon EC2 instance. Configure Amazon Route 53 geolocation routing policies to select the closest origin
4: Copy the website content to multiple Amazon EC2 instances in multiple AWS Regions. Configure Amazon Route 53 geolocation routing policies to select the closest Region

A

1: Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage content. Replicate the S3 bucket to multiple AWS Regions

2: Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage content. Configure Amazon CloudFront with the S3 bucket as the origin

3: Copy the website content to an Amazon EC2 instance. Configure Amazon Route 53 geolocation routing policies to select the closest origin
4: Copy the website content to multiple Amazon EC2 instances in multiple AWS Regions. Configure Amazon Route 53 geolocation routing policies to select the closest Region

9
Q

A multi-tier application runs with eight front-end web servers in an Amazon EC2 Auto Scaling group in a single Availability Zone behind an Application Load Balancer. A solutions architect needs to modify the infrastructure to be highly available without modifying the application. Which architecture should the solutions architect choose to provide high availability?

1: Create an Auto Scaling group that uses four instances across each of two Regions
2: Modify the Auto Scaling group to use four instances across each of two Availability Zones
3: Create an Auto Scaling template that can be used to quickly create more instances in another Region
4: Create an Auto Scaling group that uses four instances across each of two subnets

A

1: Create an Auto Scaling group that uses four instances across each of two Regions

2: Modify the Auto Scaling group to use four instances across each of two Availability Zones

3: Create an Auto Scaling template that can be used to quickly create more instances in another Region
4: Create an Auto Scaling group that uses four instances across each of two subnets

10
Q

A company’s web application is using multiple Amazon EC2 Linux instances and storing data on Amazon EBS volumes. The company is looking for a solution to increase the resiliency of the application in case of a failure. What should a solutions architect do to meet these requirements?

1: Launch the application on EC2 instances in each Availability Zone. Attach EBS volumes to each EC2 instance
2: Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Mount an instance store on each EC2 instance
3: Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data on Amazon EFS and mount a target on each instance
4: Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data using Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)

A

1: Launch the application on EC2 instances in each Availability Zone. Attach EBS volumes to each EC2 instance
2: Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Mount an instance store on each EC2 instance

3: Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data on Amazon EFS and mount a target on each instance

4: Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data using Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)

11
Q

A website runs on a Microsoft Windows server in an on-premises data center. The web server is being migrated to Amazon EC2 Windows instances in multiple Availability Zones on AWS. The web server currently uses data stored in an on-premises network-attached storage (NAS) device. Which replacement to the NAS file share is MOST resilient and durable?

1: Migrate the file share to Amazon EBS
2: Migrate the file share to AWS Storage Gateway
3: Migrate the file share to Amazon FSx for Windows File Server
4: Migrate the file share to Amazon Elastic File System (Amazon EFS)

A

1: Migrate the file share to Amazon EBS
2: Migrate the file share to AWS Storage Gateway

3: Migrate the file share to Amazon FSx for Windows File Server

4: Migrate the file share to Amazon Elastic File System (Amazon EFS)

12
Q

A company is planning a migration for a high performance computing (HPC) application and associated data from an on-premises data center to the AWS Cloud. The company uses tiered storage on premises, with hot, high-performance parallel storage to support the application during periodic runs, and more economical cold storage to hold the data when the application is not actively running. Which combination of solutions should a solutions architect recommend to support the storage needs of the application? (Select TWO)

1: Amazon S3 for cold data storage
2: Amazon EFS for cold data storage
3: Amazon S3 for high-performance parallel storage
4: Amazon FSx for Lustre for high-performance parallel storage
5: Amazon FSx for Windows for high-performance parallel storage

A

1: Amazon S3 for cold data storage

2: Amazon EFS for cold data storage
3: Amazon S3 for high-performance parallel storage

4: Amazon FSx for Lustre for high-performance parallel storage

5: Amazon FSx for Windows for high-performance parallel storage

13
Q

A web application that allows users to upload and share documents is running on a single Amazon EC2 instance with an Amazon EBS volume. To increase availability the architecture has been updated to use an Auto Scaling group of several instances across Availability Zones behind an Application Load Balancer. After the change users can only see a subset of the documents. What is the BEST method for a solutions architect to modify the solution so users can see all documents?

1: Run a script to synchronize the data between Amazon EBS volumes
2: Use Sticky Sessions with the ALB to ensure users are directed to the same EC2 instance in a session
3: Copy the data from all EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS
4: Configure the Application Load Balancer to send the request to all servers. Return each document from the correct server

A

1: Run a script to synchronize the data between Amazon EBS volumes
2: Use Sticky Sessions with the ALB to ensure users are directed to the same EC2 instance in a session

3: Copy the data from all EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS

4: Configure the Application Load Balancer to send the request to all servers. Return each document from the correct server

14
Q

A company runs an internal browser-based application. The application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. The Auto Scaling group scales up to 20 instances during work hours, but scales down to 2 instances overnight. Staff are complaining that the application is very slow when the day begins, although it runs well by midmorning. How should the scaling be changed to address the staff complaints and keep costs to a minimum?

1: Implement a scheduled action that sets the desired capacity to 20 shortly before the office opens
2: Implement a step scaling action triggered at a lower CPU threshold, and decrease the cooldown period
3: Implement a target tracking action triggered at a lower CPU threshold, and decrease the cooldown period
4: Implement a scheduled action that sets the minimum and maximum capacity to 20 shortly before the office opens

A

1: Implement a scheduled action that sets the desired capacity to 20 shortly before the office opens
2: Implement a step scaling action triggered at a lower CPU threshold, and decrease the cooldown period

3: Implement a target tracking action triggered at a lower CPU threshold, and decrease the cooldown period

4: Implement a scheduled action that sets the minimum and maximum capacity to 20 shortly before the office opens
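For reference, the scheduled actions described in options 1 and 4 take a recurrence plus capacity values; the difference between them is which capacities they set. A sketch (the group name, action name, and schedule are hypothetical; the shape follows Auto Scaling's `put_scheduled_update_group_action`):

```python
# Scheduled scaling action in the style of option 1: raise only the desired
# capacity shortly before the office opens. Names and schedule are hypothetical.
scheduled_action = {
    "AutoScalingGroupName": "web-asg",
    "ScheduledActionName": "scale-out-before-work",
    "Recurrence": "45 7 * * MON-FRI",  # cron (UTC): 07:45 on weekdays
    "DesiredCapacity": 20,
}
# Option 4 would instead pin "MinSize": 20 and "MaxSize": 20, which stops the
# group from ever scaling in during the day and therefore costs more.
```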

15
Q

An application uses Amazon EC2 instances and an Amazon RDS MySQL database. The database is not currently encrypted. A solutions architect needs to apply encryption to the database for all new and existing data. How should this be accomplished?

1: Create an Amazon ElastiCache cluster and encrypt data using the cache nodes
2: Enable encryption for the database using the API. Take a full snapshot of the database. Delete old snapshots
3: Take a snapshot of the RDS instance. Create an encrypted copy of the snapshot. Restore the RDS instance from the encrypted snapshot
4: Create an RDS read replica with encryption at rest enabled. Promote the read replica to master and switch the application over to the new master. Delete the old RDS instance

A

1: Create an Amazon ElastiCache cluster and encrypt data using the cache nodes
2: Enable encryption for the database using the API. Take a full snapshot of the database. Delete old snapshots

3: Take a snapshot of the RDS instance. Create an encrypted copy of the snapshot. Restore the RDS instance from the encrypted snapshot

4: Create an RDS read replica with encryption at rest enabled. Promote the read replica to master and switch the application over to the new master. Delete the old RDS instance
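The snapshot-copy-restore path comes down to three API calls: an unencrypted snapshot becomes encrypted when it is copied with a KMS key, and restoring from the encrypted copy yields an encrypted instance. A sketch of the parameter sets (identifiers and the key alias are hypothetical; shapes follow the RDS API):

```python
# Step 1: snapshot the unencrypted instance (rds.create_db_snapshot).
create_snapshot_params = {
    "DBInstanceIdentifier": "mydb",
    "DBSnapshotIdentifier": "mydb-snap",
}

# Step 2: copy it with a KMS key, which produces an encrypted snapshot
# (rds.copy_db_snapshot).
copy_snapshot_params = {
    "SourceDBSnapshotIdentifier": "mydb-snap",
    "TargetDBSnapshotIdentifier": "mydb-snap-encrypted",
    "KmsKeyId": "alias/aws/rds",  # supplying a key makes the copy encrypted
}

# Step 3: restore a new, encrypted instance from the encrypted copy
# (rds.restore_db_instance_from_db_snapshot).
restore_params = {
    "DBInstanceIdentifier": "mydb-encrypted",
    "DBSnapshotIdentifier": "mydb-snap-encrypted",
}
```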

16
Q

A company has 500 TB of data in an on-premises file share that needs to be moved to Amazon S3 Glacier. The migration must not saturate the company’s low-bandwidth internet connection and the migration must be completed within a few weeks. What is the MOST cost-effective solution?

1: Create an AWS Direct Connect connection and migrate the data straight into Amazon Glacier
2: Order 7 AWS Snowball appliances and select an S3 Glacier vault as the destination. Create a bucket policy to enforce a VPC endpoint
3: Use AWS Global Accelerator to accelerate upload and optimize usage of the available bandwidth
4: Order 7 AWS Snowball appliances and select an Amazon S3 bucket as the destination. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier

A

1: Create an AWS Direct Connect connection and migrate the data straight into Amazon Glacier
2: Order 7 AWS Snowball appliances and select an S3 Glacier vault as the destination. Create a bucket policy to enforce a VPC endpoint
3: Use AWS Global Accelerator to accelerate upload and optimize usage of the available bandwidth

4: Order 7 AWS Snowball appliances and select an Amazon S3 bucket as the destination. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier

17
Q

A company has refactored a legacy application to run as two microservices using Amazon ECS. The application processes data in two parts and the second part of the process takes longer than the first. How can a solutions architect integrate the microservices and allow them to scale independently?

1: Implement code in microservice 1 to send data to an Amazon S3 bucket. Use S3 event notifications to invoke microservice 2
2: Implement code in microservice 1 to publish data to an Amazon SNS topic. Implement code in microservice 2 to subscribe to this topic
3: Implement code in microservice 1 to send data to Amazon Kinesis Data Firehose. Implement code in microservice 2 to read from Kinesis Data Firehose
4: Implement code in microservice 1 to send data to an Amazon SQS queue. Implement code in microservice 2 to process messages from the queue

A

1: Implement code in microservice 1 to send data to an Amazon S3 bucket. Use S3 event notifications to invoke microservice 2
2: Implement code in microservice 1 to publish data to an Amazon SNS topic. Implement code in microservice 2 to subscribe to this topic
3: Implement code in microservice 1 to send data to Amazon Kinesis Data Firehose. Implement code in microservice 2 to read from Kinesis Data Firehose

4: Implement code in microservice 1 to send data to an Amazon SQS queue. Implement code in microservice 2 to process messages from the queue
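The queue-based option decouples the two stages so each can scale on its own signal (microservice 2, for example, on queue depth). A runnable sketch of the pattern, with an in-process `queue.Queue` standing in for SQS (real code would call `send_message`/`receive_message` instead):

```python
import queue

# In-process stand-in for the SQS queue between the two microservices.
work_queue: "queue.Queue[dict]" = queue.Queue()

def microservice_1(record: dict) -> None:
    """Fast first stage: finish part one, then hand off via the queue."""
    record["part_one_done"] = True
    work_queue.put(record)  # send_message, in SQS terms

def microservice_2() -> dict:
    """Slow second stage: drains the queue at its own pace, so it can be
    scaled independently without back-pressure on microservice 1."""
    record = work_queue.get()  # receive_message, in SQS terms
    record["part_two_done"] = True
    return record

microservice_1({"id": 1})
print(microservice_2())  # -> {'id': 1, 'part_one_done': True, 'part_two_done': True}
```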

18
Q

A solutions architect is designing a two-tier web application. The application consists of a public-facing web tier hosted on Amazon EC2 in public subnets. The database tier consists of Microsoft SQL Server running on Amazon EC2 in a private subnet. Security is a high priority for the company. How should security groups be configured in this situation? (Select TWO)

1: Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0
2: Configure the security group for the web tier to allow outbound traffic on port 443 from 0.0.0.0/0
3: Configure the security group for the database tier to allow inbound traffic on port 1433 from the security group for the web tier
4: Configure the security group for the database tier to allow outbound traffic on ports 443 and 1433 to the security group for the web tier
5: Configure the security group for the database tier to allow inbound traffic on ports 443 and 1433 from the security group for the web tier

A

1: Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0

2: Configure the security group for the web tier to allow outbound traffic on port 443 from 0.0.0.0/0

3: Configure the security group for the database tier to allow inbound traffic on port 1433 from the security group for the web tier

4: Configure the security group for the database tier to allow outbound traffic on ports 443 and 1433 to the security group for the web tier
5: Configure the security group for the database tier to allow inbound traffic on ports 443 and 1433 from the security group for the web tier
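For reference, the rules described in options 1 and 3 translate directly into ingress permissions (the shape follows EC2's `authorize_security_group_ingress`; the web-tier group ID is hypothetical):

```python
# Web tier: allow HTTPS from anywhere.
web_tier_ingress = {
    "IpProtocol": "tcp",
    "FromPort": 443,
    "ToPort": 443,
    "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
}

# Database tier: allow SQL Server traffic (TCP 1433) only from the web tier's
# security group, referenced by group ID rather than by CIDR. The group ID
# below is hypothetical.
db_tier_ingress = {
    "IpProtocol": "tcp",
    "FromPort": 1433,
    "ToPort": 1433,
    "UserIdGroupPairs": [{"GroupId": "sg-0123456789webtier"}],
}
```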

19
Q

A solutions architect has created a new AWS account and must secure AWS account root user access. Which combination of actions will accomplish this? (Select TWO)

1: Ensure the root user uses a strong password
2: Enable multi-factor authentication to the root user
3: Store root user access keys in an encrypted Amazon S3 bucket
4: Add the root user to a group containing administrative permissions
5: Delete the root user account

A

1: Ensure the root user uses a strong password

2: Enable multi-factor authentication to the root user

3: Store root user access keys in an encrypted Amazon S3 bucket
4: Add the root user to a group containing administrative permissions
5: Delete the root user account

20
Q

A company allows its developers to attach existing IAM policies to existing IAM roles to enable faster experimentation and agility. However, the security operations team is concerned that the developers could attach the existing administrator policy, which would allow the developers to circumvent any other security policies. How should a solutions architect address this issue?

1: Create an Amazon SNS topic to send an alert every time a developer creates a new policy
2: Use service control policies to disable IAM activity across all accounts in the organizational unit
3: Prevent the developers from attaching any policies and assign all IAM duties to the security operations team
4: Set an IAM permissions boundary on the developer IAM role that explicitly denies attaching the administrator policy

A

1: Create an Amazon SNS topic to send an alert every time a developer creates a new policy
2: Use service control policies to disable IAM activity across all accounts in the organizational unit
3: Prevent the developers from attaching any policies and assign all IAM duties to the security operations team

4: Set an IAM permissions boundary on the developer IAM role that explicitly denies attaching the administrator policy
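The permissions-boundary option comes down to a policy document whose explicit deny wins over any allow the developer otherwise holds. A sketch of such a document (the managed-policy ARN is AWS's `AdministratorAccess` ARN; the rest is illustrative):

```python
import json

# Boundary policy of the kind option 4 describes: broad allow, plus an explicit
# deny on attaching the administrator policy (an explicit deny always wins).
boundary_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "*", "Resource": "*"},
        {
            "Effect": "Deny",
            "Action": [
                "iam:AttachRolePolicy",
                "iam:AttachUserPolicy",
                "iam:AttachGroupPolicy",
            ],
            "Resource": "*",
            "Condition": {
                "ArnEquals": {
                    "iam:PolicyARN": "arn:aws:iam::aws:policy/AdministratorAccess"
                }
            },
        },
    ],
}

# The document must serialize cleanly to attach it as a boundary.
policy_json = json.dumps(boundary_policy)
```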

21
Q

A solutions architect is optimizing a website for real-time streaming and on-demand videos. The website’s users are located around the world and the solutions architect needs to optimize the performance for both the real-time and on-demand streaming. Which service should the solutions architect choose?

1: Amazon CloudFront
2: AWS Global Accelerator
3: Amazon Route 53
4: Amazon S3 Transfer Acceleration

A

1: Amazon CloudFront

2: AWS Global Accelerator
3: Amazon Route 53
4: Amazon S3 Transfer Acceleration

22
Q

An organization is creating a new storage solution and needs to ensure that Amazon S3 objects that are deleted are immediately restorable for up to 30 days. After 30 days the objects should be retained for a further 180 days and be restorable within 24 hours. The solution should be operationally simple and cost-effective. How can these requirements be achieved? (Select TWO)

1: Enable object versioning on the Amazon S3 bucket that will contain the objects
2: Create a lifecycle rule to transition non-current versions to GLACIER after 30 days, and then expire the objects after 180 days
3: Enable multi-factor authentication (MFA) delete protection
4: Enable cross-region replication (CRR) for the Amazon S3 bucket that will contain the objects
5: Create a lifecycle rule to transition non-current versions to STANDARD_IA after 30 days, and then expire the objects after 180 days

A

1: Enable object versioning on the Amazon S3 bucket that will contain the objects

2: Create a lifecycle rule to transition non-current versions to GLACIER after 30 days, and then expire the objects after 180 days

3: Enable multi-factor authentication (MFA) delete protection
4: Enable cross-region replication (CRR) for the Amazon S3 bucket that will contain the objects
5: Create a lifecycle rule to transition non-current versions to STANDARD_IA after 30 days, and then expire the objects after 180 days
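Options 1 and 2 work together: with versioning enabled, a delete only writes a delete marker, so the "deleted" object survives as a noncurrent version that a lifecycle rule can then tier and expire. A sketch of such a rule (the shape follows S3's `put_bucket_lifecycle_configuration`):

```python
# Lifecycle rule for a versioned bucket: noncurrent versions (i.e. deleted
# objects) move to Glacier after 30 days, where they remain restorable within
# hours, and expire after the further retention period described in option 2.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "retain-then-expire-deleted-objects",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to the whole bucket
            "NoncurrentVersionTransitions": [
                {"NoncurrentDays": 30, "StorageClass": "GLACIER"}
            ],
            "NoncurrentVersionExpiration": {"NoncurrentDays": 180},
        }
    ],
}
```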

23
Q

Objects uploaded to Amazon S3 are initially accessed frequently for a period of 30 days. Then, objects are infrequently accessed for up to 90 days. After that, the objects are no longer needed. How should lifecycle management be configured?

1: Transition to STANDARD_IA after 30 days. After 90 days transition to GLACIER
2: Transition to STANDARD_IA after 30 days. After 90 days transition to ONEZONE_IA
3: Transition to ONEZONE_IA after 30 days. After 90 days expire the objects
4: Transition to REDUCED_REDUNDANCY after 30 days. After 90 days expire the objects

A

1: Transition to STANDARD_IA after 30 days. After 90 days transition to GLACIER
2: Transition to STANDARD_IA after 30 days. After 90 days transition to ONEZONE_IA

3: Transition to ONEZONE_IA after 30 days. After 90 days expire the objects

4: Transition to REDUCED_REDUNDANCY after 30 days. After 90 days expire the objects

24
Q

A company has acquired another business and needs to migrate their 50TB of data into AWS within 1 month. They also require a secure, reliable and private connection to the AWS cloud. How are these requirements best accomplished?

1: Provision an AWS Direct Connect connection and migrate the data over the link
2: Migrate data using AWS Snowball. Provision an AWS VPN initially and order a Direct Connect link
3: Launch a Virtual Private Gateway (VPG) and migrate the data over the AWS VPN
4: Provision an AWS VPN CloudHub connection and migrate the data over redundant links

A

1: Provision an AWS Direct Connect connection and migrate the data over the link

2: Migrate data using AWS Snowball. Provision an AWS VPN initially and order a Direct Connect link

3: Launch a Virtual Private Gateway (VPG) and migrate the data over the AWS VPN
4: Provision an AWS VPN CloudHub connection and migrate the data over redundant links

25
Q

An application on Amazon Elastic Container Service (ECS) performs data processing in two parts. The second part takes much longer to complete. How can an Architect decouple the data processing from the backend application component?

1: Process both parts using the same ECS task. Create an Amazon Kinesis Firehose stream
2: Process each part using a separate ECS task. Create an Amazon SNS topic and send a notification when the processing completes
3: Create an Amazon DynamoDB table and save the output of the first part to the table
4: Process each part using a separate ECS task. Create an Amazon SQS queue

A

1: Process both parts using the same ECS task. Create an Amazon Kinesis Firehose stream
2: Process each part using a separate ECS task. Create an Amazon SNS topic and send a notification when the processing completes
3: Create an Amazon DynamoDB table and save the output of the first part to the table

4: Process each part using a separate ECS task. Create an Amazon SQS queue

26
Q

An application is running on Amazon EC2 behind an Elastic Load Balancer (ELB). Content is published using Amazon CloudFront and you need to prevent users from circumventing CloudFront and accessing the content directly through the ELB. How can you configure this solution?

1: Create an Origin Access Identity (OAI) and associate it with the distribution
2: Use signed URLs or signed cookies to limit access to the content
3: Use a Network ACL to restrict access to the ELB
4: Create a VPC Security Group for the ELB and use AWS Lambda to automatically update the CloudFront internal service IP addresses when they change

A

1: Create an Origin Access Identity (OAI) and associate it with the distribution
2: Use signed URLs or signed cookies to limit access to the content
3: Use a Network ACL to restrict access to the ELB

4: Create a VPC Security Group for the ELB and use AWS Lambda to automatically update the CloudFront internal service IP addresses when they change

27
Q

A company has divested a single business unit and needs to move the AWS account owned by the business unit to another AWS Organization. How can this be achieved?

1: Create a new account in the destination AWS Organization and migrate resources
2: Create a new account in the destination AWS Organization and share the original resources using AWS Resource Access Manager
3: Migrate the account using AWS CloudFormation
4: Migrate the account using the AWS Organizations console

A

1: Create a new account in the destination AWS Organization and migrate resources
2: Create a new account in the destination AWS Organization and share the original resources using AWS Resource Access Manager
3: Migrate the account using AWS CloudFormation

4: Migrate the account using the AWS Organizations console

28
Q

An application running on Amazon EC2 needs to regularly download large objects from Amazon S3. How can performance be optimized for high-throughput use cases?

1: Issue parallel requests and use byte-range fetches
2: Use Amazon S3 Transfer Acceleration
3: Use Amazon CloudFront to cache the content
4: Use AWS Global Accelerator

A

1: Issue parallel requests and use byte-range fetches

2: Use Amazon S3 Transfer Acceleration
3: Use Amazon CloudFront to cache the content
4: Use AWS Global Accelerator
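Option 1 works because S3 GET requests accept a Range header, so a large object can be fetched as concurrent parts and reassembled. A sketch of how the ranges might be computed (the 8 MB part size is an arbitrary choice for illustration):

```python
def byte_ranges(object_size, part_size):
    """Split an object into (start, end) byte ranges for parallel GETs.

    Each tuple maps to an HTTP header of the form 'Range: bytes=start-end'.
    """
    ranges = []
    start = 0
    while start < object_size:
        end = min(start + part_size, object_size) - 1
        ranges.append((start, end))
        start = end + 1
    return ranges

# A 25 MB object fetched in 8 MB parts -> 4 parallel requests
parts = byte_ranges(25 * 1024 * 1024, 8 * 1024 * 1024)
print(len(parts))  # -> 4
```

Each range would then be requested on its own connection (e.g. from a thread pool), which is what drives the throughput gain.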

29
Q

An Amazon RDS PostgreSQL database is configured as Multi-AZ. A solutions architect needs to scale read performance and the solution must be configured for high availability. What is the most cost-effective solution?

1: Create a read replica as a Multi-AZ DB instance
2: Deploy a read replica in a different AZ to the master DB instance
3: Deploy a read replica using Amazon ElastiCache
4: Deploy a read replica in the same AZ as the master DB instance

A

1: Create a read replica as a Multi-AZ DB instance

2: Deploy a read replica in a different AZ to the master DB instance
3: Deploy a read replica using Amazon ElastiCache
4: Deploy a read replica in the same AZ as the master DB instance
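Option 1 corresponds to a single CreateDBInstanceReadReplica call with Multi-AZ enabled, so the replica gets its own standby. A sketch of the relevant parameters (identifiers are placeholders):

```python
# Parameters for an RDS read replica that is itself Multi-AZ, giving the
# read-scaling tier the same high availability as the primary.
# Identifiers are illustrative placeholders.
replica_params = {
    "DBInstanceIdentifier": "mydb-replica",
    "SourceDBInstanceIdentifier": "mydb-primary",
    "MultiAZ": True,  # replica gets a synchronous standby in another AZ
}
print(replica_params["MultiAZ"])  # -> True
```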

30
Q

A High Performance Computing (HPC) application will be migrated to AWS. The application requires low network latency and high throughput between nodes and will be deployed in a single AZ. How should the application be deployed for best inter-node performance?

1: In a partition placement group
2: In a cluster placement group
3: In a spread placement group
4: Behind a Network Load Balancer (NLB)

A

1: In a partition placement group

2: In a cluster placement group

3: In a spread placement group
4: Behind a Network Load Balancer (NLB)

31
Q

A web application is deployed in multiple regions behind an ELB Application Load Balancer. You need deterministic routing to the closest region and automatic failover. Traffic should traverse the AWS global network for consistent performance. How can this be achieved?

1: Configure AWS Global Accelerator and configure the ALBs as targets
2: Place an EC2 Proxy in front of the ALB and configure automatic failover
3: Create a Route 53 Alias record for each ALB and configure a latency-based routing policy
4: Use a CloudFront distribution with multiple custom origins in each region and configure for high availability

A

1: Configure AWS Global Accelerator and configure the ALBs as targets

2: Place an EC2 Proxy in front of the ALB and configure automatic failover
3: Create a Route 53 Alias record for each ALB and configure a latency-based routing policy
4: Use a CloudFront distribution with multiple custom origins in each region and configure for high availability

32
Q

You are looking for a method to distribute onboarding videos to your company’s numerous remote workers around the world. The training videos are located in an S3 bucket that is not publicly accessible. Which of the options below would allow you to share the videos?

1: Use CloudFront and set the S3 bucket as an origin
2: Use a Route 53 Alias record that points to the S3 bucket
3: Use ElastiCache and attach the S3 bucket as a cache origin
4: Use CloudFront and use a custom origin pointing to an EC2 instance

A

1: Use CloudFront and set the S3 bucket as an origin

2: Use a Route 53 Alias record that points to the S3 bucket
3: Use ElastiCache and attach the S3 bucket as a cache origin
4: Use CloudFront and use a custom origin pointing to an EC2 instance

33
Q

A client is in the design phase of developing an application that will process orders for their online ticketing system. The application will use a number of front-end EC2 instances that pick up orders and place them in a queue for processing by another set of back-end EC2 instances. The client will offer multiple options for customers to choose the level of service they want to pay for. The client has asked how the application can be designed to process orders in a prioritized way based on the level of service the customer has chosen.

1: Create multiple SQS queues, configure exactly-once processing and set the maximum visibility timeout to 12 hours
2: Create multiple SQS queues, configure the front-end application to place orders onto a specific queue based on the level of service requested and configure the back-end instances to sequentially poll the queues in order of priority
3: Create a combination of FIFO queues and Standard queues and configure the applications to place messages into the relevant queue based on priority
4: Create a single SQS queue, configure the front-end application to place orders on the queue in order of priority and configure the back-end instances to poll the queue and pick up messages in the order they are presented

A

1: Create multiple SQS queues, configure exactly-once processing and set the maximum visibility timeout to 12 hours

2: Create multiple SQS queues, configure the front-end application to place orders onto a specific queue based on the level of service requested and configure the back-end instances to sequentially poll the queues in order of priority

3: Create a combination of FIFO queues and Standard queues and configure the applications to place messages into the relevant queue based on priority
4: Create a single SQS queue, configure the front-end application to place orders on the queue in order of priority and configure the back-end instances to poll the queue and pick up messages in the order they are presented
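The back-end polling logic behind option 2 can be sketched without SQS: check the queues in a fixed priority order and take the first available message (deques stand in for the real queues; names are illustrative):

```python
from collections import deque

def poll_in_priority_order(queues):
    """Return the next (queue_name, message) from the highest-priority
    non-empty queue, or None if every queue is empty.

    `queues` is a list of (name, deque) pairs, highest priority first.
    """
    for name, q in queues:
        if q:
            return name, q.popleft()
    return None

queues = [("premium", deque(["order-42"])), ("standard", deque(["order-17"]))]
print(poll_in_priority_order(queues))  # -> ('premium', 'order-42')
print(poll_in_priority_order(queues))  # -> ('standard', 'order-17')
```

With real SQS the front-end would choose which queue to send_message to based on the service level, and the back-end would receive_message from each queue in this order.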

34
Q

Your company is opening a new office in the Asia Pacific region. Users in the new office will need to read data from an RDS database that is hosted in the U.S. To improve performance, you are planning to implement a Read Replica of the database in the Asia Pacific region. However, your Chief Security Officer (CSO) has explained to you that the company policy dictates that all data that leaves the U.S must be encrypted at rest. The master RDS DB is not currently encrypted. What options are available to you? (Select TWO)

1: You can create an encrypted Read Replica that is encrypted with the same key
2: You can create an encrypted Read Replica that is encrypted with a different key
3: You can enable encryption for the master DB by creating a new DB from a snapshot with encryption enabled
4: You can enable encryption for the master DB through the management console
5: You can use an ELB to provide an encrypted transport layer in front of the RDS DB

A

1: You can create an encrypted Read Replica that is encrypted with the same key

2: You can create an encrypted Read Replica that is encrypted with a different key

3: You can enable encryption for the master DB by creating a new DB from a snapshot with encryption enabled

4: You can enable encryption for the master DB through the management console
5: You can use an ELB to provide an encrypted transport layer in front of the RDS DB

35
Q

A company’s Amazon EC2 instances were terminated or stopped, resulting in a loss of important data that was stored on attached EC2 instance stores. They want to avoid this happening in the future and need a solution that can scale as data volumes increase with the LEAST amount of management and configuration. Which storage is most appropriate?

1: Amazon EFS
2: Amazon S3
3: Amazon EBS
4: Amazon RDS

A

1: Amazon EFS

2: Amazon S3
3: Amazon EBS
4: Amazon RDS

36
Q

You would like to grant additional permissions to an individual ECS application container on an ECS cluster that you have deployed. You would like to do this without granting additional permissions to the other containers that are running on the cluster. How can you achieve this?

1: Create a separate Task Definition for the application container that uses a different Task Role
2: In the same Task Definition, specify a separate Task Role for the application container
3: Use EC2 instances instead as you can assign different IAM roles on each instance
4: You cannot implement granular permissions with ECS containers

A

1: Create a separate Task Definition for the application container that uses a different Task Role

2: In the same Task Definition, specify a separate Task Role for the application container
3: Use EC2 instances instead as you can assign different IAM roles on each instance
4: You cannot implement granular permissions with ECS containers
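A separate task definition with its own taskRoleArn (option 1) might look roughly like this; the field names follow the ECS RegisterTaskDefinition API, while the ARN, image, and family name are placeholders:

```python
import json

# Sketch of a task definition whose taskRoleArn grants extra permissions
# only to containers launched from this definition; other task definitions
# on the same cluster keep their own (narrower) roles.
task_definition = {
    "family": "app-with-extra-permissions",
    "taskRoleArn": "arn:aws:iam::123456789012:role/appTaskRole",  # placeholder
    "containerDefinitions": [
        {"name": "app", "image": "myrepo/app:latest", "memory": 512}
    ],
}
print(json.dumps(task_definition, indent=2))
```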

37
Q

The development team in your company has created a new application that you plan to deploy on AWS which runs multiple components in Docker containers. You would prefer to use AWS managed infrastructure for running the containers as you do not want to manage EC2 instances. Which of the below solution options would deliver these requirements? (Select TWO)

1: Use the Elastic Container Service (ECS) with the Fargate Launch Type
2: Put your container images in a private repository
3: Use the Elastic Container Service (ECS) with the EC2 Launch Type
4: Use CloudFront to deploy Docker on EC2
5: Put your container images in the Elastic Container Registry (ECR)

A

1: Use the Elastic Container Service (ECS) with the Fargate Launch Type

2: Put your container images in a private repository
3: Use the Elastic Container Service (ECS) with the EC2 Launch Type
4: Use CloudFront to deploy Docker on EC2

5: Put your container images in the Elastic Container Registry (ECR)

38
Q

A developer is creating a solution for a real-time bidding application for a large retail company that allows users to bid on items of end-of-season clothing. The application is expected to be extremely popular and the back-end DynamoDB database may not perform as required. How can the Solutions Architect enable in-memory read performance with microsecond response times for the DynamoDB database?

1: Enable read replicas
2: Configure DynamoDB Auto Scaling
3: Configure Amazon DAX
4: Increase the provisioned throughput

A

1: Enable read replicas
2: Configure DynamoDB Auto Scaling

3: Configure Amazon DAX

4: Increase the provisioned throughput

39
Q

An application launched on Amazon EC2 instances needs to publish personally identifiable information (PII) about customers using Amazon SNS. The application is launched in private subnets within an Amazon VPC. Which is the MOST secure way to allow the application to access service endpoints in the same region?

1: Use an Internet Gateway
2: Use AWS PrivateLink
3: Use a proxy instance
4: Use a NAT gateway

A

1: Use an Internet Gateway

2: Use AWS PrivateLink

3: Use a proxy instance
4: Use a NAT gateway
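In practice option 2 means creating an interface VPC endpoint for SNS, so traffic stays on the AWS network. A sketch of the parameters one would pass to EC2's CreateVpcEndpoint API (all IDs and the region are placeholders):

```python
# Interface VPC endpoint (AWS PrivateLink) letting instances in private
# subnets reach SNS without an internet path. IDs are illustrative.
endpoint_params = {
    "VpcId": "vpc-0abc1234",
    "ServiceName": "com.amazonaws.us-east-1.sns",  # region-specific SNS service
    "VpcEndpointType": "Interface",
    "SubnetIds": ["subnet-0priv1", "subnet-0priv2"],
    "PrivateDnsEnabled": True,  # resolve the public SNS name to the endpoint
}
print(endpoint_params["ServiceName"])  # -> com.amazonaws.us-east-1.sns
```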

40
Q

A mobile client requires data from several application-layer services to populate its user interface. What can the application team use to decouple the client interface from the underlying services behind them?

1: AWS Device Farm
2: Amazon Cognito
3: Amazon API Gateway
4: Application Load Balancer

A

1: AWS Device Farm
2: Amazon Cognito

3: Amazon API Gateway

4: Application Load Balancer

41
Q

A Solutions Architect is designing a web application that runs on Amazon EC2 instances behind an Elastic Load Balancer. All data in transit must be encrypted. Which solution options meet the encryption requirement? (Select TWO)

1: Use a Network Load Balancer (NLB) with a TCP listener, then terminate SSL on EC2 instances
2: Use an Application Load Balancer (ALB) with an HTTPS listener, then install SSL certificates on the ALB and EC2 instances
3: Use an Application Load Balancer (ALB) in passthrough mode, then terminate SSL on EC2 instances
4: Use a Network Load Balancer (NLB) with an HTTPS listener, then install SSL certificates on the NLB and EC2 instances
5: Use an Application Load Balancer (ALB) with a TCP listener, then terminate SSL on EC2 instances

A

1: Use a Network Load Balancer (NLB) with a TCP listener, then terminate SSL on EC2 instances

2: Use an Application Load Balancer (ALB) with an HTTPS listener, then install SSL certificates on the ALB and EC2 instances

3: Use an Application Load Balancer (ALB) in passthrough mode, then terminate SSL on EC2 instances
4: Use a Network Load Balancer (NLB) with an HTTPS listener, then install SSL certificates on the NLB and EC2 instances
5: Use an Application Load Balancer (ALB) with a TCP listener, then terminate SSL on EC2 instances

42
Q

An application running video-editing software is using significant memory on an Amazon EC2 instance. How can a user track memory usage on the Amazon EC2 instance?

1: Install the CloudWatch agent on the EC2 instance to push memory usage to an Amazon CloudWatch custom metric
2: Use an instance type that supports memory usage reporting to a metric by default
3: Call Amazon CloudWatch to retrieve the memory usage metric data that exists for the EC2 instance
4: Assign an IAM role to the EC2 instance with an IAM policy granting access to the desired metric

A

1: Install the CloudWatch agent on the EC2 instance to push memory usage to an Amazon CloudWatch custom metric

2: Use an instance type that supports memory usage reporting to a metric by default
3: Call Amazon CloudWatch to retrieve the memory usage metric data that exists for the EC2 instance
4: Assign an IAM role to the EC2 instance with an IAM policy granting access to the desired metric
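For option 1, the CloudWatch agent is driven by a JSON configuration file. A minimal example that collects memory utilization, which the agent then publishes as a custom metric:

```python
import json

# Minimal CloudWatch agent configuration collecting memory utilization
# every 60 seconds (published under the agent's default CWAgent namespace).
agent_config = {
    "metrics": {
        "metrics_collected": {
            "mem": {
                "measurement": ["mem_used_percent"],
                "metrics_collection_interval": 60,
            }
        }
    }
}
print(json.dumps(agent_config, indent=2))
```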

43
Q

A company hosts a popular web application that connects to an Amazon RDS MySQL DB instance running in a private VPC subnet that was created with default ACL settings. The web servers must be accessible only to customers on an SSL connection. The database should only be accessible to web servers in a public subnet. Which solution meets these requirements without impacting other running applications? (Select TWO)

1: Create a DB server security group that allows MySQL port 3306 inbound and specify the source as a web server security group
2: Create a web server security group that allows HTTPS port 443 inbound traffic from Anywhere (0.0.0.0/0) and apply it to the web servers
3: Create a network ACL on the web server’s subnet, allow HTTPS port 443 inbound, and specify the source as 0.0.0.0/0
4: Create a DB server security group that allows the HTTPS port 443 inbound and specify the source as a web server security group
5: Create a network ACL on the DB subnet, allow MySQL port 3306 inbound for web servers, and deny all outbound traffic

A

1: Create a DB server security group that allows MySQL port 3306 inbound and specify the source as a web server security group

2: Create a web server security group that allows HTTPS port 443 inbound traffic from Anywhere (0.0.0.0/0) and apply it to the web servers

3: Create a network ACL on the web server’s subnet, allow HTTPS port 443 inbound, and specify the source as 0.0.0.0/0
4: Create a DB server security group that allows the HTTPS port 443 inbound and specify the source as a web server security group
5: Create a network ACL on the DB subnet, allow MySQL port 3306 inbound for web servers, and deny all outbound traffic

44
Q

The security team in your company is defining new policies for enabling security analysis, resource change tracking, and compliance auditing. They would like to gain visibility into user activity by recording API calls made within the company’s AWS account. The information that is logged must be encrypted. This requirement applies to all AWS regions in which your company has services running. How will you implement this request? (Select TWO)

1: Create a CloudTrail trail in each region in which you have services
2: Enable encryption with a single KMS key
3: Create a CloudTrail trail and apply it to all regions
4: Enable encryption with multiple KMS keys
5: Use CloudWatch to monitor API calls

A

1: Create a CloudTrail trail in each region in which you have services

2: Enable encryption with a single KMS key

3: Create a CloudTrail trail and apply it to all regions

4: Enable encryption with multiple KMS keys
5: Use CloudWatch to monitor API calls
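A single trail applied to all regions, encrypted with a single KMS key, maps to CloudTrail's CreateTrail parameters roughly as follows (the trail name, bucket, and key ARN are placeholders):

```python
# CreateTrail parameters: one trail covering every region, with log file
# encryption under a single KMS key. Names and the ARN are illustrative.
trail_params = {
    "Name": "org-audit-trail",
    "S3BucketName": "my-cloudtrail-logs",
    "IsMultiRegionTrail": True,   # one trail applied to all regions
    "KmsKeyId": "arn:aws:kms:us-east-1:123456789012:key/placeholder",
}
print(trail_params["IsMultiRegionTrail"])  # -> True
```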

45
Q

An organization is migrating data to the AWS cloud. An on-premises application uses Network File System shares and must access the data without code changes. The data is critical and is accessed frequently. Which storage solution should a Solutions Architect recommend to maximize availability and durability?

1: Amazon Elastic Block Store
2: Amazon Simple Storage Service
3: AWS Storage Gateway – File Gateway
4: Amazon Elastic File System

A

1: Amazon Elastic Block Store
2: Amazon Simple Storage Service

3: AWS Storage Gateway – File Gateway

4: Amazon Elastic File System

46
Q

A bespoke application consisting of three tiers is being deployed in a VPC. You need to create three security groups. You have configured the WebSG (web server) security group and now need to configure the AppSG (application tier) and DBSG (database tier). The application runs on port 1030 and the database runs on port 3306. Which rules should be created according to security best practice? (Select TWO)

1: On the DBSG security group, create a custom TCP rule for TCP 3306 and configure the AppSG security group as the source
2: On the AppSG security group, create a custom TCP rule for TCP 1030 and configure the WebSG security group as the source
3: On the AppSG security group, create a custom TCP rule for TCP 1030 and configure the DBSG security group as the source
4: On the DBSG security group, create a custom TCP rule for TCP 3306 and configure the WebSG security group as the source
5: On the WebSG security group, create a custom TCP rule for TCP 1030 and configure the AppSG security group as the source

A

1: On the DBSG security group, create a custom TCP rule for TCP 3306 and configure the AppSG security group as the source

2: On the AppSG security group, create a custom TCP rule for TCP 1030 and configure the WebSG security group as the source

3: On the AppSG security group, create a custom TCP rule for TCP 1030 and configure the DBSG security group as the source
4: On the DBSG security group, create a custom TCP rule for TCP 3306 and configure the WebSG security group as the source
5: On the WebSG security group, create a custom TCP rule for TCP 1030 and configure the AppSG security group as the source
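The chained rules in options 1 and 2 can be written down and sanity-checked with a small stand-in model (not an AWS API call; SG names follow the question):

```python
# Chained security group rules: each tier only accepts traffic from the
# security group of the tier directly in front of it.
rules = [
    {"group": "AppSG", "port": 1030, "source": "WebSG"},
    {"group": "DBSG", "port": 3306, "source": "AppSG"},
]

def allowed(src_sg, dst_sg, port, rules):
    """Check whether a flow is permitted by the chained rules."""
    return any(r["group"] == dst_sg and r["source"] == src_sg and r["port"] == port
               for r in rules)

print(allowed("WebSG", "AppSG", 1030, rules))  # -> True
print(allowed("WebSG", "DBSG", 3306, rules))   # -> False (web tier cannot skip a tier)
```

The second check is why referencing security groups (rather than CIDR ranges) as sources is the best practice here: the web tier has no path straight to the database.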

47
Q

A Solutions Architect needs to design a solution that will allow Website Developers to deploy static web content without managing server infrastructure. All web content must be accessed over HTTPS with a custom domain name. The solution should be scalable as the company continues to grow. Which of the following will provide the MOST cost-effective solution?

1: Amazon S3 with a static website
2: Amazon CloudFront with an Amazon S3 bucket origin
3: AWS Lambda function with Amazon API Gateway
4: Amazon EC2 instance with Amazon EBS

A

1: Amazon S3 with a static website

2: Amazon CloudFront with an Amazon S3 bucket origin

3: AWS Lambda function with Amazon API Gateway
4: Amazon EC2 instance with Amazon EBS

48
Q

A Solutions Architect must design a storage solution for incoming billing reports in CSV format. The data will be analyzed infrequently and discarded after 30 days. Which combination of services will be MOST cost-effective in meeting these requirements?

1: Write the files to an S3 bucket and use Amazon Athena to query the data
2: Import the logs to an Amazon Redshift cluster
3: Use AWS Data Pipeline to import the logs into a DynamoDB table
4: Import the logs into an RDS MySQL instance

A

1: Write the files to an S3 bucket and use Amazon Athena to query the data

2: Import the logs to an Amazon Redshift cluster
3: Use AWS Data Pipeline to import the logs into a DynamoDB table
4: Import the logs into an RDS MySQL instance

49
Q

A Solutions Architect must design a solution that encrypts data in Amazon S3. Corporate policy mandates encryption keys be generated and managed on premises. Which solution should the Architect use to meet the security requirements?

1: SSE-C: Server-side encryption with customer-provided encryption keys
2: SSE-S3: Server-side encryption with Amazon-managed master key
3: SSE-KMS: Server-side encryption with AWS KMS managed keys
4: AWS CloudHSM

A

1: SSE-C: Server-side encryption with customer-provided encryption keys

2: SSE-S3: Server-side encryption with Amazon-managed master key
3: SSE-KMS: Server-side encryption with AWS KMS managed keys
4: AWS CloudHSM
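With SSE-C the key travels with every request as a set of headers; S3 uses it to encrypt or decrypt but never stores it. A sketch of those headers (the header names are S3's documented SSE-C headers; the key here is random for illustration, where the real key would come from the on-premises key management system):

```python
import base64
import hashlib
import os

# SSE-C request headers for a PUT or GET: a 256-bit customer-provided key,
# base64-encoded, plus an MD5 digest S3 uses to verify transmission.
key = os.urandom(32)  # stand-in for a key generated and managed on premises
headers = {
    "x-amz-server-side-encryption-customer-algorithm": "AES256",
    "x-amz-server-side-encryption-customer-key": base64.b64encode(key).decode(),
    "x-amz-server-side-encryption-customer-key-MD5": base64.b64encode(
        hashlib.md5(key).digest()).decode(),
}
print(headers["x-amz-server-side-encryption-customer-algorithm"])  # -> AES256
```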

50
Q

A Solutions Architect must select the most appropriate database service for two use cases. A team of data scientists perform complex queries on a data warehouse that take several hours to complete. Another team of scientists need to run fast, repeat queries and update dashboards for customer support staff. Which solution delivers these requirements MOST cost-effectively?

1: RedShift for both use cases
2: RDS for both use cases
3: RedShift for the analytics use case and ElastiCache in front of RedShift for the customer support dashboard
4: RedShift for the analytics use case and RDS for the customer support dashboard

A

1: RedShift for both use cases

2: RDS for both use cases
3: RedShift for the analytics use case and ElastiCache in front of RedShift for the customer support dashboard
4: RedShift for the analytics use case and RDS for the customer support dashboard

51
Q

A DynamoDB database you manage is randomly experiencing heavy read requests that are causing latency. What is the simplest way to alleviate the performance issues?

1: Create DynamoDB read replicas
2: Enable EC2 Auto Scaling for DynamoDB
3: Create an ElastiCache cluster in front of DynamoDB
4: Enable DynamoDB DAX

A

1: Create DynamoDB read replicas
2: Enable EC2 Auto Scaling for DynamoDB
3: Create an ElastiCache cluster in front of DynamoDB

4: Enable DynamoDB DAX

52
Q

A customer has a production application running on Amazon EC2. The application frequently overwrites and deletes data, and it is essential that the application receives the most up-to-date version of the data whenever it is requested. Which service is most appropriate for these requirements?

1: Amazon RedShift
2: Amazon S3
3: AWS Storage Gateway
4: Amazon RDS

A

1: Amazon RedShift
2: Amazon S3
3: AWS Storage Gateway

4: Amazon RDS

53
Q

A Solutions Architect is developing a new web application on AWS that needs to be able to scale to support unpredictable workloads. The Architect prefers to focus on value-add activities such as software development and product roadmap development rather than provisioning and managing instances. Which solution is most appropriate for this use case?

1: Amazon API Gateway and AWS Lambda
2: Elastic Load Balancing with Auto Scaling groups and Amazon EC2
3: Amazon CloudFront and AWS Lambda
4: Amazon API Gateway and Amazon EC2

A

1: Amazon API Gateway and AWS Lambda

2: Elastic Load Balancing with Auto Scaling groups and Amazon EC2
3: Amazon CloudFront and AWS Lambda
4: Amazon API Gateway and Amazon EC2

54
Q

A client needs to implement a shared directory system. Requirements are that it should provide a hierarchical structure, support strong data consistency, and be accessible from multiple accounts, regions and on-premises servers using their AWS Direct Connect link. Which storage service would you recommend to the client?

1: AWS Storage Gateway
2: Amazon EBS
3: Amazon EFS
4: Amazon S3

A

1: AWS Storage Gateway
2: Amazon EBS

3: Amazon EFS

4: Amazon S3

55
Q

A customer runs an API on their website that receives around 1,000 requests each day and has an average response time of 50 ms. It is currently hosted on a single c4.large EC2 instance. How can high availability be added to the architecture at the LOWEST cost?

1: Create an Auto Scaling group with a minimum of one instance and a maximum of two instances, then use an Application Load Balancer to balance the traffic
2: Recreate the API using API Gateway and use AWS Lambda as the service back-end
3: Create an Auto Scaling group with a maximum of two instances, then use an Application Load Balancer to balance the traffic
4: Recreate the API using API Gateway and integrate the API with the existing back-end

A

1: Create an Auto Scaling group with a minimum of one instance and a maximum of two instances, then use an Application Load Balancer to balance the traffic

2: Recreate the API using API Gateway and use AWS Lambda as the service back-end

3: Create an Auto Scaling group with a maximum of two instances, then use an Application Load Balancer to balance the traffic
4: Recreate the API using API Gateway and integrate the API with the existing back-end
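The Auto Scaling group in option 1 might be configured roughly as follows; identifiers are placeholders, and MinSize=1 with MaxSize=2 keeps cost near the single-instance baseline while still replacing a failed instance automatically:

```python
# CreateAutoScalingGroup parameters: one instance normally, a second only
# when needed, registered with the ALB via its target group.
# All identifiers are illustrative placeholders.
asg_params = {
    "AutoScalingGroupName": "api-asg",
    "MinSize": 1,
    "MaxSize": 2,
    "HealthCheckType": "ELB",  # replace instances the ALB marks unhealthy
    "TargetGroupARNs": [
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/api/placeholder"
    ],
    "VPCZoneIdentifier": "subnet-az1,subnet-az2",  # two AZs for availability
}
print(asg_params["MinSize"], asg_params["MaxSize"])  # -> 1 2
```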

56
Q

A large media site has multiple applications running on Amazon ECS. A Solutions Architect needs to use content metadata to route traffic to specific services. What is the MOST efficient method to fulfil this requirement?

1: Use an AWS Classic Load Balancer with a host-based routing rule to route traffic to the correct service
2: Use the AWS CLI to update an Amazon Route 53 hosted zone to route traffic as services get updated
3: Use an AWS Application Load Balancer with a path-based routing rule to route traffic to the correct service
4: Use Amazon CloudFront to manage and route traffic to the correct service

A

1: Use an AWS Classic Load Balancer with a host-based routing rule to route traffic to the correct service
2: Use the AWS CLI to update an Amazon Route 53 hosted zone to route traffic as services get updated

3: Use an AWS Application Load Balancer with a path-based routing rule to route traffic to the correct service

4: Use Amazon CloudFront to manage and route traffic to the correct service

57
Q

You have created a file system using Amazon Elastic File System (EFS) which will hold home directories for users. What else needs to be done to enable users to save files to the EFS file system?

1: Create a separate EFS file system for each user and grant read-write-execute permissions on the root directory to the respective user. Then mount the file system to the users’ home directory
2: Modify permissions on the root directory to grant read-write-execute permissions to the users. Then create a subdirectory and mount it to the users’ home directory
3: Instruct the users to create a subdirectory on the file system and mount the subdirectory to their home directory
4: Create a subdirectory for each user and grant read-write-execute permissions to the users. Then mount the subdirectory to the users’ home directory

A

1: Create a separate EFS file system for each user and grant read-write-execute permissions on the root directory to the respective user. Then mount the file system to the users’ home directory
2: Modify permissions on the root directory to grant read-write-execute permissions to the users. Then create a subdirectory and mount it to the users’ home directory
3: Instruct the users to create a subdirectory on the file system and mount the subdirectory to their home directory

4: Create a subdirectory for each user and grant read-write-execute permissions to the users. Then mount the subdirectory to the users’ home directory

58
Q

An application that you will be deploying in your VPC requires 14 EC2 instances that must be placed on distinct underlying hardware to reduce the impact of a hardware node failure. The instances will use varying instance types. What configuration caters to these requirements while taking cost-effectiveness into account?

1: You cannot control which nodes your instances are placed on
2: Use dedicated hosts and deploy each instance on a dedicated host
3: Use a Spread Placement Group across two AZs
4: Use a Cluster Placement Group within a single AZ

A

1: You cannot control which nodes your instances are placed on
2: Use dedicated hosts and deploy each instance on a dedicated host

3: Use a Spread Placement Group across two AZs

4: Use a Cluster Placement Group within a single AZ

59
Q

A VPC has a fleet of EC2 instances running in a private subnet that need to connect to Internet-based hosts using the IPv6 protocol. What needs to be configured to enable this connectivity?

1: VPN CloudHub
2: A NAT Gateway
3: An Egress-Only Internet Gateway
4: AWS Direct Connect

A

1: VPN CloudHub
2: A NAT Gateway

3: An Egress-Only Internet Gateway

4: AWS Direct Connect

60
Q

An AWS workload in a VPC is running a legacy database on an Amazon EC2 instance. Data is stored on a 2000GB Amazon EBS (gp2) volume. At peak load times, logs show excessive wait time. What should be implemented to improve database performance using persistent storage?

1: Change the EC2 instance type to one with burstable performance
2: Change the EC2 instance type to one with EC2 instance store volumes
3: Migrate the data on the Amazon EBS volume to an SSD-backed volume
4: Migrate the data on the EBS volume to provisioned IOPS SSD (io1)

A

1: Change the EC2 instance type to one with burstable performance
2: Change the EC2 instance type to one with EC2 instance store volumes
3: Migrate the data on the Amazon EBS volume to an SSD-backed volume

4: Migrate the data on the EBS volume to provisioned IOPS SSD (io1)

61
Q

Developers regularly create and update CloudFormation stacks using API calls. For security reasons you need to ensure that users are restricted to a specified template. How can this be achieved?

1: Store the template on Amazon S3 and use a bucket policy to restrict access
2: Create an IAM policy with a Condition: ResourceTypes parameter
3: Create an IAM policy with a Condition: TemplateURL parameter
4: Create an IAM policy with a Condition: StackPolicyURL parameter

A

1: Store the template on Amazon S3 and use a bucket policy to restrict access
2: Create an IAM policy with a Condition: ResourceTypes parameter

3: Create an IAM policy with a Condition: TemplateURL parameter

4: Create an IAM policy with a Condition: StackPolicyURL parameter
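Option 3 relies on the cloudformation:TemplateURL condition key, which restricts stack operations to a specific template. A sketch of such a policy (bucket and template names are placeholders):

```python
import json

# IAM policy allowing CloudFormation stack creation/updates only when the
# request uses one approved template URL. Bucket/object names are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["cloudformation:CreateStack", "cloudformation:UpdateStack"],
        "Resource": "*",
        "Condition": {
            "StringEquals": {
                "cloudformation:TemplateURL":
                    "https://s3.amazonaws.com/approved-bucket/approved.template"
            }
        },
    }],
}
print(json.dumps(policy, indent=2))
```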

62
Q

A data-processing application runs on an i3.large EC2 instance with a single 100 GB EBS gp2 volume. The application stores temporary data in a small database (less than 30 GB) located on the EBS root volume. The application is struggling to process the data fast enough, and a Solutions Architect has determined that the I/O speed of the temporary database is the bottleneck. What is the MOST cost-efficient way to improve the database response times?

1: Put the temporary database on a new 50-GB EBS io1 volume with a 3000 IOPS allocation
2: Move the temporary database onto instance storage
3: Put the temporary database on a new 50-GB EBS gp2 volume
4: Enable EBS optimization on the instance and keep the temporary files on the existing volume

A

1: Put the temporary database on a new 50-GB EBS io1 volume with a 3000 IOPS allocation

2: Move the temporary database onto instance storage

3: Put the temporary database on a new 50-GB EBS gp2 volume
4: Enable EBS optimization on the instance and keep the temporary files on the existing volume
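The i3 family ships with NVMe SSD instance store volumes at no extra cost, which is why this option is the most cost-efficient for temporary data. A sketch of preparing one for the temporary database, assuming the instance store appears as `/dev/nvme1n1` (check with `lsblk`; device naming varies). Note that instance store data is lost when the instance stops or terminates, which is acceptable here because the database is temporary:

```shell
# Sketch: format and mount the i3 instance store volume for the temp database.
# Device name is an assumption - verify with lsblk first.
sudo mkfs -t xfs /dev/nvme1n1
sudo mkdir -p /mnt/tmp-db
sudo mount /dev/nvme1n1 /mnt/tmp-db
```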

63
Q

A Solutions Architect is designing a shared service for hosting containers from several customers on Amazon ECS. These containers will use several AWS services. A container from one customer must not be able to access data from another customer. Which solution should the Architect use to meet the requirements?

1: IAM roles for tasks
2: IAM roles for EC2 instances
3: IAM Instance Profile for EC2 instances
4: Network ACL

A

1: IAM roles for tasks

2: IAM roles for EC2 instances
3: IAM Instance Profile for EC2 instances
4: Network ACL
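IAM roles for tasks attach a role at the task level, so each customer's containers receive only their own credentials even when they share the same container instances. A sketch using placeholder family, role ARN, and image names:

```shell
# Sketch: register a task definition with a per-task IAM role so one
# customer's containers cannot use another customer's permissions.
# The role ARN, account ID, and image are placeholders.
aws ecs register-task-definition \
    --family customer-a-app \
    --task-role-arn arn:aws:iam::123456789012:role/customer-a-task-role \
    --container-definitions '[{
        "name": "app",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/customer-a:latest",
        "memory": 512
    }]'
```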

64
Q

An EC2 instance that you manage has an IAM role attached to it that provides it with access to Amazon S3 for saving log data to a bucket. A change in the application architecture means that you now need to provide the additional ability for the application to securely make API requests to Amazon API Gateway. Which two methods could you use to resolve this challenge? (Select TWO)

1: Delegate access to the EC2 instance from the API Gateway management console
2: Create an IAM role with a policy granting permissions to Amazon API Gateway and add it to the EC2 instance as an additional IAM role
3: You cannot modify the IAM role assigned to an EC2 instance after it has been launched. You’ll need to recreate the EC2 instance and assign a new IAM role
4: Add an IAM policy to the existing IAM role that the EC2 instance is using granting permissions to access Amazon API Gateway
5: Create a new IAM role with multiple IAM policies attached that grants access to Amazon S3 and Amazon API Gateway, and replace the existing IAM role that is attached to the EC2 instance

A

1: Delegate access to the EC2 instance from the API Gateway management console
2: Create an IAM role with a policy granting permissions to Amazon API Gateway and add it to the EC2 instance as an additional IAM role
3: You cannot modify the IAM role assigned to an EC2 instance after it has been launched. You’ll need to recreate the EC2 instance and assign a new IAM role

4: Add an IAM policy to the existing IAM role that the EC2 instance is using granting permissions to access Amazon API Gateway

5: Create a new IAM role with multiple IAM policies attached that grants access to Amazon S3 and Amazon API Gateway, and replace the existing IAM role that is attached to the EC2 instance
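Adding a policy to the role the instance already uses (option 4) requires no change to the instance itself. A sketch of granting API Gateway invoke access with an inline policy; the role name, policy name, and region/account values are placeholders:

```shell
# Sketch: extend the instance's existing role with API Gateway access.
# Role name, policy name, and the resource ARN are placeholders.
aws iam put-role-policy \
    --role-name my-ec2-logging-role \
    --policy-name allow-apigateway-invoke \
    --policy-document '{
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "execute-api:Invoke",
            "Resource": "arn:aws:execute-api:us-east-1:123456789012:*"
        }]
    }'
```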

65
Q

An application is hosted on the U.S. west coast. Users there have no problems, but users on the east coast are experiencing performance issues. Users have reported slow response times for the search bar autocomplete and when displaying account listings. How can you improve the performance for users on the east coast?

1: Host the static content in an Amazon S3 bucket and distribute it using CloudFront
2: Set up cross-region replication and use Route 53 geolocation routing
3: Create a DynamoDB Read Replica in the U.S. east region
4: Create an ElastiCache database in the U.S. east region

A

1: Host the static content in an Amazon S3 bucket and distribute it using CloudFront
2: Set up cross-region replication and use Route 53 geolocation routing
3: Create a DynamoDB Read Replica in the U.S. east region

4: Create an ElastiCache database in the U.S. east region
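A cache deployed close to the east coast users can serve repeated autocomplete and account-listing queries without cross-country round trips. A sketch of creating a small Redis cluster; the cluster ID, node type, and region are illustrative placeholders:

```shell
# Sketch: create a small Redis cluster near the east coast users to
# cache autocomplete results and account listings. Names are placeholders.
aws elasticache create-cache-cluster \
    --cache-cluster-id east-coast-cache \
    --engine redis \
    --cache-node-type cache.t3.micro \
    --num-cache-nodes 1 \
    --region us-east-1
```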