Domain 3: Design High-Performing Architectures Flashcards

1
Q

Your company has decided to migrate its on-premises data centers to the AWS cloud. The data centers host several virtual machines, including vSphere VMs and Hyper-V VMs. You have been tasked with finding an efficient and easy method to migrate all the VMs to AWS as Amazon EC2 AMIs while minimizing potential downtime. Which AWS service would be the best fit for this scenario?

Enable AWS DMS to incrementally perform migrations of all VMs in the data center.

Use AWS Refactor Service (AWS RFS) to incrementally perform migrations of all VMs in the data center.

Start the process via AWS Migration Hub to incrementally perform migrations of all VMs in the data center to AWS as AMIs for Amazon EC2.

Use the AWS Application Migration Service (AWS MGN) to perform migrations of all VMs in the data center to AWS.

A

Incorrect: Start the process via AWS Migration Hub to incrementally perform migrations of all VMs in the data center to AWS as AMIs for Amazon EC2.

AWS Migration Hub does not perform migrations itself; it provides a single place to view and track existing and new migration efforts across other AWS services.

Correct answer: Use the AWS Application Migration Service (AWS MGN) to perform migrations of all VMs in the data center to AWS.

AWS Application Migration Service (AWS MGN) simplifies the migration of virtual machines from on-premises data centers to AWS. It allows for a near-zero downtime migration, which is crucial for minimizing disruptions during the migration process. The service supports various types of VMs including vSphere VMs and Hyper-V VMs, making it a suitable choice for migrating mixed VM environments to AWS. AWS MGN replicates servers at a high frequency to keep the source and target servers in sync, ensuring a seamless cutover once migration is complete.
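As a rough illustration (not part of the original card), here is a minimal boto3 sketch of what driving an MGN cutover from code might look like, assuming the AWS Replication Agent is already installed on the VMs and initial replication has completed; the region and source server ID are placeholders:

```python
import boto3

# Hypothetical sketch: list replicating source servers and launch a cutover.
mgn = boto3.client("mgn", region_name="us-east-1")

servers = mgn.describe_source_servers(filters={})
for server in servers["items"]:
    print(server["sourceServerID"],
          server["dataReplicationInfo"]["dataReplicationState"])

# Launch cutover instances for a fully synced server (near-zero downtime:
# the source keeps serving traffic until you redirect to the new instance).
mgn.start_cutover(sourceServerIDs=["s-0123456789abcdef0"])
```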

2
Q

You work for an advertising company that has a large amount of data hosted on-premises. They would like to move this data to AWS and then decommission the on-premises data center. What would be the easiest way to achieve this?

AWS Storage Gateway - Cache Gateway

Direct Connect

AWS Storage Gateway - Tape Gateway

AWS DataSync

A

Incorrect: AWS Storage Gateway - Cache Gateway

This would not be a good solution, as it does not achieve a full migration to AWS. You would also have to decommission your Storage Gateway when you decommission your data center.

Correct answer: AWS DataSync

AWS DataSync is purpose-built for migrating large datasets into AWS storage services such as Amazon S3, Amazon EFS, or Amazon FSx. Since the goal is a one-time migration followed by decommissioning the data center, it is the easiest way to move the data.
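For illustration, a minimal boto3 sketch of a one-time DataSync transfer, assuming the source and destination locations (for example, an on-premises NFS share and an S3 bucket) were already created; the ARNs are placeholders:

```python
import boto3

datasync = boto3.client("datasync", region_name="us-east-1")

# Define a task joining a pre-created source location to a destination.
task = datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:123456789012:location/loc-src",
    DestinationLocationArn="arn:aws:datasync:us-east-1:123456789012:location/loc-dst",
    Name="migrate-ad-data",
)

# Kick off the transfer; DataSync handles encryption, integrity
# checks, and retries during the copy.
datasync.start_task_execution(TaskArn=task["TaskArn"])
```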

3
Q

Jessica is a Database Administrator who has been tasked with migrating all of the team's Oracle databases, running on on-premises virtual machines, to the AWS cloud. During the migration effort, she has been asked to find an automated way to convert to an Amazon Aurora PostgreSQL database instead of Oracle, as well as to replicate any ongoing transactions during the migration itself.

Which AWS service configurations would Jessica use in this scenario? CHOOSE 2

Use the Amazon Aurora Serverless migration conversion tool to easily convert to a PostgreSQL database during the migration.

Create a new AWS DMS task using the Migrate existing data and replicate ongoing changes (CDC) option to migrate to AWS while capturing changed data.

Use the AWS DMS SCT to enable CDC on the migration task so that changed data is captured during the migration efforts.

Manually convert the Oracle database to PostgreSQL on-premises. Use AWS MGN to migrate the server to the AWS cloud. Then, create an AWS DMS task to replicate data changes only and configure the source as the on-premises PostgreSQL database VM and the target as the Amazon EC2 instance running PostgreSQL.

Use the AWS SCT to convert the Oracle database to a PostgreSQL compatible database to deploy using Amazon Aurora.

A

Correct answer: Create a new AWS DMS task using the Migrate existing data and replicate ongoing changes (CDC) option to migrate to AWS while capturing changed data.

AWS DMS offers the ability to enable Change Data Capture (CDC) during migrations, which replicates ongoing data changes from the source to your target data store, ensuring all data stays in sync after the migration.

Reference: [AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Task.CDC.html)

Incorrect: Use the AWS DMS SCT to enable CDC on the migration task so that changed data is captured during the migration efforts.

The AWS SCT does not perform migrations; it is only used for converting database schemas and engine types.


Correct answer: Use the AWS SCT to convert the Oracle database to a PostgreSQL-compatible database to deploy using Amazon Aurora.

Using the AWS SCT allows users to convert source database schemas to a new target database schema; it can even convert relational schemas to non-relational ones. There are many supported conversions, including Oracle to PostgreSQL.

Reference: Schema Conversion Tool
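To tie the two correct answers together, here is a minimal boto3 sketch of the DMS task (the schema conversion itself happens separately in the AWS SCT); the endpoint and replication instance ARNs are placeholders:

```python
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

# Select every table in every schema; rules can be narrowed as needed.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-postgresql",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INST",
    # Full load plus CDC: copy existing data, then keep replicating
    # ongoing transactions until cutover.
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)
```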


4
Q

You have an online store that has recently been featured on a major national news channel, and since then, traffic has been going through the roof. The store consists of a fleet of EC2 instances behind an Auto Scaling group and an Application Load Balancer, which then connects to a single RDS instance. The store is struggling with the demand, and you believe this could be a database performance issue. Which option below would help scale your relational database?

Migrate the database to Aurora Serverless.

Scale your RDS database out so that Multi-AZ is available.

Refactor the database to DynamoDB and migrate the database to DynamoDB with DAX enabled.

Add read replicas to the RDS database and update your web application to send read traffic to the read replicas.

A

Incorrect: Scale your RDS database out so that Multi-AZ is available.

This would increase availability but not performance; the Multi-AZ standby is not readable, so it does not help scale the database.


Correct answer: Add read replicas to the RDS database and update your web application to send read traffic to the read replicas.

This is the correct choice: read replicas offload read traffic from the primary instance, allowing a read-heavy relational database to scale horizontally.

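For illustration, a minimal boto3 sketch of adding a read replica, with hypothetical instance identifiers:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a read replica of the existing primary instance.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="store-db-replica-1",
    SourceDBInstanceIdentifier="store-db",
)

# The application then sends reads to the replica's endpoint and
# writes to the primary, spreading the query load.
```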

5
Q

Your boss recently asked you to investigate how to move your containerized application into AWS. During this migration, you’ll need to be able to easily move containers back and forth between on-premises and AWS. It has also been requested that you use an open-source container orchestration service. Which AWS tool would you pick to meet these requirements?

ECS

EC2 and Docker Swarm

EKS

ECR

A

EKS

Amazon EKS is a managed version of the open-source container orchestration tool Kubernetes. Because EKS runs standard Kubernetes, workloads can be moved easily between on-premises clusters and AWS.

https://aws.amazon.com/eks/
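For illustration, a minimal boto3 sketch of creating an EKS control plane; the cluster name, role ARN, and subnet IDs are placeholders, and node groups are added separately:

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

eks.create_cluster(
    name="migrated-app",
    roleArn="arn:aws:iam::123456789012:role/eksClusterRole",
    resourcesVpcConfig={"subnetIds": ["subnet-aaa", "subnet-bbb"]},
)
# Because EKS runs upstream Kubernetes, the same manifests used
# on-premises can be applied here with kubectl unchanged.
```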

6
Q

You are designing an architecture for a financial company that provides a day trading application to customers. After viewing the traffic patterns for the existing application, you notice that traffic is fairly steady throughout the day, with the exception of large spikes at the opening of the market in the morning and at closing around 3 pm. Your architecture will include an Auto Scaling Group of EC2 instances. How can you configure the Auto Scaling Group to ensure that system performance meets the increased demands at opening and closing of the market?

Use a load balancer to ensure that the load is distributed evenly during high-traffic periods.

Configure your Auto Scaling Group to have a desired size which will be able to meet the demands of the high-traffic periods.

Configure a Dynamic Scaling Policy to scale based on CPU Utilization.

Use a predictive scaling policy on the Auto Scaling Group to meet opening and closing spikes.

A

Use a predictive scaling policy on the Auto Scaling Group to meet opening and closing spikes.

Predictive scaling uses machine learning models trained on data collected from your actual EC2 usage, further informed by billions of data points drawn from Amazon's own observations, to predict your expected traffic (and EC2 usage), including daily and weekly patterns. The model needs at least one day of historical data to start making predictions and is re-evaluated every 24 hours to create a forecast for the next 48 hours.

What we can gather from the question is that the spikes at the opening and closing of the market can affect performance. Dynamic scaling would work, but remember that scaling out takes a little time; since we know exactly when the spikes occur, predictive scaling lets us be proactive and be ready for them. If scheduled scaling were an option here, it would also be a great choice, but on the AWS exam you won't always be given the ideal solution.

https://aws.amazon.com/blogs/aws/new-predictive-scaling-for-ec2-powered-by-machine-learning/
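For illustration, a minimal boto3 sketch of attaching a predictive scaling policy to an existing Auto Scaling group; the group name and the 50% CPU target are placeholder values:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="day-trading-asg",
    PolicyName="market-open-close-predictive",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [
            {
                "TargetValue": 50.0,
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization"
                },
            }
        ],
        # Forecast-only mode lets you sanity-check predictions before
        # allowing the policy to actually launch instances.
        "Mode": "ForecastOnly",
    },
)
```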

7
Q

You have decided to decouple your infrastructure and use a combination of EC2 instances behind an Auto Scaling group and SQS. You notice that a lot of your messages are being processed twice, and as you dig deeper, you also notice that many messages do not get processed at all. What SQS system could you set up to help debug this issue?

SQS LIFO queue

SQS FIFO queue

SQS dead-letter queue

SQS standard queue

A

SQS dead-letter queue

An SQS dead-letter queue is where other SQS queues (source queues) can send messages that can't be processed (consumed) successfully. It's great for debugging because it lets you isolate problem messages so you can determine why their processing fails.
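For illustration, a minimal boto3 sketch of wiring a dead-letter queue to an existing source queue; queue names and the account ID are placeholders:

```python
import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

# Create the dead-letter queue and look up its ARN.
dlq_url = sqs.create_queue(QueueName="orders-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# After 5 failed receives, a message moves to the DLQ instead of being
# retried forever, so you can inspect it and debug why processing fails.
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/orders",
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
        )
    },
)
```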

8
Q

You work for a national weather agency that has weather stations all over the world. The weather stations take temperature readings every 15 seconds from thousands of locations globally. The agency is moving to AWS, and they need a good time-series database service in which to store this information while keeping costs to a minimum. Which AWS service do you recommend they use?

Amazon Timestream

Amazon RDS

Amazon Neptune

Amazon QLDB

A

Amazon Timestream

Amazon Timestream is a purpose-built, serverless time-series database, making it the best and most cost-effective solution for storing time-series data such as these temperature readings.
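For illustration, a minimal boto3 sketch of writing a single temperature reading to Timestream; the database, table, and station names are placeholders:

```python
import time
import boto3

tsw = boto3.client("timestream-write", region_name="us-east-1")

tsw.write_records(
    DatabaseName="weather",
    TableName="temperatures",
    Records=[
        {
            "Dimensions": [{"Name": "station_id", "Value": "GLOBAL-00421"}],
            "MeasureName": "temperature_c",
            "MeasureValue": "21.4",
            "MeasureValueType": "DOUBLE",
            "Time": str(int(time.time() * 1000)),  # milliseconds since epoch
        }
    ],
)
```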

9
Q

You work for an experimental automotive company that is trying to create self-driving car technology using a combination of AWS and 5G connectivity. You need to deploy an application within the vehicles themselves that will need ultra-low latency using 5G to AWS resources. Which AWS service would best suit this need?

AWS Direct Connect

Amazon Neptune

AWS Wavelength

AWS Outposts

A

AWS Wavelength

AWS Wavelength embeds AWS compute and storage services within telecommunications providers' 5G networks, so application traffic from 5G devices reaches AWS services without leaving the carrier network. This delivers the ultra-low latency required here, making it the best service for this scenario.
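For illustration, a minimal boto3 sketch that lists the Wavelength Zones available to an account (zones must be opted into before you can place subnets and EC2 instances in them); the region is a placeholder:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Wavelength Zones appear as a special zone type of the parent region.
zones = ec2.describe_availability_zones(
    AllAvailabilityZones=True,
    Filters=[{"Name": "zone-type", "Values": ["wavelength-zone"]}],
)
for zone in zones["AvailabilityZones"]:
    print(zone["ZoneName"], zone["OptInStatus"])
```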

10
Q

You work for a large film company, and you are about to launch a new movie that is a popular sequel to a blockbuster from 20 years ago. You expect traffic for the film's launch to be extremely high. You would like to host the static portions of the website on S3, but the site also has dynamic components that require Layer 4 connectivity. You will need a load balancer that can handle extreme levels of performance. Which load balancer should you recommend?

Web Application Firewall

Network Load Balancer

Classic Load Balancer

Application Load Balancer

A

Network Load Balancer

This is the correct choice: a Network Load Balancer operates at Layer 4, supports TCP connectivity, and can handle millions of requests per second with very low latency, making it the best fit for extreme performance requirements.
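For illustration, a minimal boto3 sketch of creating a Network Load Balancer; the name and subnet IDs are placeholders, and listeners and target groups are configured separately:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

elbv2.create_load_balancer(
    Name="film-launch-nlb",
    Type="network",            # Layer 4 (TCP/UDP) load balancer
    Scheme="internet-facing",
    Subnets=["subnet-aaa", "subnet-bbb"],
)
```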

11
Q

A software company is developing an online “learn a new language” application. The application will be designed to teach up to 20 different languages for native English and Spanish speakers. It should leverage services that are capable of keeping up with 24,000 read units per second and 3,300 write units per second, and scale for spikes and off-peak. The application will also need to store user progress data. Which AWS services would meet these requirements?

S3

EBS

RDS

DynamoDB

A

DynamoDB

DynamoDB is highly scalable and provides very high performance, easily supporting 24,000 read units per second and 3,300 write units per second. A great case study for this is Duolingo, which uses Amazon DynamoDB to store 31 billion items in support of an online learning site that delivers lessons for 80 languages.

https://aws.amazon.com/solutions/case-studies/duolingo-case-study-dynamodb/
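For illustration, a minimal boto3 sketch of a user-progress table provisioned at the stated throughput; the table name and key schema are assumptions, and in practice you would pair this with DynamoDB auto scaling (or on-demand mode) to handle the spikes and off-peak periods:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

dynamodb.create_table(
    TableName="user-progress",
    AttributeDefinitions=[
        {"AttributeName": "user_id", "AttributeType": "S"},
        {"AttributeName": "lesson_id", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "user_id", "KeyType": "HASH"},
        {"AttributeName": "lesson_id", "KeyType": "RANGE"},
    ],
    # Matches the stated workload: 24,000 RCU / 3,300 WCU.
    ProvisionedThroughput={
        "ReadCapacityUnits": 24000,
        "WriteCapacityUnits": 3300,
    },
)
```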

12
Q

You work at a small local dairy as their IT consultant. They have been using IoT to monitor the health and wellbeing of their cows, and they have a large dataset of 50 TB that they need to move to AWS as quickly as possible for analysis. They have a cable internet connection that is capable of 5 Mbps upload speed. What would be the quickest and most efficient way to migrate this data to AWS?

Establish an AWS Direct Connect connection with the dairy.

Use a VPN concentrator to migrate the data to AWS.

Use S3 Transfer Acceleration, and migrate the data to S3.

Use AWS Snowball to securely migrate the data to AWS.

A

Use AWS Snowball to securely migrate the data to AWS.

This would be your best solution. At 5 Mbps, uploading 50 TB over the internet would take roughly two and a half years, so an offline transfer using a Snowball device is far quicker and more efficient.
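A quick back-of-the-envelope calculation shows why an online transfer is impractical here:

```python
# How long would 50 TB take over a 5 Mbps uplink?
data_tb = 50
upload_mbps = 5

bits_total = data_tb * 1e12 * 8              # 50 TB expressed in bits
seconds = bits_total / (upload_mbps * 1e6)   # seconds at 5 Mbps
print(f"{seconds / 86400:.0f} days")         # ~926 days, roughly 2.5 years
```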

13
Q

You have been assigned to create an architecture which uses load balancers to direct traffic to an Auto Scaling Group of EC2 instances across multiple Availability Zones. You were considering using an Application Load Balancer, but some of the requirements you have been given seem to point to a Classic Load Balancer. Which requirement would be better served by an Application Load Balancer?

Support for TCP and SSL listeners

Path-based routing

Support for custom security policies

Support for EC2-Classic

A

Path-based routing

Using an Application Load Balancer instead of a Classic Load Balancer has the following benefits:

- Support for path-based routing. You can configure rules for your listener that forward requests based on the URL in the request, enabling you to structure your application as smaller services and route requests to the correct service based on the content of the URL (see the sketch after this list).
- Support for host-based routing. You can configure rules for your listener that forward requests based on the host field in the HTTP header, enabling you to route requests to multiple domains using a single load balancer.
- Support for routing based on fields in the request, such as standard and custom HTTP headers and methods, query parameters, and source IP addresses.
- Support for routing requests to multiple applications on a single EC2 instance. You can register each instance or IP address with the same target group using multiple ports.
- Support for redirecting requests from one URL to another.
- Support for returning a custom HTTP response.
- Support for registering targets by IP address, including targets outside the VPC for the load balancer.
- Support for registering Lambda functions as targets.
- Support for authenticating users of your applications through their corporate or social identities before routing requests.
- Support for containerized applications. Amazon Elastic Container Service (Amazon ECS) can select an unused port when scheduling a task and register the task with a target group using this port, enabling you to make efficient use of your clusters.
- Support for monitoring the health of each service independently, as health checks are defined at the target group level and many CloudWatch metrics are reported at the target group level. Attaching a target group to an Auto Scaling group enables you to scale each service dynamically based on demand.
- Access logs contain additional information and are stored in compressed format.
- Improved load balancer performance.
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html
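For illustration, a minimal boto3 sketch of a path-based routing rule; the listener and target group ARNs are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Requests whose path matches /api/* are forwarded to a dedicated
# target group; all other traffic falls through to the default action.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/xxx/yyy",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[
        {
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/api/zzz",
        }
    ],
)
```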

14
Q

Your organization has begun to migrate all of their on-premises infrastructure and applications to AWS. Currently, in addition to the hundreds of Java and .NET applications, there are also many MySQL databases running on virtual machines with VMware vCenter. Your manager has tasked you with identifying the most efficient way of migrating the databases from on-premises to AWS, while another team is handling the application migrations.

Which service can you leverage to perform the actual migration of the databases to AWS with little to no impact?

AWS Application Discovery Service

AWS DMS

AWS MGN

AWS Migration Hub

A

AWS DMS

AWS Database Migration Service (AWS DMS) makes it easy to migrate your relational databases, data warehouses, NoSQL databases, and other data stores to or from the AWS cloud. You can do ongoing replications or just one-time migrations.

Reference: [AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html)
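For illustration, a minimal boto3 sketch of registering the on-premises MySQL database as a DMS source endpoint (the target endpoint and replication instance follow the same pattern); all connection details are placeholders:

```python
import boto3

dms = boto3.client("dms", region_name="us-east-1")

dms.create_endpoint(
    EndpointIdentifier="onprem-mysql-source",
    EndpointType="source",
    EngineName="mysql",
    ServerName="mysql.corp.example.com",
    Port=3306,
    Username="dms_user",
    Password="REPLACE_ME",  # placeholder; use Secrets Manager in practice
)
```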
15
Q

You are decoupling your infrastructure and decide to implement an SQS queue as part of the overall architecture to make your web application more resilient. You need to create an SQS queue that allows your messages to be processed exactly once and in order. Which SQS queue should you choose?

SQS dead-letter queue

SQS FIFO queue

SQS LIFO queue

SQS standard queue

A

SQS FIFO queue

FIFO (First-In-First-Out) queues are designed to enhance messaging between applications when the order of operations and events is critical or where duplicates can't be tolerated. They preserve the exact order in which messages are sent and received and provide exactly-once processing through message deduplication.
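For illustration, a minimal boto3 sketch of creating and using a FIFO queue; the queue name and message group ID are placeholders:

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

queue_url = sqs.create_queue(
    QueueName="orders.fifo",  # FIFO queue names must end in .fifo
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)["QueueUrl"]

# Messages sharing a MessageGroupId are delivered strictly in order;
# deduplication prevents the same message from being processed twice.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"order_id": 42}',
    MessageGroupId="orders",
)
```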
