saa-c02-part-16 Flashcards

1
Q

A user wants to list the IAM role that is attached to their Amazon EC2 instance. The user has login access to the EC2 instance but does not have IAM permissions.

What should a solutions architect do to retrieve this information?

  1. Run the following EC2 command:
    curl http://169.254.169.254/latest/meta-data/iam/info
  2. Run the following EC2 command:
    curl http://169.254.169.254/latest/user-data/iam/info
  3. Run the following EC2 command:
    http://169.254.169.254/latest/dynamic/instance-identity/
  4. Run the following AWS CLI command:
    aws iam get-instance-profile --instance-profile-name ExampleInstanceProfile
A
  1. Run the following EC2 command:
    curl http://169.254.169.254/latest/meta-data/iam/info

IAM role that is attached to their Amazon EC2 instance = meta-data
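
If IMDSv2 is enforced on the instance, the same lookup needs a session token first. A minimal sketch in Python (standard library only) using the documented IMDS token endpoint; it only works when run on the instance itself, which is why no IAM permissions are required:

    import urllib.request

    # IMDSv2: request a session token, then pass it on the metadata call
    token_req = urllib.request.Request(
        "http://169.254.169.254/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(token_req).read().decode()

    info_req = urllib.request.Request(
        "http://169.254.169.254/latest/meta-data/iam/info",
        headers={"X-aws-ec2-metadata-token": token},
    )
    print(urllib.request.urlopen(info_req).read().decode())  # includes InstanceProfileArn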

2
Q

A company has an application that is hosted on Amazon EC2 instances in two private subnets. A solutions architect must make the application available on the public internet with the least amount of administrative effort.

What should the solutions architect recommend?

  1. Create a load balancer and associate two public subnets from the same Availability Zones as the private instances. Add the private instances to the load balancer.
  2. Create a load balancer and associate two private subnets from the same Availability Zones as the private instances. Add the private instances to the load balancer.
  3. Create an Amazon Machine Image (AMI) of the instances in the private subnet and restore in the public subnet. Create a load balancer and associate two public subnets from the same Availability Zones as the public instances.
  4. Create an Amazon Machine Image (AMI) of the instances in the private subnet and restore in the public subnet. Create a load balancer and associate two private subnets from the same Availability Zones as the public instances.
A
  1. Create a load balancer and associate two public subnets from the same Availability Zones as the private instances. Add the private instances to the load balancer.

public internet = public subnet needed = 1,3

least amount of administrative effort = 1

ALBs go in public subnets

https://aws.amazon.com/premiumsupport/knowledge-center/public-load-balancer-private-ec2/
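
A minimal boto3 sketch of the winning setup; all IDs are placeholders. The load balancer sits in the public subnets while the registered targets stay in the private subnets:

    import boto3

    elbv2 = boto3.client("elbv2")

    # Internet-facing ALB in the PUBLIC subnets (same AZs as the private instances)
    lb = elbv2.create_load_balancer(
        Name="web-alb",
        Subnets=["subnet-public-az1", "subnet-public-az2"],  # placeholder IDs
        Scheme="internet-facing",
        Type="application",
    )

    tg = elbv2.create_target_group(
        Name="web-targets", Protocol="HTTP", Port=80,
        VpcId="vpc-0123456789abcdef0", TargetType="instance",  # placeholder VPC
    )
    tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

    # The targets themselves remain in the private subnets
    elbv2.register_targets(
        TargetGroupArn=tg_arn,
        Targets=[{"Id": "i-aaaa1111"}, {"Id": "i-bbbb2222"}],  # placeholder instances
    )
    elbv2.create_listener(
        LoadBalancerArn=lb["LoadBalancers"][0]["LoadBalancerArn"],
        Protocol="HTTP", Port=80,
        DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
    )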

3
Q

A company has two applications: a sender application that sends messages with payloads to be processed and a processing application intended to receive messages with payloads. The company wants to implement an AWS service to handle messages between the two applications. The sender application can send about 1,000 messages each hour. The messages may take up to 2 days to be processed. If the messages fail to process, they must be retained so that they do not impact the processing of any remaining messages.

Which solution meets these requirements and is the MOST operationally efficient?

  1. Set up an Amazon EC2 instance running a Redis database. Configure both applications to use the instance. Store, process, and delete the messages, respectively.
  2. Use an Amazon Kinesis data stream to receive the messages from the sender application. Integrate the processing application with the Kinesis Client Library (KCL).
  3. Integrate the sender and processor applications with an Amazon Simple Queue Service (Amazon SQS) queue. Configure a dead-letter queue to collect the messages that failed to process.
  4. Subscribe the processing application to an Amazon Simple Notification Service (Amazon SNS) topic to receive notifications to process. Integrate the sender application to write to the SNS topic.
A
  3. Integrate the sender and processor applications with an Amazon Simple Queue Service (Amazon SQS) queue. Configure a dead-letter queue to collect the messages that failed to process.

handle messages between the two applications = SQS
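
A minimal boto3 sketch of option 3; queue names and the retry threshold are assumptions. Retention stays above the 2-day processing window, and the redrive policy shunts failed messages aside so they do not block the rest:

    import boto3, json

    sqs = boto3.client("sqs")

    dlq = sqs.create_queue(QueueName="payloads-dlq")
    dlq_arn = sqs.get_queue_attributes(
        QueueUrl=dlq["QueueUrl"], AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    sqs.create_queue(
        QueueName="payloads",
        Attributes={
            # messages may take up to 2 days to process; retain for 4 days
            "MessageRetentionPeriod": str(4 * 24 * 3600),
            # after 3 failed receives, move the message to the DLQ
            "RedrivePolicy": json.dumps(
                {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "3"}
            ),
        },
    )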

4
Q

A company’s website hosted on Amazon EC2 instances processes classified data stored in Amazon S3. Due to security concerns, the company requires a private and secure connection between its EC2 resources and Amazon S3.

Which solution meets these requirements?

  1. Set up S3 bucket policies to allow access from a VPC endpoint.
  2. Set up an IAM policy to grant read-write access to the S3 bucket.
  3. Set up a NAT gateway to access resources outside the private subnet.
  4. Set up an access key ID and a secret access key to access the S3 bucket.
A
  1. Set up S3 bucket policies to allow access from a VPC endpoint.

private and secure connection between its EC2 resources and Amazon S3 = endpoint
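
A sketch of the bucket policy behind option 1, with hypothetical bucket and endpoint IDs; the aws:SourceVpce condition key rejects any request that did not arrive through the VPC endpoint:

    import boto3, json

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyUnlessThroughVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::classified-data",      # placeholder bucket
                "arn:aws:s3:::classified-data/*",
            ],
            "Condition": {
                "StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}
            },
        }],
    }
    boto3.client("s3").put_bucket_policy(
        Bucket="classified-data", Policy=json.dumps(policy)
    )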

5
Q

A company hosts its multi-tier, public web application in the AWS Cloud. The web application runs on Amazon EC2 instances and its database runs on Amazon RDS. The company is anticipating a large increase in sales during an upcoming holiday weekend. A solutions architect needs to build a solution to analyze the performance of the web application with a granularity of no more than 2 minutes.

What should the solutions architect do to meet this requirement?

  1. Send Amazon CloudWatch logs to Amazon Redshift. Use Amazon QuickSight to perform further analysis.
  2. Enable detailed monitoring on all EC2 instances. Use Amazon CloudWatch metrics to perform further analysis.
  3. Create an AWS Lambda function to fetch EC2 logs from Amazon CloudWatch Logs. Use Amazon CloudWatch metrics to perform further analysis.
  4. Send EC2 logs to Amazon S3. Use Amazon Redshift to fetch logs from the S3 bucket to process raw data for further analysis with Amazon QuickSight.
A
  2. Enable detailed monitoring on all EC2 instances. Use Amazon CloudWatch metrics to perform further analysis.

granularity of no more than 2 minutes = CloudWatch basic monitoring default is 5 minutes; for finer granularity, enable detailed monitoring on the instances = 1-minute metrics

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch.html
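
Detailed monitoring is a per-instance switch; a minimal boto3 sketch with placeholder instance IDs:

    import boto3

    # basic monitoring = 5-minute metrics; detailed monitoring = 1-minute metrics
    boto3.client("ec2").monitor_instances(
        InstanceIds=["i-aaaa1111", "i-bbbb2222"]  # placeholder IDs
    )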

6
Q

A company has developed a new video game as a web application. The application is in a three-tier architecture in a VPC with Amazon RDS for MySQL in the database layer. Several players will compete concurrently online. The game’s developers want to display a top-10 scoreboard in near-real time and offer the ability to stop and restore the game while preserving the current scores.

What should a solutions architect do to meet these requirements?

  1. Set up an Amazon ElastiCache for Memcached cluster to cache the scores for the web application to display.
  2. Set up an Amazon ElastiCache for Redis cluster to compute and cache the scores for the web application to display.
  3. Place an Amazon CloudFront distribution in front of the web application to cache the scoreboard in a section of the application.
  4. Create a read replica on Amazon RDS for MySQL to run queries to compute the scoreboard and serve the read traffic to the web application.
A
  2. Set up an Amazon ElastiCache for Redis cluster to compute and cache the scores for the web application to display.

scoreboard = leaderboard = Redis or DynamoDB

real-time analytics = Redis + ElastiCache

https://aws.amazon.com/blogs/database/building-a-real-time-gaming-leaderboard-with-amazon-elasticache-for-redis/
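
The top-10 scoreboard maps directly onto a Redis sorted set. A sketch with the redis-py client against a hypothetical ElastiCache endpoint:

    import redis

    r = redis.Redis(host="game.xxxxxx.use1.cache.amazonaws.com", port=6379)

    # ZADD keeps the set ordered by score; updating a player's score is the same call
    r.zadd("leaderboard", {"alice": 9150, "bob": 8820})

    # Top 10, highest score first, in near-real time
    top10 = r.zrevrange("leaderboard", 0, 9, withscores=True)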

7
Q

A company is moving its on-premises Oracle database to Amazon Aurora PostgreSQL. The database has several applications that write to the same tables. The applications need to be migrated one by one with a month in between each migration. Management has expressed concerns that the database has a high number of reads and writes. The data must be kept in sync across both databases throughout the migration.

What should a solutions architect recommend?

  1. Use AWS DataSync for the initial migration. Use AWS Database Migration Service (AWS DMS) to create a change data capture (CDC) replication task and a table mapping to select all tables.
  2. Use AWS DataSync for the initial migration. Use AWS Database Migration Service (AWS DMS) to create a full load plus change data capture (CDC) replication task and a table mapping to select all tables.
  3. Use the AWS Schema Conversion Tool with AWS Database Migration Service (AWS DMS) using a memory optimized replication instance. Create a full load plus change data capture (CDC) replication task and a table mapping to select all tables.
  4. Use the AWS Schema Conversion Tool with AWS Database Migration Service (AWS DMS) using a compute optimized replication instance. Create a full load plus change data capture (CDC) replication task and a table mapping to select the largest tables.
A
  2. Use AWS DataSync for the initial migration. Use AWS Database Migration Service (AWS DMS) to create a full load plus change data capture (CDC) replication task and a table mapping to select all tables.

migrating Oracle to PostgreSQL = Database Migration Service = 1,2

“(AWS DMS) to replicate the data first in a bulk load” = 2

https://aws.amazon.com/blogs/database/migrating-an-application-from-an-on-premises-oracle-database-to-amazon-rds-for-postgresql/
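
A boto3 sketch of the winning task; the endpoint and replication-instance ARNs are placeholders. The full-load-and-cdc migration type does the bulk copy and then keeps both databases in sync:

    import boto3, json

    table_mappings = {"rules": [{
        "rule-type": "selection", "rule-id": "1", "rule-name": "all-tables",
        "object-locator": {"schema-name": "%", "table-name": "%"},  # all tables
        "rule-action": "include",
    }]}

    boto3.client("dms").create_replication_task(
        ReplicationTaskIdentifier="oracle-to-aurora-pg",
        SourceEndpointArn="arn:aws:dms:...:endpoint:oracle-src",     # placeholder
        TargetEndpointArn="arn:aws:dms:...:endpoint:aurora-pg-tgt",  # placeholder
        ReplicationInstanceArn="arn:aws:dms:...:rep:repl-1",         # placeholder
        MigrationType="full-load-and-cdc",  # full load plus change data capture
        TableMappings=json.dumps(table_mappings),
    )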

8
Q

A company recently migrated a message processing system to AWS. The system receives messages into an ActiveMQ queue running on an Amazon EC2 instance. Messages are processed by a consumer application running on Amazon EC2. The consumer application processes the messages and writes results to a MySQL database running on Amazon EC2. The company wants this application to be highly available with low operational complexity.

Which architecture offers the HIGHEST availability?

  1. Add a second ActiveMQ server to another Availability Zone. Add an additional consumer EC2 instance in another Availability Zone. Replicate the MySQL database to another Availability Zone.
  2. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an additional consumer EC2 instance in another Availability Zone. Replicate the MySQL database to another Availability Zone.
  3. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an additional consumer EC2 instance in another Availability Zone. Use Amazon RDS for MySQL with Multi-AZ enabled.
  4. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an Auto Scaling group for the consumer EC2 instances across two Availability Zones. Use Amazon RDS for MySQL with Multi-AZ enabled.
A
  4. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an Auto Scaling group for the consumer EC2 instances across two Availability Zones. Use Amazon RDS for MySQL with Multi-AZ enabled.

highly available with low operational complexity = Multi-AZ = 3,4

HIGHEST availability = ASG = 4 wins
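
A sketch of the Multi-AZ database piece in boto3, with placeholder identifiers; MultiAZ=True provisions a synchronous standby in a second AZ with automatic failover:

    import boto3

    boto3.client("rds").create_db_instance(
        DBInstanceIdentifier="app-mysql",    # placeholder name
        Engine="mysql",
        DBInstanceClass="db.m5.large",
        MasterUsername="admin",
        MasterUserPassword="change-me",      # placeholder secret
        AllocatedStorage=100,
        MultiAZ=True,  # synchronous standby in another AZ, automatic failover
    )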

9
Q

A company is planning on deploying a newly built application on AWS in a default VPC. The application will consist of a web layer and database layer. The web server was created in public subnets, and the MySQL database was created in private subnets. All subnets are created with the default network ACL settings, and the default security group in the VPC will be replaced with new custom security groups.

The following are the key requirements:
– The web servers must be accessible only to users on an SSL connection.
– The database should be accessible to the web layer, which is created in a public subnet only.
– All traffic to and from the IP range 182.20.0.0/16 subnet should be blocked.

Which combination of steps meets these requirements? (Select two.)

  1. Create a database server security group with inbound and outbound rules for MySQL port 3306 traffic to and from anywhere (0.0.0.0/0).
  2. Create a database server security group with an inbound rule for MySQL port 3306 and specify the source as a web server security group.
  3. Create a web server security group with an inbound allow rule for HTTPS port 443 traffic from anywhere (0.0.0.0/0) and an inbound deny rule for IP range 182.20.0.0/16.
  4. Create a web server security group with an inbound rule for HTTPS port 443 traffic from anywhere (0.0.0.0/0). Create network ACL inbound and outbound deny rules for IP range 182.20.0.0/16.
  5. Create a web server security group with inbound and outbound rules for HTTPS port 443 traffic to and from anywhere (0.0.0.0/0). Create a network ACL inbound deny rule for IP range 182.20.0.0/16.
A
  2. Create a database server security group with an inbound rule for MySQL port 3306 and specify the source as a web server security group.
  4. Create a web server security group with an inbound rule for HTTPS port 443 traffic from anywhere (0.0.0.0/0). Create network ACL inbound and outbound deny rules for IP range 182.20.0.0/16.

web servers + SSL = 443 = 3,4,5

database should be accessible = MySQL port 3306 = 1,2

to and from anywhere (0.0.0.0/0) = not least privilege = not 1 = 2 wins

all traffic to and from the IP range 182.20.0.0/16 must be blocked = security groups cannot deny traffic, so a higher level is needed = network ACL = 4,5

5 is wrong: its outbound 443 rule is unnecessary (security groups are stateful) and its network ACL denies inbound only = 4 wins
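
A boto3 sketch of the network ACL rules from option 4, with a placeholder ACL ID; NACLs are stateless, so the deny must be written in both directions:

    import boto3

    ec2 = boto3.client("ec2")

    for egress in (False, True):  # inbound AND outbound must both be denied
        ec2.create_network_acl_entry(
            NetworkAclId="acl-0123456789abcdef0",  # placeholder
            RuleNumber=90,        # evaluated before the higher-numbered allow rules
            Protocol="-1",        # all protocols
            RuleAction="deny",
            Egress=egress,
            CidrBlock="182.20.0.0/16",
        )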

10
Q

A company has an on-premises application that collects data and stores it to an on-premises NFS server. The company recently set up a 10 Gbps AWS Direct Connect connection. The company is running out of storage capacity on premises. The company needs to migrate the application data from on premises to the AWS Cloud while maintaining low-latency access to the data from the on-premises application.

What should a solutions architect do to meet these requirements?

  1. Deploy AWS Storage Gateway for the application data, and use the file gateway to store the data in Amazon S3. Connect the on-premises application servers to the file gateway using NFS.
  2. Attach an Amazon Elastic File System (Amazon EFS) file system to the NFS server, and copy the application data to the EFS file system. Then connect the on-premises application to Amazon EFS.
  3. Configure AWS Storage Gateway as a volume gateway. Make the application data available to the on-premises application from the NFS server and with Amazon Elastic Block Store (Amazon EBS) snapshots.
  4. Create an AWS DataSync agent with the NFS server as the source location and an Amazon Elastic File System (Amazon EFS) file system as the destination for application data transfer. Connect the on-premises application to the EFS file system.
A
  1. Deploy AWS Storage Gateway for the application data, and use the file gateway to store the data in Amazon S3. Connect the on-premises application servers to the file gateway using NFS.

on-premises = gateway needed = 1,3

low-latency access + NFS = file gateway

When do I use AWS DataSync and when do I use AWS Storage Gateway?
Use AWS DataSync to migrate existing data to Amazon S3, and subsequently use the File Gateway configuration of AWS Storage Gateway to retain access to the migrated data and for ongoing updates from your on-premises file-based applications.

11
Q

A solutions architect needs to design a network that will allow multiple Amazon EC2 instances to access a common data source used for mission-critical data that can be accessed by all the EC2 instances simultaneously. The solution must be highly scalable, easy to implement and support the NFS protocol.

Which solution meets these requirements?

  1. Create an Amazon Elastic File System (Amazon EFS) file system. Configure a mount target in each Availability Zone. Attach each instance to the appropriate mount target.
  2. Create an additional EC2 instance and configure it as a file server. Create a security group that allows communication between the instances and apply it to the additional instance.
  3. Create an Amazon S3 bucket with the appropriate permissions. Create a role in AWS IAM that grants the correct permissions to the S3 bucket. Attach the role to the EC2 instances that need access to the data.
  4. Create an Amazon Elastic Block Store (Amazon EBS) volume with the appropriate permissions. Create a role in AWS IAM that grants the correct permissions to the EBS volume. Attach the role to the EC2 instances that need access to the data.
A
  1. Create an Amazon Elastic File System (Amazon EFS) file system. Configure a mount target in each Availability Zone. Attach each instance to the appropriate mount target.

common data source = concurrent = EFS

NFS = EFS
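
A boto3 sketch of option 1 with placeholder subnet and security group IDs; one mount target per AZ lets every instance reach the same NFS file system simultaneously:

    import boto3

    efs = boto3.client("efs")
    fs = efs.create_file_system(CreationToken="shared-data")  # idempotency token

    # One mount target per Availability Zone
    for subnet in ["subnet-az1", "subnet-az2", "subnet-az3"]:  # placeholders
        efs.create_mount_target(
            FileSystemId=fs["FileSystemId"],
            SubnetId=subnet,
            SecurityGroups=["sg-nfs"],  # must allow NFS (TCP 2049) from the instances
        )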

12
Q

A company hosts its application using Amazon Elastic Container Service (Amazon ECS) and wants to ensure high availability. The company wants to be able to deploy updates to its application even if nodes in one Availability Zone are not accessible.

The expected request volume for the application is 100 requests per second, and each container task is able to serve at least 60 requests per second. The company set up Amazon ECS with a rolling update deployment type with the minimum healthy percent parameter set to 50% and the maximum percent set to 100%.

Which configuration of tasks and Availability Zones meets these requirements?

  1. Deploy the application across two Availability Zones, with one task in each Availability Zone.
  2. Deploy the application across two Availability Zones, with two tasks in each Availability Zone.
  3. Deploy the application across three Availability Zones, with one task in each Availability Zone.
  4. Deploy the application across three Availability Zones, with two tasks in each Availability Zone.
A
  4. Deploy the application across three Availability Zones, with two tasks in each Availability Zone.

100 requests per second at 60 requests per task = at least two tasks must be running at all times = two tasks in each Availability Zone = 2,4

HA = Multi-AZ = 3 AZs needed to “deploy updates to its application even if nodes in one Availability Zone are not accessible” = 4 wins

minimum healthy percent parameter set to 50%

The 50% minimum healthy limit is critical here. It means that, worst case during a rolling update, only half of the 6 tasks (3 tasks) may be running, which still serves the 100 requests per second even with one Availability Zone unavailable.
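
A boto3 sketch of the service definition, assuming placeholder cluster and task-definition names; 6 desired tasks across the 3 AZs with the stated deployment limits:

    import boto3

    boto3.client("ecs").create_service(
        cluster="game-cluster",          # placeholder
        serviceName="web",
        taskDefinition="web:1",          # placeholder task definition
        desiredCount=6,                  # 2 tasks in each of 3 AZs
        deploymentConfiguration={
            "minimumHealthyPercent": 50, # worst case 3 of 6 tasks during a deploy
            "maximumPercent": 100,
        },
    )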

13
Q

A solutions architect wants all new users to have specific complexity requirements and mandatory rotation periods for IAM user passwords. What should the solutions architect do to accomplish this?

  1. Set an overall password policy for the entire AWS account
  2. Set a password policy for each IAM user in the AWS account.
  3. Use third-party vendor software to set password requirements.
  4. Attach an Amazon CloudWatch rule to the Create_newuser event to set the password with the appropriate requirements.
A
  1. Set an overall password policy for the entire AWS account

“for each IAM user” = per-user configuration is typically the wrong answer; IAM password policies are set once for the entire account
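
The account-wide policy is a single API call; a boto3 sketch in which the complexity and rotation values are assumptions:

    import boto3

    boto3.client("iam").update_account_password_policy(
        MinimumPasswordLength=14,   # example complexity requirements
        RequireSymbols=True,
        RequireNumbers=True,
        RequireUppercaseCharacters=True,
        RequireLowercaseCharacters=True,
        MaxPasswordAge=90,          # mandatory rotation every 90 days
        PasswordReusePrevention=5,
    )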

14
Q

A company wants to improve the availability and performance of its hybrid application. The application consists of a stateful TCP-based workload hosted on Amazon EC2 instances in different AWS Regions and a stateless UDP-based workload hosted on premises.

Which combination of actions should a solutions architect take to improve availability and performance? (Choose two.)

  1. Create an accelerator using AWS Global Accelerator. Add the load balancers as endpoints.
  2. Create an Amazon CloudFront distribution with an origin that uses Amazon Route 53 latency-based routing to route requests to the load balancers.
  3. Configure two Application Load Balancers in each Region. The first will route to the EC2 endpoints and the second will route to the on-premises endpoints.
  4. Configure a Network Load Balancer in each Region to address the EC2 endpoints. Configure a Network Load Balancer in each Region that routes to the on-premises endpoints.
  5. Configure a Network Load Balancer in each Region to address the EC2 endpoints. Configure an Application Load Balancer in each Region that routes to the on-premises endpoints
A
  1. Create an accelerator using AWS Global Accelerator. Add the load balancers as endpoints.
  4. Configure a Network Load Balancer in each Region to address the EC2 endpoints. Configure a Network Load Balancer in each Region that routes to the on-premises endpoints.

TCP = Layer 4 = NLB

different AWS Regions = Global Accelerator

ALB = Layer 7 = 5 is wrong
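
A boto3 sketch of the accelerator in front of the Regional NLBs; the NLB ARNs and Regions are placeholders. Note the Global Accelerator API itself is served from us-west-2 regardless of where the endpoints live:

    import boto3

    ga = boto3.client("globalaccelerator", region_name="us-west-2")

    acc = ga.create_accelerator(Name="hybrid-app", IpAddressType="IPV4", Enabled=True)

    listener = ga.create_listener(
        AcceleratorArn=acc["Accelerator"]["AcceleratorArn"],
        Protocol="TCP",  # Layer 4, matching the TCP/UDP workloads
        PortRanges=[{"FromPort": 443, "ToPort": 443}],
    )

    # One endpoint group per Region, each pointing at that Region's NLB
    for region, nlb_arn in [
        ("us-east-1", "arn:aws:elasticloadbalancing:...:nlb-east"),  # placeholders
        ("eu-west-1", "arn:aws:elasticloadbalancing:...:nlb-west"),
    ]:
        ga.create_endpoint_group(
            ListenerArn=listener["Listener"]["ListenerArn"],
            EndpointGroupRegion=region,
            EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 100}],
        )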

15
Q

A solutions architect is designing the architecture of a new application being deployed to the AWS Cloud. The application will run on Amazon EC2 On-Demand Instances and will automatically scale across multiple Availability Zones. The EC2 instances will scale up and down frequently throughout the day. An Application Load Balancer (ALB) will handle the load distribution. The architecture needs to support distributed session data management. The company is willing to make changes to code if needed.

What should the solutions architect do to ensure that the architecture supports distributed session data management?

  1. Use Amazon ElastiCache to manage and store session data.
  2. Use session affinity (sticky sessions) of the ALB to manage session data.
  3. Use Session Manager from AWS Systems Manager to manage the session.
  4. Use the GetSessionToken API operation in AWS Security Token Service (AWS STS) to manage the session.
A
  1. Use Amazon ElastiCache to manage and store session data.

distributed session data management = not sticky sessions = not 2

Session Manager is for managing EC2 instances and other servers, devices, and VMs you operate = 3 is wrong

distributed session data management = classic use case for ElastiCache

STS is for requesting temporary credentials for IAM users = 4 is wrong
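
A sketch of externalized session state with redis-py against a hypothetical ElastiCache endpoint; any instance behind the ALB can read any session, so scale-in events lose nothing:

    import json
    import redis

    r = redis.Redis(host="sessions.xxxxxx.use1.cache.amazonaws.com", port=6379)

    def save_session(session_id, data, ttl_seconds=3600):
        # TTL expires idle sessions automatically
        r.setex("session:" + session_id, ttl_seconds, json.dumps(data))

    def load_session(session_id):
        raw = r.get("session:" + session_id)
        return json.loads(raw) if raw else None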

16
Q

A company has an ecommerce application running in a single VPC. The application stack has a single web server and an Amazon RDS Multi-AZ DB instance.

The company launches new products twice a month. This increases website traffic by approximately 400% for a minimum of 72 hours. During product launches, users experience slow response times and frequent timeout errors in their browsers.

What should a solutions architect do to mitigate the slow response times and timeout errors while minimizing operational overhead?

  1. Increase the instance size of the web server.
  2. Add an Application Load Balancer and an additional web server.
  3. Add Amazon EC2 Auto Scaling and an Application Load Balancer.
  4. Deploy an Amazon ElastiCache cluster to store frequently accessed data.
A
  3. Add Amazon EC2 Auto Scaling and an Application Load Balancer.

website traffic increases by approximately 400% = more servers needed = Auto Scaling + ALB
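
A boto3 sketch of option 3, with the launch template, subnets, and target group ARN as placeholders; the group scales out for the launch spike and back in afterward:

    import boto3

    boto3.client("autoscaling").create_auto_scaling_group(
        AutoScalingGroupName="web-asg",
        LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
        MinSize=2,
        MaxSize=10,  # headroom for the ~400% product-launch spike
        DesiredCapacity=2,
        VPCZoneIdentifier="subnet-az1,subnet-az2",  # placeholder subnet IDs
        TargetGroupARNs=["arn:aws:elasticloadbalancing:...:targetgroup/web/abc"],
    )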

17
Q

A solutions architect is designing an architecture to run a third-party database server. The database software is memory intensive and has a CPU-based licensing model where the cost increases with the number of vCPU cores within the operating system. The solutions architect must select an Amazon EC2 instance with sufficient memory to run the database software, but the selected instance has a large number of vCPUs. The solutions architect must ensure that the vCPUs will not be underutilized and must minimize costs.

Which solution meets these requirements?

  1. Select and launch a smaller EC2 instance with an appropriate number of vCPUs.
  2. Configure the CPU cores and threads on the selected EC2 instance during instance launch.
  3. Create a new EC2 instance and ensure multithreading is enabled when configuring the instance details.
  4. Create a new Capacity Reservation and select the appropriate instance type. Launch the instance into this new Capacity Reservation.
A
  4. Create a new Capacity Reservation and select the appropriate instance type. Launch the instance into this new Capacity Reservation.

cost increases with the number of vCPU cores within the operating system = can't share with multi-tenant capacity = Capacity Reservation

CPU-based licensing model = Capacity Reservation
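
A boto3 sketch of option 4; the instance type, AZ, and AMI are assumptions. The reservation is created first and the instance is launched into it:

    import boto3

    ec2 = boto3.client("ec2")

    cr = ec2.create_capacity_reservation(
        InstanceType="r5.4xlarge",       # example memory-optimized type
        InstancePlatform="Linux/UNIX",
        AvailabilityZone="us-east-1a",   # placeholder AZ
        InstanceCount=1,
    )

    ec2.run_instances(
        ImageId="ami-0123456789abcdef0", # placeholder AMI
        InstanceType="r5.4xlarge",
        MinCount=1, MaxCount=1,
        CapacityReservationSpecification={
            "CapacityReservationTarget": {
                "CapacityReservationId":
                    cr["CapacityReservation"]["CapacityReservationId"]
            }
        },
    )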

18
Q

A company receives 10 TB of instrumentation data each day from several machines located at a single factory. The data consists of JSON files stored on a storage area network (SAN) in an on-premises data center located within the factory. The company wants to send this data to Amazon S3 where it can be accessed by several additional systems that provide critical near-real-time analytics. A secure transfer is important because the data is considered sensitive.

Which solution offers the MOST reliable data transfer?

  1. AWS DataSync over public internet
  2. AWS DataSync over AWS Direct Connect
  3. AWS Database Migration Service (AWS DMS) over public internet
  4. AWS Database Migration Service (AWS DMS) over AWS Direct Connect
A
  2. AWS DataSync over AWS Direct Connect

secure transfer = Direct Connect = 2,4

JSON files = not a database = not DMS = 2 wins
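
A boto3 sketch of option 2 with placeholder ARNs and hostnames; the agent reads the on-premises NFS share and the task copies it into S3, with the transfer riding the Direct Connect link:

    import boto3

    datasync = boto3.client("datasync")

    src = datasync.create_location_nfs(
        ServerHostname="nas.factory.local",    # placeholder NFS/SAN host
        Subdirectory="/instrumentation",
        OnPremConfig={"AgentArns": ["arn:aws:datasync:...:agent/agent-1"]},
    )

    dst = datasync.create_location_s3(
        S3BucketArn="arn:aws:s3:::factory-telemetry",  # placeholder bucket
        S3Config={"BucketAccessRoleArn": "arn:aws:iam::123456789012:role/datasync-s3"},
    )

    datasync.create_task(
        SourceLocationArn=src["LocationArn"],
        DestinationLocationArn=dst["LocationArn"],
        Name="daily-10tb-sync",
    )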

19
Q

A company is creating a web application that will store a large number of images in Amazon S3. The images will be accessed by users over variable periods of time. The company wants to:
– Retain all the images.
– Incur no cost for retrieval.
– Have minimal management overhead.
– Have the images available with no impact on retrieval time.

Which solution meets these requirements?

  1. Implement S3 Intelligent-Tiering
  2. Implement S3 storage class analysis
  3. Implement an S3 Lifecycle policy to move data to S3 Standard-Infrequent Access (S3 Standard-IA).
  4. Implement an S3 Lifecycle policy to move data to S3 One Zone-Infrequent Access (S3 One Zone-IA).
A
  1. Implement S3 Intelligent-Tiering

variable periods of time = access frequency unknown + no retrieval cost = S3 Intelligent-Tiering
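
Objects can be written straight into the tier; a minimal boto3 sketch with a placeholder bucket:

    import boto3

    with open("photo-0001.jpg", "rb") as f:
        boto3.client("s3").put_object(
            Bucket="image-archive",              # placeholder bucket
            Key="images/photo-0001.jpg",
            Body=f,
            StorageClass="INTELLIGENT_TIERING",  # auto-tiering, no retrieval fees
        )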

20
Q

A company hosts more than 300 global websites and applications. The company requires a platform to analyze more than 30 TB of clickstream data each day.

What should a solutions architect do to transmit and process the clickstream data?

  1. Design an AWS Data Pipeline to archive the data to an Amazon S3 bucket and run an Amazon EMR cluster with the data to generate analytics.
  2. Create an Auto Scaling group of Amazon EC2 instances to process the data and send it to an Amazon S3 data lake for Amazon Redshift to use for analysis.
  3. Cache the data to Amazon CloudFront. Store the data in an Amazon S3 bucket. When an object is added to the S3 bucket, run an AWS Lambda function to process the data for analysis.
  4. Collect the data from Amazon Kinesis Data Streams. Use Amazon Kinesis Data Firehose to transmit the data to an Amazon S3 data lake. Load the data in Amazon Redshift for analysis.
A
  4. Collect the data from Amazon Kinesis Data Streams. Use Amazon Kinesis Data Firehose to transmit the data to an Amazon S3 data lake. Load the data in Amazon Redshift for analysis.

30 TB of clickstream = Kinesis

https://aws.amazon.com/solutions/case-studies/hearst-data-analytics/
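
A producer-side sketch of option 4 with a placeholder stream name; the Firehose delivery stream to S3 and the Redshift load are configured separately:

    import boto3, json

    click = {"site": "example-site", "page": "/checkout", "ts": 1700000000}

    boto3.client("kinesis").put_record(
        StreamName="clickstream",       # placeholder stream
        Data=json.dumps(click).encode(),
        PartitionKey=click["site"],     # spreads the 300+ sites across shards
    )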