Set 4 Kindle SAA-C03 Practice Test Flashcards

1
Q

A company is deploying an Amazon ElastiCache for Redis cluster. To enhance security a password should be required to access the database. What should the solutions architect use?

A. AWS Directory Service
B. AWS IAM Policy
C. Redis AUTH command
D. VPC Security Group

A

C. Redis AUTH command

Explanation:
Redis authentication tokens enable Redis to require a token (password) before allowing clients to execute commands, thereby improving data security. You can require that users enter a token on a token-protected Redis server. To do this, include the parameter --auth-token (API: AuthToken) with the correct token when you create your replication group or cluster. Also include it in all subsequent commands to the replication group or cluster. CORRECT: “Redis AUTH command” is the correct answer. INCORRECT: “AWS Directory Service” is incorrect. This is a managed Microsoft Active Directory service and cannot add password protection to Redis. INCORRECT: “AWS IAM Policy” is incorrect. You cannot use an IAM policy to enforce a password on Redis. INCORRECT: “VPC Security Group” is incorrect. A security group protects at the network layer; it does not affect application authentication.
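As a rough boto3 sketch (the group name, node type, and token value below are placeholder assumptions), the token is supplied when the replication group is created; note that AUTH also requires in-transit encryption:

```python
import boto3

elasticache = boto3.client("elasticache")

# Hypothetical identifiers; Redis AUTH requires in-transit encryption.
elasticache.create_replication_group(
    ReplicationGroupId="secure-redis",
    ReplicationGroupDescription="Redis cluster protected by AUTH",
    Engine="redis",
    CacheNodeType="cache.t3.micro",
    NumCacheClusters=2,
    TransitEncryptionEnabled=True,   # required when AuthToken is set
    AuthToken="a-strong-token-16-to-128-chars",
)
```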

2
Q

To increase performance and redundancy for an application, a company has decided to run multiple implementations in different AWS Regions behind network load balancers. The company currently advertises the application using two public IP addresses from separate /24 address ranges and would prefer not to change these. Users should be directed to the closest available application endpoint. Which actions should a solutions architect take? (Select TWO.)

A. Create an Amazon Route 53 geolocation based routing policy
B. Create an AWS Global Accelerator and attach endpoints in each AWS Region
C. Assign new static anycast IP addresses and modify any existing pointers
D. Migrate both public IP addresses to the AWS Global Accelerator
E. Create PTR records to map existing public IP addresses to an Alias

A

B. Create an AWS Global Accelerator and attach endpoints in each AWS Region
D. Migrate both public IP addresses to the AWS Global Accelerator

Explanation:

AWS Global Accelerator uses static IP addresses as fixed entry points for your application. You can migrate up to two /24 IPv4 address ranges and choose which /32 IP addresses to use when you create your accelerator. This solution ensures the company can continue using the same IP addresses and they are able to direct traffic to the application endpoint in the AWS Region closest to the end user. Traffic is sent over the AWS global network for consistent performance. CORRECT: “Create an AWS Global Accelerator and attach endpoints in each AWS Region” is a correct answer. CORRECT: “Migrate both public IP addresses to the AWS Global Accelerator” is also a correct answer. INCORRECT: “Create an Amazon Route 53 geolocation based routing policy” is incorrect. With this solution new IP addresses will be required as there will be application endpoints in different regions. INCORRECT: “Assign new static anycast IP addresses and modify any existing pointers” is incorrect. This is unnecessary as you can bring your own IP addresses to AWS Global Accelerator and this is preferred in this scenario. INCORRECT: “Create PTR records to map existing public IP addresses to an Alias” is incorrect. This is not a workable solution for mapping existing IP addresses to an Amazon Route 53 Alias.
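A hedged boto3 sketch of the BYOIP flow (CIDR, signature, and addresses are placeholders; the Global Accelerator API is served from us-west-2 regardless of where the endpoints live):

```python
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

# Bring one /24 range into Global Accelerator; repeat for the second /24.
ga.provision_byoip_cidr(
    Cidr="203.0.113.0/24",
    CidrAuthorizationContext={
        "Message": "203.0.113.0/24|<account-id>|<expiry>",
        "Signature": "<signed-authorization-message>",
    },
)

# Once advertised, pin the accelerator to specific /32 addresses.
ga.create_accelerator(
    Name="multi-region-app",
    IpAddressType="IPV4",
    IpAddresses=["203.0.113.10", "198.51.100.10"],
    Enabled=True,
)
```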

3
Q

Three Amazon VPCs are used by a company in the same region. The company has two AWS Direct Connect connections to two separate company offices and wishes to share these with all three VPCs. A Solutions Architect has created an AWS Direct Connect gateway. How can the required connectivity be configured?

A. Associate the Direct Connect gateway to a transit gateway
B. Associate the Direct Connect gateway to a virtual private gateway in each VPC
C. Create a VPC peering connection between the VPCs and route entries for the Direct Connect Gateway
D. Create a transit virtual interface between the Direct Connect gateway and each VPC

A

A. Associate the Direct Connect gateway to a transit gateway

Explanation:
You can manage a single connection for multiple VPCs or VPNs that are in the same Region by associating a Direct Connect gateway to a transit gateway. The solution involves the following components: a transit gateway that has VPC attachments, a Direct Connect gateway, an association between the Direct Connect gateway and the transit gateway, and a transit virtual interface that is attached to the Direct Connect gateway. CORRECT: “Associate the Direct Connect gateway to a transit gateway” is the correct answer. INCORRECT: “Associate the Direct Connect gateway to a virtual private gateway in each VPC” is incorrect. For VPCs in the same region a VPG is not necessary. A transit gateway can instead be configured. INCORRECT: “Create a VPC peering connection between the VPCs and route entries for the Direct Connect Gateway” is incorrect. You cannot add route entries for a Direct Connect gateway to each VPC and enable routing. Use a transit gateway instead. INCORRECT: “Create a transit virtual interface between the Direct Connect gateway and each VPC” is incorrect. The transit virtual interface is attached to the Direct Connect gateway on the connection side, not the VPC/transit gateway side.
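A minimal boto3 sketch of the association, assuming the transit gateway and Direct Connect gateway already exist (all IDs and the allowed prefix are placeholders):

```python
import boto3

directconnect = boto3.client("directconnect")
ec2 = boto3.client("ec2")

# Attach each of the three VPCs to the transit gateway.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId="tgw-0123456789abcdef0",
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
)

# Associate the Direct Connect gateway with the transit gateway.
directconnect.create_direct_connect_gateway_association(
    directConnectGatewayId="11112222-3333-4444-5555-666677778888",
    gatewayId="tgw-0123456789abcdef0",
    addAllowedPrefixesToDirectConnectGateway=[{"cidr": "10.0.0.0/8"}],
)
```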

4
Q

A retail organization sends coupons out twice a week, resulting in a predictable surge in sales traffic. The application runs on Amazon EC2 instances behind an Elastic Load Balancer. The organization is looking for ways to lower costs while ensuring they meet the demands of their customers. How can they achieve this goal?

A. Use capacity reservations with savings plans
B. Use a mixture of spot instances and on demand instances
C. Increase the instance size of the existing EC2 instances
D. Purchase Amazon EC2 dedicated hosts

A

A. Use capacity reservations with savings plans

Explanation:
On-Demand Capacity Reservations enable you to reserve compute capacity for your Amazon EC2 instances in a specific Availability Zone for any duration. By creating Capacity Reservations, you ensure that you always have access to EC2 capacity when you need it, for as long as you need it. When used in combination with Savings Plans, you can also gain the advantages of cost reduction. CORRECT: “Use capacity reservations with savings plans” is the correct answer. INCORRECT: “Use a mixture of spot instances and on demand instances” is incorrect. You can mix Spot and On-Demand Instances in an Auto Scaling group. However, there’s a risk that Spot Instances may be interrupted, and this is a regular, predictable increase in traffic. INCORRECT: “Increase the instance size of the existing EC2 instances” is incorrect. This would add more cost all the time rather than catering for the temporary increases in traffic. INCORRECT: “Purchase Amazon EC2 dedicated hosts” is incorrect. This is not a way to save cost as dedicated hosts are much more expensive than shared hosts.
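A short boto3 sketch of reserving capacity ahead of the surge (instance type, AZ, and count are illustrative assumptions; the Savings Plan is purchased separately and applies its discount to the covered usage):

```python
import boto3

ec2 = boto3.client("ec2")

# Reserve capacity ahead of the twice-weekly traffic surge.
ec2.create_capacity_reservation(
    InstanceType="m5.large",
    InstancePlatform="Linux/UNIX",
    AvailabilityZone="us-east-1a",
    InstanceCount=10,
    EndDateType="unlimited",
)
```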

5
Q

Over 500 TB of data must be analyzed using standard SQL business intelligence tools. The dataset consists of a combination of structured data and unstructured data. The unstructured data is small and stored on Amazon S3. Which AWS services are most suitable for performing analytics on the data?

A. Amazon RDS MariaDB with Amazon Athena
B. Amazon DynamoDB with Amazon DynamoDB Accelerator (DAX)
C. Amazon ElastiCache for Redis with cluster mode enabled
D. Amazon Redshift with Amazon Redshift Spectrum

A

D. Amazon Redshift with Amazon Redshift Spectrum

Explanation:
Amazon Redshift is an enterprise-level, petabyte scale, fully managed data warehousing service. An Amazon Redshift data warehouse is an enterprise-class relational database query and management system. Redshift supports client connections with many types of applications, including business intelligence (BI), reporting, data, and analytics tools. Using Amazon Redshift Spectrum, you can efficiently query and retrieve structured and semistructured data from files in Amazon S3 without having to load the data into Amazon Redshift tables. Redshift Spectrum queries employ massive parallelism to execute very fast against large datasets. Used together, Redshift and Redshift Spectrum are suitable for running massive analytics jobs on both the structured (Redshift data warehouse) and unstructured (Amazon S3) data. CORRECT: “Amazon Redshift with Amazon Redshift Spectrum” is the correct answer. INCORRECT: “Amazon RDS MariaDB with Amazon Athena” is incorrect. Amazon RDS is not suitable for analytics (OLAP) use cases as it is designed for transactional (OLTP) use cases. Athena can however be used for running SQL queries on data on S3. INCORRECT: “Amazon DynamoDB with Amazon DynamoDB Accelerator (DAX)” is incorrect. This is an example of a non-relational DB with a caching layer and is not suitable for an OLAP use case. INCORRECT: “Amazon ElastiCache for Redis with cluster mode enabled” is incorrect. This is an example of an in-memory caching service. It is good for performance for transactional use cases.
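A hedged sketch of running such a query through the Redshift Data API with boto3; the cluster, database, user, and the Spectrum external schema (assumed to have been created with CREATE EXTERNAL SCHEMA) are all hypothetical names:

```python
import boto3

redshift_data = boto3.client("redshift-data")

# Join a local Redshift table with an external (Spectrum) table whose
# files live in S3. All identifiers here are placeholders.
sql = """
SELECT o.region, SUM(c.clicks)
FROM sales.orders o
JOIN spectrum_schema.clickstream c ON o.session_id = c.session_id
GROUP BY o.region;
"""

redshift_data.execute_statement(
    ClusterIdentifier="analytics-cluster",
    Database="analytics",
    DbUser="bi_user",
    Sql=sql,
)
```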

6
Q

An application is being monitored using Amazon GuardDuty. A Solutions Architect needs to be notified by email of medium to high severity events. How can this be achieved?

A. Configure an Amazon CloudWatch alarm that triggers based on a GuardDuty metric
B. Create an Amazon CloudWatch Events rule that triggers an Amazon SNS topic
C. Create an Amazon CloudWatch Logs rule that triggers an AWS Lambda function
D. Configure an Amazon CloudTrail alarm that triggers based on GuardDuty API activity

A

B. Create an Amazon CloudWatch Events rule that triggers an Amazon SNS topic

Explanation:
A CloudWatch Events rule can be used to set up automatic email notifications for medium to high severity findings to the email address of your choice. You simply create an Amazon SNS topic and then associate it with an Amazon CloudWatch Events rule. CORRECT: “Create an Amazon CloudWatch Events rule that triggers an Amazon SNS topic” is the correct answer. INCORRECT: “Configure an Amazon CloudWatch alarm that triggers based on a GuardDuty metric” is incorrect. There is no metric for GuardDuty that can be used for specific findings. INCORRECT: “Create an Amazon CloudWatch Logs rule that triggers an AWS Lambda function” is incorrect. CloudWatch Logs is not the right CloudWatch service to use. CloudWatch Events is used for reacting to changes in service state. INCORRECT: “Configure an Amazon CloudTrail alarm that triggers based on GuardDuty API activity” is incorrect. CloudTrail cannot be used to trigger alarms based on GuardDuty API activity.
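One possible boto3 sketch of the rule (the rule name and topic ARN are placeholders; the SNS topic also needs a resource policy allowing events.amazonaws.com to publish):

```python
import boto3
import json

events = boto3.client("events")

# Match GuardDuty findings with severity 4.0 or higher (medium/high).
pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {"severity": [{"numeric": [">=", 4]}]},
}

events.put_rule(
    Name="guardduty-medium-high",
    EventPattern=json.dumps(pattern),
    State="ENABLED",
)

events.put_targets(
    Rule="guardduty-medium-high",
    Targets=[{
        "Id": "email-topic",
        "Arn": "arn:aws:sns:us-east-1:123456789012:guardduty-alerts",
    }],
)
```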

7
Q

A company is migrating a decoupled application to AWS. The application uses a message broker based on the MQTT protocol. The application will be migrated to Amazon EC2 instances and the solution for the message broker must not require rewriting application code. Which AWS service can be used for the migrated message broker?

A. Amazon SQS
B. Amazon SNS
C. Amazon MQ
D. AWS Step Functions

A

C. Amazon MQ

Explanation:
Amazon MQ is a managed message broker service for Apache ActiveMQ that makes it easy to set up and operate message brokers in the cloud. Connecting current applications to Amazon MQ is easy because it uses industry-standard APIs and protocols for messaging, including JMS, NMS, AMQP, STOMP, MQTT, and WebSocket. Using standards means that in most cases, there’s no need to rewrite any messaging code when you migrate to AWS. CORRECT: “Amazon MQ” is the correct answer. INCORRECT: “Amazon SQS” is incorrect. This is an Amazon proprietary service and does not support industry-standard messaging APIs and protocols. INCORRECT: “Amazon SNS” is incorrect. This is a notification service not a message bus. INCORRECT: “AWS Step Functions” is incorrect. This is a workflow orchestration service, not a message bus.
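A hedged boto3 sketch of creating the broker (broker name, engine version, instance type, and credentials are placeholder assumptions); existing MQTT clients can then point at the broker's MQTT endpoint unchanged:

```python
import boto3

mq = boto3.client("mq")

# ActiveMQ speaks MQTT natively, so application code need not change.
mq.create_broker(
    BrokerName="migrated-mqtt-broker",
    EngineType="ACTIVEMQ",
    EngineVersion="5.17.6",              # example supported version
    HostInstanceType="mq.m5.large",
    DeploymentMode="ACTIVE_STANDBY_MULTI_AZ",
    PubliclyAccessible=False,
    AutoMinorVersionUpgrade=True,
    Users=[{"Username": "app", "Password": "change-me-12chars"}],
)
```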

8
Q

An HR application stores employment records on Amazon S3. Regulations mandate that the records are retained for seven years. Once created, the records are accessed infrequently for the first three months and then must be available within 10 minutes if required thereafter. Which lifecycle action meets the requirements whilst MINIMIZING cost?

A. Store the data in S3 Standard for 3 months, then transition to S3 Glacier
B. Store the data in S3 Standard-IA for 3 months, then transition to S3 Glacier
C. Store the data in S3 Standard for 3 months, then transition to S3 Standard-IA
D. Store the data in S3 Intelligent Tiering for 3 months, then transition to S3 Standard-IA

A

B. Store the data in S3 Standard-IA for 3 months, then transition to S3 Glacier

Explanation:
The most cost-effective solution is to first store the data in S3 Standard-IA, where it will be infrequently accessed for the first three months. Then, after three months, transition the data to S3 Glacier where it can be stored at lower cost for the remainder of the seven-year period. Expedited retrieval can bring retrieval times down to 1-5 minutes. CORRECT: “Store the data in S3 Standard-IA for 3 months, then transition to S3 Glacier” is the correct answer. INCORRECT: “Store the data in S3 Standard for 3 months, then transition to S3 Glacier” is incorrect. S3 Standard is more costly than S3 Standard-IA and the data is only accessed infrequently. INCORRECT: “Store the data in S3 Standard for 3 months, then transition to S3 Standard-IA” is incorrect. Neither storage class in this answer is the most cost-effective option. INCORRECT: “Store the data in S3 Intelligent Tiering for 3 months, then transition to S3 Standard-IA” is incorrect. Intelligent-Tiering moves data between tiers based on access patterns; it is more costly and better suited to access patterns that are unknown or unpredictable.
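A minimal boto3 sketch of such a lifecycle rule (bucket name and day counts are illustrative; objects would be uploaded with StorageClass="STANDARD_IA" in the first place):

```python
import boto3

s3 = boto3.client("s3")

# Move objects to Glacier at 90 days and delete them after ~7 years.
s3.put_bucket_lifecycle_configuration(
    Bucket="hr-employment-records",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "retain-seven-years",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 2555},  # roughly seven years
        }]
    },
)
```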

9
Q

A highly elastic application consists of three tiers. The application tier runs in an Auto Scaling group and processes data and writes it to an Amazon RDS MySQL database. The Solutions Architect wants to restrict access to the database tier to only accept traffic from the instances in the application tier. However, instances in the application tier are being constantly launched and terminated. How can the Solutions Architect configure secure access to the database tier?

A. Configure the database security group to allow traffic only from the application security group
B. Configure the database security group to allow traffic only from port 3306
C. Configure a Network ACL on the database subnet to deny all traffic to ports other than 3306
D. Configure a Network ACL on the database subnet to allow all traffic from the application subnet

A

A. Configure the database security group to allow traffic only from the application security group

Explanation:
The best option is to configure the database security group to only allow traffic that originates from the application security group. You can also define the destination port as the database port. This setup will allow any instance that is launched with the application security group attached to connect to the database. CORRECT: “Configure the database security group to allow traffic only from the application security group” is the correct answer. INCORRECT: “Configure the database security group to allow traffic only from port 3306” is incorrect. Port 3306 for MySQL should be the destination port, not the source. INCORRECT: “Configure a Network ACL on the database subnet to deny all traffic to ports other than 3306” is incorrect. This does not restrict access specifically to the application instances. INCORRECT: “Configure a Network ACL on the database subnet to allow all traffic from the application subnet” is incorrect. This does not restrict access specifically to the application instances.
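A short boto3 sketch of the rule (both security group IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Allow MySQL traffic to the database SG only when the source is the
# application tier's security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0db00000000000001",          # database security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-0a9900000000000002"}],  # app SG
    }],
)
```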

10
Q

A Solutions Architect is rearchitecting an application with decoupling. The application will send batches of up to 1000 messages per second that must be received in the correct order by the consumers. Which action should the Solutions Architect take?

A. Create an Amazon SQS Standard queue
B. Create an Amazon SNS topic
C. Create an Amazon SQS FIFO queue
D. Create an AWS Step Functions state machine

A

C. Create an Amazon SQS FIFO queue

Explanation:
Only FIFO queues guarantee the ordering of messages, so a standard queue would not work. A FIFO queue supports up to 3,000 messages per second with batching, so this scenario is supported. CORRECT: “Create an Amazon SQS FIFO queue” is the correct answer. INCORRECT: “Create an Amazon SQS Standard queue” is incorrect as it does not guarantee ordering of messages. INCORRECT: “Create an Amazon SNS topic” is incorrect. SNS is a notification service and a message queue is a better fit for this use case. INCORRECT: “Create an AWS Step Functions state machine” is incorrect. Step Functions is a workflow orchestration service and is not useful for this scenario.
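A minimal boto3 sketch (queue name and payload are placeholders); messages that share a MessageGroupId are delivered strictly in order:

```python
import boto3

sqs = boto3.client("sqs")

queue = sqs.create_queue(
    QueueName="order-events.fifo",            # FIFO names must end in .fifo
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",  # or pass MessageDeduplicationId
    },
)

sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody='{"event": "example"}',
    MessageGroupId="stream-1",                # ordering scope
)
```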

11
Q

A Solutions Architect is designing an application that consists of AWS Lambda and Amazon RDS Aurora MySQL. The Lambda function must use database credentials to authenticate to MySQL and security policy mandates that these credentials must not be stored in the function code. How can the Solutions Architect securely store the database credentials and make them available to the function?

A. Store the credentials in AWS Key Management Service and use environment variables in the function code pointing to KMS
B. Store the credentials in Systems Manager Parameter Store and update the function code and execution role
C. Use the AWSAuthenticationPlugin and associate an IAM user account in the MySQL database
D. Create an IAM policy and store the credentials in the policy. Attach the policy to the Lambda function execution role

A

B. Store the credentials in Systems Manager Parameter Store and update the function code and execution role

Explanation:
In this case the scenario requires that credentials are used for authenticating to MySQL. The credentials need to be securely stored outside of the function code. Systems Manager Parameter Store provides secure, hierarchical storage for configuration data management and secrets management. You can easily reference the parameters from services including AWS Lambda. CORRECT: “Store the credentials in Systems Manager Parameter Store and update the function code and execution role” is the correct answer. INCORRECT: “Store the credentials in AWS Key Management Service and use environment variables in the function code pointing to KMS” is incorrect. You cannot store credentials in KMS; it is used for creating and managing encryption keys. INCORRECT: “Use the AWSAuthenticationPlugin and associate an IAM user account in the MySQL database” is incorrect. This is a great way to securely authenticate to RDS using IAM users or roles. However, in this case the scenario requires database credentials to be used by the function. INCORRECT: “Create an IAM policy and store the credentials in the policy. Attach the policy to the Lambda function execution role” is incorrect. You cannot store credentials in IAM policies.
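A sketch of what the Lambda side could look like (the parameter name is hypothetical; the execution role needs ssm:GetParameter, plus kms:Decrypt for SecureString parameters):

```python
import boto3

ssm = boto3.client("ssm")

def handler(event, context):
    # SecureString parameters are decrypted via KMS at read time.
    password = ssm.get_parameter(
        Name="/prod/aurora/password",   # placeholder parameter name
        WithDecryption=True,
    )["Parameter"]["Value"]
    # ... connect to Aurora MySQL using the retrieved credentials ...
```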

12
Q

A company is finalizing its disaster recovery plan. A limited set of core services will be replicated to the DR site ready to seamlessly take over in the event of a disaster. All other services will be switched off. Which DR strategy is the company using?

A. Backup and restore
B. Pilot light
C. Warm standby
D. Multi-site

A

B. Pilot light

Explanation:
In this DR approach, you simply replicate part of your IT infrastructure for a limited set of core services so that the AWS cloud environment seamlessly takes over in the event of a disaster. A small part of your infrastructure is always running, simultaneously syncing mutable data (such as databases or documents), while other parts of your infrastructure are switched off and used only during testing. Unlike a backup and recovery approach, you must ensure that your most critical core elements are already configured and running in AWS (the pilot light). When the time comes for recovery, you can rapidly provision a full-scale production environment around the critical core. CORRECT: “Pilot light” is the correct answer. INCORRECT: “Backup and restore” is incorrect. This is the lowest cost DR approach that simply entails creating online backups of all data and applications. INCORRECT: “Warm standby” is incorrect. The term warm standby is used to describe a DR scenario in which a scaled-down version of a fully functional environment is always running in the cloud. INCORRECT: “Multi-site” is incorrect. A multi-site solution runs on AWS as well as on your existing on-site infrastructure in an active-active configuration.

13
Q

An application that runs a computational fluid dynamics workload uses a tightly-coupled HPC architecture based on the MPI protocol and runs across many nodes. A service-managed deployment is required to minimize operational overhead. Which deployment option is MOST suitable for provisioning and managing the resources required for this use case?

A. Use Amazon EC2 Auto Scaling to deploy instances in multiple subnets
B. Use AWS CloudFormation to deploy a Cluster Placement Group on EC2
C. Use AWS Batch to deploy a multi-node parallel job
D. Use AWS Elastic Beanstalk to provision and manage the EC2 instances

A

C. Use AWS Batch to deploy a multi-node parallel job

Explanation:
AWS Batch Multi-node parallel jobs enable you to run single jobs that span multiple Amazon EC2 instances. With AWS Batch multi-node parallel jobs, you can run large-scale, tightly coupled, high performance computing applications and distributed GPU model training without the need to launch, configure, and manage Amazon EC2 resources directly. An AWS Batch multi-node parallel job is compatible with any framework that supports IP-based, internode communication, such as Apache MXNet, TensorFlow, Caffe2, or Message Passing Interface (MPI). This is the most efficient approach to deploy the resources required and supports the application requirements most effectively. CORRECT: “Use AWS Batch to deploy a multi-node parallel job” is the correct answer. INCORRECT: “Use Amazon EC2 Auto Scaling to deploy instances in multiple subnets” is incorrect. This is not the best solution for a tightly-coupled HPC workload with specific requirements such as MPI support. INCORRECT: “Use AWS CloudFormation to deploy a Cluster Placement Group on EC2” is incorrect. This would deploy a cluster placement group but not manage it. AWS Batch is a better fit for large scale workloads such as this. INCORRECT: “Use AWS Elastic Beanstalk to provision and manage the EC2 instances” is incorrect. You can certainly provision and manage EC2 instances with Elastic Beanstalk but this scenario is for a specific workload that requires MPI support and managing an HPC deployment across a large number of nodes. AWS Batch is more suitable.
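A hedged boto3 sketch of registering a multi-node parallel job definition (the job name, container image, node count, and resource values are all placeholder assumptions):

```python
import boto3

batch = boto3.client("batch")

# An 8-node MPI job; node 0 is the main node that coordinates the run.
batch.register_job_definition(
    jobDefinitionName="cfd-mpi-job",
    type="multinode",
    nodeProperties={
        "numNodes": 8,
        "mainNode": 0,
        "nodeRangeProperties": [{
            "targetNodes": "0:7",
            "container": {
                "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/cfd:latest",
                "resourceRequirements": [
                    {"type": "VCPU", "value": "16"},
                    {"type": "MEMORY", "value": "65536"},
                ],
            },
        }],
    },
)
```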

14
Q

A Solutions Architect is designing an application that will run on an Amazon EC2 instance. The application must asynchronously invoke an AWS Lambda function to analyze thousands of .CSV files. The services should be decoupled. Which service can be used to decouple the compute services?

A. Amazon SWF
B. Amazon SNS
C. Amazon Kinesis
D. Amazon OpsWorks

A

B. Amazon SNS

Explanation:
You can use a Lambda function to process Amazon Simple Notification Service notifications. Amazon SNS supports Lambda functions as a target for messages sent to a topic. This solution decouples the Amazon EC2 application from Lambda and ensures the Lambda function is invoked. CORRECT: “Amazon SNS” is the correct answer. INCORRECT: “Amazon SWF” is incorrect. The Simple Workflow Service (SWF) is used for process automation. It is not well suited to this requirement. INCORRECT: “Amazon Kinesis” is incorrect as this service is used for ingesting and processing real-time streaming data; it is not a suitable service to be used solely for invoking a Lambda function. INCORRECT: “Amazon OpsWorks” is incorrect as this service is used for configuration management of systems using Chef or Puppet.
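A brief boto3 sketch of wiring the subscription (topic name and function ARN are placeholders); the EC2 application then calls sns.publish() to invoke the function asynchronously:

```python
import boto3

sns = boto3.client("sns")
lam = boto3.client("lambda")

topic_arn = sns.create_topic(Name="csv-analysis")["TopicArn"]
function_arn = "arn:aws:lambda:us-east-1:123456789012:function:analyze-csv"

# Allow SNS to invoke the function, then subscribe it to the topic.
lam.add_permission(
    FunctionName=function_arn,
    StatementId="sns-invoke",
    Action="lambda:InvokeFunction",
    Principal="sns.amazonaws.com",
    SourceArn=topic_arn,
)
sns.subscribe(TopicArn=topic_arn, Protocol="lambda", Endpoint=function_arn)
```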

15
Q

A large MongoDB database running on-premises must be migrated to Amazon DynamoDB within the next few weeks. The database is too large to migrate over the company’s limited internet bandwidth so an alternative solution must be used. What should a Solutions Architect recommend?

A. Setup an AWS Direct Connect and migrate the database to Amazon DynamoDB using the AWS Database Migration Service (DMS)
B. Use the Schema Conversion Tool (SCT) to extract and load the data to an AWS Snowball Edge device. Use the AWS Database Migration Service (DMS) to migrate the data to Amazon DynamoDB
C. Enable compression on the MongoDB database and use the AWS Database Migration Service (DMS) to directly migrate the database to Amazon DynamoDB
D. Use the AWS Database Migration Service (DMS) to extract and load the data to an AWS Snowball Edge device. Complete the migration to Amazon DynamoDB using AWS DMS in the AWS Cloud

A

B. Use the Schema Conversion Tool (SCT) to extract and load the data to an AWS Snowball Edge device. Use the AWS Database Migration Service (DMS) to migrate the data to Amazon DynamoDB

Explanation:
Larger data migrations with AWS DMS can include many terabytes of information. This process can be cumbersome due to network bandwidth limits or just the sheer amount of data. AWS DMS can use Snowball Edge and Amazon S3 to migrate large databases more quickly than by other methods. When you’re using an Edge device, the data migration process has the following stages: you use the AWS Schema Conversion Tool (AWS SCT) to extract the data locally and move it to an Edge device; you ship the Edge device or devices back to AWS; after AWS receives your shipment, the Edge device automatically loads its data into an Amazon S3 bucket; AWS DMS takes the files and migrates the data to the target data store. If you are using change data capture (CDC), those updates are written to the Amazon S3 bucket and then applied to the target data store. CORRECT: “Use the Schema Conversion Tool (SCT) to extract and load the data to an AWS Snowball Edge device. Use the AWS Database Migration Service (DMS) to migrate the data to Amazon DynamoDB” is the correct answer. INCORRECT: “Setup an AWS Direct Connect and migrate the database to Amazon DynamoDB using the AWS Database Migration Service (DMS)” is incorrect as Direct Connect connections can take several weeks to implement. INCORRECT: “Enable compression on the MongoDB database and use the AWS Database Migration Service (DMS) to directly migrate the database to Amazon DynamoDB” is incorrect. It’s unlikely that compression is going to make the difference and the company wants to avoid the internet link as stated in the scenario. INCORRECT: “Use the AWS Database Migration Service (DMS) to extract and load the data to an AWS Snowball Edge device. Complete the migration to Amazon DynamoDB using AWS DMS in the AWS Cloud” is incorrect. This is the wrong method; the Solutions Architect should use the SCT to extract and load to Snowball Edge and then AWS DMS in the AWS Cloud.

16
Q

Every time an item in an Amazon DynamoDB table is modified a record must be retained for compliance reasons. What is the most efficient solution to recording this information?

A. Enable Amazon CloudWatch Logs. Configure an AWS Lambda function to monitor the log files and record deleted item data to an Amazon S3 bucket
B. Enable DynamoDB Streams. Configure an AWS Lambda function to poll the stream and record the modified item data to an Amazon S3 bucket
C. Enable Amazon CloudTrail. Configure an Amazon EC2 instance to monitor activity in the CloudTrail log files and record changed items in another DynamoDB table
D. Enable DynamoDB Global Tables. Enable DynamoDB streams on the multi-region table and save the output directly to an Amazon S3 bucket

A

B. Enable DynamoDB Streams. Configure an AWS Lambda function to poll the stream and record the modified item data to an Amazon S3 bucket

Explanation:
Amazon DynamoDB Streams captures a time-ordered sequence of item-level modifications in any DynamoDB table and stores this information in a log for up to 24 hours. Applications can access this log and view the data items as they appeared before and after they were modified, in near-real time. For example, a DynamoDB stream can be consumed by a Lambda function which processes the item data and records the modifications. CORRECT: “Enable DynamoDB Streams. Configure an AWS Lambda function to poll the stream and record the modified item data to an Amazon S3 bucket” is the correct answer. INCORRECT: “Enable Amazon CloudWatch Logs. Configure an AWS Lambda function to monitor the log files and record deleted item data to an Amazon S3 bucket” is incorrect. The deleted item data will not be recorded in CloudWatch Logs. INCORRECT: “Enable Amazon CloudTrail. Configure an Amazon EC2 instance to monitor activity in the CloudTrail log files and record changed items in another DynamoDB table” is incorrect. CloudTrail records API actions so it will not record the data from the item that was modified. INCORRECT: “Enable DynamoDB Global Tables. Enable DynamoDB streams on the multi-region table and save the output directly to an Amazon S3 bucket” is incorrect. Global Tables is used for creating a multi-region, multi-master database. It is of no additional value for this requirement as you could just enable DynamoDB streams on the main table. You also cannot save modified data straight to an S3 bucket.
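A hedged boto3 sketch (table and function names are placeholders); the Lambda function itself would write each stream record to S3:

```python
import boto3

dynamodb = boto3.client("dynamodb")
lam = boto3.client("lambda")

# Capture before/after images of every modified item.
resp = dynamodb.update_table(
    TableName="employment-records",
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
)

# Poll the stream with a Lambda function that archives changes to S3.
lam.create_event_source_mapping(
    EventSourceArn=resp["TableDescription"]["LatestStreamArn"],
    FunctionName="archive-item-changes",
    StartingPosition="LATEST",
    BatchSize=100,
)
```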

17
Q

An application in a private subnet needs to query data in an Amazon DynamoDB table. Use of the DynamoDB public endpoints must be avoided. What is the most EFFICIENT and secure method of enabling access to the table?

A. Create an interface VPC endpoint in the VPC with an Elastic Network Interface (ENI)
B. Create a gateway VPC endpoint and add an entry to the route table
C. Create a private Amazon DynamoDB endpoint and connect to it using an AWS VPN
D. Create a software VPN between DynamoDB and the application in the private subnet

A

B. Create a gateway VPC endpoint and add an entry to the route table

Explanation:
A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network. With a gateway endpoint you configure your route table to point to the endpoint. Amazon S3 and DynamoDB use gateway endpoints. CORRECT: “Create a gateway VPC endpoint and add an entry to the route table” is the correct answer. INCORRECT: “Create an interface VPC endpoint in the VPC with an Elastic Network Interface (ENI)” is incorrect. This would be used for services that are supported by interface endpoints, not gateway endpoints. INCORRECT: “Create a private Amazon DynamoDB endpoint and connect to it using an AWS VPN” is incorrect. You cannot create an Amazon DynamoDB private endpoint and connect to it over VPN. Private endpoints are VPC endpoints and are connected to by instances in subnets via route table entries or via ENIs (depending on which service). INCORRECT: “Create a software VPN between DynamoDB and the application in the private subnet” is incorrect. You cannot create a software VPN between DynamoDB and an application.
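A short boto3 sketch (VPC ID, route table ID, and region in the service name are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# DynamoDB uses a gateway endpoint; a route to the service's prefix
# list is added to the given route tables.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```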

18
Q

A Solutions Architect needs to select a low-cost, short-term option for adding resilience to an AWS Direct Connect connection. What is the MOST cost-effective solution to provide a backup for the Direct Connect connection?

A. Implement a second AWS Direct Connect connection
B. Implement an IPSec VPN connection and use the same BGP prefix
C. Configure AWS Transit Gateway with an IPSec VPN backup
D. Configure an IPSec VPN connection over the Direct Connect link

A

B. Implement an IPSec VPN connection and use the same BGP prefix

Explanation:
This is the most cost-effective solution. With this option both the Direct Connect connection and IPSec VPN are active and being advertised using the Border Gateway Protocol (BGP). The Direct Connect link will always be preferred unless it is unavailable. CORRECT: “Implement an IPSec VPN connection and use the same BGP prefix” is the correct answer. INCORRECT: “Implement a second AWS Direct Connect connection” is incorrect. This is not a short-term or low-cost option as it takes time to implement and is costly. INCORRECT: “Configure AWS Transit Gateway with an IPSec VPN backup” is incorrect. This is a workable solution and provides some advantages. However, you do need to pay for the Transit Gateway so it is not the most cost-effective option and probably not suitable for a short-term need. INCORRECT: “Configure an IPSec VPN connection over the Direct Connect link” is incorrect. This is not a solution to the problem as the VPN connection is going over the Direct Connect link. This is something you might do to add encryption to Direct Connect but it doesn’t make it more resilient.

19
Q

The disk configuration for an Amazon EC2 instance must be finalized. The instance will be running an application that requires heavy read/write IOPS. A single volume is required that is 500 GiB in size and needs to support 20,000 IOPS. What EBS volume type should be selected?

A. EBS General Purpose SSD
B. EBS Provisioned IOPS SSD
C. EBS General Purpose SSD in a RAID 1 configuration
D. EBS Throughput Optimized HDD

A

B. EBS Provisioned IOPS SSD

Explanation:
This is simply about understanding the performance characteristics of the different EBS volume types. The only EBS volume type that supports over 16,000 IOPS per volume is Provisioned IOPS SSD.
SSD, General Purpose (gp2) – Volume size 1 GiB–16 TiB. Max IOPS/volume 16,000.
SSD, Provisioned IOPS (io1) – Volume size 4 GiB–16 TiB. Max IOPS/volume 64,000.
HDD, Throughput Optimized (st1) – Volume size 500 GiB–16 TiB. Throughput measured in MB/s, with the ability to burst up to 250 MB/s per TB, a baseline throughput of 40 MB/s per TB, and a maximum throughput of 500 MB/s per volume.
HDD, Cold (sc1) – Volume size 500 GiB–16 TiB. Lowest cost storage; cannot be a boot volume. These volumes can burst up to 80 MB/s per TB, with a baseline throughput of 12 MB/s per TB and a maximum throughput of 250 MB/s per volume.
HDD, Magnetic (standard) – Cheap, infrequently accessed storage; the lowest cost storage that can be a boot volume.
CORRECT: “EBS Provisioned IOPS SSD” is the correct answer. INCORRECT: “EBS General Purpose SSD” is incorrect as the max IOPS is 16,000. INCORRECT: “EBS General Purpose SSD in a RAID 1 configuration” is incorrect. RAID 1 is mirroring and does not increase the amount of IOPS you can generate. INCORRECT: “EBS Throughput Optimized HDD” is incorrect as this type of disk does not support the IOPS requirement.
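A one-call boto3 sketch of provisioning such a volume (the AZ is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# A single 500 GiB Provisioned IOPS volume delivering 20,000 IOPS.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,
    VolumeType="io1",
    Iops=20000,
)
```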

20
Q

A new application you are designing will store data in an Amazon Aurora MySQL DB. You are looking for a way to enable inter-region disaster recovery capabilities with fast replication and fast failover. Which of the following options is the BEST solution?

A. Use Amazon Aurora Global Database
B. Enable Multi-AZ for the Aurora DB
C. Create an EBS backup of the Aurora volumes and use cross-region replication to copy the snapshot
D. Create a cross-region Aurora Read Replica

A

A. Use Amazon Aurora Global Database

Explanation:
Amazon Aurora Global Database is designed for globally distributed applications, allowing a single Amazon Aurora database to span multiple AWS regions. It replicates your data with no impact on database performance, enables fast local reads with low latency in each region, and provides disaster recovery from region-wide outages. Aurora Global Database uses storage-based replication with typical latency of less than 1 second, using dedicated infrastructure that leaves your database fully available to serve application workloads. In the unlikely event of a regional degradation or outage, one of the secondary regions can be promoted to full read/write capabilities in less than 1 minute. CORRECT: “Use Amazon Aurora Global Database” is the correct answer. INCORRECT: “Enable Multi-AZ for the Aurora DB” is incorrect. Enabling Multi-AZ for the Aurora DB would provide AZ-level resiliency within the region not across regions. INCORRECT: “Create an EBS backup of the Aurora volumes and use cross-region replication to copy the snapshot” is incorrect. Though you can take a DB snapshot and replicate it across regions, it does not provide an automated solution and it would not enable fast failover. INCORRECT: “Create a cross-region Aurora Read Replica” is incorrect. This solution would not provide the fast storage replication and fast failover capabilities of the Aurora Global Database and is therefore not the best option.
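A hedged boto3 sketch of creating the global cluster and its primary regional cluster (identifiers and credentials are placeholders); a secondary cluster created in another region with the same GlobalClusterIdentifier then receives storage-level replication:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create the global cluster, then attach the primary regional cluster.
rds.create_global_cluster(
    GlobalClusterIdentifier="app-global",
    Engine="aurora-mysql",
)
rds.create_db_cluster(
    DBClusterIdentifier="app-primary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="app-global",
    MasterUsername="admin",
    MasterUserPassword="change-me-please",
)
```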

21
Q

A Solutions Architect regularly launches EC2 instances manually from the console and wants to streamline the process to reduce administrative overhead. Which feature of EC2 enables storing of settings such as AMI ID, instance type, key pairs and Security Groups?

A. Placement Groups
B. Launch Templates
C. Run Command
D. Launch Configurations

A

B. Launch Templates

Explanation:
Launch templates enable you to store launch parameters so that you do not have to specify them every time you launch an instance. When you launch an instance using the Amazon EC2 console, an AWS SDK, or a command line tool, you can specify the launch template to use. CORRECT: “Launch Templates” is the correct answer. INCORRECT: “Placement Groups” is incorrect. You can launch or start instances in a placement group, which determines how instances are placed on underlying hardware. INCORRECT: “Run Command” is incorrect. Run Command automates common administrative tasks, and lets you perform ad hoc configuration changes at scale. INCORRECT: “Launch Configurations” is incorrect. Launch Configurations are used with Auto Scaling Groups.
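A boto3 sketch of the idea (AMI, key pair, and security group IDs are placeholders); after the template exists, launching needs only a template reference:

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_launch_template(
    LaunchTemplateName="web-server",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",
        "InstanceType": "t3.medium",
        "KeyName": "my-key-pair",
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)

# Subsequent launches reuse all of the stored settings.
ec2.run_instances(
    LaunchTemplate={"LaunchTemplateName": "web-server", "Version": "$Latest"},
    MinCount=1,
    MaxCount=1,
)
```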

22
Q

You recently noticed that your Network Load Balancer (NLB) in one of your VPCs is not distributing traffic evenly between EC2 instances in your AZs. There are an odd number of EC2 instances spread across two AZs. The NLB is configured with a TCP listener on port 80 and is using active health checks. What is the most likely problem?

A. There is no HTTP listener
B. Health checks are failing in one AZ due to latency
C. NLB can only load balance within a single AZ
D. Cross-zone load balancing is disabled

A

D. Cross-zone load balancing is disabled

Explanation:
Without cross-zone load balancing enabled, the NLB will distribute traffic 50/50 between AZs. As there are an odd number of instances across the two AZs some instances will not receive any traffic. Therefore, enabling cross-zone load balancing will ensure traffic is distributed evenly between available instances in all AZs. CORRECT: “Cross-zone load balancing is disabled” is the correct answer. INCORRECT: “There is no HTTP listener” is incorrect. Listeners are used to receive incoming connections. An NLB listens on TCP not on HTTP therefore having no HTTP listener is not the issue here. INCORRECT: “Health checks are failing in one AZ due to latency” is incorrect. If health checks fail this will cause the NLB to stop sending traffic to these instances. However, the health check packets are very small and it is unlikely that latency would be the issue within a region. INCORRECT: “NLB can only load balance within a single AZ” is incorrect. An NLB can load balance across multiple AZs just like the other ELB types.
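A one-call boto3 sketch of enabling the attribute (the load balancer ARN is a placeholder); cross-zone load balancing is off by default for NLBs:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Spread traffic across all registered targets in every AZ.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "loadbalancer/net/my-nlb/0123456789abcdef"
    ),
    Attributes=[{"Key": "load_balancing.cross_zone.enabled", "Value": "true"}],
)
```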

23
Q

A Solutions Architect is creating a design for a multi-tiered serverless application. Which two services form the application-facing services of the AWS serverless infrastructure? (Select TWO.)

A. API Gateway
B. Amazon Cognito
C. AWS Lambda
D. Amazon ECS
E. Elastic Load Balancer

A

A. API Gateway
C. AWS Lambda

Explanation:
The only application-facing services here are API Gateway and Lambda, and these are considered to be serverless services. CORRECT: “API Gateway” is a correct answer. CORRECT: “AWS Lambda” is also a correct answer. INCORRECT: “Amazon Cognito” is incorrect. Amazon Cognito is used for providing authentication services for web and mobile apps. INCORRECT: “Amazon ECS” is incorrect. ECS provides the platform for running containers and uses Amazon EC2 instances. INCORRECT: “Elastic Load Balancer” is incorrect. ELB provides distribution of incoming network connections and also uses Amazon EC2 instances.

24
Q

A Solutions Architect attempted to restart a stopped EC2 instance and it immediately changed from a pending state to a terminated state. What are the most likely explanations? (Select TWO.)

A. You’ve reached your EBS volume limit
B. An EBS snapshot is corrupt
C. AWS does not currently have enough available On-Demand capacity to service your request
D. You have reached the limit on the number of instances that you can launch in a region
E. The AMI is unsupported

A

A. You’ve reached your EBS volume limit
B. An EBS snapshot is corrupt

Explanation:
The following are a few reasons why an instance might immediately terminate: – You’ve reached your EBS volume limit. – An EBS snapshot is corrupt. – The root EBS volume is encrypted and you do not have permissions to access the KMS key for decryption. – The instance store-backed AMI that you used to launch the instance is missing a required part (an image.part.xx file). CORRECT: “You’ve reached your EBS volume limit” is a correct answer. CORRECT: “An EBS snapshot is corrupt” is also a correct answer. INCORRECT: “AWS does not currently have enough available On-Demand capacity to service your request” is incorrect. If AWS does not have capacity available, an InsufficientInstanceCapacity error will be generated when you try to launch a new instance or restart a stopped instance. INCORRECT: “You have reached the limit on the number of instances that you can launch in a region” is incorrect. If you’ve reached the limit on the number of instances you can launch in a region, you get an InstanceLimitExceeded error when you try to launch a new instance or restart a stopped instance. INCORRECT: “The AMI is unsupported” is incorrect. It is possible that an instance type is not supported by an AMI and this can cause an “UnsupportedOperation” client error. However, in this case the instance was previously running (it is in a stopped state) so it is unlikely that this is the issue.
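As a hedged diagnostic sketch, the termination reason can be read from the instance's StateReason field with boto3 (the instance ID is a placeholder; the exact reason code shown in the comment is an example):

```python
import boto3

ec2 = boto3.client("ec2")

# StateReason reports why the instance terminated, e.g. a code such as
# Client.VolumeLimitExceeded when the EBS volume limit was reached.
instance = ec2.describe_instances(
    InstanceIds=["i-0123456789abcdef0"]
)["Reservations"][0]["Instances"][0]

print(instance["StateReason"]["Code"], instance["StateReason"]["Message"])
```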

25
Q

One of the applications you manage on RDS uses the MySQL DB engine and has been suffering from performance issues. You would like to set up a reporting process that will perform queries on the database, but you’re concerned that the extra load will further impact the performance of the DB and may lead to a poor customer experience. What would be the best course of action to take so you can implement the reporting process?

A. Configure Multi-AZ to setup a secondary database instance in another region
B. Deploy a Read Replica to setup a secondary read-only database instance
C. Deploy a Read Replica to setup a secondary read and write database instance
D. Configure Multi-AZ to setup a secondary database instance in another Availability Zone

A

B. Deploy a Read Replica to setup a secondary read-only database instance

Explanation:
The reporting process will perform queries on the database but not writes. Therefore you can use a read replica which will provide a secondary read-only database and configure the reporting process to use the read replica. Multi-AZ is used for implementing fault tolerance. With Multi-AZ you can fail over to a DB in another AZ within the region in the event of a failure of the primary DB. However, you can only read and write to the primary DB, so you still need a read replica to offload the reporting job. CORRECT: “Deploy a Read Replica to setup a secondary read-only database instance” is the correct answer. INCORRECT: “Configure Multi-AZ to setup a secondary database instance in another region” is incorrect as described above. INCORRECT: “Deploy a Read Replica to setup a secondary read and write database instance” is incorrect. Read replicas are for workload offloading only and do not provide the ability to write to the database. INCORRECT: “Configure Multi-AZ to setup a secondary database instance in another Availability Zone” is incorrect as described above.
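A one-call boto3 sketch (instance identifiers and class are placeholders); the reporting tool then connects to the replica's endpoint instead of the primary:

```python
import boto3

rds = boto3.client("rds")

# The replica serves read-only reporting queries, leaving the primary
# free to handle customer traffic.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-reporting",
    SourceDBInstanceIdentifier="app-db",
    DBInstanceClass="db.r5.large",
)
```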

26
Q

A Solutions Architect is building a new Amazon Elastic Container Service (ECS) cluster. The ECS instances are running the EC2 launch type and load balancing is required to distribute connections to the tasks. It is required that the mapping of ports is performed dynamically and connections are routed to different groups of servers based on the path in the URL. Which AWS service should the Solutions Architect choose to fulfil these requirements?

A. An Amazon ECS Service
B. Application Load Balancer
C. Network Load Balancer
D. Classic Load Balancer

A

B. Application Load Balancer

Explanation:
An ALB allows containers to use dynamic host port mapping so that multiple tasks from the same service are allowed on the same container host. An ALB can also route requests based on the content of the request in the host field: host-based or path-based routing. The NLB and CLB types of Elastic Load Balancer do not support path-based routing or host-based routing so they cannot be used for this use case. CORRECT: “Application Load Balancer” is the correct answer. INCORRECT: “An Amazon ECS Service” is incorrect. An Amazon ECS service enables you to run and maintain a specified number of instances of a task definition simultaneously in an Amazon ECS cluster. It does not distribute connections to tasks. INCORRECT: “Network Load Balancer” is incorrect as described above. INCORRECT: “Classic Load Balancer” is incorrect as described above.
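A hedged boto3 sketch of a path-based rule (listener and target group ARNs, path, and priority are placeholders); ECS registers tasks into the target group with dynamically mapped host ports:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Route /api/* requests to a dedicated target group of ECS tasks.
elbv2.create_rule(
    ListenerArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "listener/app/ecs-alb/0123456789abcdef/0123456789abcdef"
    ),
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": (
            "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
            "targetgroup/api-tasks/0123456789abcdef"
        ),
    }],
)
```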

27
Q

A Solutions Architect needs to connect from an office location to a Linux instance that is running in a public subnet in an Amazon VPC using the Internet. Which of the following items are required to enable this access? (Select TWO.)

A. A bastion host
B. A NAT Gateway
C. A Public or Elastic IP address on the EC2 instance
D. An Internet Gateway attached to the VPC and route table attached to the public subnet pointing to it
E. An IPSec VPN


A

C. A Public or Elastic IP address on the EC2 instance
D. An Internet Gateway attached to the VPC and route table attached to the public subnet pointing to it

Explanation:
A public subnet is a subnet that has an Internet Gateway attached and “Enable auto-assign public IPv4 address” enabled. Instances require a public IP or Elastic IP address. It is also necessary to have the subnet route table updated to point to the Internet Gateway and security groups and network ACLs must be configured to allow the SSH traffic on port 22. CORRECT: “A Public or Elastic IP address on the EC2 instance” is a correct answer. CORRECT: “An Internet Gateway attached to the VPC and route table attached to the public subnet pointing to it” is also a correct answer. INCORRECT: “A bastion host” is incorrect. A bastion host can be used to access instances in private subnets but is not required for instances in public subnets. INCORRECT: “A NAT Gateway” is incorrect. A NAT Gateway allows instances in private subnets to access the Internet, it is not used for remote access. INCORRECT: “An IPSec VPN” is incorrect. An IPSec VPN is not required to connect to an instance in a public subnet.
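A hedged boto3 sketch covering both required items, assuming the VPC and route table already exist (all resource IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Attach an Internet Gateway and route 0.0.0.0/0 to it.
igw = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw, VpcId="vpc-0123456789abcdef0")
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=igw,
)

# Give the instance a stable public (Elastic) IP address.
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",
    AllocationId=eip["AllocationId"],
)
```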

28
Q

An Auto Scaling Group is unable to respond quickly enough to load changes resulting in lost messages from another application tier. The messages are typically around 128KB in size. What is the best design option to prevent the messages from being lost?

A. Store the messages on Amazon S3
B. Launch an Elastic Load Balancer
C. Store the messages on an SQS queue
D. Use larger EC2 instance sizes

A

C. Store the messages on an SQS queue

Explanation:
In this circumstance the ASG cannot launch EC2 instances fast enough. You need to be able to store the messages somewhere so they don’t get lost whilst the EC2 instances are launched. This is a classic use case for decoupling and SQS is designed for exactly this purpose. Amazon Simple Queue Service (Amazon SQS) is a web service that gives you access to message queues that store messages waiting to be processed. SQS offers a reliable, highly-scalable, hosted queue for storing messages in transit between computers. An SQS queue can be used to create distributed/decoupled applications. CORRECT: “Store the messages on an SQS queue” is the correct answer. INCORRECT: “Store the messages on Amazon S3” is incorrect. Storing the messages on S3 is potentially feasible but SQS is the preferred solution as it is designed for decoupling. If the messages are over 256KB and therefore cannot be stored in SQS, you may want to consider using S3 and it can be used in combination with SQS by using the Amazon SQS Extended Client Library for Java. INCORRECT: “Launch an Elastic Load Balancer” is incorrect. An ELB can help to distribute incoming connections to the back-end EC2 instances however if the ASG is not scaling fast enough then there aren’t enough resources for the ELB to distribute traffic to. INCORRECT: “Use larger EC2 instance sizes” is incorrect. Scaling horizontally and decoupling will have a greater effect over using larger instance sizes.
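A brief boto3 sketch of the buffering pattern (queue name and payload are placeholders); consumers delete messages only after processing, so nothing is lost while the ASG scales out:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="app-tier-buffer")["QueueUrl"]

# Producers write to the queue instead of calling instances directly;
# a 128 KB payload is well under the 256 KB SQS limit.
sqs.send_message(QueueUrl=queue_url, MessageBody="<128KB payload>")

# Consumers long-poll, then delete each message once it is processed.
messages = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
).get("Messages", [])
for msg in messages:
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```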

29
Q

A Solutions Architect needs to run a production batch process quickly that will use several EC2 instances. The process cannot be interrupted and must be completed within a short time period. What is likely to be the MOST cost-effective choice of EC2 instance type to use for this requirement?

A. Reserved instances
B. Spot instances
C. Flexible instances
D. On-demand instances

A

D. On-demand instances

Explanation:
The key requirements here are that you need to deploy several EC2 instances quickly to run the batch process and you must ensure that the job completes. The on-demand pricing model is the best for this ad-hoc requirement because, although spot pricing may be cheaper, you cannot afford to risk the instances being terminated by AWS when the Spot price increases. CORRECT: “On-demand instances” is the correct answer. INCORRECT: “Reserved instances” is incorrect. Reserved instances are used for longer, more stable requirements where you can get a discount for a fixed 1 or 3 year term. This pricing model is not good for temporary requirements. INCORRECT: “Spot instances” is incorrect. Spot instances provide a very low hourly compute cost and are good when you have flexible start and end times. They are often used for use cases such as grid computing and high-performance computing (HPC). INCORRECT: “Flexible instances” is incorrect. There is no such thing as a “flexible instance”.

30
Q

A Solutions Architect would like to implement a method of automating the creation, retention, and deletion of backups for the Amazon EBS volumes in an Amazon VPC. What is the easiest way to automate these tasks using AWS tools?

A. Configure EBS volume replication to create a backup on Amazon S3
B. Use the EBS Data Lifecycle Manager (DLM) to manage snapshots of the volumes
C. Create a scheduled job and run the AWS CLI command “create-backup” to take backups of the EBS volumes
D. Create a scheduled job and run the AWS CLI command “create-snapshot” to take backups of the EBS volumes

A

B. Use the EBS Data Lifecycle Manager (DLM) to manage snapshots of the volumes

Explanation:
You back up EBS volumes by taking snapshots. This can be automated via the AWS CLI command “create-snapshot”. However, the question is asking for a way to automate not just the creation of the snapshot but the retention and deletion too. The EBS Data Lifecycle Manager (DLM) can automate all of these actions for you and this can be performed centrally from within the management console. CORRECT: “Use the EBS Data Lifecycle Manager (DLM) to manage snapshots of the volumes” is the correct answer. INCORRECT: “Configure EBS volume replication to create a backup on S3” is incorrect. You cannot configure volume replication for EBS volumes using AWS tools. INCORRECT: “Create a scheduled job and run the AWS CLI command “create-backup” to take backups of the EBS volumes” is incorrect. This is the wrong command (use create-snapshot) and is not the easiest method. INCORRECT: “Create a scheduled job and run the AWS CLI command “create-snapshot” to take backups of the EBS volumes” is incorrect. This is not the easiest method, DLM would be a much better solution.
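A hedged boto3 sketch of a DLM policy (role ARN, tag, schedule, and retention values are placeholder assumptions):

```python
import boto3

dlm = boto3.client("dlm")

# Snapshot all volumes tagged Backup=true daily at 03:00 UTC and keep
# the most recent 7 snapshots.
dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
    Description="Daily EBS snapshots with 7-day retention",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "true"}],
        "Schedules": [{
            "Name": "daily",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            "RetainRule": {"Count": 7},
        }],
    },
)
```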

31
Q

A mobile app uploads usage information to a database. Amazon Cognito is being used for authentication, authorization and user management and users sign-in with Facebook IDs. In order to securely store data in DynamoDB, the design should use temporary AWS credentials. What feature of Amazon Cognito is used to obtain temporary credentials to access AWS services?

A. User Pools
B. Identity Pools
C. Key Pairs
D. SAML Identity Providers

A

B. Identity Pools

Explanation:
Amazon Cognito identity pools provide temporary AWS credentials for users who are guests (unauthenticated) and for users who have been authenticated and received a token. An identity pool is a store of user identity data specific to your account. With an identity pool, users can obtain temporary AWS credentials to access AWS services, such as Amazon S3 and DynamoDB. CORRECT: “Identity Pools” is the correct answer. INCORRECT: “User Pools” is incorrect. A user pool is a user directory in Amazon Cognito. With a user pool, users can sign in to web or mobile apps through Amazon Cognito, or federate through a third-party identity provider (IdP). INCORRECT: “Key Pairs” is incorrect. Key pairs are used in Amazon EC2 for access to instances. INCORRECT: “SAML Identity Providers” is incorrect. SAML Identity Providers are supported IDPs for identity pools but cannot be used for gaining temporary credentials for AWS services.
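A hedged boto3 sketch of the credential exchange (the identity pool ID and Facebook token are placeholders); the returned keys are scoped by the identity pool's authenticated IAM role, e.g. to specific DynamoDB actions:

```python
import boto3

identity = boto3.client("cognito-identity")

# Exchange a Facebook login token for temporary AWS credentials.
identity_id = identity.get_id(
    IdentityPoolId="us-east-1:11112222-3333-4444-5555-666677778888",
    Logins={"graph.facebook.com": "<facebook-access-token>"},
)["IdentityId"]

creds = identity.get_credentials_for_identity(
    IdentityId=identity_id,
    Logins={"graph.facebook.com": "<facebook-access-token>"},
)["Credentials"]
# creds contains AccessKeyId, SecretKey, and SessionToken.
```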

32
Q

A website uses web servers behind an Internet-facing Elastic Load Balancer. What record set should be created to point the customer’s DNS zone apex record at the ELB?

A. Create a PTR record pointing to the DNS name of the load balancer
B. Create an A record pointing to the DNS name of the load balancer
C. Create a CNAME record that is an Alias, and select the ELB DNS as a target
D. Create an A record that is an Alias, and select the ELB DNS as a target

A

D. Create an A record that is an Alias, and select the ELB DNS as a target

Explanation:
An Alias record can be used for resolving apex or naked domain names (e.g. example.com). You can create an A record that is an Alias that uses the customer’s website zone apex domain name and map it to the ELB DNS name. CORRECT: “Create an A record that is an Alias, and select the ELB DNS as a target” is the correct answer. INCORRECT: “Create a PTR record pointing to the DNS name of the load balancer” is incorrect. PTR records are reverse lookup records where you use the IP to find the DNS name. INCORRECT: “Create an A record pointing to the DNS name of the load balancer” is incorrect. A standard A record maps the DNS domain name to the IP address of a resource. You cannot obtain the IP of the ELB so you must use an Alias record which maps the DNS domain name of the customer’s website to the ELB DNS name (rather than its IP). INCORRECT: “Create a CNAME record that is an Alias, and select the ELB DNS as a target” is incorrect. A CNAME record can’t be used for resolving apex or naked domain names.
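A hedged boto3 sketch of the apex alias record (zone IDs and DNS names are placeholders; the ELB's DNS name and hosted zone ID come from elbv2 describe_load_balancers):

```python
import boto3

route53 = boto3.client("route53")

# Alias A record at the zone apex pointing at the ELB.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",       # customer's hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z35SXDOTRQ7X7K",  # ELB's own zone ID
                    "DNSName": "my-alb-123456789.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```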

33
Q

A Solutions Architect has been assigned the task of moving some sensitive documents into the AWS cloud. The security of the documents must be maintained. Which AWS features can help ensure that the sensitive documents cannot be read even if they are compromised? (Select TWO.)

A. AWS IAM Access Policy
B. Amazon S3 Server-Side Encryption
C. Amazon EBS snapshots
D. Amazon S3 cross region replication
E. Amazon EBS encryption with Customer Managed Keys

A

B. Amazon S3 Server-Side Encryption
E. Amazon EBS encryption with Customer Managed Keys

Explanation:
It is not specified what types of documents are being moved into the cloud or what services they will be placed on. Therefore we can assume that the options include S3 and EBS. To prevent the documents from being read if they are compromised we need to encrypt them. Both of these services provide native encryption functionality to ensure security of the sensitive documents. With EBS you can use KMS-managed or customer-managed encryption keys. With S3 you can use client-side or server-side encryption. CORRECT: “Amazon S3 Server-Side Encryption” is a correct answer. CORRECT: “Amazon EBS encryption with Customer Managed Keys” is also a correct answer. INCORRECT: “AWS IAM Access Policy” is incorrect. IAM access policies can be used to control access but if the documents are somehow compromised they will not stop the documents from being read. For this we need encryption, and IAM access policies are not used for controlling encryption. INCORRECT: “Amazon EBS snapshots” is incorrect. EBS snapshots are used for creating a point-in-time backup of data. They do maintain the encryption status of the data from the EBS volume but are not used for actually encrypting the data in the first place. INCORRECT: “Amazon S3 cross region replication” is incorrect. S3 cross-region replication can be used for fault tolerance but does not apply any additional security to the data.
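As an illustration of the EBS side, a minimal boto3 sketch that creates a volume encrypted with a customer managed key (the key ARN and AZ are hypothetical; omitting KmsKeyId would use the default aws/ebs key):

import boto3

ec2 = boto3.client("ec2")

ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,              # GiB
    VolumeType="gp2",
    Encrypted=True,
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab",  # hypothetical CMK
)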

34
Q

A membership website has become quite popular and is gaining members quickly. The website currently runs on Amazon EC2 instances with one web server instance and one database instance running MySQL. A Solutions Architect is concerned about the lack of high-availability in the current architecture. What can the Solutions Architect do to easily enable high availability without making major changes to the architecture?

A. Create a Read Replica in another availability zone
B. Enable Multi-AZ for the MySQL instance
C. Install MySQL on an EC2 instance in the same availability zone and enable replication
D. Install MySQL on an EC2 instance in another availability zone and enable replication

A

D. Install MySQL on an EC2 instance in another availability zone and enable replication

Explanation:
If you install MySQL on an EC2 instance you cannot enable read replicas or Multi-AZ; you would need to use Amazon RDS with a MySQL DB engine to use those features. In this example a good solution is to use the native HA features of MySQL. You would want to place the second MySQL DB instance in another AZ to enable high availability and fault tolerance. Migrating to Amazon RDS may be a good solution but is not presented as an option. CORRECT: “Install MySQL on an EC2 instance in another availability zone and enable replication” is the correct answer. INCORRECT: “Create a Read Replica in another availability zone” is incorrect as described above. INCORRECT: “Enable Multi-AZ for the MySQL instance” is incorrect as described above. INCORRECT: “Install MySQL on an EC2 instance in the same availability zone and enable replication” is incorrect as described above.

35
Q

A Solutions Architect has set up a VPC with a public subnet and a VPN-only subnet. The public subnet is associated with a custom route table that has a route to an Internet Gateway. The VPN-only subnet is associated with the main route table and has a route to a virtual private gateway. The Architect has created a new subnet in the VPC and launched an EC2 instance in it. However, the instance cannot connect to the Internet. What is the MOST likely reason?

A. The subnet has been automatically associated with the main route table which does not have a route to the Internet
B. The new subnet has not been associated with a route table
C. The Internet Gateway is experiencing connectivity problems
D. There is no NAT Gateway available in the new subnet so Internet connectivity is not possible

A

A. The subnet has been automatically associated with the main route table which does not have a route to the Internet

Explanation:
When you create a new subnet, it is automatically associated with the main route table. Therefore, the EC2 instance will not have a route to the Internet. The Architect should associate the new subnet with the custom route table. CORRECT: “The subnet has been automatically associated with the main route table which does not have a route to the Internet” is the correct answer. INCORRECT: “The new subnet has not been associated with a route table” is incorrect. Subnets are always associated with a route table when created. INCORRECT: “The Internet Gateway is experiencing connectivity problems” is incorrect. Internet Gateways are highly available so it’s unlikely that IGW connectivity is the issue. INCORRECT: “There is no NAT Gateway available in the new subnet so Internet connectivity is not possible” is incorrect. NAT Gateways are used for connecting EC2 instances in private subnets to the Internet. This is a valid reason for a private subnet to lack connectivity; however, in this case the Architect is attempting to use an Internet Gateway.
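For illustration, a minimal boto3 sketch of the fix (both IDs are hypothetical):

import boto3

ec2 = boto3.client("ec2")

# Explicitly associate the new subnet with the custom route table that routes to the IGW
ec2.associate_route_table(
    RouteTableId="rtb-0123456789abcdef0",   # custom table with the 0.0.0.0/0 -> IGW route
    SubnetId="subnet-0123456789abcdef0",    # the newly created subnet
)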

36
Q

A customer has a public-facing web application hosted on a single Amazon Elastic Compute Cloud (EC2) instance serving videos directly from an Amazon S3 bucket. Which of the following will restrict third parties from directly accessing the video assets in the bucket?

A. Launch the website Amazon EC2 instance using an IAM role that is authorized to access the videos
B. Restrict access to the bucket to the public CIDR range of the company locations
C. Use a bucket policy to only allow referrals from the main website URL
D. Use a bucket policy to only allow the public IP address of the Amazon EC2 instance hosting the customer website

A

C. Use a bucket policy to only allow referrals from the main website URL

Explanation:
To allow read access to the S3 video assets from the public-facing web application, you can add a bucket policy that allows s3:GetObject permission with a condition, using the aws:Referer key, that the GET request must originate from specific webpages. This is a good answer as it fully satisfies the objective of ensuring that the EC2 instance can access the videos while direct access to the videos from other sources is prevented. CORRECT: “Use a bucket policy to only allow referrals from the main website URL” is the correct answer. INCORRECT: “Launch the website Amazon EC2 instance using an IAM role that is authorized to access the videos” is incorrect. Launching the EC2 instance with an IAM role that is authorized to access the videos is only half a solution as you would also need to create a bucket policy that specifies that the IAM role is granted access. INCORRECT: “Restrict access to the bucket to the public CIDR range of the company locations” is incorrect. Restricting access to the bucket to the public CIDR range of the company locations will stop third parties from accessing the bucket; however, it will also stop the EC2 instance from accessing the bucket, and the question states that the EC2 instance is serving the files directly. INCORRECT: “Use a bucket policy to only allow the public IP address of the Amazon EC2 instance hosting the customer website” is incorrect. You can use condition statements in a bucket policy to restrict access via IP address. However, using the referer condition in a bucket policy is preferable as it is a best practice to use DNS names / URLs instead of hard-coding IPs whenever possible.
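For illustration, a minimal boto3 sketch of such a referer-based bucket policy (the bucket name and website URL are hypothetical):

import boto3, json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowGetFromMainSite",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-video-bucket/*",
        # Only requests whose Referer header matches the main website are allowed
        "Condition": {"StringLike": {"aws:Referer": "https://www.example.com/*"}},
    }],
}

boto3.client("s3").put_bucket_policy(
    Bucket="example-video-bucket", Policy=json.dumps(policy)
)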

37
Q

A Solutions Architect is creating an AWS CloudFormation template that will provision a new EC2 instance and new EBS volume. What must be specified to associate the block store with the instance?

A. Both the EC2 physical ID and the EBS physical ID
B. The EC2 physical ID
C. Both the EC2 logical ID and the EBS logical ID
D. The EC2 logical ID

A

C. Both the EC2 logical ID and the EBS logical ID

Explanation:
The logical ID is used to reference the resource in other parts of the template. For example, if you want to map an Amazon Elastic Block Store volume to an Amazon EC2 instance, you reference the logical IDs to associate the block stores with the instance. In addition to the logical ID, certain resources also have a physical ID, which is the actual assigned name for that resource, such as an EC2 instance ID or an S3 bucket name. Use physical IDs to identify resources outside of AWS CloudFormation templates, but only after the resources have been created. In short: logical IDs reference resources within the template, and physical IDs identify resources outside of AWS CloudFormation templates after they have been created. CORRECT: “Both the EC2 logical ID and the EBS logical ID” is the correct answer. INCORRECT: “Both the EC2 physical ID and the EBS physical ID” is incorrect as logical IDs can be used within the template. INCORRECT: “The EC2 physical ID” is incorrect as logical IDs can be used. INCORRECT: “The EC2 logical ID” is incorrect as the EBS logical ID should also be specified.
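For illustration, a minimal template sketch built as a Python dictionary (the AMI ID, sizes and names are hypothetical); the VolumeAttachment resource references both resources by their logical IDs:

import boto3, json

template = {
    "Resources": {
        "WebServer": {                               # logical ID of the EC2 instance
            "Type": "AWS::EC2::Instance",
            "Properties": {"ImageId": "ami-0123456789abcdef0", "InstanceType": "t3.micro"},
        },
        "DataVolume": {                              # logical ID of the EBS volume
            "Type": "AWS::EC2::Volume",
            "Properties": {
                "Size": 100,
                # Place the volume in the same AZ as the instance
                "AvailabilityZone": {"Fn::GetAtt": ["WebServer", "AvailabilityZone"]},
            },
        },
        "VolumeAttachment": {
            "Type": "AWS::EC2::VolumeAttachment",
            "Properties": {
                "InstanceId": {"Ref": "WebServer"},  # reference by logical ID
                "VolumeId": {"Ref": "DataVolume"},   # reference by logical ID
                "Device": "/dev/sdf",
            },
        },
    },
}

boto3.client("cloudformation").create_stack(
    StackName="web-with-volume", TemplateBody=json.dumps(template)
)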

38
Q

An application stores encrypted data in Amazon S3 buckets. A Solutions Architect needs to be able to query the encrypted data using SQL queries and write the encrypted results back to the S3 bucket. As the data is sensitive, fine-grained control must be implemented over access to the S3 bucket. What combination of services represents the BEST options to support these requirements? (Select TWO.)

A. Use AWS Glue to extract the data, analyze it, and load it back to the S3 bucket
B. Use bucket ACLs to restrict access to the bucket
C. Use IAM policies to restrict access to the bucket
D. Use Athena for querying the data and writing the results back to the bucket
E. Use the AWS KMS API to query the encrypted data, and the S3 API for writing the results

A

C. Use IAM policies to restrict access to the bucket
D. Use Athena for querying the data and writing the results back to the bucket

Explanation:
Athena allows you to easily query encrypted data stored in Amazon S3 and write encrypted results back to your S3 bucket. Both server-side encryption and client-side encryption are supported. AWS IAM policies can be used to grant IAM users fine-grained control over Amazon S3 buckets. CORRECT: “Use IAM policies to restrict access to the bucket” is a correct answer. CORRECT: “Use Athena for querying the data and writing the results back to the bucket” is also a correct answer. INCORRECT: “Use AWS Glue to extract the data, analyze it, and load it back to the S3 bucket” is incorrect. AWS Glue is an ETL service and is not used for querying and analyzing data in S3. INCORRECT: “Use bucket ACLs to restrict access to the bucket” is incorrect. With IAM policies, you can grant IAM users fine-grained control over your S3 buckets, which is preferable to using bucket ACLs. INCORRECT: “Use the AWS KMS API to query the encrypted data, and the S3 API for writing the results” is incorrect. The AWS KMS API can be used for encryption purposes; however, it cannot perform analytics so is not suitable.
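For illustration, a minimal boto3 sketch of an Athena query whose results are encrypted when written back to S3 (the database, table and bucket names are hypothetical):

import boto3

athena = boto3.client("athena")

athena.start_query_execution(
    QueryString="SELECT id, total FROM orders WHERE year = 2020",
    QueryExecutionContext={"Database": "sales_db"},                 # hypothetical database
    ResultConfiguration={
        "OutputLocation": "s3://example-query-results/",            # hypothetical bucket
        "EncryptionConfiguration": {"EncryptionOption": "SSE_S3"},  # encrypt the results
    },
)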

39
Q

A Solutions Architect works for a systems integrator running a platform that stores medical records. The government security policy mandates that patient data that contains personally identifiable information (PII) must be encrypted at all times, both at rest and in transit. Amazon S3 is used to back up data into the AWS cloud. How can the Solutions Architect ensure the medical records are properly secured? (Select TWO.)

A. Before uploading the data to S3 over HTTPS, encrypt the data locally using your own encryption keys
B. Enable Server Side Encryption with S3 managed keys on an S3 bucket using AES-128
C. Attach an encrypted EBS volume to an EC2 instance
D. Enable Server Side Encryption with S3 managed keys on an S3 bucket using AES-256
E. Upload the data using CloudFront with an EC2 origin

A

A. Before uploading the data to S3 over HTTPS, encrypt the data locally using your own encryption keys
D. Enable Server Side Encryption with S3 managed keys on an S3 bucket using AES-256

Explanation:
When data is stored in an encrypted state it is referred to as encrypted “at rest” and when it is encrypted as it is being transferred over a network it is referred to as encrypted “in transit”. You can securely upload/download your data to Amazon S3 via SSL endpoints using the HTTPS protocol (in transit – SSL/TLS). You have the option of encrypting the data locally before it is uploaded, or uploading using SSL/TLS so it is secure in transit and encrypting on the Amazon S3 side using S3 managed keys. The S3 managed keys will be AES-256 (not AES-128) keys. Uploading data using CloudFront with an EC2 origin, or using an encrypted EBS volume attached to an EC2 instance, is not a solution to this problem as the company wants to back up these records onto S3 (not EC2/EBS). CORRECT: “Before uploading the data to S3 over HTTPS, encrypt the data locally using your own encryption keys” is a correct answer. CORRECT: “Enable Server Side Encryption with S3 managed keys on an S3 bucket using AES-256” is also a correct answer. INCORRECT: “Enable Server Side Encryption with S3 managed keys on an S3 bucket using AES-128” is incorrect as AES-256 should be used. INCORRECT: “Attach an encrypted EBS volume to an EC2 instance” is incorrect as explained above. INCORRECT: “Upload the data using CloudFront with an EC2 origin” is incorrect as explained above.
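For illustration, a minimal boto3 sketch of an upload that is encrypted in transit (HTTPS) and at rest (SSE with S3 managed AES-256 keys); the bucket and key names are hypothetical:

import boto3

s3 = boto3.client("s3")   # boto3 uses HTTPS endpoints by default (in transit)

with open("patient-records.zip", "rb") as f:
    s3.put_object(
        Bucket="example-medical-backups",       # hypothetical bucket
        Key="backups/patient-records.zip",
        Body=f,
        ServerSideEncryption="AES256",          # SSE-S3, AES-256 at rest
    )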

40
Q

A Solutions Architect is considering the best approach to enabling Internet access for EC2 instances in a private subnet. What advantages do NAT Gateways have over NAT Instances? (Select TWO.)

A. Can be assigned to security groups
B. Can be used as a bastion host
C. Managed for you by AWS
D. Highly available within each AZ
E. Can be scaled up manually

A

C. Managed for you by AWS
D. Highly available within each AZ

Explanation:
NAT gateways are managed for you by AWS and are highly available in each AZ into which they are deployed. They are not associated with any security groups and can scale automatically up to 45 Gbps. NAT instances are managed by you; they must be scaled manually and do not provide HA. NAT instances can be used as bastion hosts and can be assigned to security groups. CORRECT: “Managed for you by AWS” is a correct answer. CORRECT: “Highly available within each AZ” is also a correct answer. INCORRECT: “Can be assigned to security groups” is incorrect as you cannot assign security groups to NAT gateways but you can to NAT instances. INCORRECT: “Can be used as a bastion host” is incorrect; only a NAT instance can be used as a bastion host. INCORRECT: “Can be scaled up manually” is incorrect; this applies to NAT instances, and automatic scaling is preferable anyway.

41
Q

A Solutions Architect must design a solution for providing single sign-on to existing staff in a company. The staff manage on-premise web applications and also need access to the AWS management console to manage resources in the AWS cloud. Which combination of services are BEST suited to delivering these requirements?

A. Use IAM and Amazon Cognito
B. Use your on-premise LDAP directory with IAM
C. Use the AWS Security Token Service (STS) and SAML
D. Use IAM and MFA

A

C. Use the AWS Security Token Service (STS) and SAML

Explanation:
Single sign-on using federation allows users to log in to the AWS console without assigning IAM credentials. The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for IAM users or for users that you authenticate (such as federated users from an on-premise directory). Federation (typically Active Directory) uses SAML 2.0 for authentication and grants temporary access based on the user’s AD credentials. The user does not need to be a user in IAM. CORRECT: “Use the AWS Security Token Service (STS) and SAML” is the correct answer. INCORRECT: “Use IAM and Amazon Cognito” is incorrect. Amazon Cognito is used for authenticating users to web and mobile apps, not for providing single sign-on between on-premises directories and the AWS Management Console. INCORRECT: “Use your on-premise LDAP directory with IAM” is incorrect. You cannot use your on-premise LDAP directory with IAM; you must use federation. INCORRECT: “Use IAM and MFA” is incorrect. Enabling multi-factor authentication (MFA) for IAM is not a federation solution.
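For illustration, a minimal boto3 sketch of the SAML federation call (both ARNs are hypothetical; the assertion comes from the IdP):

import boto3

sts = boto3.client("sts")

saml_assertion = "<base64-encoded SAMLResponse from the IdP>"   # hypothetical placeholder

resp = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::123456789012:role/ADFS-ConsoleAccess",    # hypothetical role
    PrincipalArn="arn:aws:iam::123456789012:saml-provider/ADFS",    # hypothetical provider
    SAMLAssertion=saml_assertion,
)
creds = resp["Credentials"]   # temporary AccessKeyId, SecretAccessKey, SessionToken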

42
Q

A Solutions Architect is designing a three-tier web application that includes an Auto Scaling group of Amazon EC2 Instances running behind an Elastic Load Balancer. The security team requires that all web servers must be accessible only through the Elastic Load Balancer and that none of the web servers are directly accessible from the Internet. How should the Architect meet these requirements?

A. Create an Amazon CloudFront distribution in front of the Elastic Load Balancer
B. Configure the web servers’ security group to deny traffic from the Internet
C. Configure the web tier security group to allow only traffic from the Elastic Load Balancer
D. Install a Load Balancer on an Amazon EC2 instance

A

C. Configure the web tier security group to allow only traffic from the Elastic Load Balancer

Explanation:
The web servers must be kept private so they will not have public IP addresses. The ELB is Internet-facing so it will be publicly accessible via its DNS address (and corresponding public IPs). To restrict the web servers to be accessible only through the ELB you can configure the web tier security group to allow only traffic from the ELB. You would normally do this by adding the ELB’s security group to the rule on the web tier security group. CORRECT: “Configure the web tier security group to allow only traffic from the Elastic Load Balancer” is the correct answer. INCORRECT: “Create an Amazon CloudFront distribution in front of the Elastic Load Balancer” is incorrect. CloudFront distributions are used for caching content to improve performance for users on the Internet. They are not security devices to be used for restricting access to EC2 instances. INCORRECT: “Configure the web servers’ security group to deny traffic from the Internet” is incorrect. You cannot create deny rules in security groups. INCORRECT: “Install a Load Balancer on an Amazon EC2 instance” is incorrect. This scenario is using an Elastic Load Balancer and these cannot be installed on EC2 instances by you (in reality all ELBs run on EC2 instances, but these are transparent to the user).
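For illustration, a minimal boto3 sketch of the security group rule (both group IDs are hypothetical):

import boto3

ec2 = boto3.client("ec2")

# Allow HTTP into the web tier only from the load balancer's security group
ec2.authorize_security_group_ingress(
    GroupId="sg-0aaa1111bbb22222c",                  # web tier security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "UserIdGroupPairs": [{"GroupId": "sg-0ddd3333eee44444f"}],   # source = the ELB's SG
    }],
)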

43
Q

A Solutions Architect is creating a URL that lets users who sign in to the organization’s network securely access the AWS Management Console. The URL will include a sign-in token that authenticates the user to AWS. Microsoft Active Directory Federation Services is being used as the identity provider (IdP). Which of the steps below will the Solutions Architect need to include when developing the custom identity broker? (Select TWO.)

A. Call the AWS federation endpoint and supply the temporary security credentials to request a sign-in token
B. Call the AWS Security Token Service (AWS STS) AssumeRole or GetFederationToken API operations to obtain temporary security credentials for the user
C. Assume an IAM Role through the console or programmatically with the AWS CLI, Tools for Windows PowerShell or API
D. Generate a pre-signed URL programmatically using the AWS SDK for Java or the AWS SDK for .NET
E. Delegate access to the IdP through the “Configure Provider” wizard in the IAM console

A

A. Call the AWS federation endpoint and supply the temporary security credentials to request a sign-in token
B. Call the AWS Security Token Service (AWS STS) AssumeRole or GetFederationToken API operations to obtain temporary security credentials for the user

Explanation:
The aim of this solution is to create a single sign-on solution that enables users signed in to the organization’s Active Directory service to connect to AWS resources. When developing a custom identity broker you use the AWS STS service. The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for IAM users or for users that you authenticate (federated users). The steps performed by the custom identity broker to sign users into the AWS Management Console are:
- Verify that the user is authenticated by your local identity system
- Call the AWS Security Token Service (AWS STS) AssumeRole or GetFederationToken API operations to obtain temporary security credentials for the user
- Call the AWS federation endpoint and supply the temporary security credentials to request a sign-in token
- Construct a URL for the console that includes the token
- Give the URL to the user or invoke the URL on the user’s behalf
CORRECT: “Call the AWS federation endpoint and supply the temporary security credentials to request a sign-in token” is a correct answer. CORRECT: “Call the AWS Security Token Service (AWS STS) AssumeRole or GetFederationToken API operations to obtain temporary security credentials for the user” is also a correct answer. INCORRECT: “Assume an IAM Role through the console or programmatically with the AWS CLI, Tools for Windows PowerShell or API” is incorrect as this is an example of federation, so assuming a role directly is the wrong procedure. INCORRECT: “Generate a pre-signed URL programmatically using the AWS SDK for Java or the AWS SDK for .NET” is incorrect. Pre-signed URLs are an Amazon S3 feature and cannot be used to sign users in to the AWS Management Console. INCORRECT: “Delegate access to the IdP through the “Configure Provider” wizard in the IAM console” is incorrect as this is not something you can do.
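For illustration, a minimal Python sketch of steps 2-4 of the broker. It assumes the user has already been authenticated locally; the user name, policy ARN, issuer URL and use of the third-party requests library are all assumptions:

import boto3, json, urllib.parse, requests

sts = boto3.client("sts")

# Step 2: obtain temporary credentials for the authenticated user
creds = sts.get_federation_token(
    Name="jsmith",                                                  # hypothetical user
    PolicyArns=[{"arn": "arn:aws:iam::aws:policy/ReadOnlyAccess"}], # permissions to grant
    DurationSeconds=3600,
)["Credentials"]

# Step 3: exchange the credentials for a sign-in token at the federation endpoint
session = json.dumps({
    "sessionId": creds["AccessKeyId"],
    "sessionKey": creds["SecretAccessKey"],
    "sessionToken": creds["SessionToken"],
})
signin_token = requests.get(
    "https://signin.aws.amazon.com/federation",
    params={"Action": "getSigninToken", "Session": session},
).json()["SigninToken"]

# Step 4: construct the console URL that includes the token
login_url = (
    "https://signin.aws.amazon.com/federation?Action=login"
    "&Issuer=" + urllib.parse.quote_plus("https://intranet.example.com")   # hypothetical issuer
    + "&Destination=" + urllib.parse.quote_plus("https://console.aws.amazon.com/")
    + "&SigninToken=" + signin_token
)
# Step 5: give login_url to the user or open it on their behalf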

44
Q

Some Amazon ECS containers are running on a cluster using the EC2 launch type. The current configuration uses the container instance’s IAM roles for assigning permissions to the containerized applications. A Solutions Architect needs to implement more granular permissions so that some applications can be assigned more restrictive permissions. How can this be achieved?

A. This cannot be changed as IAM roles can only be linked to container instances
B. This can be achieved using IAM roles for tasks, and splitting the containers according to the permissions required to different task definition profiles
C. This can be achieved by configuring a resource-based policy for each application
D. This can only be achieved using the Fargate launch type

A

B. This can be achieved using IAM roles for tasks, and splitting the containers according to the permissions required to different task definition profiles

Explanation:
With IAM roles for Amazon ECS tasks, you can specify an IAM role that can be used by the containers in a task. Using this feature you can achieve the required outcome by using IAM roles for tasks and splitting the containers across different task definitions according to the permissions they require. CORRECT: “This can be achieved using IAM roles for tasks, and splitting the containers according to the permissions required to different task definition profiles” is the correct answer. INCORRECT: “This cannot be changed as IAM roles can only be linked to container instances” is incorrect as you can also link them to tasks. INCORRECT: “This can be achieved by configuring a resource-based policy for each application” is incorrect. Amazon ECS does not support IAM resource-based policies. INCORRECT: “This can only be achieved using the Fargate launch type” is incorrect. The solution can be achieved whether using the EC2 or Fargate launch types.
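For illustration, a minimal boto3 sketch of a task definition with its own restrictive task role (the family, image and role ARN are hypothetical):

import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="reporting-app",
    taskRoleArn="arn:aws:iam::123456789012:role/ReportingAppTaskRole",  # hypothetical scoped role
    containerDefinitions=[{
        "name": "reporting",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/reporting:latest",  # hypothetical
        "memory": 512,
        "essential": True,
    }],
)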

45
Q

An application uses a combination of Reserved and On-Demand instances to handle typical load. The application involves performing analytics on a set of data. A Solutions Architect needs to temporarily deploy a large number of EC2 instances. The instances must be available for a short period of time until the analytics job is completed. If job completion is not time-critical, what is likely to be the MOST cost-effective choice of EC2 instance type to use for this requirement?

A. Use Spot instances
B. Use dedicated hosts
C. Use On-Demand instances
D. Use Reserved instances

A

A. Use Spot instances

Explanation:
The key requirements here are that you need to temporarily deploy a large number of instances, can tolerate a delay (the job is not time-critical), and need the most economical solution. In this case Spot instances are likely to be the most economical solution. You must be able to tolerate delays if using Spot instances because if the Spot price rises above your maximum price your instances will be terminated, and you may have to wait for the price to fall back within your budget. CORRECT: “Use Spot instances” is the correct answer. INCORRECT: “Use dedicated hosts” is incorrect. An EC2 Dedicated Host is a physical server with EC2 instance capacity fully dedicated to your use. They are much more expensive than On-Demand or Spot instances and are used for use cases such as bringing your own socket-based software licences to AWS or for compliance reasons. INCORRECT: “Use On-Demand instances” is incorrect. On-Demand is good for temporary deployments when you cannot tolerate any delays (instances being terminated by AWS). It is likely to be more expensive than Spot, however, so if delays can be tolerated it is not the best solution. INCORRECT: “Use Reserved instances” is incorrect. Reserved instances are used for longer, more stable requirements where you can get a discount for a fixed 1 or 3 year term. This pricing model is not good for temporary requirements.

46
Q

There is a problem with an EC2 instance that was launched by Amazon EC2 Auto Scaling. The EC2 status checks have reported that the instance is “Impaired”. What action will EC2 Auto Scaling take?

A. Auto Scaling will perform Availability Zone rebalancing
B. It will wait a few minutes for the instance to recover and if it does not it will mark the instance for termination, terminate it, and then launch a replacement
C. Auto Scaling performs its own status checks and does not integrate with EC2 status checks
D. It will launch a new instance immediately and then mark the impaired one for replacement

A

B. It will wait a few minutes for the instance to recover and if it does not it will mark the instance for termination, terminate it, and then launch a replacement

Explanation:
If any health check returns an unhealthy status the instance will be terminated. For the “Impaired” status, the ASG will wait a few minutes to see if the instance recovers before taking action. If the “Impaired” status persists, termination occurs. Unlike AZ rebalancing, termination of unhealthy instances happens first; then Auto Scaling attempts to launch new instances to replace the terminated ones. CORRECT: “It will wait a few minutes for the instance to recover and if it does not it will mark the instance for termination, terminate it, and then launch a replacement” is the correct answer. INCORRECT: “Auto Scaling will perform Availability Zone rebalancing” is incorrect. Auto Scaling will not perform Availability Zone rebalancing due to an impaired status check. INCORRECT: “Auto Scaling performs its own status checks and does not integrate with EC2 status checks” is incorrect. Auto Scaling does integrate with EC2 status checks as well as having its own status checks. INCORRECT: “It will launch a new instance immediately and then mark the impaired one for replacement” is incorrect. Auto Scaling will not launch a new instance immediately as it always terminates the unhealthy instance before launching a replacement.

47
Q

A pharmaceutical company uses a strict process for release automation that involves building and testing services in 3 separate VPCs. A peering topology is configured with VPC-A peered with VPC-B and VPC-B peered with VPC-C. The development team wants to modify the process so that they can release code directly from VPC-A to VPC-C. How can this be accomplished?

A. Update VPC-B’s route table with peering targets for VPC-A and VPC-C and enable route propagation
B. Create a new VPC peering connection between VPC-A and VPC-C
C. Update the CIDR blocks to match to enable inter-VPC routing
D. Update VPC-A’s route table with an entry using the VPC peering as a target

A

B. Create a new VPC peering connection between VPC-A and VPC-C

Explanation:
A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account. The VPCs can be in different regions (also known as an inter-region VPC peering connection). It is not possible to use transitive peering relationships with VPC peering and therefore you must create an additional VPC peering connection between VPC-A and VPC-C. CORRECT: “Create a new VPC peering connection between VPC-A and VPC-C” is the correct answer. INCORRECT: “Update VPC-B’s route table with peering targets for VPC-A and VPC-C and enable route propagation” is incorrect. Route propagation cannot be used to extend VPC peering connections. INCORRECT: “Update the CIDR blocks to match to enable inter-VPC routing” is incorrect. You cannot have matching (overlapping) CIDR blocks with VPC peering. INCORRECT: “Update VPC-A’s route table with an entry using the VPC peering as a target” is incorrect. You must update route tables to configure routing; however, updating VPC-A’s route table alone will not lead to the desired result without first creating the additional peering connection.
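For illustration, a minimal boto3 sketch of the extra peering connection and routes (all IDs and CIDRs are hypothetical):

import boto3

ec2 = boto3.client("ec2")

# Peer VPC-A directly with VPC-C (peering is not transitive)
pcx_id = ec2.create_vpc_peering_connection(
    VpcId="vpc-0aaa1111", PeerVpcId="vpc-0ccc3333"
)["VpcPeeringConnection"]["VpcPeeringConnectionId"]

ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Each side needs a route to the other VPC's CIDR via the peering connection
ec2.create_route(RouteTableId="rtb-0aaa1111", DestinationCidrBlock="10.3.0.0/16",
                 VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId="rtb-0ccc3333", DestinationCidrBlock="10.1.0.0/16",
                 VpcPeeringConnectionId=pcx_id)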

48
Q

A Solutions Architect needs to work programmatically with IAM. Which feature of IAM allows direct access to the IAM web service using HTTPS to call service actions and what is the method of authentication that must be used? (Select TWO.)

A. OpenID Connect
B. Query API
C. API Gateway
D. Access key ID and secret access key
E. IAM role

A

B. Query API
D. Access key ID and secret access key

Explanation:
AWS recommends that you use the AWS SDKs to make programmatic API calls to IAM. However, you can also use the IAM Query API to make direct calls to the IAM web service over HTTPS. An access key ID and secret access key must be used for authentication when using the Query API. CORRECT: “Query API” is a correct answer. CORRECT: “Access key ID and secret access key” is also a correct answer. INCORRECT: “OpenID Connect” is incorrect. OpenID Connect is an identity federation protocol used with external identity providers, not a way of calling the IAM web service. INCORRECT: “API Gateway” is incorrect. API Gateway is a separate service for accepting and processing API calls. INCORRECT: “IAM role” is incorrect. An IAM role is not used for authentication to the Query API.

49
Q

The Systems Administrators in a company currently use Chef for configuration management of on-premise servers. Which AWS service can a Solutions Architect use that will provide a fully-managed configuration management service that will enable the use of existing Chef cookbooks?

A. Elastic Beanstalk
B. CloudFormation
C. OpsWorks for Chef Automate
D. OpsWorks Stacks

A

C. OpsWorks for Chef Automate

Explanation:
AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. AWS OpsWorks for Chef Automate is a fully-managed configuration management service that hosts Chef Automate, a suite of automation tools from Chef for configuration management, compliance and security, and continuous deployment. OpsWorks for Chef Automate is completely compatible with tooling and cookbooks from the Chef community and automatically registers new nodes with your Chef server. CORRECT: “OpsWorks for Chef Automate” is the correct answer. INCORRECT: “OpsWorks Stacks” is incorrect. AWS OpsWorks Stacks lets you manage applications and servers on AWS and on-premises and uses Chef Solo. The question does not require the managed solution on AWS to manage on-premises resources, just to use existing cookbooks, so this is not the preferred solution. INCORRECT: “Elastic Beanstalk” is incorrect. AWS Elastic Beanstalk is not able to build infrastructure using Chef cookbooks. INCORRECT: “CloudFormation” is incorrect. AWS CloudFormation is not able to build infrastructure using Chef cookbooks.

50
Q

An Amazon RDS Multi-AZ deployment is running in an Amazon VPC. An outage occurs in the availability zone of the primary RDS database instance. What actions will take place in this circumstance? (Select TWO.)

A. The failover mechanism automatically changes the DNS record of the DB instance to point to the standby DB instance
B. A failover will take place once the connection draining timer has expired
C. A manual failover of the DB instance will need to be initiated using Reboot with failover
D. The primary DB instance will switch over automatically to the standby replica
E. Due to the loss of network connectivity the process to switch to the standby replica cannot take place

A

A. The failover mechanism automatically changes the DNS record of the DB instance to point to the standby DB instance

D. The primary DB instance will switch over automatically to the standby replica

Explanation:
Multi-AZ RDS creates a replica in another AZ and synchronously replicates to it (DR only). A failover may be triggered in the following circumstances:
- Loss of the primary AZ or primary DB instance failure
- Loss of network connectivity on the primary
- Compute (EC2) unit failure on the primary
- Storage (EBS) unit failure on the primary
- The primary DB instance is changed
- Patching of the OS on the primary DB instance
- Manual failover (Reboot with failover selected on the primary)
During failover RDS automatically updates configuration (including the DNS endpoint) to use the second node. CORRECT: “The failover mechanism automatically changes the DNS record of the DB instance to point to the standby DB instance” is a correct answer. CORRECT: “The primary DB instance will switch over automatically to the standby replica” is also a correct answer. INCORRECT: “A failover will take place once the connection draining timer has expired” is incorrect. Connection draining timers are applicable to ELBs not RDS. INCORRECT: “A manual failover of the DB instance will need to be initiated using Reboot with failover” is incorrect. You do not need to manually failover the DB instance, multi-AZ has an automatic process as outlined above. INCORRECT: “Due to the loss of network connectivity the process to switch to the standby replica cannot take place” is incorrect. The process to failover is not reliant on network connectivity as it is designed for fault tolerance.
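For illustration, a minimal boto3 sketch of the manual failover case (“Reboot with failover”); the instance identifier is hypothetical:

import boto3

rds = boto3.client("rds")

# Forces a failover from the primary to the standby replica
rds.reboot_db_instance(DBInstanceIdentifier="prod-mysql", ForceFailover=True)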

51
Q

A Solutions Architect is designing a web-facing application. The application will run on Amazon EC2 instances behind Elastic Load Balancers in multiple regions in an active/passive configuration. The website address the application runs on is example.com. AWS Route 53 will be used to perform DNS resolution for the application. How should the Solutions Architect configure AWS Route 53 in this scenario based on AWS best practices? (Select TWO.)

A. Use a Failover Routing Policy
B. Set Evaluate Target Health to “No” for the primary
C. Use a Weighted Routing Policy
D. Connect the ELBs using Alias records
E. Connect the ELBs using CNAME records

A

A. Use a Failover Routing Policy

D. Connect the ELBs using Alias records

Explanation:
The failover routing policy is used for active/passive configurations. Alias records can be used to map the domain apex (example.com) to the Elastic Load Balancers. Failover routing lets you route traffic to a resource when the resource is healthy or to a different resource when the first resource is unhealthy. The primary and secondary records can route traffic to anything from an Amazon S3 bucket that is configured as a website to a complex tree of records. CORRECT: “Use a Failover Routing Policy” is a correct answer. CORRECT: “Connect the ELBs using Alias records” is also a correct answer. INCORRECT: “Set Evaluate Target Health to “No” for the primary” is incorrect. For Evaluate Target Health, choose Yes for your primary record and No for your secondary record. For your primary record choose Yes for Associate with Health Check, then for Health Check to Associate select the health check that you created for your primary resource. INCORRECT: “Use a Weighted Routing Policy” is incorrect. Weighted routing is not an active/passive routing policy. All records are active and the traffic is distributed according to the weighting. INCORRECT: “Connect the ELBs using CNAME records” is incorrect. You cannot use CNAME records for the domain apex record, you must use Alias records.
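For illustration, a minimal boto3 sketch of the primary failover Alias record (the zone ID, health check ID and ELB values are hypothetical; the secondary record is analogous with Failover="SECONDARY" and no health check):

import boto3

r53 = boto3.client("route53")

r53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",                  # hypothetical zone for example.com
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "example.com",                   # zone apex, so an Alias record is required
            "Type": "A",
            "SetIdentifier": "primary",
            "Failover": "PRIMARY",
            "HealthCheckId": "11111111-2222-3333-4444-555555555555",   # hypothetical health check
            "AliasTarget": {
                "HostedZoneId": "Z35SXDOTRQ7X7K",    # the ELB's own hosted zone ID (region specific)
                "DNSName": "primary-elb.us-east-1.elb.amazonaws.com",  # hypothetical ELB DNS name
                "EvaluateTargetHealth": True,        # "Yes" for the primary record
            },
        },
    }]},
)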

52
Q

A Solutions Architect is designing a new retail website for a high-profile company. The company has previously been the victim of targeted distributed denial-of-service (DDoS) attacks and has requested that the design includes mitigation techniques. Which of the following are the BEST techniques to help ensure the availability of the services is not compromised in an attack? (Select TWO.)

A. Configure Auto Scaling with a high maximum number of instances to ensure it can scale accordingly
B. Use CloudFront for distributing both static and dynamic content
C. Use Spot instances to reduce the cost impact in case of attack
D. Use encryption on your EBS volumes
E. Use Placement Groups to ensure high bandwidth and low latency

A

A. Configure Auto Scaling with a high maximum number of instances to ensure it can scale accordingly
B. Use CloudFront for distributing both static and dynamic content

Explanation:
CloudFront distributes traffic across multiple edge locations and filters requests to ensure that only valid HTTP(S) requests will be forwarded to backend hosts. CloudFront also supports geoblocking, which you can use to prevent requests from particular geographic locations from being served. Auto Scaling helps to maintain a desired count of EC2 instances running at all times, and setting a high maximum number of instances allows your fleet to grow and absorb some of the impact of the attack. CORRECT: “Configure Auto Scaling with a high maximum number of instances to ensure it can scale accordingly” is a correct answer. CORRECT: “Use CloudFront for distributing both static and dynamic content” is also a correct answer. INCORRECT: “Use Spot instances to reduce the cost impact in case of attack” is incorrect. Spot instances may reduce the cost (depending on the current Spot price), however the question asks us to focus on availability, not cost. INCORRECT: “Use encryption on your EBS volumes” is incorrect. Encrypting EBS volumes does not help in a DDoS attack as the attack is targeted at reducing availability rather than compromising data. INCORRECT: “Use Placement Groups to ensure high bandwidth and low latency” is incorrect as this will not assist with mitigation of DDoS attacks.

53
Q

An application running on Amazon EC2 requires an EBS volume for saving structured data. The application vendor suggests that the performance of the disk should be up to 3 IOPS per GB. The capacity is expected to grow to 2 TB. Taking into account cost effectiveness, which EBS volume type should be used?

A. Throughput Optimized HDD (ST1)
B. General Purpose (GP2)
C. Provisioned IOPS (Io1)
D. Cold HDD (SC1)

A

B. General Purpose (GP2)

Explanation:
SSD, General Purpose (GP2) provides enough IOPS to support this requirement and is the most economical option that does: a 2 TB volume delivers roughly 6,000 IOPS (3 IOPS per GB), well within the 16,000 IOPS ceiling. Using Provisioned IOPS would be more expensive and the other two options do not provide an SLA for IOPS. More information on the volume types:
– SSD, General Purpose (GP2) provides 3 IOPS per GB up to 16,000 IOPS. Volume size is 1 GB to 16 TB.
– Provisioned IOPS (Io1) provides the IOPS you assign, up to 50 IOPS per GiB and up to 64,000 IOPS per volume. Volume size is 4 GB to 16 TB.
– Throughput Optimized HDD (ST1) provides up to 500 IOPS per volume but does not provide an SLA for IOPS.
– Cold HDD (SC1) provides up to 250 IOPS per volume but does not provide an SLA for IOPS.
CORRECT: “General Purpose (GP2)” is the correct answer. INCORRECT: “Throughput Optimized HDD (ST1)” is incorrect as this will not provide an SLA for IOPS. INCORRECT: “Provisioned IOPS (Io1)” is incorrect as this will be less cost-effective. INCORRECT: “Cold HDD (SC1)” is incorrect as this will not provide an SLA for IOPS.

54
Q

An application in an Amazon VPC uses an Auto Scaling Group that spans 3 AZs and there are currently 4 Amazon EC2 instances running in the group. What actions will Auto Scaling take, by default, if it needs to terminate an EC2 instance?

A. Randomly select one of the 3 AZs, and then terminate an instance in that AZ
B. Terminate the instance with the least active network connections. If multiple instances meet this criterion, one will be randomly selected
C. Send an SNS notification, if configured to do so
D. Wait for the cooldown period and then terminate the instance that has been running the longest
E. Terminate an instance in the AZ which currently has 2 running EC2 instances

A

C. Send an SNS notification, if configured to do so
E. Terminate an instance in the AZ which currently has 2 running EC2 instances

Explanation:
Auto Scaling can perform rebalancing when it finds that the number of instances across AZs is not balanced. Auto Scaling rebalances by launching new EC2 instances in the AZs that have fewer instances first; only then will it start terminating instances in AZs that had more instances. Auto Scaling can be configured to send an SNS email when:
– An instance is launched.
– An instance is terminated.
– An instance fails to launch.
– An instance fails to terminate.
CORRECT: “Send an SNS notification, if configured to do so” is a correct answer. CORRECT: “Terminate an instance in the AZ which currently has 2 running EC2 instances” is also a correct answer. INCORRECT: “Terminate the instance with the least active network connections. If multiple instances meet this criterion, one will be randomly selected” is incorrect. Auto Scaling only terminates an instance randomly after it has first gone through several other selection steps. INCORRECT: “Wait for the cooldown period and then terminate the instance that has been running the longest” is incorrect. Auto Scaling does not terminate the instance that has been running the longest. INCORRECT: “Randomly select one of the 3 AZs, and then terminate an instance in that AZ” is incorrect as described above.

55
Q

Several environments are being created in a single Amazon VPC. The Solutions Architect needs to implement a system of categorization that allows for identification of Amazon EC2 resources by business unit, owner, or environment. Which AWS feature can be used?

A. Parameters
B. Metadata
C. Custom filters
D. Tags

A

D. Tags

Explanation:
A tag is a label that you assign to an AWS resource. Each tag consists of a key and an optional value, both of which you define. Tags enable you to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. CORRECT: “Tags” is the correct answer. INCORRECT: “Parameters” is incorrect. Parameters are not used for categorization. INCORRECT: “Metadata” is incorrect. Instance metadata is data about your instance that you can use to configure or manage the running instance. INCORRECT: “Custom filters” is incorrect. Custom filters are not used for categorization.
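For illustration, a minimal boto3 sketch of tagging an instance for categorization (the instance ID and tag values are hypothetical):

import boto3

ec2 = boto3.client("ec2")

ec2.create_tags(
    Resources=["i-0123456789abcdef0"],               # hypothetical instance
    Tags=[
        {"Key": "BusinessUnit", "Value": "Finance"},
        {"Key": "Owner", "Value": "jsmith"},
        {"Key": "Environment", "Value": "Staging"},
    ],
)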

56
Q

An organization has a data lake on Amazon S3 and needs to find a solution for performing in-place queries of the data assets in the data lake. The requirement is to perform both data discovery and SQL querying, and complex queries from a large number of concurrent users using BI tools. What is the BEST combination of AWS services to use in this situation? (Select TWO.)

A. RedShift Spectrum for the complex queries
B. Amazon Athena for the ad hoc SQL querying
C. AWS Glue for the ad hoc SQL querying
D. AWS Lambda for the complex queries
E. Amazon Kinesis for the complex queries

A

A. RedShift Spectrum for the complex queries
B. Amazon Athena for the ad hoc SQL querying

Explanation:
Performing in-place queries on a data lake allows you to run sophisticated analytics queries directly on the data in S3 without having to load it into a data warehouse. You can use both Athena and Redshift Spectrum against the same data assets. You would typically use Athena for ad hoc data discovery and SQL querying, and then use Redshift Spectrum for more complex queries and scenarios where a large number of data lake users want to run concurrent BI and reporting workloads. CORRECT: “RedShift Spectrum for the complex queries” is a correct answer. CORRECT: “Amazon Athena for the ad hoc SQL querying” is also a correct answer. INCORRECT: “AWS Glue for the ad hoc SQL querying” is incorrect. AWS Glue is an extract, transform and load (ETL) service. INCORRECT: “AWS Lambda for the complex queries” is incorrect. AWS Lambda is a serverless technology for running functions, it is not the best solution for running analytics queries. INCORRECT: “Amazon Kinesis for the complex queries” is incorrect. Amazon Kinesis is used for ingesting and processing real-time streaming data, not performing queries.

57
Q

When using throttling controls with API Gateway what happens when request submissions exceed the steady-state request rate and burst limits?

A. API Gateway fails the limit-exceeding requests and returns “429 Too Many Requests” error responses to the client
B. The requests will be buffered in a cache until the load reduces
C. API Gateway drops the requests and does not return a response to the client
D. API Gateway fails the limit-exceeding requests and returns “500 Internal Server Error” error responses to the client

A

A. API Gateway fails the limit-exceeding requests and returns “429 Too Many Requests” error responses to the client

Explanation:
You can throttle and monitor requests to protect your backend. Throttling rules can be based on the number of requests per second for each HTTP method (GET, PUT) and can be configured at multiple levels, including Global and Service Call. When request submissions exceed the steady-state request rate and burst limits, API Gateway fails the limit-exceeding requests and returns 429 Too Many Requests error responses to the client. CORRECT: “API Gateway fails the limit-exceeding requests and returns “429 Too Many Requests” error responses to the client” is the correct answer. INCORRECT: “The requests will be buffered in a cache until the load reduces” is incorrect as the requests are actually failed. INCORRECT: “API Gateway drops the requests and does not return a response to the client” is incorrect as it does return a response as detailed above. INCORRECT: “API Gateway fails the limit-exceeding requests and returns “500 Internal Server Error” error responses to the client” is incorrect as a 429 error is returned.
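For illustration, a minimal client-side sketch that backs off and retries on 429 responses (the endpoint URL is hypothetical; it uses the third-party requests library):

import time
import requests

def call_with_backoff(url, max_retries=5):
    for attempt in range(max_retries):
        resp = requests.get(url)
        if resp.status_code != 429:          # anything other than "Too Many Requests"
            return resp
        time.sleep(2 ** attempt)             # exponential backoff before retrying
    raise RuntimeError("Request kept being throttled")

response = call_with_backoff("https://abc123.execute-api.us-east-1.amazonaws.com/prod/items")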

58
Q

A Solutions Architect created a new VPC and set up an Auto Scaling Group to maintain a desired count of 2 Amazon EC2 instances. The security team has requested that the EC2 instances be located in a private subnet. To distribute load, an Internet-facing Application Load Balancer (ALB) is also required. With the security team’s requirements in mind, what else needs to be done to get this configuration to work? (Select TWO.)

A. Attach an Internet Gateway to the private subnets
B. Associate the public subnets with the ALB
C. Add an Elastic IP address to each EC2 instance in the private subnet
D. Add a NAT gateway to the private subnet
E. For each private subnet create a corresponding public subnet in the same AZ

A

B. Associate the public subnets with the ALB
E. For each private subnet create a corresponding public subnet in the same AZ

Explanation:
ELB nodes have public IPs and route traffic to the private IP addresses of the EC2 instances. You need one public subnet in each AZ where the ELB is defined and the private subnets are located. CORRECT: “Associate the public subnets with the ALB” is a correct answer. CORRECT: “For each private subnet create a corresponding public subnet in the same AZ” is also a correct answer. INCORRECT: “Attach an Internet Gateway to the private subnets” is incorrect. Attaching an Internet gateway (which is done at the VPC level, not the subnet level) or a NAT gateway will not assist as these are both used for outbound communications, which is not the goal here. INCORRECT: “Add an Elastic IP address to each EC2 instance in the private subnet” is incorrect. ELBs talk to the private IP addresses of the EC2 instances so adding an Elastic IP address to the instance won’t help. Additionally, Elastic IP addresses are used in public subnets to allow Internet access via an Internet Gateway. INCORRECT: “Add a NAT gateway to the private subnet” is incorrect as this would only enable outbound internet access.

59
Q

An application running on AWS uses an Elastic Load Balancer (ELB) to distribute connections between EC2 instances. A Solutions Architect needs to record information on the requester, IP, and request type for connections made to the ELB. Additionally, the Architect will also need to perform some analysis on the log files. Which AWS services and configuration options can be used to collect and then analyze the logs? (Select TWO.)

A. Use EMR for analyzing the log files
B. Update the application to use DynamoDB for storing log files
C. Use Elastic Transcoder to analyze the log files
D. Enable Access Logs on the ELB and store the log files on S3
E. Enable Access Logs on the EC2 instances and store the log files on S3

A

A. Use EMR for analyzing the log files
D. Enable Access Logs on the ELB and store the log files on S3

Explanation:
The best way to deliver these requirements is to enable access logs on the ELB and then use EMR for analyzing the log files. Access logs on the ELB are disabled by default; they include information about the clients (not included in CloudWatch metrics) such as the identity of the requester, IP, and request type, and can be optionally stored and retained in S3. Amazon EMR is a web service that enables businesses, researchers, data analysts, and developers to easily and cost-effectively process vast amounts of data. EMR utilizes a hosted Hadoop framework running on Amazon EC2 and Amazon S3. CORRECT: “Use EMR for analyzing the log files” is a correct answer. CORRECT: “Enable Access Logs on the ELB and store the log files on S3” is also a correct answer. INCORRECT: “Update the application to use DynamoDB for storing log files” is incorrect. The information recorded by ELB access logs is exactly what you require so there is no need to get the application to record the information into DynamoDB. INCORRECT: “Use Elastic Transcoder to analyze the log files” is incorrect. Elastic Transcoder is used for converting media file formats, not analyzing files. INCORRECT: “Enable Access Logs on the EC2 instances and store the log files on S3” is incorrect as the access logs on the ELB should be enabled.
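For illustration, a minimal boto3 sketch of enabling ALB access logs to S3 (the load balancer ARN and bucket are hypothetical):

import boto3

elbv2 = boto3.client("elbv2")

elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/web/0123456789abcdef",  # hypothetical
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "example-elb-logs"},   # hypothetical bucket
    ],
)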

60
Q

A Solutions Architect would like to store a backup of an Amazon EBS volume on Amazon S3. What is the easiest way of achieving this?

A. Use SWF to automatically create a backup of your EBS volumes and then upload them to an S3 bucket
B. You don’t need to do anything, EBS volumes are automatically backed up by default
C. Write a custom script to automatically copy your data to an S3 bucket
D. Create a snapshot of the volume

A

D. Create a snapshot of the volume

Explanation:
Snapshots capture a point-in-time state of an EBS volume. Snapshots of Amazon EBS volumes are stored on S3 by design, so you only need to take a snapshot and it will automatically be stored on Amazon S3. CORRECT: “Create a snapshot of the volume” is the correct answer. INCORRECT: “Use SWF to automatically create a backup of your EBS volumes and then upload them to an S3 bucket” is incorrect. This is not a good use case for Amazon SWF. INCORRECT: “You don’t need to do anything, EBS volumes are automatically backed up by default” is incorrect. Amazon EBS volumes are not automatically backed up using snapshots. You need to manually take a snapshot or you can use Amazon Data Lifecycle Manager (Amazon DLM) to automate the creation, retention, and deletion of snapshots. INCORRECT: “Write a custom script to automatically copy your data to an S3 bucket” is incorrect as this is not the simplest solution available.
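For illustration, a minimal boto3 sketch (the volume ID is hypothetical); the resulting snapshot is stored in Amazon S3 automatically:

import boto3

ec2 = boto3.client("ec2")

ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",                # hypothetical volume
    Description="Ad hoc backup before maintenance",
)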

61
Q

An application will gather data from a website hosted on an EC2 instance and write the data to an S3 bucket. The application will use API calls to interact with the EC2 instance and S3 bucket. Which Amazon S3 access control methods will be the MOST operationally efficient? (Select TWO.)

A. Create a bucket policy
B. Grant programmatic access
C. Use key pairs
D. Grant AWS Management Console access
E. Create an IAM policy

A

B. Grant programmatic access
E. Create an IAM policy

Explanation:
Policies are documents that define permissions and can be applied to users, groups and roles. Policy documents are written in JSON (key-value pairs that consist of an attribute and a value). Within an IAM policy you can grant either programmatic access or AWS Management Console access to Amazon S3 resources. CORRECT: “Grant programmatic access” is a correct answer. CORRECT: “Create an IAM policy” is also a correct answer. INCORRECT: “Create a bucket policy” is incorrect as it is more efficient to use an IAM policy. INCORRECT: “Use key pairs” is incorrect. Key pairs are used for access to EC2 instances, not for controlling access to Amazon S3. INCORRECT: “Grant AWS Management Console access” is incorrect. Granting Management Console access will not assist the application, which is making API calls to the services; programmatic access is required.
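For illustration, a minimal boto3 sketch of an IAM policy that scopes the application’s role to the one bucket it needs (the role, policy and bucket names are hypothetical):

import boto3, json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject", "s3:GetObject"],
        "Resource": "arn:aws:s3:::example-usage-data/*",   # hypothetical bucket
    }],
}

boto3.client("iam").put_role_policy(
    RoleName="UsageUploaderRole",                # hypothetical role used programmatically
    PolicyName="UsageDataBucketAccess",
    PolicyDocument=json.dumps(policy),
)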

62
Q

An Amazon CloudWatch alarm recently notified a Solutions Architect that the load on an Amazon DynamoDB table is getting close to the provisioned capacity for writes. The DynamoDB table is part of a two-tier customer-facing application and is configured using provisioned capacity. What will happen if the limit for the provisioned capacity for writes is reached?

A. The requests will be throttled, and fail with an HTTP 503 code (Service Unavailable)
B. DynamoDB scales automatically so there’s no need to worry
C. The requests will be throttled, and fail with an HTTP 400 code (Bad Request) and a ProvisionedThroughputExceededException
D. The requests will succeed, and an HTTP 200 status code will be returned

A

C. The requests will be throttled, and fail with an HTTP 400 code (Bad Request) and a ProvisionedThroughputExceededException

Explanation:
Amazon DynamoDB can throttle requests that exceed the provisioned throughput for a table. When a request is throttled it fails with an HTTP 400 code (Bad Request) and a ProvisionedThroughputExceeded exception (not a 503 or 200 status code). When using the provisioned capacity pricing model DynamoDB does not automatically scale. DynamoDB can automatically scale when using the new on-demand capacity mode, however this is not configured for this database. CORRECT: “The requests will be throttled, and fail with an HTTP 400 code (Bad Request) and a ProvisionedThroughputExceededException” is the correct answer. INCORRECT: “The requests will be throttled, and fail with an HTTP 503 code (Service Unavailable)” is incorrect as this is not the code that is used (see above). INCORRECT: “DynamoDB scales automatically so there’s no need to worry” is incorrect as provisioned capacity mode does not automatically scale. INCORRECT: “The requests will succeed, and an HTTP 200 status code will be returned” is incorrect as the request will fail as described above.
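For illustration, a minimal boto3 sketch of handling the throttling exception (the table and item are hypothetical):

import boto3

dynamodb = boto3.client("dynamodb")

try:
    dynamodb.put_item(
        TableName="CustomerUsage",                            # hypothetical table
        Item={"UserId": {"S": "12345"}, "Status": {"S": "ACTIVE"}},
    )
except dynamodb.exceptions.ProvisionedThroughputExceededException:
    # HTTP 400; boto3 retries with backoff first, then surfaces the exception
    print("Write throttled - consider raising provisioned WCUs or switching to on-demand mode")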

63
Q

A Solutions Architect is creating the business process workflows associated with an order fulfilment system. What AWS service can assist with coordinating tasks across distributed application components?

A. AWS STS
B. Amazon SQS
C. Amazon SWF
D. Amazon SNS

A

C. Amazon SWF

Explanation:
Amazon Simple Workflow Service (SWF) is a web service that makes it easy to coordinate work across distributed application components. SWF enables applications for a range of use cases, including media processing, web application back-ends, business process workflows, and analytics pipelines, to be designed as a coordination of tasks. CORRECT: “Amazon SWF” is the correct answer. INCORRECT: “AWS STS” is incorrect. AWS Security Token Service (STS) is used for requesting temporary credentials. INCORRECT: “Amazon SQS” is incorrect. Amazon Simple Queue Service (SQS) is a message queue used for decoupling application components. INCORRECT: “Amazon SNS” is incorrect. Amazon Simple Notification Service (SNS) is a web service that makes it easy to set up, operate, and send notifications from the cloud. SNS supports notifications over multiple transports including HTTP/HTTPS, Email/Email-JSON, SQS and SMS.

64
Q

An EC2 instance in an Auto Scaling group is experiencing issues that are causing the group to launch new instances based on the dynamic scaling policy. A Solutions Architect needs to troubleshoot the EC2 instance and temporarily prevent the Auto Scaling group from launching new instances. Which actions should the Solutions Architect take to accomplish this? (Select TWO.)

A. Remove the EC2 instance from the Target Group
B. Disable the launch configuration associated with the EC2 instance
C. Place the EC2 instance that is experiencing issues into the Standby state
D. Suspend the scaling processes responsible for launching new instances
E. Disable the dynamic scaling policy

A

C. Place the EC2 instance that is experiencing issues into the Standby state
D. Suspend the scaling processes responsible for launching new instances

Explanation:
You can suspend and then resume one or more of the scaling processes for your Auto Scaling group. This can be useful when you want to investigate a configuration problem or other issue with your web application and then make changes to your application, without invoking the scaling processes. You can also manually move an instance out of an ASG by putting it into the Standby state. Instances in the Standby state are still managed by Auto Scaling and are charged as normal, but they do not count as available EC2 instances for workload/application use. Auto Scaling does not perform health checks on instances in the Standby state, so it can be used for performing updates, changes, or troubleshooting without health checks being performed or replacement instances being launched. CORRECT: “Place the EC2 instance that is experiencing issues into the Standby state” is a correct answer. CORRECT: “Suspend the scaling processes responsible for launching new instances” is also a correct answer. INCORRECT: “Remove the EC2 instance from the Target Group” is incorrect. Target groups are a feature of ELB (specifically ALB/NLB). Removing the instance from the target group will stop the ELB from sending connections to it, but it will not stop Auto Scaling from launching new instances while you are troubleshooting. INCORRECT: “Disable the launch configuration associated with the EC2 instance” is incorrect. You cannot disable a launch configuration, and you can’t modify one after you’ve created it. INCORRECT: “Disable the dynamic scaling policy” is incorrect. You do not need to disable the dynamic scaling policy; you can simply suspend the relevant scaling processes as previously described.
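
A minimal boto3 sketch of both steps, assuming placeholder group and instance identifiers:

import boto3

autoscaling = boto3.client("autoscaling")

ASG_NAME = "example-asg"                 # placeholder Auto Scaling group name
INSTANCE_ID = "i-0123456789abcdef0"      # placeholder instance ID

# Stop the group from launching replacement instances while troubleshooting.
autoscaling.suspend_processes(
    AutoScalingGroupName=ASG_NAME,
    ScalingProcesses=["Launch"],
)

# Move the problem instance into Standby so health checks no longer apply.
autoscaling.enter_standby(
    AutoScalingGroupName=ASG_NAME,
    InstanceIds=[INSTANCE_ID],
    ShouldDecrementDesiredCapacity=True,
)

# After troubleshooting, reverse both steps:
# autoscaling.exit_standby(AutoScalingGroupName=ASG_NAME, InstanceIds=[INSTANCE_ID])
# autoscaling.resume_processes(AutoScalingGroupName=ASG_NAME, ScalingProcesses=["Launch"])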

65
Q

An Amazon VPC has been deployed with private and public subnets. A MySQL database server will soon be launched on an Amazon EC2 instance. According to AWS best practice, which subnet should the database server be launched into?

A. It doesn’t matter
B. The private subnet
C. The public subnet
D. The subnet that is mapped to the primary AZ in the region

A

B. The private subnet

Explanation:
AWS best practice is to deploy databases into private subnets wherever possible. You can then deploy your web front-ends into public subnets and configure these, or an additional application tier, to write data to the database. CORRECT: “The private subnet” is the correct answer. INCORRECT: “It doesn’t matter” is incorrect, as best practice does recommend using a private subnet. INCORRECT: “The public subnet” is incorrect. Public subnets are typically used for web front-ends as they are directly accessible from the Internet; it is preferable to launch your database in a private subnet. INCORRECT: “The subnet that is mapped to the primary AZ in the region” is incorrect. There is no such thing as a “primary” Availability Zone (AZ); all AZs are essentially equal, and each subnet maps to exactly one AZ.
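
As an illustrative boto3 sketch only (every ID below is a placeholder), launching the database server into the private subnet simply means supplying that subnet’s ID; a private subnet is one whose route table has no route to an internet gateway:

import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",            # placeholder AMI with MySQL installed
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",        # placeholder private subnet ID
    SecurityGroupIds=["sg-0123456789abcdef0"],  # allow port 3306 from the app tier only
)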