Udemy Flashcards

1
Q

What is a proper definition of an IAM role?

A) IAM Users in multiple User Groups

B) An IAM entity that defines a password policy for IAM users

C) An IAM entity that defines a set of permissions for making requests to AWS services, and will be used by an AWS service

D) Permissions assigned to IAM Users to perform Actions

A

C) An IAM entity that defines a set of permissions for making requests to AWS services, and will be used by an AWS service

Some AWS services need to perform actions on your behalf. To do so, you assign permissions to AWS services with IAM Roles.

2
Q

Which of the following is an IAM Security Tool?

A) IAM Credentials Report

B) IAM Root Account Manager

C) IAM Services Report

D) IAM Security Advisor

A

A) IAM Credentials Report

IAM Credentials report lists all your AWS Account’s IAM Users and the status of their various credentials.

3
Q

Which answer is INCORRECT regarding IAM Users?

A) IAM Users can belong to multiple User Groups

B) IAM Users don’t have to belong to a User Group

C) IAM Policies can be attached directly to IAM Users

D) IAM Users access AWS services using root account credentials

A

D) IAM Users access AWS services using root account credentials

IAM Users access AWS services using their own credentials (username & password or Access Keys).

4
Q

Which of the following is an IAM best practice?

A) Create several IAM Users for one physical person

B) Don’t use the root user account

C) Share your AWS account credentials with your colleague, so (s)he can perform a task for you

D) Do not enable MFA for easier access

A

B) Don’t use the root user account

Use the root account only to create your first IAM User and for a few account/service management tasks. For everyday tasks, use an IAM User.

5
Q

What are IAM Policies?

A) A set of policies that defines how AWS accounts interact with each other

B) JSON documents that define a set of permissions for making requests to AWS services, and can be used by IAM Users, User Groups, and IAM Roles

C) A set of policies that define a password for IAM Users

D) A set of policies defined by AWS that show how customers interact with AWS

A

B) JSON documents that define a set of permissions for making requests to AWS services, and can be used by IAM Users, User Groups, and IAM Roles
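
A rough illustration (all names and ARNs hypothetical): an IAM policy is a JSON document like the one below, which can be attached to a user, group, or role — here as an inline user policy via boto3.

```python
# Attach a minimal read-only S3 policy to an IAM user (names/ARNs are illustrative).
import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadOnlyS3",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": ["arn:aws:s3:::example-bucket", "arn:aws:s3:::example-bucket/*"],
        }
    ],
}

iam.put_user_policy(
    UserName="example-user",               # hypothetical IAM user
    PolicyName="ExampleReadOnlyS3",
    PolicyDocument=json.dumps(policy_document),
)
```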

6
Q

Which principle should you apply regarding IAM Permissions?

A) Grant most privilege

B) Grant more permissions if your employee asks you to

C) Grant least privilege

D) Restrict root account permissions

A

C) Grant least privilege

Don’t give more permissions than the user needs.

7
Q

What should you do to increase your root account security?

A) Remove permissions from the root account

B) Only access AWS services through AWS Command Line Interface (CLI)

C) Don’t create IAM Users, only access your AWS account using the root account

D) Enable Multi-Factor Authentication (MFA)

A

D) Enable Multi-Factor Authentication (MFA)

Enabling MFA adds another layer of security: even if your password is stolen, lost, or hacked, your account is not compromised.

8
Q

TRUE / FALSE

IAM User Groups can contain IAM Users and other User Groups.

A

FALSE

IAM User Groups can contain only IAM Users.

9
Q

An IAM policy consists of one or more statements. A statement in an IAM Policy consists of the following, EXCEPT:

A) Effect

B) Principal

C) Version

D) Action

A

C) Version

A statement in an IAM Policy consists of Sid, Effect, Principal, Action, Resource, and Condition. Version is part of the IAM Policy itself, not the statement.

10
Q

Which EC2 Purchasing Option can provide you the biggest discount, but it is not suitable for critical jobs or databases?

A) Convertible Reserved Instances

B) Dedicated Hosts

C) Spot Instances

A

C) Spot Instances

Spot Instances are good for short workloads and this is the cheapest EC2 Purchasing Option. But, they are less reliable because you can lose your EC2 instance.

11
Q

What should you use to control traffic in and out of EC2 instances?

A) Network Access Control List (NACL)

B) Security Groups

C) IAM Policies

A

B) Security Groups

Security Groups operate at the EC2 instance level and can control traffic.

12
Q

How long can you reserve an EC2 Reserved Instance?

A) 1 or 3 years

B) 2 or 4 years

C) 6 months or 1 year

D) Anytime between 1 and 3 years

A

A) 1 or 3 years

EC2 Reserved Instances can be reserved for 1 or 3 years only.

13
Q

You would like to deploy a High-Performance Computing (HPC) application on EC2 instances. Which EC2 instance type should you choose?

A) Storage Optimized

B) Compute Optimized

C) Memory Optimized

D) General Purpose

A

B) Compute Optimized

Compute Optimized EC2 instances are great for compute-intensive workloads requiring high-performance processors (e.g., batch processing, media transcoding, high-performance computing, scientific modeling & machine learning, and dedicated gaming servers).

14
Q

Which EC2 Purchasing Option should you use for an application you plan to run on a server continuously for 1 year?

A) Reserved Instances

B) Spot Instances

C) On-Demand Instances

A

A) Reserved Instances

Reserved Instances are good for long workloads. You can reserve EC2 instances for 1 or 3 years.

15
Q

You are preparing to launch an application that will be hosted on a set of EC2 instances. This application needs some software installation and some OS packages need to be updated during the first launch. What is the best way to achieve this when you launch the EC2 instances?

A) Connect to each EC2 instance using SSH, then install the required software and update your OS packages manually

B) Write a bash script that installs the required software and updates to your OS, then contact AWS Support and provide them with the script. They will run it on your EC2 instances at launch

C) Write a bash script that installs the required software and updates to your OS, then use this script in EC2 User Data when you launch your EC2 instance

A

C) Write a bash script that installs the required software and updates to your OS, then use this script in EC2 User Data when you launch your EC2 instance

EC2 User Data is used to bootstrap your EC2 instances using a bash script. This script can contain commands such as installing software/packages, downloading files from the Internet, or anything else you need.
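
A minimal sketch of this bootstrap approach with boto3 (the AMI ID is hypothetical); boto3 base64-encodes the User Data string for you.

```python
# Launch an instance whose User Data script installs and updates software at first boot.
import boto3

ec2 = boto3.client("ec2")

user_data = """#!/bin/bash
yum update -y                 # update OS packages
yum install -y httpd          # install required software
systemctl enable --now httpd
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical Amazon Linux AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)
```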

16
Q

Which EC2 Instance Type should you choose for a critical application that uses an in-memory database?

A) Compute Optimized

B) Storage Optimized

C) Memory Optimized

D) General Purpose

A

C) Memory Optimized

Memory Optimized EC2 instances are great for workloads requiring large data sets in memory.

17
Q

You have an e-commerce application with an OLTP database hosted on-premises. The application is so popular that its database receives thousands of requests per second. You want to migrate the database to an EC2 instance. Which EC2 Instance Type should you choose to handle this high-frequency OLTP database?

A) Compute Optimized

B) Storage Optimized

C) Memory Optimized

D) General Purpose

A

B) Storage Optimized

Storage Optimized EC2 instances are great for workloads requiring high, sequential read/write access to large data sets on local storage.

18
Q

TRUE / FALSE

Security Groups can be attached to only one EC2 instance

A

FALSE

Security Groups can be attached to multiple EC2 instances within the same AWS Region/VPC.

19
Q

You’re planning to migrate on-premises applications to AWS. Your company has strict compliance requirements that require your applications to run on dedicated servers. You also need to use your own server-bound software license to reduce costs. Which EC2 Purchasing Option is suitable for you?

A) Convertible Reserved Instances

B) Dedicated Hosts

C) Spot Instances

A

B) Dedicated Hosts

Dedicated Hosts are good for companies with strong compliance needs or for software that has complicated licensing models. This is the most expensive EC2 Purchasing Option available.

20
Q

You would like to deploy a database technology on an EC2 instance and the vendor license bills you based on the physical cores and underlying network socket visibility. Which EC2 Purchasing Option allows you to get visibility into them?

A) Spot Instances

B) On-Demand

C) Dedicated Hosts

D) Reserved Instances

A

C) Dedicated Hosts

21
Q

You have launched an EC2 instance that will host a NodeJS application. After installing all the required software and configuring your application, you noted down the EC2 instance's public IPv4 so you could access it. Then, you stopped and started the EC2 instance to complete the application configuration. After the restart, you can't access the EC2 instance, and you found that its public IPv4 has changed. What should you do to assign a fixed public IPv4 to your EC2 instance?

A) Allocate an Elastic IP and assign it to your EC2 instance

B) From inside your EC2 instance OS, change network configuration from DHCP to static and assign it a public IPv4

C) Contact AWS Support and request a fixed public IPv4 to your EC2 Instance

D) This can’t be done, you can only assign a fixed private IPv4 to your EC2 instance

A

A) Allocate an Elastic IP and assign it to your EC2 instance

An Elastic IP is a public IPv4 address that you keep for as long as you want, and you can attach it to one EC2 instance at a time.
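
A minimal sketch with boto3 (the instance ID is hypothetical): allocate an Elastic IP, then associate it with the running instance.

```python
# Reserve a fixed public IPv4 in the account and attach it to an existing instance.
import boto3

ec2 = boto3.client("ec2")

allocation = ec2.allocate_address(Domain="vpc")          # the Elastic IP stays yours until released
ec2.associate_address(
    AllocationId=allocation["AllocationId"],
    InstanceId="i-0123456789abcdef0",                     # hypothetical instance ID
)
print("Fixed public IPv4:", allocation["PublicIp"])
```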

22
Q

Spot Fleet is a set of Spot Instances and optionally ……………

A) Reserved Instances

B) On-Demand Instances

C) Dedicated Hosts

D) Dedicated Instances

A

B) On-Demand Instances

Spot Fleet is a set of Spot Instances and optionally On-demand Instances. It allows you to automatically request Spot Instances with the lowest price.

23
Q

You have an application performing big data analysis hosted on a fleet of EC2 instances. You want to ensure your EC2 instances have the highest networking performance while communicating with each other. Which EC2 Placement Group should you choose?

A) Spread Placement Group

B) Cluster Placement Group

C) Partition Placement Group

A

B) Cluster Placement Group

Cluster Placement Groups place your EC2 instances next to each other which gives you high-performance computing and networking.

24
Q

You have a critical application hosted on a fleet of EC2 instances in which you want to achieve maximum availability when there’s an AZ failure. Which EC2 Placement Group should you choose?

A) Cluster Placement Group

B) Partition Placement Group

C) Spread Placement Group

A

C) Spread Placement Group

Spread Placement Group places your EC2 instances on different physical hardware across different AZs.

25
Q

TRUE / FALSE

Elastic Network Interface (ENI) can be attached to EC2 instances in another AZ.

A

FALSE

Elastic Network Interfaces (ENIs) are bound to a specific AZ. You cannot attach an ENI to an EC2 instance in a different AZ.

26
Q

The following are true regarding EC2 Hibernate, EXCEPT:

A) EC2 Instance Root Volume must be an Instance Store volume

B) Supports On-Demand and Reserved Instances

C) EC2 Instance RAM must be less than 150GB

D) EC2 Instance Root Volume type must be an EBS volume

A

A) EC2 Instance Root Volume must be an Instance Store volume

To enable EC2 Hibernate, the EC2 Instance Root Volume type must be an EBS volume and must be encrypted to ensure the protection of sensitive content.

27
Q

You have just terminated an EC2 instance in us-east-1a, and its attached EBS volume is now available. Your teammate tries to attach it to an EC2 instance in us-east-1b but he can’t. What is a possible cause for this?

A) He’s missing IAM permissions

B) EBS volumes are locked to an AWS Region

C) EBS volumes are locked to an Availability Zone

A

C) EBS volumes are locked to an Availability Zone

EBS Volumes are created for a specific AZ. It is possible to migrate them between different AZs using EBS Snapshots.
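
A minimal sketch of that snapshot-based migration with boto3 (volume ID, region, and AZ names are hypothetical).

```python
# Snapshot a volume in one AZ, then create a new volume from the snapshot in another AZ.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",                    # hypothetical source volume in us-east-1a
    Description="Move data to us-east-1b",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])

new_volume = ec2.create_volume(
    SnapshotId=snapshot["SnapshotId"],
    AvailabilityZone="us-east-1b",                       # target AZ
)
print("New volume in us-east-1b:", new_volume["VolumeId"])
```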

28
Q

You have launched an EC2 instance with two EBS volumes: the root volume and another EBS volume to store data. A month later, you plan to terminate the EC2 instance. What is the default behavior for each EBS volume?

A) Both the root volume type and the EBS volume type will be deleted

B) The Root Volume type will be deleted and the EBS volume type will not be deleted

C) The root volume type will not be deleted and the EBS volume type will be deleted

D) Both the root volume type and the EBS volume type will not be deleted

A

B) The Root Volume type will be deleted and the EBS volume type will not be deleted

By default, the root volume is deleted because its “Delete On Termination” attribute is checked by default. Any other EBS volume is not deleted because its “Delete On Termination” attribute is disabled by default.

29
Q

TRUE / FALSE

You can use an AMI in N.Virginia Region us-east-1 to launch an EC2 instance in any AWS Region.

A

FALSE

AMIs are built for a specific AWS Region, they’re unique for each AWS Region. You can’t launch an EC2 instance using an AMI in another AWS Region, but you can copy the AMI to the target AWS Region and then use it to create your EC2 instances.

30
Q

Which of the following EBS volume types can be used as boot volumes when you create EC2 instances?

A) gp2, gp3, st1, sc1

B) gp2, gp3, io1, io2

C) io1, io2, st1, sc1

A

B) gp2, gp3, io1, io2

When creating EC2 instances, you can only use the following EBS volume types as boot volumes: gp2, gp3, io1, io2, and Magnetic (Standard).

31
Q

What is EBS Multi-Attach?

A) Attach the same EBS volume to multiple EC2 instances in multiple AZs

B) Attach multiple EBS volumes in the same AZ to the same EC2 instance

C) Attach the same EBS volume to multiple EC2 instances in the same AZ

D) Attach multiple EBS volumes in multiple AZs to the same EC2 instance

A

C) Attach the same EBS volume to multiple EC2 instances in the same AZ

Using EBS Multi-Attach, you can attach the same EBS volume to multiple EC2 instances in the same AZ. Each EC2 instance has full read/write permissions.

32
Q

You would like to encrypt an unencrypted EBS volume attached to your EC2 instance. What should you do?

A) Create an EBS snapshot of your EBS volume. Copy the snapshot and tick the option to encrypt the copied snapshot. Then, use the encrypted snapshot to create a new EBS volume

B) Select your EBS volume, choose Edit Attributes, then tick the Encrypt using KMS option

C) Create a new encrypted EBS volume, then copy data from your unencrypted EBS volume to the new EBS volume

D) Submit a request to AWS Support to encrypt your EBS volume

A

A) Create an EBS snapshot of your EBS volume. Copy the snapshot and tick the option to encrypt the copied snapshot. Then, use the encrypted snapshot to create a new EBS volume
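
A minimal sketch of these steps with boto3 (volume ID, region, and AZ are hypothetical).

```python
# Snapshot the unencrypted volume, copy the snapshot with encryption, then create a new encrypted volume.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1) Snapshot the unencrypted volume
snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# 2) Copy the snapshot and "tick the option to encrypt"
copy = ec2.copy_snapshot(
    SourceSnapshotId=snap["SnapshotId"],
    SourceRegion="us-east-1",
    Encrypted=True,
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[copy["SnapshotId"]])

# 3) Create a new (now encrypted) volume from the encrypted snapshot
ec2.create_volume(SnapshotId=copy["SnapshotId"], AvailabilityZone="us-east-1a")
```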

33
Q

You have a fleet of EC2 instances distributed across AZs that process a large data set. What do you recommend to make the same data accessible as an NFS drive to all of your EC2 instances?

A) Use EBS

B) Use EFS

C) Use an Instance Store

A

B) Use EFS

EFS is a network file system (NFS) that allows you to mount the same file system on EC2 instances that are in different AZs.

34
Q

You would like to have a high-performance local cache for your application hosted on an EC2 instance. You don’t mind losing the cache upon the termination of your EC2 instance. Which storage mechanism do you recommend as a Solutions Architect?

A) EBS

B) EFS

C) Instance Store

A

C) Instance Store

EC2 Instance Store provides the best disk I/O performance.

35
Q

You are running a high-performance database that requires an IOPS of 310,000 for its underlying storage. What do you recommend?

A) Use an EBS gp2 drive

B) Use an EBS io1 drive

C) Use an EC2 Instance Store

D) Use an EBS io2 Block Express Drive

A

C) Use an EC2 Instance Store

You can run a database on an EC2 instance that uses an Instance Store, but be aware that the data will be lost if the EC2 instance is stopped (it survives a reboot without problems). One solution is to set up a replication mechanism on another EC2 instance with an Instance Store to keep a standby copy; another is to set up backup mechanisms for your data. It’s up to you how you set up your architecture to meet your requirements. In this use case, the requirement is around IOPS, so we have to choose an EC2 Instance Store.

36
Q

Scaling an EC2 instance from r4.large to r4.4xlarge is called …………………

A) Horizontal Scalability

B) Vertical Scalability

A

B) Vertical Scalability

37
Q

Running an application on an Auto Scaling Group that scales the number of EC2 instances in and out is called …………………

A) Horizontal Scalability

B) Vertical Scalability

A

A) Horizontal Scalability

38
Q

Elastic Load Balancers provide a …………………..

A) static IPv4 we can use in our application

B) static DNS name we can use in our application

C) static IPv6 we can use in our application

A

B) static DNS name we can use in our application

Only the Network Load Balancer provides both a static DNS name and static IP addresses. The Application Load Balancer provides a static DNS name but does NOT provide a static IP. AWS wants your Elastic Load Balancer to be accessible using a static endpoint, even if the underlying infrastructure that AWS manages changes.

39
Q

You are running a website on 10 EC2 instances fronted by an Elastic Load Balancer. Your users are complaining about the fact that the website always asks them to re-authenticate when they are moving between website pages. You are puzzled because it’s working just fine on your machine and in the Dev environment with 1 EC2 instance. What could be the reason?

A) Your website must have an issue when hosted on multiple EC2 instances

B) The EC2 instances log out users as they can’t see their IP addresses, instead, they receive ELB IP addresses

C) The Elastic Load Balancer does not have Sticky Sessions enabled

A

C) The Elastic Load Balancer does not have Sticky Sessions enabled

The ELB Sticky Sessions feature ensures traffic for the same client is always redirected to the same target (e.g., an EC2 instance), so the client does not lose their session data.

40
Q

You are using an Application Load Balancer to distribute traffic to your website hosted on EC2 instances. It turns out that your website only sees traffic coming from private IPv4 addresses which are in fact your Application Load Balancer’s IP addresses. What should you do to get the IP address of clients connected to your website?

A) Modify your website’s frontend so that users send their IP in every request

B) Modify your website’s backend to get the client IP address from the X-Forwarded-For header

C) Modify your website’s backend to get the client IP address from the X-Forwarded-Port header

D) Modify your website’s backend to get the client IP address from the X-Forwarded-Proto header

A

B) Modify your website’s backend to get the client IP address from the X-Forwarded-For header

When you use an Application Load Balancer to distribute traffic to your EC2 instances, the requests appear to come from the ALB’s private IP addresses. To get the client’s IP address, the ALB adds an additional header called “X-Forwarded-For” that contains the client’s IP address.
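
A minimal sketch of reading that header on the backend; the helper function and sample header values are illustrative only.

```python
# Return the original client IP when behind an ALB, else the socket peer address.
def client_ip(headers: dict, remote_addr: str) -> str:
    forwarded_for = headers.get("X-Forwarded-For", "")
    if forwarded_for:
        # The left-most entry is the original client; proxies/ALB hops are appended after it.
        return forwarded_for.split(",")[0].strip()
    return remote_addr

# Example: headers as received by the backend from the ALB
print(client_ip({"X-Forwarded-For": "203.0.113.10, 10.0.3.25"}, "10.0.3.25"))  # -> 203.0.113.10
```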

41
Q

You hosted an application on a set of EC2 instances fronted by an Elastic Load Balancer. A week later, users begin complaining that sometimes the application just doesn’t work. You investigate the issue and found that some EC2 instances crash from time to time. What should you do to protect users from connecting to the EC2 instances that are crashing?

A) Enable ELB Health Checks

B) Enable ELB Stickiness

C) Enable SSL Termination

D) Enable Cross-Zone Load Balancing

A

A) Enable ELB Health Checks

When you enable ELB Health Checks, your ELB won’t send traffic to unhealthy (crashed) EC2 instances.

42
Q

You are working as a Solutions Architect for a company and you are required to design an architecture for a high-performance, low-latency application that will receive millions of requests per second. Which type of Elastic Load Balancer should you choose?

A) Application Load Balancer

B) Classic Load Balancer

C) Network Load Balancer

A

C) Network Load Balancer

Network Load Balancer provides the highest performance and lowest latency if your application needs it.

43
Q

Application Load Balancers support the following protocols, EXCEPT:

A) HTTP

B) HTTPS

C) TCP

D) WebSocket

A

C) TCP

Application Load Balancers support HTTP, HTTPS, and WebSocket.

44
Q

Application Load Balancers can route traffic to different Target Groups based on the following, EXCEPT:

A) Client’s Location (Geography)

B) Hostname

C) Request URL Path

D) Source IP Address

A

A) Client’s Location (Geography)

ALBs can route traffic to different Target Groups based on URL Path, Hostname, HTTP Headers, and Query Strings.

45
Q

Registered targets in a Target Group for an Application Load Balancer can be any of the following, EXCEPT:

A) EC2 Instances

B) Network Load Balancer

C) Private IP Address

D) Lambda Functions

A

B) Network Load Balancer

46
Q

For compliance purposes, you would like to expose a fixed static IP address to your end-users so that they can write firewall rules that will be stable and approved by regulators. What type of Elastic Load Balancer would you choose?

A) Application Load Balancer with an Elastic IP attached to it

B) Network Load Balancer

C) Classic Load Balancer

A

B) Network Load Balancer

Network Load Balancer has one static IP address per AZ and you can attach an Elastic IP address to it. Application Load Balancers and Classic Load Balancers have a static DNS name.

47
Q

You want to create a custom application-based cookie in your Application Load Balancer. Which of the following can you use as a cookie name?

A) AWSALBAPP

B) APPUSERC

C) AWSALBTG

D) AWSALB

A

B) APPUSERC

The other cookie names are reserved by the ELB (AWSALB, AWSALBAPP, AWSALBTG).

48
Q

You have a Network Load Balancer that distributes traffic across a set of EC2 instances in us-east-1. You have 2 EC2 instances in us-east-1b AZ and 5 EC2 instances in us-east-1e AZ. You have noticed that the CPU utilization is higher in the EC2 instances in us-east-1b AZ. After more investigation, you noticed that the traffic is equally distributed across the two AZs. How would you solve this problem?

A) Enable Cross-Zone Load Balancing

B) Enable Sticky Sessions

C) Enable ELB Health Checks

D) Enable SSL Termination

A

A) Enable Cross-Zone Load Balancing

When Cross-Zone Load Balancing is enabled, ELB distributes traffic evenly across all registered EC2 instances in all AZs.

49
Q

Which feature in both Application Load Balancers and Network Load Balancers allows you to load multiple SSL certificates on one listener?

A) TLS Termination

B) Server Name Indication (SNI)

C) SSL Security Policies

D) Host Headers

A

B) Server Name Indication (SNI)

50
Q

You have an Application Load Balancer that is configured to redirect traffic to 3 Target Groups based on the following hostnames: users.example.com, api.external.example.com, and checkout.example.com. You would like to configure HTTPS for each of these hostnames. How do you configure the ALB to make this work?

A) Use an HTTP to HTTPS redirect rule

B) Use a security group SSL certificate

C) Use Server Name Indication

A

C) Use Server Name Indication

Server Name Indication (SNI) allows you to expose multiple HTTPS applications each with its own SSL certificate on the same listener. Read more here: https://aws.amazon.com/blogs/aws/new-application-load-balancer-sni/

51
Q

You have an application hosted on a set of EC2 instances managed by an Auto Scaling Group for which you have configured both the desired and maximum capacity to 3. You have also created a CloudWatch Alarm that is configured to scale out your ASG when CPU Utilization reaches 60%. Your application suddenly receives huge traffic and is now running at 80% CPU Utilization. What will happen?

A) Nothing

B) The desired capacity will go up to 4 and the maximum capacity will stay at 3

C) The desired capacity will go up to 4 and the maximum capacity will stay at 4

A

A) Nothing

The Auto Scaling Group can’t go over the maximum capacity (you configured) during scale-out events.

52
Q

You have an Auto Scaling Group fronted by an Application Load Balancer. You have configured the ASG to use ALB Health Checks, and one EC2 instance has just been reported unhealthy. What will happen to the EC2 instance?

A) The ASG will keep the instance running and restart the application

B) The ASG will detach the EC2 instance and leave it running

C) The ASG will terminate the EC2 instance

A

C) The ASG will terminate the EC2 instance

You can configure the Auto Scaling Group to determine the EC2 instances’ health based on Application Load Balancer Health Checks instead of EC2 Status Checks (default). When an EC2 instance fails the ALB Health Checks, it is marked unhealthy and will be terminated while the ASG launches a new EC2 instance.

53
Q

Your boss asked you to scale your Auto Scaling Group based on the number of requests per minute your application makes to your database. What should you do?

A) Create a CloudWatch custom metric, then create a CloudWatch Alarm on this metric to scale your ASG

B) You politely tell him that it’s impossible

C) Enable Detailed Monitoring then create a CloudWatch Alarm to scale your ASG

A

A) Create a CloudWatch custom metric, then create a CloudWatch Alarm on this metric to scale your ASG

There’s no CloudWatch Metric for “requests per minute” for backend-to-database connections. You need to create a CloudWatch Custom Metric, then create a CloudWatch Alarm.
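
A minimal sketch of that pattern with boto3 (namespace, metric name, and scaling-policy ARN are hypothetical): publish the custom metric from the application, then alarm on it to drive the ASG.

```python
# Publish a custom "requests per minute" metric, then create an alarm on it.
import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish the custom metric on a schedule from your application
cloudwatch.put_metric_data(
    Namespace="MyApp",
    MetricData=[{
        "MetricName": "DatabaseRequestsPerMinute",
        "Value": 1250.0,
        "Unit": "Count",
    }],
)

# Alarm on the custom metric; the alarm action points to an ASG scaling policy (hypothetical ARN)
cloudwatch.put_metric_alarm(
    AlarmName="HighDatabaseRequests",
    Namespace="MyApp",
    MetricName="DatabaseRequestsPerMinute",
    Statistic="Average",
    Period=60,
    EvaluationPeriods=2,
    Threshold=2000.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy:example"],
)
```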

54
Q

A web application is hosted on a fleet of EC2 instances managed by an Auto Scaling Group and exposed through an Application Load Balancer. Both the EC2 instances and the ALB are deployed in a VPC with the CIDR 192.168.0.0/18. How do you configure the EC2 instances’ security group to ensure only the ALB can access them on port 80?

A) Add an inbound rule with port 80 and 0.0.0.0/0 as the source

B) Add an inbound rule with port 80 and 192.168.0.0/18 as the source

C) Add an inbound rule with port 80 and ALB’s Security Group as the source

D) Load an SSL certificate on the ALB

A

C) Add an inbound rule with port 80 and ALB’s Security Group as the source

This is the most secure way of ensuring only the ALB can access the EC2 instances. Referencing security groups in rules is extremely powerful, and many exam questions rely on it. Make sure you fully master the concept behind it!
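
A minimal sketch of such a rule with boto3 (both security group IDs are hypothetical).

```python
# Allow inbound port 80 on the instances' security group only from the ALB's security group.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0aaaabbbbccccdddd",            # hypothetical EC2 instances' security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "UserIdGroupPairs": [
            {"GroupId": "sg-0eeeeffff00001111"}  # source = the ALB's security group
        ],
    }],
)
```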

55
Q

There is an Auto Scaling Group running in the eu-west-2 region, configured to span two Availability Zones: eu-west-2a and eu-west-2b. Currently, 3 EC2 instances are running in eu-west-2a and 4 EC2 instances are running in eu-west-2b. The ASG is about to scale in. Which EC2 instance will get terminated?

A) A random EC2 instance in eu-west-2a

B) The EC2 instance in eu-west-2a with the oldest Launch Template version

C) A random EC2 instance in eu-west-2b

D) The EC2 instance in eu-west-2b with the oldest Launch Template version

A

D) The EC2 instance in eu-west-2b with the oldest Launch Template version

Make sure you remember the Default Termination Policy for an Auto Scaling Group. It tries to balance the number of instances across AZs first, then terminates the instance with the oldest Launch Template (or Launch Configuration) version.

56
Q

An application is deployed with an Application Load Balancer and an Auto Scaling Group. Currently, you manually scale the ASG and you would like to define a Scaling Policy that will ensure the average number of connections to your EC2 instances is around 1000. Which Scaling Policy should you use?

A) Simple Scaling Policy

B) Step Scaling Policy

C) Target Tracking Policy

D) Scheduled Scaling Policy

A

C) Target Tracking Policy
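
A minimal sketch of a Target Tracking policy that keeps roughly 1,000 requests per target (the ASG name and ALB/target-group resource label are hypothetical).

```python
# Create a Target Tracking scaling policy on an existing Auto Scaling Group.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-asg",
    PolicyName="keep-1000-requests-per-target",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ALBRequestCountPerTarget",
            # format: app/<alb-name>/<alb-id>/targetgroup/<tg-name>/<tg-id>
            "ResourceLabel": "app/my-alb/1234567890abcdef/targetgroup/my-tg/abcdef1234567890",
        },
        "TargetValue": 1000.0,
    },
)
```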

57
Q

Your application hosted on EC2 instances managed by an Auto Scaling Group suddenly receives a spike in traffic, which triggers your ASG to scale out, and a new EC2 instance is launched. The traffic continues to increase, but the ASG doesn’t launch any new EC2 instances immediately; it only does so after 5 minutes. What is a possible cause for this behavior?

A) Cooldown Period

B) Lifecycle Hooks

C) Target Tracking Policy

D) Launch Template

A

A) Cooldown Period

For each Auto Scaling Group, there’s a Cooldown Period after each scaling activity. In this period, the ASG doesn’t launch or terminate EC2 instances. This gives time to metrics to stabilize. The default value for the Cooldown Period is 300 seconds (5 minutes).

58
Q

A company has an Auto Scaling Group where random EC2 instances suddenly crashed in the past month. They can’t troubleshoot why the EC2 instances crash as the ASG terminates the unhealthy EC2 instances and replaces them with new EC2 instances. What will you do to troubleshoot the issue and prevent unhealthy instances from being terminated by the ASG?

A) Use AWS Lambda to pause the EC2 instance before terminating

B) Use ASG Lifecycle Hooks to pause the EC2 instance in the Terminating state for troubleshooting

C) Use CloudWatch Logs to troubleshoot the issue

A

B) Use ASG Lifecycle Hooks to pause the EC2 instance in the Terminating state for troubleshooting

59
Q

Amazon RDS supports the following databases, EXCEPT:

A) MongoDB

B) MySQL

C) MariaDB

D) Microsoft SQL Server

A

A) MongoDB

RDS supports MySQL, PostgreSQL, MariaDB, Oracle, MS SQL Server, and Amazon Aurora.

60
Q

You’re planning for a new solution that requires a MySQL database that must be available even in case of a disaster in one of the Availability Zones. What should you use?

A) Create Read Replicas

B) Enable Encryption

C) Enable Multi-AZ

A

C) Enable Multi-AZ

Multi-AZ helps when you plan a disaster recovery for an entire AZ going down. If you plan against an entire AWS Region going down, you should use backups and replication across AWS Regions.

61
Q

We have an RDS database that struggles to keep up with the demand of requests from our website. Our million users mostly read news, and we don’t post news very often. Which solution is NOT adapted to this problem?

A) An ElastiCache Cluster

B) RDS Multi-AZ

C) RDS Read Replicas

A

B) RDS Multi-AZ

Be very careful with the way you read questions at the exam. Here, the question is asking which solution is NOT adapted to this problem. ElastiCache and RDS Read Replicas do indeed help with scaling reads.

62
Q

You have set up read replicas on your RDS database, but users are complaining that upon updating their social media posts, they do not see their updated posts right away. What is a possible cause for this?

A) There must be a bug in your application

B) Read Replicas have Asynchronous Replication, therefore it’s likely your users will only read Eventual Consistency

C) You should have set up Multi-AZ instead

A

B) Read Replicas have Asynchronous Replication, therefore it’s likely your users will only read Eventual Consistency

63
Q

Which RDS (NOT Aurora) feature when used does not require you to change the SQL connection string?

A) Multi-AZ

B) Read Replicas

A

A) Multi-AZ

64
Q

Your application is running on a fleet of EC2 instances managed by an Auto Scaling Group behind an Application Load Balancer. Users have to constantly log back in, and you don’t want to enable Sticky Sessions on your ALB as you fear it will overload some EC2 instances. What should you do?

A) Use your own custom Load Balancer on EC2 instances instead of using ALB

B) Store session data in RDS

C) Store session data in ElastiCache

D) Store session data in a shared EBS volume

A

C) Store session data in ElastiCache

Storing session data in ElastiCache is a common pattern for ensuring different EC2 instances can retrieve your users’ session state when needed.

65
Q

An analytics application is currently performing its queries against your main production RDS database. These queries run at any time of the day and slow down the RDS database which impacts your users’ experience. What should you do to improve the users’ experience?

A) Setup a Read Replica

B) Setup Multi-AZ

C) Run the analytics queries at night

A

A) Setup a Read Replica

Read Replicas will help as your analytics application can now perform queries against it, and these queries won’t impact the main production RDS database.

66
Q

You would like to ensure you have a replica of your database available in another AWS Region if a disaster happens to your main AWS Region. Which database do you recommend to implement this easily?

A) RDS Read Replicas

B) RDS Multi-AZ

C) Aurora Read Replicas

D) Aurora Global Database

A

D) Aurora Global Database

Aurora Global Database allows you to replicate your Aurora database across AWS Regions, so a copy is readily available in another Region if a disaster affects your main Region.

67
Q

How can you enhance the security of your ElastiCache Redis Cluster by forcing users to enter a password when they connect?

A) Use Redis Auth

B) Use IAM Auth

C) Use Security Groups

A

A) Use Redis Auth

68
Q

Your company has a production Node.js application that is using RDS MySQL 5.6 as its database. A new application programmed in Java will perform some heavy analytics workload to create a dashboard on a regular hourly basis. What is the most cost-effective solution you can implement to minimize disruption for the main application?

A) Enable Multi-AZ for the RDS database and run the analytics workload on the standby database

B) Create a Read Replica in a different AZ and run the analytics workload on the replica database

C) Create a Read Replica in a different AZ and run the analytics workload on the source database

A

B) Create a Read Replica in a different AZ and run the analytics workload on the replica database

69
Q

You would like to create a disaster recovery strategy for your RDS PostgreSQL database so that in case of a regional outage the database can be quickly made available for both read and write workloads in another AWS Region. The DR database must be highly available. What do you recommend?

A) Create a Read Replica in the same region and enable Multi-AZ on the main database

B) Create a Read Replica in a different region and enable Multi-AZ on the Read Replica

C) Create a Read Replica in the same region and enable Multi-AZ on the Read Replica

D) Enable Multi-Region option on the main database

A

B) Create a Read Replica in a different region and enable Multi-AZ on the Read Replica

70
Q

You have migrated the MySQL database from on-premises to RDS. You have a lot of applications and developers interacting with your database. Each developer has an IAM user in the company’s AWS account. What is a suitable approach to give access to developers to the MySQL RDS DB instance instead of creating a DB user for each one?

A) By default IAM users have access to your RDS database

B) Use Amazon Cognito

C) Enable IAM Database Authentication

A

C) Enable IAM Database Authentication

71
Q

Which of the following statements is true regarding replication in RDS Read Replicas and Multi-AZ?

A) Read Replica uses Asynchronous Replication and Multi-AZ uses Asynchronous Replication

B) Read Replica uses Asynchronous Replication and Multi-AZ uses Synchronous Replication

C) Read Replica uses Synchronous Replication and Multi-AZ uses Synchronous Replication

D) Read Replica uses Synchronous Replication and Multi-AZ uses Asynchronous Replication

A

B) Read Replica uses Asynchronous Replication and Multi-AZ uses Synchronous Replication

72
Q

How do you encrypt an unencrypted RDS DB instance?

A) Do it straight from AWS Console, select your RDS DB instance, choose Actions then Encrypt Using KMS

B) Do it straight from AWS Console, after stopping the RDS DB instance

C) Create a snapshot of the unencrypted RDS DB instance, copy the snapshot and tick “Enable encryption,” then restore the RDS DB instance from the encrypted snapshot

A

C) Create a snapshot of the unencrypted RDS DB instance, copy the snapshot and tick “Enable encryption,” then restore the RDS DB instance from the encrypted snapshot
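
A minimal sketch of this snapshot-copy-restore flow with boto3 (all identifiers and the KMS key are hypothetical).

```python
# Snapshot the unencrypted DB instance, copy the snapshot with encryption, then restore from it.
import boto3

rds = boto3.client("rds")

# 1) Snapshot the unencrypted DB instance
rds.create_db_snapshot(DBInstanceIdentifier="mydb",
                       DBSnapshotIdentifier="mydb-unencrypted-snap")
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="mydb-unencrypted-snap")

# 2) Copy the snapshot; providing a KMS key is the "Enable encryption" step
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="mydb-unencrypted-snap",
    TargetDBSnapshotIdentifier="mydb-encrypted-snap",
    KmsKeyId="alias/aws/rds",
)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="mydb-encrypted-snap")

# 3) Restore a new, encrypted DB instance from the encrypted snapshot
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="mydb-encrypted",
    DBSnapshotIdentifier="mydb-encrypted-snap",
)
```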

73
Q

For your RDS database, you can have up to ………… Read Replicas.

A) 3

B) 5

C) 7

A

B) 5

74
Q

Which RDS database technology does NOT support IAM Database Authentication?

A) Oracle

B) PostgreSQL

C) MySQL

A

A) Oracle

75
Q

You have an un-encrypted RDS DB instance and you want to create Read Replicas. Can you configure the RDS Read Replicas to be encrypted?

A) No

B) Yes

A

A) No

You can not create encrypted Read Replicas from an unencrypted RDS DB instance.

76
Q

An application running in production uses an Aurora Cluster as its database. Your development team would like to run a scaled-down version of the application that can perform some heavy workloads on an as-needed basis. Most of the time, the application will be unused. Your CIO has tasked you with helping the team achieve this while minimizing costs. What do you suggest?

A) Use an Aurora Global Database

B) Use an RDS Database

C) Use Aurora Serverless

D) Run Aurora on EC2, and write a script to shut down the EC2 instance at night

A

C) Use Aurora Serverless

77
Q

How many Aurora Read Replicas can you have in a single Aurora DB Cluster?

A) 5

B) 10

C) 15

A

C) 15

78
Q

Amazon Aurora supports both …………………….. databases.

A) MySQL and MariaDB

B) MySQL and PostgreSQL

C) Oracle and MariaDB

D) Oracle and MS SQL Server

A

B) MySQL and PostgreSQL

79
Q

You work as a Solutions Architect for a gaming company. One of the games mandates that players are ranked in real-time based on their score. Your boss asked you to design then implement an effective and highly available solution to create a gaming leaderboard. What should you use?

A) Use RDS for MySQL

B) Use an Amazon Aurora

C) Use ElastiCache for Memcached

D) Use ElastiCache for Redis- Sorted Sets

A

D) Use ElastiCache for Redis- Sorted Sets

80
Q

You have purchased mycoolcompany.com on Amazon Route 53 Registrar and would like the domain to point to your Elastic Load Balancer my-elb-1234567890.us-west-2.elb.amazonaws.com. Which Route 53 Record type must you use here?

A) CNAME

B) Alias

A

B) Alias

81
Q

You have deployed a new Elastic Beanstalk environment and would like to direct 5% of your production traffic to this new environment. This allows you to monitor CloudWatch metrics and ensure that no bugs exist in your new environment. Which Route 53 Record type allows you to do so?

A) Simple

B) Weighted

C) Latency

D) Failover

A

B) Weighted

Weighted Routing Policy allows you to redirect part of the traffic based on weight (e.g., percentage). It’s a common use case to send part of traffic to a new version of your application.
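
A minimal sketch of two Weighted records splitting traffic 95/5 (the hosted zone ID, record name, and target DNS names are hypothetical).

```python
# Create two Weighted records with the same name so ~5% of lookups resolve to the new environment.
import boto3

route53 = boto3.client("route53")

def weighted_record(value, set_id, weight):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "CNAME",
            "SetIdentifier": set_id,
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": value}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={"Changes": [
        weighted_record("old-env.us-west-2.elasticbeanstalk.com", "old", 95),
        weighted_record("new-env.us-west-2.elasticbeanstalk.com", "new", 5),
    ]},
)
```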

82
Q

You have updated a Route 53 Record’s myapp.mydomain.com value to point to a new Elastic Load Balancer, but it looks like users are still redirected to the old ELB. What is a possible cause for this behavior?

A) Because of the Alias Record

B) Because of the CNAME record

C) Because of the TTL

D) Because of Route 53 Health Checks

A

C) Because of the TTL

Each DNS record has a TTL (Time To Live), which tells clients how long to cache the value so they don’t overload the DNS Resolver with requests. The TTL should strike a balance between how long the value is cached and how many requests go to the DNS Resolver.

83
Q

You have an application that’s hosted in two different AWS Regions us-west-1 and eu-west-2. You want your users to get the best possible user experience by minimizing the response time from application servers to your users. Which Route 53 Routing Policy should you choose?

A) Multi-Value

B) Weighted

C) Latency

D) Geolocation

A

C) Latency

Latency Routing Policy evaluates the latency between your users and AWS Regions and helps them get a DNS response that minimizes latency (i.e., response time).

84
Q

You have a legal requirement that people in any country but France should NOT be able to access your website. Which Route 53 Routing Policy helps you in achieving this?

A) Latency

B) Simple

C) Multi-Value

D) Geolocation

A

D) Geolocation

85
Q

You have purchased a domain on GoDaddy and would like to use Route 53 as the DNS Service Provider. What should you do to make this work?

A) Request for a domain transfer

B) Create a Private Hosted Zone and update the 3rd party Registrar NS records

C) Create a Public Hosted Zone and update the Route 53 NS records

D) Create a Public Hosted Zone and update the 3rd party Registrar NS records

A

D) Create a Public Hosted Zone and update the 3rd party Registrar NS records

Public Hosted Zones are meant to be used for people requesting your website through the Internet. The NS records must then be updated at the 3rd-party Registrar.

86
Q

Which of the following are NOT valid Route 53 Health Checks?

A) Health Check that monitors SQS Queue

B) Health Check that monitors an Endpoint

C) Health Check that monitors other Health Checks

D) Health Check that monitors CloudWatch Alarms

A

A) Health Check that monitors SQS Queue

87
Q

Your website TriangleSunglasses.com is hosted on a fleet of EC2 instances managed by an Auto Scaling Group and fronted by an Application Load Balancer. Your ASG has been configured to scale on-demand based on the traffic going to your website. To reduce costs, you have configured the ASG to scale based on the traffic going through the ALB. To make the solution highly available, you have updated your ASG and set the minimum capacity to 2. How can you further reduce the costs while respecting the requirements?

A) Remove the ALB and use an Elastic IP instead

B) Reserve two EC2 instances

C) Reduce the minimum capacity to 1

D) Reduce the minimum capacity to 0

A

B) Reserve two EC2 instances

This is the way to save further costs as we will run 2 EC2 instances no matter what.

88
Q

Which of the following will NOT help us while designing a STATELESS application tier?

A) Store session data in Amazon RDS

B) Store session data in Amazon ElastiCache

C) Store session data in the client HTTP cookies

D) Store session data on EBS volumes

A

D) Store session data on EBS volumes

EBS volumes are created in a specific AZ and can only be attached to one EC2 instance at a time.

89
Q

You want to install software updates on 100s of Linux EC2 instances that you manage. You want to store these updates on shared storage which should be dynamically loaded on the EC2 instances and shouldn’t require heavy operations. What do you suggest?

A) Store the software updates on EBS and sync them using data replication software from one master in each AZ

B) Store the software updates on EFS and mount EFS as a network drive at startup

C) Package the software updates as an EBS snapshot and create EBS volumes for each new software update

D) Store the software updates on Amazon RDS

A

B) Store the software updates on EFS and mount EFS as a network drive at startup

EFS is a network file system (NFS) that allows you to mount the same file system to 100s of EC2 instances. Storing software updates on an EFS allows each EC2 instance to access them.

90
Q

As a Solutions Architect, you’re planning to migrate a complex ERP software suite to the AWS Cloud. You’re planning to host the software on a set of Linux EC2 instances managed by an Auto Scaling Group. The software traditionally takes over an hour to set up on a Linux machine. How do you recommend speeding up the installation process when there’s a scale-out event?

A) Use a Golden AMI

B) Bootstrap using EC2 User Data

C) Store the application in Amazon RDS

D) Retrieve the application setup files from EFS

A

A) Use a Golden AMI

Golden AMI is an image that contains all your software installed and configured so that future EC2 instances can boot up quickly from that AMI.

91
Q

You’re developing an application and would like to deploy it to Elastic Beanstalk with minimal cost. You should run it in ………………

A) Single Instance Mode

B) High Availability Mode

A

A) Single Instance Mode

The question mentions that you’re still in the development stage and you want to reduce costs. Single Instance Mode will create one EC2 instance and one Elastic IP.

92
Q

You’re deploying your application to an Elastic Beanstalk environment but you notice that the deployment process is painfully slow. After reviewing the logs, you found that your dependencies are resolved on each EC2 instance each time you deploy. How can you speed up the deployment process with minimal impact?

A) Remove some dependencies in your code

B) Place the dependencies in Amazon EFS

C) Create a Golden AMI that contains the dependencies and use that image to launch the EC2 instances

A

C) Create a Golden AMI that contains the dependencies and use that image to launch the EC2 instances

Golden AMI is an image that contains all your software, dependencies, and configurations, so that future EC2 instances can boot up quickly from that AMI.

93
Q

You have a 25 GB file that you’re trying to upload to S3 but you’re getting errors. What is a possible cause for this?

A) The file size limit on S3 is 5GB

B) S3 Service in requested AWS Region must be down

C) Use Multi-Part upload when you upload files bigger than 5GB

A

C) Use Multi-Part upload when you upload files bigger than 5GB

Multi-Part Upload is recommended as soon as the file is over 100 MB.

94
Q

You’re getting errors while trying to create a new S3 bucket named “dev”. You’re using a new AWS Account with no S3 buckets created before. What is a possible cause for this?

A) You’re missing IAM permissions to create an S3 bucket

B) S3 bucket names must be globally unique and “dev” is already taken

A

B) S3 bucket names must be globally unique and “dev” is already taken

95
Q

You have enabled versioning in your S3 bucket which already contains a lot of files. Which version will the existing files have?

A) 1

B) 0

C) -1

D) null

A

D) null

96
Q

Your client wants to make sure that file encryption is happening in S3, but he wants to fully manage the encryption keys and never store them in AWS. You recommend him to use ……………………….

A) SSE-S3

B) SSE-KMS

C) SSE-C

D) Client-Side Encryption

A

C) SSE-C

With SSE-C, the encryption happens in AWS, but you provide and manage the encryption keys yourself; AWS never stores them.

97
Q

A company you’re working for wants their data stored in S3 to be encrypted. They don’t mind the encryption keys stored and managed by AWS, but they want to maintain control over the rotation policy of the encryption keys. You recommend them to use ………………..

A) SSE-S3

B) SSE-KMS

C) SSE-C

D) Client-Side Encryption

A

B) SSE-KMS

With SSE-KMS, the encryption happens in AWS and the encryption keys are stored and managed by AWS (in KMS), but you have full control over the rotation policy of the encryption key.
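
A minimal sketch of requesting SSE-KMS on upload (bucket, key, and KMS alias are hypothetical).

```python
# Upload an object and ask S3 to encrypt it with a customer-managed KMS key.
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="example-bucket",
    Key="reports/2024-01.csv",
    Body=b"col1,col2\n1,2\n",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/my-app-key",   # hypothetical customer-managed KMS key
)
```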

98
Q

Your company does not trust AWS for the encryption process and wants it to happen on the application. You recommend them to use ………………..

A) SSE-S3

B) SSE - KMS

C) SSE-C

D) Client-Side Encryption

A

D) Client-Side Encryption

With Client-Side Encryption, you perform the encryption yourself, keep full control over the encryption keys, and send only encrypted data to AWS. AWS does not know your encryption keys and cannot decrypt your data.

99
Q

You have updated an S3 bucket policy to allow IAM users to read/write files in the S3 bucket, but one of the users complains that he can’t perform a PutObject API call. What is a possible cause for this?

A) The S3 bucket policy must be wrong

B) The user is lacking permissions

C) The IAM user must have an explicit DENY in the attached IAM policy

D) You need to contact AWS support to lift this limit

A

C) The IAM user must have an explicit DENY in the attached IAM policy

Explicit DENY in an IAM Policy will take precedence over an S3 bucket policy.

100
Q

You have a website that loads files from an S3 bucket. When you try the URL of the files directly in your Chrome browser it works, but when the website you’re visiting tries to load these files it doesn’t. What’s the problem?

A) The Bucket policy is wrong

B) The IAM policy is wrong

C) CORS is wrong

D) Encryption is wrong

A

C) CORS is wrong

Cross-Origin Resource Sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain.
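
A minimal sketch of a CORS configuration that lets the website’s origin load objects from the bucket (bucket name and origin are hypothetical).

```python
# Allow GET requests from the website's origin on the bucket hosting the files.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_cors(
    Bucket="example-assets-bucket",
    CORSConfiguration={
        "CORSRules": [{
            "AllowedOrigins": ["https://www.example.com"],  # the site loading the files
            "AllowedMethods": ["GET"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }],
    },
)
```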

101
Q

An application hosted on an EC2 instance wants to upload objects to an S3 bucket using the PutObject API call, but it lacks the required permissions. What should you do?

A) From inside the EC2 instance, run aws configure and insert your personal IAM Credentials, because you have access to do the required API call

B) Ask an administrator to attach an IAM Policy to the IAM Role on your EC2 instance that authorizes it to do the required API call

C) Export the environment variables with your IAM credentials on the EC2 instance

D) Use the EC2 Metadata API call

A

B) Ask an administrator to attach an IAM Policy to the IAM Role on your EC2 instance that authorizes it to do the required API call

IAM Roles are the right way to provide credentials and permissions to an EC2 instance.

102
Q

You and your colleague are working on an application that’s interacting with some AWS services through making API calls. Your colleague can run the application on his machine without issues, while you get API Authorization Exceptions. What should you do?

A) Send him your AWS Access Key and Secret Access Key so he can replicate the issue on his machine

B) Ask him to send you his IAM credentials so you can work without issues

C) Compare both your IAM Policy and his IAM Policy in AWS Policy Simulator to understand the differences

D) Ask him to create an EC2 instance and insert his IAM credentials inside it, so you can run the application from the EC2 instance

A

C) Compare both your IAM Policy and his IAM Policy in AWS Policy Simulator to understand the differences

103
Q

Your administrator launched a Linux EC2 instance and gives you the EC2 Key Pair so you can SSH into it. After getting into the EC2 instance, you want to get the EC2 instance ID. What is the best way to do this?

A) Create an IAM Role and attach it to your EC2 instance so you can perform a describe-instances API call

B) Query the user data at http://169.254.169.254/latest/user-data

C) Query the metadata at http://169.254.169.254/latest/meta-data

D) Query the metadata at http://254.169.254.169/latest/meta-data

A

C) Query the metadata at http://169.254.169.254/latest/meta-data
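
A minimal sketch using only the Python standard library from inside the instance; it uses the IMDSv2 token flow (fetch a session token, then request the metadata path).

```python
# Query the EC2 instance metadata service (IMDSv2) for the instance ID.
import urllib.request

# 1) Fetch a short-lived session token
token_req = urllib.request.Request(
    "http://169.254.169.254/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req).read().decode()

# 2) Use the token to read the instance ID from the metadata path
meta_req = urllib.request.Request(
    "http://169.254.169.254/latest/meta-data/instance-id",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(meta_req).read().decode())  # e.g. i-0123456789abcdef0
```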

104
Q

You have enabled versioning and want to be extra careful when it comes to deleting files on an S3 bucket. What should you enable to prevent accidental permanent deletions?

A) Use a bucket policy

B) Enable MFA Delete

C) Encrypt the files

D) Disable versioning

A

B) Enable MFA Delete

MFA Delete forces users to use MFA codes before deleting S3 objects. It’s an extra level of security to prevent accidental deletions.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
105
Q

You would like all your files in an S3 bucket to be encrypted by default. What is the optimal way of achieving this?

A) Use a bucket policy that forces HTTPS connections

B) Enable Default Encryption

C) Enable versioning

A

B) Enable Default Encryption
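
A minimal boto3 sketch of enabling Default Encryption (the bucket name is hypothetical; SSE-S3 is shown, SSE-KMS works the same way with a KMS key ID):

import boto3

s3 = boto3.client("s3")

# Every new object written to the bucket is encrypted with SSE-S3 by default.
s3.put_bucket_encryption(
    Bucket="examplebucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}
        }]
    },
)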

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
106
Q

You suspect that some of your employees are trying to access files in an S3 bucket that they don’t have access to. How can you verify this is indeed the case without them noticing?

A) Enable S3 Access Logs and analyze them using Athena

B) Restrict their IAM policies and look at CloudTrail logs

C) Use a bucket policy

A

A) Enable S3 Access Logs and analyze them using Athena

S3 Access Logs log all the requests made to S3 buckets and Amazon Athena can then be used to run serverless analytics on top of the log files.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
107
Q

You want the content of an S3 bucket to be fully available in different AWS Regions. That will help your team perform data analysis at the lowest latency and cost possible. What S3 feature should you use?

A) Amazon CloudFront Distributions

B) S3 Versioning

C) S3 Static Website Hosting

D) S3 Replication

A

D) S3 Replication

S3 Replication allows you to replicate data from an S3 bucket to another in the same/different AWS Region.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
108
Q

You have 3 S3 buckets. One source bucket A, and two destination buckets B and C in different AWS Regions. You want to replicate objects from bucket A to both bucket B and C. How would you achieve this?

A) Configure replication from bucket A to bucket B, then from bucket A to bucket C

B) Configure replication from bucket A to bucket B, then from bucket B to bucket C

C) Configure replication from bucket A to bucket C, then from bucket C to bucket B

A

A) Configure replication from bucket A to bucket B, then from bucket A to bucket C

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
109
Q

Which of the following is NOT a Glacier Deep Archive retrieval mode?

A) Expedited (1 - 5 minutes)

B) Standard (12 hours)

C) Bulk (48 hours)

A

A) Expedited (1 - 5 minutes)

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
110
Q

How can you be notified when there’s an object uploaded to your S3 bucket?

A) S3 Select

B) S3 Access Logs

C) S3 Event Notifications

D) S3 Analytics

A

C) S3 Event Notifications

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
111
Q

You are looking to provide temporary URLs to a growing list of federated users to allow them to perform a file upload on your S3 bucket to a specific location. What should you use?

A) S3 CORS

B) S3 Pre-Signed URL

C) S3 Bucket Policies

D) IAM Users

A

B) S3 Pre-Signed URL

S3 Pre-Signed URLs are temporary URLs that you generate to grant time-limited access to some actions in your S3 bucket.
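
A minimal boto3 sketch of generating such a URL for an upload (the bucket, key, and expiry are assumptions):

import boto3

s3 = boto3.client("s3")

# Temporary URL that allows a single PUT to this key for one hour.
upload_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "examplebucket", "Key": "uploads/user-123/report.csv"},
    ExpiresIn=3600,
)
print(upload_url)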

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
112
Q

You have an S3 bucket that has S3 Versioning enabled. This S3 bucket has a lot of objects, and you would like to remove old object versions to reduce costs. What’s the best approach to automate the deletion of these old object versions?

A) S3 Lifecycle Rules - Transition Actions

B) S3 Lifecycle Rules - Expiration Actions

C) S3 Access Logs

A

B) S3 Lifecycle Rules - Expiration Actions
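
As a sketch (the bucket name and 30-day retention are assumptions), a lifecycle rule that permanently deletes noncurrent object versions could look like this with boto3:

import boto3

s3 = boto3.client("s3")

# Permanently delete old object versions 30 days after they become noncurrent.
s3.put_bucket_lifecycle_configuration(
    Bucket="examplebucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-old-versions",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
        }]
    },
)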

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
113
Q

How can you automate the transition of S3 objects between their different tiers?

A) AWS Lambda

B) CloudWatch Events

C) S3 Lifecycle Rules

A

C) S3 Lifecycle Rules

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
114
Q

Which of the following is NOT a Glacier retrieval mode?

A) Instant (10 Seconds)

B) Expedited (1 - 5 minutes)

C) Standard (3 - 5 hours)

D) Bulk (5 - 12 hours)

A

A) Instant (10 Seconds)

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
115
Q

While you’re uploading large files to an S3 bucket using Multi-part Upload, there are a lot of unfinished parts stored in the S3 bucket due to network issues. You are not using these unfinished parts and they cost you money. What is the best approach to remove these unfinished parts?

A) Use AWS Lambda to loop on each old/unfinished part and delete them

B) Request AWS Support to help you delete old/unfinished parts

C) Use an S3 Lifecycle Policy to automate old/unfinished parts deletion

A

C) Use an S3 Lifecycle Policy to automate old/unfinished parts deletion
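
For reference, a sketch of such a lifecycle rule (the bucket name and 7-day window are assumptions):

import boto3

s3 = boto3.client("s3")

# Abort multipart uploads (and delete their stored parts) that haven't
# completed within 7 days of being initiated.
s3.put_bucket_lifecycle_configuration(
    Bucket="examplebucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "abort-incomplete-multipart-uploads",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        }]
    },
)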

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
116
Q

Which of the following is a Serverless data analysis service allowing you to query data in S3?

A) S3 Analytics

B) Athena

C) Redshift

D) RDS

A

B) Athena

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
117
Q

You are looking to get recommendations for S3 Lifecycle Rules. How can you analyze the optimal number of days to move objects between different storage tiers?

A) S3 Inventory

B) S3 Analytics

C) S3 Lifecycle Rules Advisor

A

B) S3 Analytics

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
118
Q

You are looking to build an index of your files in S3, using Amazon RDS PostgreSQL. To build this index, it is necessary to read the first 250 bytes of each object in S3, which contains some metadata about the content of the file itself. There are over 100,000 files in your S3 bucket, amounting to 50 TB of data. How can you build this index efficiently?

A) Use the RDS Import feature to load the data from S3 to PostgreSQL, and run a SQL query to build the index

B) Create an application that will traverse the S3 bucket, read all the files one by one, extract the first 250 bytes, and store that information in RDS

C) Create an application that will traverse the S3 bucket, issue a Byte Range Fetch for the first 250 bytes, and store that information in RDS

D) Create an application that will traverse the S3 bucket, use S3 Select to get the first 250 bytes, and store that information in RDS

A

C) Create an application that will traverse the S3 bucket, issue a Byte Range Fetch for the first 250 bytes, and store that information in RDS
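
A minimal sketch of such a Byte-Range Fetch with boto3 (the bucket and key names are hypothetical):

import boto3

s3 = boto3.client("s3")

# Fetch only the first 250 bytes of the object instead of downloading it all.
resp = s3.get_object(
    Bucket="examplebucket",
    Key="data/file-00001.bin",
    Range="bytes=0-249",
)
header_bytes = resp["Body"].read()  # the 250-byte metadata header to index in RDS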

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
119
Q

For compliance reasons, your company has a policy mandating that database backups must be retained for 4 years. It shouldn’t be possible to erase them. What do you recommend?

A) Glacier Vaults with Vault Lock Policies

B) EFS network drives with restrictive Linux permissions

C) S3 with Bucket Policies

A

A) Glacier Vaults with Vault Lock Policies

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
120
Q

You have a large dataset stored on-premises that you want to upload to an S3 bucket. The dataset is divided into 10 GB files. You have good bandwidth but your Internet connection isn’t stable. What is the best way to upload this dataset to S3 quickly while avoiding problems caused by the unstable Internet connection?

A) Use Multi-part Upload only

B) Use S3 Select & use S3 Transfer Acceleration

C) Use S3 Multi-part Upload & S3 Transfer Acceleration

A

C) Use S3 Multi-part Upload & S3 Transfer Acceleration

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
121
Q

You would like to retrieve a subset of your dataset stored in S3 with the .csv format. You would like to retrieve a month of data and only 3 columns out of 10, to minimize compute and network costs. What should you use?

A) S3 Analytics

B) S3 Access Logs

C) S3 Select

D) S3 Inventory

A

C) S3 Select
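
A rough boto3 sketch of an S3 Select query that pulls 3 of the 10 columns for a single month out of a CSV object (the object name, column names, and filter are assumptions):

import boto3

s3 = boto3.client("s3")

resp = s3.select_object_content(
    Bucket="examplebucket",
    Key="dataset/records.csv",
    ExpressionType="SQL",
    Expression="SELECT s.user_id, s.amount, s.ts FROM s3object s "
               "WHERE s.ts LIKE '2021-01%'",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
    OutputSerialization={"CSV": {}},
)

# The response is an event stream; only the Records events carry data.
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode())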

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
122
Q

You have paid content stored in an S3 bucket. You want to distribute that content globally, so you have set up a CloudFront Distribution and configured the S3 bucket to only exchange data with your CloudFront Distribution. Which CloudFront feature allows you to securely distribute this paid content?

A) Origin Access Identity

B) S3 Pre-Signed URL

C) CloudFront Signed URL

D) CloudFront Invalidations

A

C) CloudFront Signed URL

CloudFront Signed URLs are commonly used to distribute paid content through dynamically generated signed URLs.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
123
Q

You have a CloudFront Distribution that serves your website hosted on a fleet of EC2 instances behind an Application Load Balancer. All your clients are from the United States, but you found that some malicious requests are coming from other countries. What should you do to only allow users from the US and block other countries?

A) Use CloudFront Geo Restriction

B) Use Origin Access Identity

C) Set up a security group and attach it to your CloudFront Distribution

D) Use a Route 53 Latency record and attach it to CloudFront

A

A) Use CloudFront Geo Restriction

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
124
Q

You have a static website hosted on an S3 bucket. You have created a CloudFront Distribution that points to your S3 bucket to better serve your requests and improve performance. After a while, you noticed that users can still access your website directly from the S3 bucket. You want to force users to access the website only through CloudFront. How would you achieve that?

A) Send an email to your clients and tell them not to use the S3 endpoint

B) Configure your CloudFront Distribution and create an Origin Access Identity, then update your S3 Bucket Policy to only accept requests from your CloudFront Distribution OAI user

C) Use S3 Access Points to redirect clients to CloudFront

A

B) Configure your CloudFront Distribution and create an Origin Access Identity, then update your S3 Bucket Policy to only accept requests from your CloudFront Distribution OAI user

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
125
Q

A website is hosted on a set of EC2 instances fronted by an Application Load Balancer. You have created a CloudFront Distribution and set up its origin to point to your ALB. What should you use to provide access to hundreds of private files served by your CloudFront distribution?

A) CloudFront Signed URLs

B) CloudFront Origin Access Identity

C) CloudFront Signed Cookies

D) CloudFront HTTPS Encryption

A

C) CloudFront Signed Cookies

Signed Cookies are useful when you want to access multiple files.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
126
Q

You are creating an application that is going to expose an HTTP REST API. There is a need to provide request routing rules at the HTTP level. Due to security requirements, your application can only be exposed through the use of two static IP addresses. How can you create a solution that satisfies these requirements?

A) Use a Network Load Balancer and attach Elastic IPs to it

B) Use AWS Global Accelerator and an Application Load Balancer

C) Use an Application Load Balancer and attach Elastic IPs to it

D) Use CloudFront with Elastic IP and an Application Load Balancer

A

B) Use AWS Global Accelerator and an Application Load Balancer

AWS Global Accelerator will provide us with the two static IP addresses and the ALB will provide us with the HTTP routing rules.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
127
Q

What does this S3 bucket policy do?

{
    "Version": "2012-10-17",
    "Id": "Mystery policy",
    "Statement": [{
         "Sid": "What could it be?",
         "Effect": "Allow",
         "Principal": {
             "CanonicalUser": "CloudFront Origin Identity Canonical User ID"
         },
         "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::examplebucket/*"
    }]
}

A) Forces GetObject request to be encrypted if coming from CloudFront

B) Only allows the S3 bucket content to be accessed from your CloudFront Distribution Origin Access Identity

C) Only allows GetObject type of request on the S3 bucket from anybody

A

B) Only allows the S3 bucket content to be accessed from your CloudFront Distribution Origin Access Identity

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
128
Q

You need to move hundreds of Terabytes into Amazon S3, then process the data using a fleet of EC2 instances. You have a 1 Gbit/s broadband connection. You would like to move the data faster and possibly process it while in transit. What do you recommend?

A) Use your network

B) Use Snowcone

C) Use AWS Data Migration

D) Use Snowball Edge

A

D) Use Snowball Edge

Snowball Edge is the right answer as it comes with computing capabilities and allows you to pre-process the data while it’s being moved into Snowball.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
129
Q

You want to expose virtually infinite storage for your tape backups. You want to keep the same software you’re using and want an iSCSI compatible interface. What do you use?

A) AWS Snowball

B) AWS Storage Gateway - Tape Gateway

C) AWS Storage Gateway - Volume Gateway

D) AWS Storage Gateway - File Gateway

A

B) AWS Storage Gateway - Tape Gateway

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
130
Q

Your EC2 Windows Servers need to share some data by having a Network File System mounted on them which respects the Windows security mechanisms and has integration with Microsoft Active Directory. What do you recommend?

A) Amazon FSx for Windows (File Server)

B) Amazon EFS

C) Amazon FSx for Lustre

D) Amazon S3 with File Gateway

A

A) Amazon FSx for Windows (File Server)

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
131
Q

You have hundreds of Terabytes that you want to migrate to AWS S3 as soon as possible. You tried to use your network bandwidth and it will take around 3 weeks to complete the upload process. What is the recommended approach to use in this situation?

A) AWS Storage Gateway - Volume Gateway

B) S3 Multi-Part Upload

C) AWS Snowball Edge

D) AWS Data Migration Service

A

C) AWS Snowball Edge

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
132
Q

You have a large dataset stored in S3 that you want to access from on-premises servers using the NFS or SMB protocol. Also, you want to authenticate access to these files through on-premises Microsoft AD. What would you use?

A) AWS Storage Gateway - Volume Gateway

B) AWS Storage Gateway - File Gateway

C) AWS Storage Gateway - Tape Gateway

D) AWS Data Migration Service

A

B) AWS Storage Gateway - File Gateway

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
133
Q

You are planning to migrate your company’s infrastructure from on-premises to AWS Cloud. You have an on-premises Microsoft Windows File Server that you want to migrate. What is the most suitable AWS service you can use?

A) Amazon FSx for Windows (File Server)

B) AWS Storage Gateway - File Gateway

C) AWS Managed Microsoft AD

A

A) Amazon FSx for Windows (File Server)

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
134
Q

You would like to have a distributed POSIX compliant file system that will allow you to maximize the IOPS in order to perform some High-Performance Computing (HPC) and genomics computational research. This file system has to easily scale to millions of IOPS. What do you recommend?

A) EFS with Max. IO enabled

B) Amazon FSx for Lustre

C) Amazon S3 mounted on the EC2 instances

D) EC2 instance Store

A

B) Amazon FSx for Lustre

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
135
Q

Which deployment option in the FSx file system provides you with long-term storage that’s replicated within the same AZ?

A) Scratch File System

B) Persistent File System

A

B) Persistent File System

Provides long-term storage where data is replicated within the same AZ. Failed files are replaced within minutes.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
136
Q

Which of the following protocols is NOT supported by AWS Transfer Family?

A) File Transfer Protocol (FTP)

B) File Transfer Protocol over SSL (FTPS)

C) Transport Layer Security (TLS)

D) Secure File Transfer Protocol (SFTP)

A

C) Transport Layer Security (TLS)

AWS Transfer Family is a managed service for file transfers into and out of S3 or EFS using the FTP, FTPS, and SFTP protocols. TLS is an encryption protocol, not a file transfer protocol, so it is not a supported Transfer Family option.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
137
Q

You have an e-commerce website and you are preparing for Black Friday, which is the biggest sale of the year. You expect that your traffic will increase by 100x. Your website is already using an SQS Standard Queue, and you’re running a fleet of EC2 instances in an Auto Scaling Group to consume SQS messages. What should you do to prepare your SQS Queue?

A) Contact AWS Support to pre-warm your SQS Standard Queue

B) Enable Auto Scaling in your SQS Queue

C) Increase the capacity of the SQS Queue

D) Do nothing, SQS scales automatically

A

D) Do nothing, SQS scales automatically

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
138
Q

How would you configure your SQS messages to be processed by consumers only after 5 minutes of being published to your SQS Queue?

A) Increase the DelaySeconds parameter

B) Change the Visibility Timeout

C) Enable Long Polling

D) Use Amazon SQS Extended Client

A

A) Increase the DelaySeconds parameter

SQS Delay Queues add a period of time during which Amazon SQS keeps new messages invisible to consumers. With a Delay Queue, a message is hidden from the moment it is first added to the queue (default: 0 minutes, max.: 15 minutes).
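
A minimal sketch of publishing a message that stays hidden for 5 minutes (the queue URL is hypothetical):

import boto3

sqs = boto3.client("sqs")

# Consumers will not see this message until 300 seconds after it is sent.
sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/orders-queue",
    MessageBody="order-created",
    DelaySeconds=300,
)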

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
139
Q

You have an SQS Queue where each consumer polls 10 messages at a time and finishes processing them in 1 minute. After a while, you noticed that the same SQS messages are received by different consumers resulting in your messages being processed more than once. What should you do to resolve this issue?

A) Enable Long Polling

B) Add DelaySeconds parameter to the messages when being produced

C) Increase the Visibility Timeout

D) Decrease the Visibility Timeout

A

C) Increase the Visibility Timeout

SQS Visibility Timeout is a period of time during which Amazon SQS prevents other consumers from receiving and processing the message again. In Visibility Timeout, a message is hidden only after it is consumed from the queue. Increasing the Visibility Timeout gives more time to the consumer to process the message and prevent duplicate reading of the message. (default: 30 sec., min.: 0 sec., max.: 12 hours)
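
As a sketch, the queue-level Visibility Timeout could be raised above the 1-minute processing time like this (the queue URL and the 2-minute value are assumptions):

import boto3

sqs = boto3.client("sqs")

# Give a consumer 120 seconds to process and delete a message before it
# becomes visible to other consumers again.
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/orders-queue",
    Attributes={"VisibilityTimeout": "120"},
)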

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
140
Q

You have a fleet of EC2 instances (consumers) managed by an Auto Scaling Group that is used to process messages in an SQS Standard Queue. Lately, you have found a lot of messages processed twice, and after further investigation, you found that these messages cannot be processed successfully. How would you troubleshoot (debug) why these messages fail?

A) SQS Standard Queue

B) SQS Dead Letter Queue

C) SQS Delay Queue

D) SQS FIFO Queue

A

B) SQS Dead Letter Queue

SQS Dead Letter Queue is where other SQS queues (source queues) can send messages that can’t be processed (consumed) successfully. It’s useful for debugging as it allows you to isolate problematic messages so you can debug why their processing doesn’t succeed.
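
For reference, a sketch of attaching a Dead Letter Queue through a redrive policy (the queue URL, DLQ ARN, and receive count are hypothetical):

import boto3
import json

sqs = boto3.client("sqs")

# After 3 failed receives, a message is moved to the DLQ for later inspection.
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/source-queue",
    Attributes={
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:source-dlq",
            "maxReceiveCount": "3",
        })
    },
)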

141
Q

Which SQS Queue type allows your messages to be processed exactly once and in order?

A) SQS Standard Queue

B) SQS Dead Letter Queue

C) SQS Delay Queue

D) SQS FIFO Queue

A

D) SQS FIFO Queue

SQS FIFO (First-In-First-Out) Queues have all the capabilities of the SQS Standard Queue, plus the following two features. First, the order in which messages are sent and received is strictly preserved, and a message is delivered once and remains available until a consumer processes and deletes it. Second, duplicate messages are not introduced into the queue.

142
Q

You have 3 different applications to which you’d like to send the same message. All 3 applications are using SQS. Which approach would you choose?

A) Use SQS Replication Feature

B) Use SNS + SQS Fan Out Pattern

C) Send messages individually to 3 SQS queues

A

B) Use SNS + SQS Fan Out Pattern

This is a common pattern where only one message is sent to the SNS topic and is then “fanned out” to multiple SQS queues. This approach is fully decoupled, has no data loss, and gives you the ability to add more SQS queues (more applications) over time.
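
As a sketch, fanning out one SNS message to the SQS subscribers could look like this (the topic and queue ARNs are hypothetical; each queue's access policy must also allow the SNS topic to send to it):

import boto3

sns = boto3.client("sns")

# Subscribe each application's SQS queue to the same SNS topic once...
sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:123456789012:orders-topic",
    Protocol="sqs",
    Endpoint="arn:aws:sqs:us-east-1:123456789012:app-1-queue",
)

# ...then a single publish is delivered to every subscribed queue.
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:orders-topic",
    Message="order-created",
)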

143
Q

You have a Kinesis data stream with 6 shards provisioned. This data stream usually receives 5 MB/s of data and sends out 8 MB/s. Occasionally, your traffic spikes up to 2x and you get a ProvisionedThroughputExceeded exception. What should you do to resolve the issue?

A) Add more shards

B) Enable Kinesis Replication

C) Use SQS as a buffer to Kinesis

A

A) Add more shards

The capacity limits of a Kinesis data stream are defined by the number of shards within the data stream. The limits can be exceeded by either data throughput or the number of reading data calls. Each shard allows for 1 MB/s incoming data and 2 MB/s outgoing data. You should increase the number of shards within your data stream to provide enough capacity.

144
Q

You have a website where you want to analyze clickstream data such as the sequence of clicks a user makes, the amount of time a user spends, and where the navigation begins and how it ends. You decided to use Amazon Kinesis, so you have configured the website to send this clickstream data all the way to a Kinesis data stream. While checking the data sent to your Kinesis data stream, you found that the users’ data is not ordered and the data for one individual user is spread across many shards. How would you fix this problem?

A) There are too many shards, you should only use 1 shard

B) You shouldn’t use multiple consumers, only one and it should re-order data

C) For each record sent to Kinesis, add a partition key that represents the identity of the user

A

C) For each record sent to Kinesis, add a partition key that represents the identity of the user

Kinesis Data Streams uses the partition key associated with each data record to determine which shard a given data record belongs to. When you use the identity of each user as the partition key, all of a user’s records are sent to the same shard, which keeps that user’s data ordered.
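
A minimal sketch of sending a clickstream record with the user's identity as the partition key (the stream name and record shape are assumptions):

import boto3
import json

kinesis = boto3.client("kinesis")

# Records sharing the same PartitionKey (the user ID) land on the same shard,
# so each user's clicks stay in order.
kinesis.put_record(
    StreamName="clickstream",
    Data=json.dumps({"user_id": "user-42", "event": "click", "page": "/home"}).encode(),
    PartitionKey="user-42",
)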

145
Q

Which AWS service is most appropriate when you want to perform real-time analytics on streams of data?

A) Amazon SQS

B) Amazon SNS

C) Amazon Kinesis Data Analytics

D) Amazon Kinesis Data Firehose

A

C) Amazon Kinesis Data Analytics

Use Kinesis Data Analytics with Kinesis Data Streams as the underlying source of data.

146
Q

You are running an application that produces a large amount of real-time data that you want to load into S3 and Redshift. Also, the data needs to be transformed before being delivered to its destination. Which architecture would you choose?

A) SQS + AWS Lambda

B) SNS + HTTP Endpoint

C) Kinesis Data Streams + Kinesis Data Firehose

A

C) Kinesis Data Streams + Kinesis Data Firehose

This is a perfect combination of technologies for loading near real-time data into S3 and Redshift. Kinesis Data Firehose supports custom data transformations using AWS Lambda.

147
Q

Which of the following is NOT a supported subscriber for AWS SNS?

A) Amazon Kinesis Data Streams

B) Amazon SQS

C) HTTP(S) Endpoint

D) AWS Lambda

A

A) Amazon Kinesis Data Streams

Note: Kinesis Data Firehose is now supported, but not Kinesis Data Streams.

148
Q

Which AWS service helps you when you want to send email notifications to your users?

A) Amazon SQS with AWS Lambda

B) Amazon SNS

C) Amazon Kinesis

A

B) Amazon SNS

149
Q

You’re running many micro-services applications on-premises and they communicate using a message broker that supports MQTT protocol. You’re planning to migrate these applications to AWS without re-engineering the applications and modifying the code. Which AWS service allows you to get a managed message broker that supports the MQTT protocol?

A) Amazon SQS

B) Amazon SNS

C) Amazon Kinesis

D) Amazon MQ

A

D) Amazon MQ

Amazon MQ supports industry-standard APIs such as JMS and NMS, and protocols for messaging, including AMQP, STOMP, MQTT, and WebSocket.

150
Q

You have multiple Docker-based applications hosted on-premises that you want to migrate to AWS. You don’t want to provision or manage any infrastructure; you just want to run your containers on AWS. Which AWS service should you choose?

A) Elastic Container Service (ECS) in EC2 Launch Mode

B) Elastic Container Registry (ECR)

C) AWS Fargate on ECS

D) Elastic Kubernetes Service (EKS)

A

C) AWS Fargate on ECS

AWS Fargate allows you to run your containers on AWS without managing any servers.

151
Q

Amazon Elastic Container Service (ECS) has two Launch Types: ……………… and ………………

A) Amazon EC2 Launch Type and Fargate Launch Type

B) Amazon EC2 Launch Type and EKS Launch Type

C) Fargate Launch Type and EKS Launch Type

A

A) Amazon EC2 Launch Type and Fargate Launch Type

152
Q

You have an application hosted on an ECS Cluster (EC2 Launch Type) where you want your ECS tasks to upload files to an S3 bucket. Which IAM Role for your ECS Tasks should you modify?

A) EC2 Instance Profile

B) ECS Task Role

A

B) ECS Task Role

ECS Task Role is the IAM Role used by the ECS task itself. Use when your container wants to call other AWS services like S3, SQS, etc.

153
Q

You’re planning to migrate a WordPress website running on Docker containers from on-premises to AWS. You have decided to run the application in an ECS Cluster, but you want your docker containers to access the same WordPress website content such as website files, images, videos, etc. What do you recommend to achieve this?

A) Mount an EFS volume

B) Mount an EBS volume

C) Use an EC2 Instance Store

A

A) Mount an EFS volume

EFS volume can be shared between different EC2 instances and different ECS Tasks. It can be used as a persistent multi-AZ shared storage for your containers.

154
Q

You are deploying an application on an ECS Cluster made of EC2 instances. Currently, the cluster is hosting one application that is issuing API calls to DynamoDB successfully. Upon adding a second application, which issues API calls to S3, you are getting authorization issues. What should you do to resolve the problem and ensure proper security?

A) Edit the EC2 instance role to add permissions to S3

B) Create an IAM task role for the new application

C) Enable the Fargate mode

D) Edit the S3 bucket policy to allow the ECS task

A

B) Create an IAM task role for the new application

155
Q

Which feature allows an Application Load Balancer to redirect traffic to multiple ECS Tasks running on the same ECS Container instance?

A) Dynamic Port Mapping

B) Automatic Port Mapping

C) ECS Task Definition

D) ECS Service

A

A) Dynamic Port Mapping

156
Q

You are migrating your on-premises Docker-based applications to Amazon ECS. You were using Docker Hub Container Image Library as your container image repository. Which alternative AWS service is fully integrated with Amazon ECS?

A) AWS Fargate

B) Elastic Container Registry (ECR)

C) Elastic Kubernetes Service (EKS)

D) Amazon EC2

A

B) Elastic Container Registry (ECR)

Amazon ECR is a fully managed container registry that makes it easy to store, manage, share, and deploy your container images. Note that ECR only stores the images; it doesn’t run your Docker-based applications.

157
Q

You have created a Lambda function that typically will take around 1 hour to process some data. The code works fine when you run it locally on your machine, but when you invoke the Lambda function it fails with a “timeout” error after 3 seconds. What should you do?

A) Configure your Lambda’s timeout to 25 minutes

B) Configure your Lambda’s memory to 10 GB

C) Run your code somewhere else (like an EC2 instance)

A

C) Run your code somewhere else (like an EC2 instance)

Lambda’s maximum execution time is 15 minutes. You can run your code somewhere else such as an EC2 instance or use Amazon ECS.

158
Q

TRUE / FALSE

Before you create a DynamoDB table, you need to provision the EC2 instance the DynamoDB table will be running on.

A

FALSE

DynamoDB is serverless with no servers to provision, patch, or manage and no software to install, maintain or operate. It automatically scales tables up and down to adjust for capacity and maintain performance. It provides both provisioned (specify RCU & WCU) and on-demand (pay for what you use) capacity modes.

159
Q

You have provisioned a DynamoDB table with 10 RCUs and 10 WCUs. A month later you want to increase the RCU to handle more read traffic. What should you do?

A) Increase RCU and keep WCU the same

B) You need to increase both RCU and WCU

C) Increase RCU and decrease WCU

A

A) Increase RCU and keep WCU the same

RCU and WCU are decoupled, so you can increase/decrease each value separately.

160
Q

You have an e-commerce website where you are using DynamoDB as your database. You are about to enter the Christmas sale and you have a few items which are very popular and you expect that they will be read often. Unfortunately, last year due to the huge traffic you had the ProvisionedThroughputExceededException exception. What would you do to prevent this error from happening again?

A) Increase the RCU to a very high value

B) Create a DAX Cluster

C) Migrate the database away from DynamoDB for the time of the sale

A

B) Create a DAX Cluster

DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to 10x performance improvement. It caches the most frequently used data, thus offloading the heavy reads on hot keys from your DynamoDB table and preventing the “ProvisionedThroughputExceededException” exception.

161
Q

You have developed a mobile application that uses DynamoDB as its datastore. You want to automate sending welcome emails to new users after they sign up. What is the most efficient way to achieve this?

A) Schedule a Lambda function to run every minute using CloudWatch Events, scan the entire table looking for new users

B) Enable SNS and DynamoDB integration

C) Enable DynamoDB Streams and configure it to invoke a Lambda function to send emails

A

C) Enable DynamoDB Streams and configure it to invoke a Lambda function to send emails

DynamoDB Streams allows you to capture a time-ordered sequence of item-level modifications in a DynamoDB table. It’s integrated with AWS Lambda so that you create triggers that automatically respond to events in real-time.
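
As a rough sketch of such a trigger (the event fields follow the DynamoDB Streams record format; using SES for the email, the sender address, and the item attribute names are assumptions):

import boto3

ses = boto3.client("ses")

def lambda_handler(event, context):
    # Invoked by the DynamoDB Stream; each record describes one item change.
    for record in event["Records"]:
        if record["eventName"] == "INSERT":
            new_user = record["dynamodb"]["NewImage"]
            ses.send_email(
                Source="welcome@example.com",
                Destination={"ToAddresses": [new_user["email"]["S"]]},
                Message={
                    "Subject": {"Data": "Welcome!"},
                    "Body": {"Text": {"Data": "Thanks for signing up."}},
                },
            )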

162
Q

To create a serverless API, you should integrate Amazon API Gateway with ………………….

A) EC2 Instance

B) Elastic Load Balancing

C) AWS Lambda

A

C) AWS Lambda

163
Q

TRUE / FALSE

When you are using an Edge-Optimized API Gateway, your API Gateway lives in CloudFront Edge Locations across all AWS Regions.

A

FALSE

An Edge-Optimized API Gateway is best for geographically distributed clients. API requests are routed to the nearest CloudFront Edge Location which improves latency. The API Gateway still lives in one AWS Region.

164
Q

You would like your users to authenticate using Facebook before they are able to send requests to your API hosted by API Gateway. What should you use to achieve a seamless authentication integration?

A) Amazon Cognito Sync

B) DynamoDB user tables with Lambda Authorizer

C) Amazon Cognito User Pools

A

C) Amazon Cognito User Pools

Amazon Cognito User Pools integrate with Facebook to provide authenticated logins for your application users.

165
Q

You are running an application in production that is leveraging DynamoDB as its datastore and is experiencing smooth sustained usage. There is a need to make the application run in development mode as well, where it will experience an unpredictable volume of requests. What is the most cost-effective solution that you recommend?

A) Use Provisioned Capacity Mode with Auto Scaling Enabled for both development and production

B) Use Provisioned Capacity Mode with Auto Scaling Enabled for production and use On-Demand Capacity Mode for development

C) Use Provisioned Capacity Mode with Auto Scaling Enabled for development and use On-Demand Capacity Mode for production

D) Use On-Demand Capacity Mode for both development and production

A

B) Use Provisioned Capacity Mode with Auto Scaling Enabled for production and use On-Demand Capacity Mode for development

166
Q

You have an application that is served globally using CloudFront Distribution. You want to authenticate users at the CloudFront Edge Locations instead of authentication requests go all the way to your origins. What should you use to satisfy this requirement?

A) Lambda@Edge

B) API Gateway

C) DynamoDB

D) AWS Global Accelerator

A

A) Lambda@Edge

Lambda@Edge is a feature of CloudFront that lets you run code closer to your users, which improves performance and reduces latency.

167
Q

The maximum size of an item in a DynamoDB table is ……………….

A) 1 MB

B) 500 KB

C) 400 KB

D) 400 MB

A

C) 400 KB

168
Q

A startup company plans to run its application on AWS. As a solutions architect, the company hired you to design and implement a fully Serverless REST API. Which technology stack do you recommend?

A) API Gateway + AWS Lambda

B) Application Load Balancer + EC2

C) Elastic Container Service (ECS) + Elastic Block Storage (EBS)

D) Amazon CloudFront + S3

A

A) API Gateway + AWS Lambda

This is fully serverless.

169
Q

The following AWS services have an out of the box caching feature, EXCEPT ……………..

A) API Gateway

B) Lambda

C) DynamoDB

A

B) Lambda

Lambda does not have an out-of-the-box caching feature.

170
Q

You are running a mobile application where you want each registered user to upload/download images to/from their own folder in the S3 bucket. Also, you want to allow your users to sign up and sign in using their social media accounts (e.g., Facebook). Which AWS service should you choose?

A) AWS Identity and Access Management (IAM)

B) AWS Single Sign-On (AWS SSO)

C) Amazon Cognito

D) Amazon CloudFront

A

C) Amazon Cognito

Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. Amazon Cognito scales to millions of users and supports sign-in with social identity providers, such as Apple, Facebook, Google, and Amazon, and enterprise identity providers via SAML 2.0 and OpenID Connect.

171
Q

You have a lot of static files stored in an S3 bucket that you want to distribute globally to your users. Which AWS service should you use?

A) S3 Cross-Region Replication

B) Amazon CloudFront

C) Amazon Route 53

D) API Gateway

A

B) Amazon CloudFront

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds. This is a perfect use case for Amazon CloudFront.

172
Q

You have created a DynamoDB table in ap-northeast-1 and would like to make it available in eu-west-1, so you decided to create a DynamoDB Global Table. What needs to be enabled first before you create a DynamoDB Global Table?

A) DynamoDB Streams

B) DynamoDB DAX

C) DynamoDB Versioning

D) DynamoDB Backups

A

A) DynamoDB Streams

DynamoDB Streams enable DynamoDB to get a changelog and use that changelog to replicate data across replica tables in other AWS Regions.

173
Q

You have configured a Lambda function to run each time an item is added to a DynamoDB table using DynamoDB Streams. The function is meant to insert messages into the SQS queue for further long processing jobs. Each time the Lambda function is invoked, it seems able to read from the DynamoDB Stream but it isn’t able to insert the messages into the SQS queue. What do you think the problem is?

A) Lambda can’t be used to insert messages into the SQS queue, use an EC2 instance instead

B) The Lambda Execution IAM Role is missing permissions

C) The Lambda security group must allow outbound access to SQS

D) The SQS security group must be edited to allow AWS Lambda

A

B) The Lambda Execution IAM Role is missing permissions

174
Q

You would like to create an architecture for a micro-services application whose sole purpose is to encode videos stored in an S3 bucket and store the encoded videos back into an S3 bucket. You would like to make this micro-services application reliable and able to retry upon failure. Each video may take over 25 minutes to be processed. The services used in the architecture should be asynchronous and should have the capability to be stopped for a day and resume the next day from the videos that haven’t been encoded yet. Which of the following AWS services would you recommend in this scenario?

A) Amazon S3 + AWS Lambda

B) Amazon SNS + Amazon EC2

C) Amazon SQS + Amazon EC2

D) Amazon SQS + AWS Lambda

A

C) Amazon SQS + Amazon EC2

Amazon SQS allows you to retain messages for days and process them later, while we can take down our EC2 instances.

175
Q

You would like to distribute paid software installation files globally for your customers that have indeed purchased the content. The software may be purchased by different users, and you want to protect the download URL with security including IP restriction. Which solution do you recommend?

A) CloudFront Signed URL

B) S3 Pre-Signed URLs

C) EFS

D) Public S3 Bucket

A

A) CloudFront Signed URL

This will have security including IP address restriction.

176
Q

You are running a photo-sharing website where your images are downloaded from all over the world. Every month you publish a master pack of beautiful mountain images that are over 15 GB in size. The content is currently hosted on an Elastic File System (EFS) file system and distributed by an Application Load Balancer and a set of EC2 instances. Each month, you are experiencing very high traffic which increases the load on your EC2 instances and increases network costs. What do you recommend to reduce EC2 load and network costs without refactoring your website?

A) Host the master pack in S3

B) Enable Application Load Balancer Caching

C) Scale up the EC2 instances

D) Create a CloudFront Distribution

A

D) Create a CloudFront Distribution

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds. Amazon CloudFront can be used in front of an Application Load Balancer.

177
Q

An AWS service allows you to capture gigabytes of data per second in real-time and deliver these data to multiple consuming applications, with a replay feature.

A) Kinesis Data Streams

B) Amazon S3

C) Amazon MQ

A

A) Kinesis Data Streams

Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service. It can continuously capture gigabytes of data per second from hundreds of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events.

178
Q

Which database helps you store relational datasets, with SQL language compatibility and the capability of processing transactions such as insert, update, and delete?

A) Amazon Redshift

B) Amazon RDS

C) Amazon DynamoDB

D) Amazon ElastiCache

A

B) Amazon RDS

179
Q

Which AWS service provides you with caching capability that is compatible with Redis API?

A) Amazon RDS

B) Amazon DynamoDB

C) Amazon ElasticSearch

D) Amazon ElastiCache

A

D) Amazon ElastiCache

Amazon ElastiCache is a fully managed in-memory data store, compatible with Redis or Memcached.

180
Q

You want to migrate an on-premises MongoDB NoSQL database to AWS. You don’t want to manage any database servers, so you want to use a managed NoSQL database, preferably Serverless, that provides you with high availability, durability, and reliability. Which database should you choose?

A) Amazon RDS

B) Amazon DynamoDB

C) Amazon Redshift

D) Amazon Aurora

A

B) Amazon DynamoDB

Amazon DynamoDB is a key-value, document, NoSQL database.

181
Q

You are looking to perform Online Transaction Processing (OLTP). You would like to use a database that has built-in auto-scaling capabilities and provides you with the maximum number of replicas for its underlying storage. What AWS service do you recommend?

A) Amazon ElastiCache

B) Amazon Redshift

C) Amazon Aurora

D) Amazon RDS

A

C) Amazon Aurora

Amazon Aurora is a MySQL and PostgreSQL-compatible relational database. It features a distributed, fault-tolerant, self-healing storage system that auto-scales up to 128TB per database instance. It delivers high performance and availability with up to 15 low-latency read replicas, point-in-time recovery, continuous backup to Amazon S3, and replication across 3 AZs.

182
Q

As a Solutions Architect, a startup company asked you for help as they are working on an architecture for a social media website where users can be friends with each other, and like each other’s posts. The company plans on performing some complicated queries such as “What are the number of likes on the posts that have been posted by the friends of Mike?”. Which database do you recommend?

A) Amazon RDS

B) Amazon Redshift

C) Amazon Neptune

D) Amazon ElasticSearch

A

C) Amazon Neptune

Amazon Neptune is a fast, reliable, fully-managed graph database service that makes it easy to build and run applications that work with highly connected datasets.

183
Q

You have a set of files, 100MB each, that you want to store in a reliable and durable key-value store. Which AWS service do you recommend?

A) Amazon Athena

B) Amazon S3

C) Amazon DynamoDB

D) Amazon ElastiCache

A

B) Amazon S3

Amazon S3 is indeed a key-value store! (where the key is the full path of the object in the bucket)

184
Q

You would like to have a database that is efficient at performing analytical queries on large sets of columnar data. You would like to connect to this Data Warehouse using a reporting and dashboard tool such as Amazon QuickSight. Which AWS technology do you recommend?

A) Amazon RDS

B) Amazon S3

C) Amazon Redshift

D) Amazon Neptune

A

C) Amazon Redshift

185
Q

You have a lot of log files stored in an S3 bucket on which you want to perform a quick analysis, if possible Serverless, to filter the logs and find users that attempted to make an unauthorized action. Which AWS service allows you to do so?

A) Amazon DynamoDB

B) Amazon Redshift

C) Amazon S3 Glacier

D) Amazon Athena

A

D) Amazon Athena

Amazon Athena is an interactive serverless query service that makes it easy to analyze data in S3 buckets using Standard SQL.

186
Q

As a Solutions Architect, you have been instructed to prepare a disaster recovery plan for a Redshift cluster. What should you do?

A) Enable Multi-AZ

B) Enable Automated Snapshots, then configure your Redshift cluster to automatically copy snapshots to another AWS Region

C) Take a snapshot, then restore to a new Redshift Global Cluster

A

B) Enable Automated Snapshots, then configure your Redshift cluster to automatically copy snapshots to another AWS Region

187
Q

Which feature in Redshift forces all COPY and UNLOAD traffic moving between your cluster and data repositories through your VPCs?

A) Enhanced VPC Routing

B) Improved VPC Routing

C) Redshift Spectrum

A

A) Enhanced VPC Routing

188
Q

You are running a gaming website that is using DynamoDB as its data store. Users have been asking for a search feature to find other gamers by name, with partial matches if possible. Which AWS technology do you recommend to implement this feature?

A) Amazon DynamoDB

B) Amazon ElasticSearch

C) Amazon Neptune

D) Amazon Redshift

A

B) Amazon ElasticSearch

Anytime you see “search”, think ElasticSearch.

189
Q

An AWS service allows you to create, run, and monitor ETL (extract, transform, and load) jobs in a few clicks.

A) AWS Glue

B) Amazon DynamoDB

C) Amazon RDS

D) Amazon Redshift

A

A) AWS Glue

AWS Glue is a serverless data-preparation service for extract, transform, and load (ETL) operations.

190
Q

You have a couple of EC2 instances in which you would like their Standard CloudWatch Metrics to be collected every 1 minute. What should you do?

A) Enable CloudWatch Custom Metrics

B) Enable High Resolution

C) Enable Basic Monitoring

D) Enable Detailed Monitoring

A

D) Enable Detailed Monitoring

This is a paid offering and is disabled by default. When enabled, the EC2 instance’s metrics are available in 1-minute periods.

191
Q

High-Resolution Custom Metrics can have a minimum resolution of ……………………

A) 1 Second

B) 10 Seconds

C) 30 Seconds

D) 1 Minute

A

A) 1 Second

192
Q

You have an RDS DB instance that’s configured to push its database logs to CloudWatch. You want to create a CloudWatch alarm if there’s an Error found in the logs. How would you do that?

A) Create a scheduled CloudWatch Event that triggers an AWS Lambda every 1 hour, scans the logs, and notifies you through SNS Topic

B) Create a CloudWatch Logs Metric Filter that filters the logs for the keyword “Error”, then creates a CloudWatch Alarm based on that Metric Filter

C) Create an AWS Config Rule that monitors “Error” in your database logs and notifies you through SNS Topic

A

B) Create a CloudWatch Logs Metric Filter that filters the logs for the keyword “Error”, then creates a CloudWatch Alarm based on that Metric Filter
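
A rough boto3 sketch of the two steps (the log group, metric names, namespace, and threshold are assumptions):

import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# 1) Metric Filter: emit a value of 1 whenever "Error" appears in the logs.
logs.put_metric_filter(
    logGroupName="/aws/rds/instance/mydb/error",
    filterName="rds-error-filter",
    filterPattern="Error",
    metricTransformations=[{
        "metricName": "RdsLogErrors",
        "metricNamespace": "Custom/RDS",
        "metricValue": "1",
    }],
)

# 2) Alarm whenever at least one error shows up within a 5-minute period.
cloudwatch.put_metric_alarm(
    AlarmName="rds-log-errors",
    Namespace="Custom/RDS",
    MetricName="RdsLogErrors",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
)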

193
Q

You have an application hosted on a fleet of EC2 instances managed by an Auto Scaling Group whose minimum capacity you configured to 2. Also, you have created a CloudWatch Alarm that is configured to scale in your ASG when CPU Utilization is below 60%. Currently, your application runs on 2 EC2 instances, has low traffic, and the CloudWatch Alarm is in the ALARM state. What will happen?

A) One EC2 instance will be terminated and the ASG desired and minimum capacity will go to 1

B) The CloudWatch Alarm will remain in ALARM state but will never decrease the number of EC2 instances in the ASG

C) The CloudWatch Alarm will be detached from my ASG

D) The CloudWatch Alarm will go in OK state

A

B) The CloudWatch Alarm will remain in ALARM state but will never decrease the number of EC2 instances in the ASG

The number of EC2 instances in an ASG can not go below the minimum capacity, even if the CloudWatch alarm would in theory trigger an EC2 instance termination.

194
Q

How would you monitor your EC2 instance memory usage in CloudWatch?

A) Enable EC2 Detailed Monitoring

B) By default, the EC2 instance pushes memory usage to CloudWatch

C) Use the Unified CloudWatch Agent to push memory usage as a custom metric to CloudWatch

A

C) Use the Unified CloudWatch Agent to push memory usage as a custom metric to CloudWatch

195
Q

A CloudWatch Alarm set on a High-Resolution Custom Metric can be triggered as often as ………………….

A) 1 Second

B) 10 Seconds

C) 30 Seconds

D) 1 Minute

A

B) 10 Seconds

If you set an alarm on a high-resolution metric, you can specify a high-resolution alarm with a period of 10 seconds or 30 seconds, or you can set a regular alarm with a period of any multiple of 60 seconds.

196
Q

You have made a configuration change and would like to evaluate the impact of it on the performance of your application. Which AWS service should you use?

A) Amazon CloudWatch

B) Amazon CloudTrail

A

A) Amazon CloudWatch

Amazon CloudWatch is a monitoring service that allows you to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. It is used to monitor your applications’ performance and metrics.

197
Q

Someone has terminated an EC2 instance in your AWS account last week, which was hosting a critical database that contains sensitive data. Which AWS service helps you find who did that and when?

A) CloudWatch Metrics

B) CloudWatch Alarms

C) CloudWatch Events

D) AWS CloudTrail

A

D) AWS CloudTrail

AWS CloudTrail allows you to log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. It provides the event history of your AWS account activity, including API calls made through the AWS Management Console, AWS SDKs, and AWS CLI. So, the EC2 instance termination API call will appear here. You can use CloudTrail to detect unusual activity in your AWS accounts.

198
Q

You have CloudTrail enabled for your AWS Account in all AWS Regions. What should you use to detect unusual activity in your AWS Account?

A) CloudTrail Data Events

B) CloudTrail Insights

C) CloudTrail Management Events

A

B) CloudTrail Insights

199
Q

One of your teammates terminated an EC2 instance 4 months ago, and it held critical data. You don’t know who did this, so you are going to review all API calls within this period using CloudTrail. You already have CloudTrail set up and configured to send logs to an S3 bucket. What should you do to find out who did this?

A) Use CloudTrail Event History in CloudTrail Console

B) Analyze CloudTrail logs in S3 bucket using Amazon Athena

A

B) Analyze CloudTrail logs in S3 bucket using Amazon Athena

You can use the CloudTrail Console to view the last 90 days of recorded API activity. For events older than 90 days, use Athena to analyze CloudTrail logs stored in S3.

200
Q

You are running a website on a fleet of EC2 instances with OS that has a known vulnerability on port 84. You want to continuously monitor your EC2 instances if they have port 84 exposed. How should you do this?

A) Setup CloudWatch Metrics

B) Setup CloudTrail Trails

C) Setup Config Rules

D) Schedule a CloudWatch Event to trigger a Lambda function to scan your EC2 instances

A

C) Setup Config Rules

201
Q

You would like to evaluate the compliance of your resource’s configurations over time. Which AWS service will you choose?

A) AWS Config

B) Amazon CloudWatch

C) AWS CloudTrail

A

A) AWS Config

202
Q

Someone changed the configuration of a resource and made it non-compliant. Which AWS service can you use to find out who made the change?

A) Amazon CloudWatch

B) AWS CloudTrail

C) AWS Config

A

B) AWS CloudTrail

203
Q

You have enabled AWS Config to monitor Security Groups if there’s unrestricted SSH access to any of your EC2 instances. Which AWS Config feature can you use to automatically re-configure your Security Groups to their correct state?

A) AWS Config Remediations

B) AWS Config Rules

C) AWS Config Notifications

A

A) AWS Config Remediations

204
Q

You are running a critical website on a set of EC2 instances with a tightened Security Group that has restricted SSH access. You have enabled AWS Config in your AWS Region and you want to be notified via email when someone modified your EC2 instances’ Security Group. Which AWS Config feature helps you do this?

A) AWS Config Remediations

B) AWS Config Rules

C) AWS Config Notifications

A

C) AWS Config Notifications

205
Q

You have a mobile application and would like to give your users access to their own personal space in the S3 bucket. How do you achieve that?

A) Generate IAM user credentials for each of your application’s users

B) Use Amazon Cognito Identity Federation

C) Use SAML Identity Federation

D) Use a Bucket Policy to make your bucket public

A

B) Use Amazon Cognito Identity Federation

Amazon Cognito can be used to federate mobile user accounts and provide them with their own IAM permissions, so they can be able to access their own personal space in the S3 bucket.

206
Q

You have an on-premises identity provider that does not support SAML 2.0, and you want to give your on-premises users access to your resources in the AWS Accounts that you manage. What should you do?

A) Use a Custom Identity Broker to authenticate your users and get temporary credentials from AWS STS using AssumeRole or GetFederationToken STS API calls

B) Create a Lambda function that automatically creates a corresponding IAM user in every AWS account for each user in your Microsoft AD

C) Use SAML Identity Federation

A

A) Use a Custom Identity Broker to authenticate your users and get temporary credentials from AWS STS using AssumeRole or GetFederationToken STS API calls

207
Q

You have strong regulatory requirements to only allow fully internally audited AWS services in production. You still want to allow your teams to experiment in a development environment while services are being audited. How can you best set this up?

A) Provide the Dev team with a completely independent AWS account

B) Apply a global IAM policy on your Prod account

C) Create an AWS Organization and create two Prod and Dev OUs, then Apply an SCP on the Prod OU

D) Create an AWS Config Role

A

C) Create an AWS Organization and create two Prod and Dev OUs, then Apply an SCP on the Prod OU

208
Q

You have an on-premises Microsoft Active Directory setup and you would like to provide access for your on-premises AD users to the multiple AWS accounts you have. The solution should be scalable for adding accounts in the future. What do you recommend?

A) Setup the SAML 2.0 integration between each AWS account and your on-premises Microsoft AD

B) Create a Lambda function that automatically creates a corresponding IAM user in every AWS account for each user in your Microsoft AD

C) Setup Web Identity Federation through Amazon Cognito

D) Setup AWS Single Sign-On

A

D) Setup AWS Single Sign-On

209
Q

You are managing the AWS account for your company, and you want to give one of the developers access to read files from an S3 bucket. You have updated the bucket policy to this, but he still can’t access the files in the bucket. What is the problem?

{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowsRead",
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::123456789012:user/Dave"
        },
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::static-files-bucket-xxx"
     }]
}

A) Everything is okay, he just needs to logout and log in again

B) The bucket does not contain any files yet

C) You should change the resource to arn:aws:s3:::static-files-bucket-xxx/*, because this is an object-level permission

A

C) You should change the resource to arn:aws:s3:::static-files-bucket-xxx/*, because this is an object-level permission

210
Q

Which AWS Directory Service allows you to proxy requests to your on-premises Microsoft Active Directory?

A) Microsoft AD on EC2

B) AWS Managed Microsoft AD

C) AD Connector

D) Simple AD

A

C) AD Connector

211
Q

Which AWS service allows you to share AWS resources in your AWS Account with other AWS Accounts?

A) AWS Resource Access Manager

B) AWS Single Sign-On

C) AWS Organizations

D) AWS Shared Responsibility Model

A

A) AWS Resource Access Manager

AWS Resource Access Manager (AWS RAM) helps you securely share your AWS resources within your organization or organizational units (OUs) in AWS Organizations and with AWS Accounts. You can also share resources with IAM Roles and IAM Users.

212
Q

You have 5 AWS Accounts that you manage using AWS Organizations. You want to restrict access to certain AWS services in each account. How should you do that?

A) Using IAM Roles

B) Using AWS Organizations SCP

C) Using AWS Config

A

B) Using AWS Organizations SCP

213
Q

To enable In-flight Encryption (In-Transit Encryption), we need to have ……………………

A) an HTTP endpoint with an SSL certificate

B) an HTTPS endpoint with an SSL certificate

C) a TCP endpoint

A

B) an HTTPS endpoint with an SSL certificate

In-flight Encryption = HTTPS, and HTTPS can not be enabled without an SSL certificate.

214
Q

TRUE / FALSE

Server-Side Encryption means that the data is sent encrypted to the server.

A

FALSE

Server-Side Encryption means the server will encrypt the data for us. We don’t need to encrypt it beforehand.

215
Q

In Server-Side Encryption, where do the encryption and decryption happen?

A) Both Encryption and Decryption happen on the server

B) Both Encryption and Decryption happen on the client

C) Encryption happens on the server and Decryption happens on the client

D) Encryption happens on the client and Decryption happens on the server

A

A) Both Encryption and Decryption happen on the server

With Server-Side Encryption, we can’t decrypt the data ourselves, as we don’t have access to the corresponding encryption key.

216
Q

TRUE / FALSE

In Client-Side Encryption, the server must know our encryption scheme before we can upload the data.

A

FALSE

With Client-Side Encryption, the server doesn’t need to know any information about the encryption scheme being used, as the server will not perform any encryption or decryption operations.

217
Q

TRUE / FALSE

You need to create KMS Keys in AWS KMS before you are able to use the encryption features for EBS, S3, RDS …

A

FALSE

You can use the AWS managed keys in KMS, so you don’t need to create your own KMS keys.

218
Q

TRUE / FALSE

AWS KMS supports both symmetric and asymmetric KMS keys.

A

TRUE

KMS keys can be symmetric or asymmetric. A symmetric KMS key represents a 256-bit key used for encryption and decryption. An asymmetric KMS key represents an RSA key pair used for encryption and decryption or signing and verification, but not both. Or it represents an elliptic curve (ECC) key pair used for signing and verification.

219
Q

When you enable Automatic Rotation on your KMS Key, the backing key is rotated every ……………..

A) 90 days

B) 1 year

C) 2 years

D) 3 Years

A

B) 1 year

220
Q

You have an AMI that has an encrypted EBS snapshot using KMS CMK. You want to share this AMI with another AWS account. You have shared the AMI with the desired AWS account, but the other AWS account still can’t use it. How would you solve this problem?

A) The other AWS account needs to logout and login again to refresh its credentials

B) You need to share the KMS CMK used to encrypt the AMI with the other AWS account

C) You can’t share an AMI that has an encrypted EBS snapshot

A

B) You need to share the KMS CMK used to encrypt the AMI with the other AWS account

221
Q

You have created a Customer-managed CMK in KMS that you use to encrypt both S3 buckets and EBS snapshots. Your company policy mandates that your encryption keys be rotated every 3 months. What should you do?

A) Re-configure your KMS CMK and enable Automatic Rotation, in the “Period” select 3 months

B) Use AWS Managed Keys as they are automatically rotated by AWS every 3 months

C) Rotate the KMS CMK manually. Create a new KMS CMK and use Key Aliases to reference the new KMS CMK. Keep the old KMS CMK so you can decrypt the old data

A

C) Rotate the KMS CMK manually. Create a new KMS CMK and use Key Aliases to reference the new KMS CMK. Keep the old KMS CMK so you can decrypt the old data

222
Q

What should you use to control access to your KMS CMKs?

A) KMS Key Policies

B) KMS IAM Policy

C) AWS GuardDuty

D) KMS Access Control List (KMS ACL)

A

A) KMS Key Policies

223
Q

You have a Lambda function used to process some data in the database. You would like to give your Lambda function access to the database password. Which of the following options is the most secure?

A) Embed it in the code

B) Have it as a plaintext environment variable

C) Have it as an encrypted environment variable and decrypt it at runtime

A

C) Have it as an encrypted environment variable and decrypt it at runtime

This is the most secure solution among these options.
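
A minimal sketch of the decryption step inside the Lambda handler, assuming a hypothetical environment variable name (DB_PASSWORD) and a ciphertext that was encrypted with KMS without an encryption context:

import base64
import os
import boto3

kms = boto3.client("kms")

def lambda_handler(event, context):
    # DB_PASSWORD holds the base64-encoded, KMS-encrypted ciphertext
    # configured as an environment variable on the function.
    ciphertext = base64.b64decode(os.environ["DB_PASSWORD"])
    password = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"].decode("utf-8")
    # ... use 'password' to connect to the database ...
    return {"status": "ok"}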

224
Q

You have a secret value that you use for encryption purposes, and you want to store and track the values of this secret over time. Which AWS service should you use?

A) AWS KMS Versioning feature

B) SSM Parameter Store

C) Amazon S3

A

B) SSM Parameter Store

SSM Parameter Store can be used to store secrets and has built-in version tracking. Each time you edit the value of a parameter, SSM Parameter Store creates a new version of the parameter and retains the previous versions. You can view the details, including the values, of all versions in a parameter’s history.
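
A minimal sketch of the version-tracking behavior with boto3 (the parameter name and values are hypothetical):

import boto3

ssm = boto3.client("ssm")

# Each overwrite creates a new version of the parameter.
ssm.put_parameter(Name="/app/encryption-secret", Value="v1-secret",
                  Type="SecureString", Overwrite=True)
ssm.put_parameter(Name="/app/encryption-secret", Value="v2-secret",
                  Type="SecureString", Overwrite=True)

# List all versions of the parameter, including their values.
history = ssm.get_parameter_history(Name="/app/encryption-secret",
                                    WithDecryption=True)
for version in history["Parameters"]:
    print(version["Version"], version["Value"])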

225
Q

According to AWS Shared Responsibility Model, what are you responsible for in RDS?

A) Security Group Rules

B) OS Patching

C) Database Engine Patching

D) Underlying Hardware Security

A

A) Security Group Rules

226
Q

Your user-facing website is a high-risk target for DDoS attacks and you would like to get 24/7 support in case they happen and AWS bill reimbursement for the incurred costs during the attack. What AWS service should you use?

A) AWS WAF

B) AWS Shield Advanced

C) AWS Shield

D) AWS DDoS OpsTeam

A

B) AWS Shield Advanced

227
Q

You would like to externally maintain the configuration values of your main database, to be picked up at runtime by your application. What’s the best place to store them to maintain control and version history?

A) Amazon DynamoDB

B) Amazon S3

C) Amazon EBS

D) SSM Parameter Store

A

D) SSM Parameter Store

228
Q

You would like to use a dedicated hardware module to manage your encryption keys and have full control over them. What do you recommend?

A) AWS CloudHSM

B) AWS KMS

C) AWS Parameter Store

A

A) AWS CloudHSM

229
Q

AWS GuardDuty scans the following data sources, EXCEPT …………….

A) CloudTrail Logs

B) VPC Flow Logs

C) DNS Logs

D) CloudWatch Logs

A

D) CloudWatch Logs

230
Q

You have a website hosted on a fleet of EC2 instances fronted by an Application Load Balancer. What should you use to protect your website from common web application attacks (e.g., SQL Injection)?

A) AWS Shield

B) AWS WAF

C) AWS Security Hub

D) AWS GuardDuty

A

B) AWS WAF

231
Q

You would like to analyze OS vulnerabilities from within EC2 instances. You need these analyses to occur weekly and provide you with concrete recommendations in case vulnerabilities are found. Which AWS service should you use?

A) AWS Shield

B) Amazon GuardDuty

C) Amazon Inspector

D) AWS Config

A

C) Amazon Inspector

232
Q

What is the most suitable AWS service for storing RDS DB passwords which also provides you automatic rotation?

A) AWS Secrets Manager

B) AWS KMS

C) AWS SSM Parameter Store

A

A) AWS Secrets Manager

233
Q

Which AWS service allows you to centrally manage EC2 Security Groups and AWS Shield Advanced across all AWS accounts in your AWS Organization?

A) AWS Shield

B) AWS GuardDuty

C) AWS Config

D) AWS Firewall Manager

A

D) AWS Firewall Manager

AWS Firewall Manager is a security management service that allows you to centrally configure and manage firewall rules across your accounts and applications in AWS Organizations. It is integrated with AWS Organizations so you can enable AWS WAF rules, AWS Shield Advanced protection, security groups, AWS Network Firewall rules, and Amazon Route 53 Resolver DNS Firewall rules.

234
Q

Which AWS service helps you protect your sensitive data stored in S3 buckets?

A) Amazon GuardDuty

B) Amazon Shield

C) Amazon Macie

D) AWS KMS

A

C) Amazon Macie

Amazon Macie is a fully managed data security service that uses Machine Learning to discover and protect your sensitive data stored in S3 buckets. It automatically provides an inventory of S3 buckets including a list of unencrypted buckets, publicly accessible buckets, and buckets shared with other AWS accounts. It allows you to identify and alert you to sensitive data, such as Personally Identifiable Information (PII).

235
Q

What does this CIDR 10.0.4.0/28 correspond to?

A) 10.0.4.0 to 10.0.4.15

B) 10.0.4.0 to 10.0.32.0

C) 10.0.4.0 to 10.0.4.28

D) 10.0.0.0 to 10.0.16.0

A

A) 10.0.4.0 to 10.0.4.15

/28 means 16 IPs (2^(32-28) = 2^4 = 16), so only the last octet varies, giving the range 10.0.4.0 to 10.0.4.15.
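
The same arithmetic can be checked with Python’s ipaddress module:

import ipaddress

network = ipaddress.ip_network("10.0.4.0/28")
print(network.num_addresses)      # 16
print(network[0], network[-1])    # 10.0.4.0 10.0.4.15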

236
Q

You have a corporate network of size 10.0.0.0/8 and a satellite office of size 192.168.0.0/16. Which CIDR is acceptable for your AWS VPC if you plan on connecting your networks later on?

A) 172.16.0.0/12

B) 172.16.0.0/16

C) 10.0.16.0/16

D) 192.168.4.0/18

A

B) 172.16.0.0/16

The CIDRs must not overlap, and the maximum CIDR size in AWS is /16.
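
A quick way to verify the non-overlap requirement with Python’s ipaddress module:

import ipaddress

corporate = ipaddress.ip_network("10.0.0.0/8")
satellite = ipaddress.ip_network("192.168.0.0/16")
candidate = ipaddress.ip_network("172.16.0.0/16")

# The VPC CIDR is acceptable if it overlaps neither existing network
# (and, per AWS limits, is no larger than /16).
print(not candidate.overlaps(corporate) and not candidate.overlaps(satellite))  # True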

237
Q

You plan on creating a subnet and want it to have at least capacity for 28 EC2 instances. What’s the minimum size you need to have for your subnet?

A) /28

B) /27

C) /26

D) /25

A

C) /26

A /26 provides 64 IPs. AWS reserves 5 IP addresses per subnet, leaving 59 usable, which covers the 28 instances. A /27 (32 IPs) would leave only 27 usable addresses, which is not enough.

238
Q

Security Groups operate at the …………….. level while NACLs operate at the …………….. level.

A) EC2 instance, Subnet

B) Subnet, EC2 instance

A

A) EC2 instance, Subnet

239
Q

You have attached an Internet Gateway to your VPC, but your EC2 instances still don’t have access to the internet. What is NOT a possible issue?

A) Route Tables are missing entries

B) The EC2 instances don’t have public IPs

C) The Security Group does not allow traffic in

D) The NACL does not allow network traffic out

A

C) The Security Group does not allow traffic in

Security groups are stateful and if traffic can go out, then it can go back in.

240
Q

You would like to provide Internet access to your EC2 instances in private subnets with IPv4 while making sure this solution requires the least amount of administration and scales seamlessly. What should you use?

A) NAT Instances with Source/Destination Check flag off

B) Egress Only Internet Gateway

C) NAT Gateway

A

C) NAT Gateway

241
Q

VPC Peering has been enabled between VPC A and VPC B, and the route tables have been updated for VPC A. But, the EC2 instances cannot communicate. What is the likely issue?

A) Check the NACL

B) Check the Route Tables in VPC B

C) Check the EC2 instance attached Security Groups

D) Check if DNS Resolution is enabled

A

B) Check the Route Tables in VPC B

Route tables must be updated in both VPCs that are peered.

242
Q

You have set up a Direct Connect connection between your corporate data center and your VPC A in your AWS account. You need to access VPC B in another AWS region from your corporate datacenter as well. What should you do?

A) Enable VPC Peering

B) Use a Customer Gateway

C) Use a Direct Connect Gateway

D) Set up a NAT Gateway

A

C) Use a Direct Connect Gateway

This is the main use case of Direct Connect Gateways.

243
Q

When using VPC Endpoints, what are the only two AWS services that have a Gateway Endpoint available?

A) Amazon S3 & Amazon SQS

B) Amazon SQS & DynamoDB

C) Amazon S3 & DynamoDB

A

C) Amazon S3 & DynamoDB

These two services have a VPC Gateway Endpoint (remember this); all the other services use an Interface Endpoint (powered by PrivateLink, i.e., a private IP).
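
As a sketch, creating the S3 Gateway Endpoint with boto3 (the VPC and route table IDs are hypothetical; the service name follows the com.amazonaws.<region>.s3 pattern):

import boto3

ec2 = boto3.client("ec2")

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)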

244
Q

AWS reserves 5 IP addresses each time you create a new subnet in a VPC. When you create a subnet with CIDR 10.0.0.0/24, the following IP addresses are reserved, EXCEPT ………………..

A) 10.0.0.1

B) 10.0.0.2

C) 10.0.0.3

D) 10.0.0.4

A

D) 10.0.0.4

245
Q

You have created a new VPC with 4 subnets in it. You begin to launch a set of EC2 instances inside these subnets, but you notice that these EC2 instances don’t get assigned public hostnames and DNS resolution isn’t working. What should you do to resolve this issue?

A) Enable DNS Resolution and DNS Hostnames in your VPC

B) Check Route Tables attached to your subnets

C) Make sure that your Internet Gateway is working properly

A

A) Enable DNS Resolution and DNS Hostnames in your VPC

246
Q

You have 3 VPCs A, B, and C. You want to establish a VPC Peering connection between all the 3 VPCs. What should you do?

A) VPC Peering supports Transitive Peering, so you only need to establish 2 VPC Peering connections (A-B, B-C)

B) Establish VPC Peering connections (A-B, A-C, B-C)

A

B) Establish VPC Peering connections (A-B, A-C, B-C)

247
Q

How can you capture information about IP traffic inside your VPCs?

A) Enable VPC Flow Logs

B) Enable VPC Traffic Monitoring

C) Enable CloudWatch Traffic Logs

A

A) Enable VPC Flow Logs

VPC Flow Logs is a VPC feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC.

248
Q

If you want a 500 Mbps Direct Connect connection between your corporate datacenter to AWS, you would choose a ……………… connection.

A) Dedicated

B) Hosted

A

B) Hosted

A Hosted Direct Connect connection supports capacities of 50 Mbps, 500 Mbps, and up to 10 Gbps.

249
Q

You have an internal web application hosted in a private subnet in your VPC that you want to be used by other customers. You don’t want to expose the application to the Internet or open your whole VPC to other customers. What should you do?

A) Use NAT Gateway

B) Use VPC Endpoint Services (AWS PrivateLink)

C) Use VPC Peering

A

B) Use VPC Endpoint Services (AWS PrivateLink)

Allows you to expose a private application to other AWS customers without making the application public to the Internet and without making a VPC Peering connection.

250
Q

When you set up an AWS Site-to-Site VPN connection between your corporate on-premises datacenter and VPCs in AWS Cloud, what are the two major components you want to configure for this connection?

A) Customer Gateway and NAT Gateway

B) Internet Gateway and Customer Gateway

C) Virtual Private Gateway and Internet Gateway

D) Virtual Private Gateway and Customer Gateway

A

D) Virtual Private Gateway and Customer Gateway

251
Q

Your company has created a REST API that it will sell to hundreds of customers as a SaaS. Your customers are on AWS and are using their own VPCs. You would like to allow your customers to access your SaaS without going through the public Internet while ensuring your infrastructure is not left exposed to network attacks. What do you recommend?

A) Create a VPC Endpoint

B) Create a VPC Peering connection

C) Create PrivateLink (VPC Endpoint Services)

D) Create a ClassicLink

A

C) Create PrivateLink (VPC Endpoint Services)

252
Q

Your company has several on-premises sites across the USA. These sites are currently linked using private connections, but your private connections provider has recently been quite unstable, taking parts of your IT architecture offline. You would like to create a backup connection over the public Internet to link your on-premises sites, which you can fail over to in case of issues with your provider. What do you recommend?

A) VPC Peering

B) AWS VPN CloudHub

C) Direct Connect

D) AWS PrivateLink

A

B) AWS VPN CloudHub

AWS VPN CloudHub allows you to securely communicate with multiple sites using AWS VPN. It operates on a simple hub-and-spoke model that you can use with or without a VPC.

253
Q

You need to set up a dedicated connection between your on-premises corporate datacenter and AWS Cloud. This connection must be private, consistent, and traffic must not travel through the Internet. Which AWS service should you use?

A) Site-To-Site VPN

B) AWS PrivateLink

C) AWS Direct Connect

D) Amazon EventBridge

A

C) AWS Direct Connect

254
Q

TRUE / FALSE

Using a Direct Connect connection, you can access both public and private AWS resources.

A

TRUE

255
Q

You want to scale up an AWS Site-to-Site VPN connection throughput, established between your on-premises data center and AWS Cloud, beyond a single IPsec tunnel’s maximum limit of 1.25 Gbps. What should you do?

A) Use 2 Virtual Private Gateways

B) Use Direct Connect Gateway

C) Use Transit Gateway

A

C) Use Transit Gateway

256
Q

You have a VPC in your AWS account that runs in dual-stack mode. You keep trying to launch an EC2 instance, but it fails. After further investigation, you find that you no longer have IPv4 addresses available. What should you do?

A) Modify your VPC to run in IPv6 mode only

B) Modify your VPC to run in IPv4 mode only

C) Add an additional IPv4 CIDR to your VPC

A

C) Add an additional IPv4 CIDR to your VPC

257
Q

As part of your Disaster Recovery plan, you would like to have only the critical infrastructure up and running in AWS. You don’t mind a longer Recovery Time Objective (RTO). Which DR strategy do you recommend?

A) Backup and Restore

B) Pilot Light

C) Warm Standby

D) Multi-Site

A

B) Pilot Light

258
Q

You would like to get the Disaster Recovery strategy with the lowest Recovery Time Objective (RTO) and Recovery Point Objective (RPO), regardless of the cost. Which DR should you choose?

A) Backup and Restore

B) Pilot Light

C) Warm Standby

D) Multi-Site

A

D) Multi-Site

259
Q

Which of the following Disaster Recovery strategies has a potentially high Recovery Point Objective (RPO) and Recovery Time Objective (RTO)?

A) Backup and Restore

B) Pilot Light

C) Warm Standby

D) Multi-Site

A

A) Backup and Restore

260
Q

You want to make a Disaster Recovery plan where you have a scaled-down version of your system up and running, and when a disaster happens, it scales up quickly. Which DR strategy should you choose?

A) Backup and Restore

B) Pilot Light

C) Warm Standby

D) Multi-Site

A

C) Warm Standby

261
Q

You have an on-premises Oracle database that you want to migrate to AWS, specifically to Amazon Aurora. How would you do the migration?

A) Use AWS Schema Conversion Tool (AWS SCT) to convert the database schema, then use AWS Database Migration Service (AWS DMS) to migrate the data

B) Use AWS Database Migration Service (AWS DMS) to convert the database schema, then use AWS Schema Conversion Tool (AWS SCT) to migrate the data

A

A) Use AWS Schema Conversion Tool (AWS SCT) to convert the database schema, then use AWS Database Migration Service (AWS DMS) to migrate the data

262
Q

You have on-premises sensitive files and documents that you want to regularly synchronize to AWS to keep another copy. Which AWS service can help you with that?

A) AWS Database Migration Service

B) Amazon EFS

C) AWS DataSync

A

C) AWS DataSync

AWS DataSync is an online data transfer service that simplifies, automates, and accelerates moving data between on-premises storage systems and AWS Storage services, as well as between AWS Storage services.

263
Q

AWS DataSync supports the following locations, EXCEPT ………………..

A) Amazon S3

B) Amazon EBS

C) Amazon EFS

D) Amazon FSx for Windows File Server

A

B) Amazon EBS

264
Q

You are running many resources in AWS such as EC2 instances, EBS volumes, DynamoDB tables… You want an easy way to manage backups across all these AWS services from a single place. Which AWS offering makes this process easy?

A) Amazon S3

B) AWS Storage Gateway

C) AWS Backup

D) EC2 Snapshots

A

C) AWS Backup

AWS Backup enables you to centralize and automate data protection across AWS services. It helps you support your regulatory compliance or business policies for data protection.

265
Q

You are working on a Serverless application where you want to process objects uploaded to an S3 bucket. You have configured S3 Events on your S3 bucket to invoke a Lambda function every time an object has been uploaded. You want to ensure that events that can’t be processed are sent to a Dead Letter Queue (DLQ) for further processing. Which AWS service should you use to set up the DLQ?

A) S3 Events

B) SNS Topic

C) Lambda Function

A

C) Lambda Function

The Lambda function’s invocation is “asynchronous”, so the DLQ has to be set on the Lambda function side.

266
Q

As a Solutions Architect, you have created an architecture for a company that includes the following AWS services: CloudFront, Web Application Firewall (AWS WAF), AWS Shield, Application Load Balancer, and EC2 instances managed by an Auto Scaling Group. Sometimes the company receives malicious requests and wants to block these IP addresses. According to your architecture, where should you do it?

A) CloudFront

B) AWS WAF

C) AWS Shield

D) ALB Security Group

E) EC2 Security Group

F) NACL

A

B) AWS WAF

267
Q

Your EC2 instances are deployed in Cluster Placement Group in order to perform High-Performance Computing (HPC). You would like to maximize network performance between your EC2 instances. What should you use?

A) Elastic Fabric Adapter

B) Elastic Network Interface

C) Elastic Network Adapter

D) FSx for Lustre

A

A) Elastic Fabric Adapter

268
Q

You have launched an EC2 instance (Bastion Host) in us-east-1a AZ to access your EC2 instances in private subnets. You want to make your Bastion Host highly available in case there is a disaster in us-east-1a AZ. What should you do?

A) Run 2 Bastion Hosts in 2 AZs and route traffic using an Application Load Balancer deployed in the 2 AZs

B) Run 2 Bastion Hosts in 2 AZs and route traffic using Network Load Balancer deployed in the 2 AZs

A

B) Run 2 Bastion Hosts in 2 AZs and route traffic using Network Load Balancer deployed in the 2 AZs

269
Q

Which AWS service allows you to store Docker images in AWS?

A) Elastic Container Service (ECS)

B) Elastic Container Registry (ECR)

C) Amazon S3

D) AWS CodeCommit

A

B) Elastic Container Registry (ECR)

Amazon Elastic Container Registry (ECR) is a fully-managed Docker container registry that makes it easy to store, manage, share, and deploy Docker container images.

270
Q

A company that hosts all of its infrastructure in AWS wants an AWS service alternative to GitLab, as it wants to version control its code entirely in AWS. Which AWS service do you recommend?

A) AWS CodeBuild

B) AWS CodeCommit

C) Amazon S3

D) AWS CodePipeline

A

B) AWS CodeCommit

AWS CodeCommit is a secure, highly scalable, managed source control service that hosts private Git repositories. It is an alternative to GitLab and GitHub.

271
Q

As part of your Disaster Recovery strategy, you would like to make sure your entire infrastructure is defined as code (IaC) so that you can easily re-deploy it in any AWS region. Which AWS service do you recommend?

A) AWS CodePipeline

B) AWS Elastic Beanstalk

C) AWS CodeDeploy

D) AWS CloudFormation

A

D) AWS CloudFormation

AWS CloudFormation is the de-facto service in AWS for infrastructure as code (IaC). It enables you to create and provision AWS infrastructure deployments predictably and repeatedly.

272
Q

You work for a company that uses AWS Organizations to manage multiple AWS accounts. You want to create a CloudFormation stack in multiple AWS accounts in multiple AWS Regions. What is the easiest way to achieve this?

A) AWS CodeDeploy

B) AWS Organizations

C) AWS CLI

D) CloudFormation StackSets

A

D) CloudFormation StackSets

CloudFormation StackSets allows you to create, update, or delete CloudFormation stacks across multiple AWS accounts and AWS regions with a single operation.

273
Q

Which AWS service allows you to manage a fleet of Docker containers in AWS Cloud and on-premises?

A) Amazon EC2

B) Amazon ECR

C) Amazon ECS

D) AWS CloudFormation

A

C) Amazon ECS

Amazon ECS is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications.

274
Q

You would like to orchestrate your CICD pipeline to deliver all the way to Elastic Beanstalk. Which AWS service do you recommend?

A) AWS CodeBuild

B) AWS CodePipeline

C) AWS CloudFormation

D) AWS Simple Workflow Service

A

B) AWS CodePipeline

AWS CodePipeline is a fully managed continuous delivery (CD) service that helps you automate your release pipeline for fast and reliable application and infrastructure updates. It automates the build, test, and deploy phases of your release process every time there is a code change. It has direct integration with Elastic Beanstalk.

275
Q

Which AWS service helps you deploy your code to a fleet of EC2 instances with a specific strategy (e.g., Blue/Green deployment)?

A) AWS CodeDeploy

B) AWS CodeBuild

C) AWS CodePipeline

D) AWS CodeCommit

A

A) AWS CodeDeploy

AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of computing services such as EC2, Fargate, Lambda, and your on-premises servers. You can define the strategy you want to execute such as in-place or blue/green deployments.

276
Q

You have a Jenkins Continuous Integration (CI) build server hosted on-premises and you would like to stop it and replace it with an AWS managed service. Which AWS service should you choose?

A) AWS Jenkins

B) AWS CodeBuild

C) AWS CloudFormation

D) Amazon ECS

A

B) AWS CodeBuild

AWS CodeBuild is a fully managed continuous integration (CI) service that compiles source code, runs tests, and produces software packages that are ready to deploy. It is an alternative to Jenkins.

277
Q

You want to orchestrate a series of AWS Lambda functions into a workflow. Which AWS service should you use?

A) AWS Simple Workflow Service

B) AWS CodePipeline

C) AWS Step Functions

D) AWS OpsWorks

A

C) AWS Step Functions

AWS Step Functions is a low-code visual workflow service used to orchestrate AWS services, automate business processes, and build Serverless applications. It manages failures, retries, parallelization, service integrations, …

278
Q

Which AWS service allows you to create a Hadoop cluster to perform big data analysis?

A) Amazon Redshift

B) Amazon Athena

C) AWS Glue

D) Amazon EMR

A

D) Amazon EMR

Amazon EMR is a managed service that makes it fast, easy, and cost-effective to run Apache Hadoop and Spark to process vast amounts of data.

279
Q

You are looking to move data all around your AWS databases using a managed ETL service that has a metadata catalog feature. Which AWS service do you recommend?

A) Amazon Redshift

B) Amazon Athena

C) AWS Glue

D) Amazon EMR

A

C) AWS Glue

AWS Glue is a Serverless data-preparation service for extract, transform, and load (ETL) operations.

280
Q

Your company is already using Chef recipes to manage its infrastructure. You would like to move to the AWS cloud and keep on using Chef. What AWS service do you recommend?

A) AWS OpsWorks

B) AWS Systems Manager (AWS SSM)

C) AWS CloudFormation

D) Amazon EC2

A

A) AWS OpsWorks

AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet.

281
Q

As a Solutions Architect, you want to migrate an on-premises Virtual Desktop Infrastructure (VDI) to AWS. This would allow you to reduce maintenance and management costs. Which AWS service do you recommend?

A) AWS AppSync

B) AWS Organizations

C) Amazon Workspaces

D) Amazon ECR

A

C) Amazon Workspaces

Amazon WorkSpaces is a fully managed, persistent desktop virtualization service that enables your users to access data, applications, and resources they need, anywhere, anytime, from any supported device. It can be used to provision either Windows or Linux desktops.

282
Q

Your developers are creating a mobile application and would like to have a managed GraphQL backend. Which AWS service do you recommend?

A) Amazon API Gateway

B) AWS Lambda

C) Amazon ECS

D) AWS AppSync

A

D) AWS AppSync

AWS AppSync is a fully managed service that makes it easy to develop GraphQL APIs by handling the heavy lifting of securely connecting to data sources like DynamoDB, Lambda, and more.

283
Q

You need recommendations for the types of reserved EC2 instances you should buy to optimize your AWS costs. You also want to have access to a report detailing how utilized your reserved EC2 instances are. What do you recommend?

A) Setup a billing alarm

B) Use AWS Cost Explorer

C) AWS Lambda

D) AWS AppSync

A

B) Use AWS Cost Explorer

AWS Cost Explorer enables you to view and analyze your costs and usage. You can view data for up to the last 12 months, forecast how much you are likely to spend for the next 12 months, and get recommendations for what EC2 reserved instances to purchase.

284
Q

Which AWS Service analyzes your AWS account and gives recommendations for cost optimization, performance, security, fault tolerance, and service limits?

A) AWS Trusted Advisor

B) AWS CloudTrail

C) AWS Identity and Access Management (AWS IAM)

D) AWS CloudFormation

A

A) AWS Trusted Advisor

AWS Trusted Advisor provides recommendations that help you follow AWS best practices. It evaluates your account by using checks. These checks identify ways to optimize your AWS infrastructure, improve security and performance, reduce costs, and monitor service quotas.

285
Q

The DevOps team at a leading social media company uses AWS OpsWorks, which is a fully managed configuration management service. OpsWorks eliminates the need to operate your configuration management systems or worry about maintaining its infrastructure.

Can you identify the configuration management tools for which OpsWorks provides managed instances? (Select two)

A) Puppet

B) Salt

C) CFEngine

D) Ansible

E) Chef

A

E) Chef

A) Puppet

AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed and managed across your Amazon EC2 instances or on-premises compute environments.

Incorrect options:

D) Ansible

C) CFEngine

B) Salt

As mentioned earlier in the explanation, OpsWorks supports only Chef and Puppet, so these three options are incorrect.

286
Q

An Elastic Load Balancer has marked all the EC2 instances in the target group as unhealthy. Surprisingly, when a developer enters the IP address of one of the EC2 instances in a web browser, he can access the website.

What could be the reason the instances are being marked as unhealthy? (Select two)

A) Your web-app has a runtime that is not supported by the Application Load Balancer

B) The route for the health check is misconfigured

C) You need to attach Elastic IP to the EC2 instances

D) The security group of the EC2 instance does not allow for traffic from the security group of Application Load Balancer

E) The EBS volumes have been improperly mounted

A

B) The route for the health check is misconfigured

D) The security group of the EC2 instance does not allow for traffic from the security group of the Application Load Balancer

An Application Load Balancer periodically sends requests to its registered targets to test their status. These tests are called health checks.

Each load balancer node routes requests only to the healthy targets in the enabled Availability Zones for the load balancer. Each load balancer node checks the health of each target, using the health check settings for the target groups with which the target is registered. If a target group contains only unhealthy registered targets, the load balancer nodes route requests across its unhealthy targets.

You must ensure that your load balancer can communicate with registered targets on both the listener port and the health check port. Whenever you add a listener to your load balancer or update the health check port for a target group used by the load balancer to route requests, you must verify that the security groups associated with the load balancer allow traffic on the new port in both directions.

Incorrect options:

E) The EBS volumes have been improperly mounted - You can access the website using the IP address which means there is no issue with the EBS volumes. So this option is not correct.

A) Your web-app has a runtime that is not supported by the Application Load Balancer - There is no connection between a web app runtime and the application load balancer. This option has been added as a distractor.

C) You need to attach Elastic IP to the EC2 instances - This option is a distractor as Elastic IPs do not need to be assigned to EC2 instances while using an Application Load Balancer.

287
Q

A developer needs to implement a Lambda function in AWS account A that accesses an Amazon S3 bucket in AWS account B.

As a Solutions Architect, which of the following will you recommend to meet this requirement?

A) Create an IAM role for the Lambda function that grants access to the S3 bucket. Set the IAM role as the Lambda function’s execution role. Make sure that the bucket policy also grants access to the Lambda function’s execution role.

B) AWS Lambda cannot access resources across AWS accounts. Use Identity Federation to work around this limitation of Lambda

C) The S3 bucket owner should make the bucket public so that it can be accessed by the Lambda function in the other AWS account

D) Create an IAM role for the Lambda function that grants access to the S3 bucket. Set the IAM role as the Lambda function’s execution role and that would give the Lambda function cross-account access to the S3 bucket

A

A) Create an IAM role for the Lambda function that grants access to the S3 bucket. Set the IAM role as the Lambda function’s execution role. Make sure that the bucket policy also grants access to the Lambda function’s execution role

If the IAM role that you create for the Lambda function is in the same AWS account as the bucket, then you don’t need to grant Amazon S3 permissions on both the IAM role and the bucket policy. Instead, you can grant the permissions on the IAM role and then verify that the bucket policy doesn’t explicitly deny access to the Lambda function role. If the IAM role and the bucket are in different accounts, then you need to grant Amazon S3 permissions on both the IAM role and the bucket policy. Therefore, this is the right way of giving access to AWS Lambda for the given use-case.
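
A minimal sketch of the bucket policy side in account B, assuming hypothetical names: account A is 111111111111, the execution role is lambda-s3-reader, and the bucket is account-b-bucket:

import json
import boto3

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowLambdaExecutionRoleFromAccountA",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:role/lambda-s3-reader"},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::account-b-bucket",
            "arn:aws:s3:::account-b-bucket/*"
        ]
    }]
}

boto3.client("s3").put_bucket_policy(Bucket="account-b-bucket",
                                     Policy=json.dumps(bucket_policy))

The execution role in account A still needs its own IAM policy allowing the same S3 actions on the bucket.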

Incorrect options:

B) AWS Lambda cannot access resources across AWS accounts. Use Identity federation to work around this limitation of Lambda - This is an incorrect statement, used only as a distractor.

D) Create an IAM role for the Lambda function that grants access to the S3 bucket. Set the IAM role as the Lambda function’s execution role and that would give the Lambda function cross-account access to the S3 bucket - When the execution role of the Lambda function and the S3 bucket to be accessed are in different accounts, you need to grant S3 bucket access permissions to the IAM role and also ensure that the bucket policy grants access to the Lambda function’s execution role.

C) The S3 bucket owner should make the bucket public so that it can be accessed by the Lambda function in the other AWS account - Making the S3 bucket public for the given use-case will be considered as a security bad practice. It’s usually done for very few use-cases such as hosting a website on S3. Therefore this option is incorrect.

288
Q

A retail company uses AWS Cloud to manage its IT infrastructure. The company has set up “AWS Organizations” to manage several departments running their AWS accounts and using resources such as EC2 instances and RDS databases. The company wants to provide shared and centrally-managed VPCs to all departments using applications that need a high degree of interconnectivity.

As a solutions architect, which of the following options would you choose to facilitate this use-case?

A) Use VPC sharing to share one or more subnets with other AWS accounts belonging to the same parent organization from AWS Organizations

B) Use VPC peering to share one or more subnets with other AWS accounts belonging to the same parent organization from AWS Organizations

C) Use VPC peering to share a VPC with other AWS accounts belonging to the same parent organization from AWS Organizations

D) Use VPC sharing to share a VPC with other AWS accounts belonging to the same parent organization from AWS Organizations

A

A) Use VPC sharing to share one or more subnets with other AWS accounts belonging to the same parent organization from AWS Organizations

VPC sharing (part of Resource Access Manager) allows multiple AWS accounts to create their application resources such as EC2 instances, RDS databases, Redshift clusters, and Lambda functions, into shared and centrally-managed Amazon Virtual Private Clouds (VPCs). To set this up, the account that owns the VPC (owner) shares one or more subnets with other accounts (participants) that belong to the same organization from AWS Organizations. After a subnet is shared, the participants can view, create, modify, and delete their application resources in the subnets shared with them. Participants cannot view, modify, or delete resources that belong to other participants or the VPC owner.

You can share Amazon VPCs to leverage the implicit routing within a VPC for applications that require a high degree of interconnectivity and are within the same trust boundaries. This reduces the number of VPCs that you create and manage while using separate accounts for billing and access control.
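
Since VPC sharing is part of AWS Resource Access Manager, a minimal sketch of sharing a subnet with another account in the organization (the subnet ARN and account ID are hypothetical):

import boto3

ram = boto3.client("ram")

ram.create_resource_share(
    name="shared-app-subnets",
    resourceArns=["arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0123456789abcdef0"],
    principals=["222222222222"],
    allowExternalPrincipals=False,  # keep the share within the organization
)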

Incorrect options:

D) Use VPC sharing to share a VPC with other AWS accounts belonging to the same parent organization from AWS Organizations - Using VPC sharing, an account that owns the VPC (owner) shares one or more subnets with other accounts (participants) that belong to the same organization from AWS Organizations. The owner account cannot share the VPC itself. Therefore this option is incorrect.

B) Use VPC peering to share one or more subnets with other AWS accounts belonging to the same parent organization from AWS Organizations - A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network. VPC peering does not facilitate centrally managed VPCs. Therefore this option is incorrect.

C) Use VPC peering to share a VPC with other AWS accounts belonging to the same parent organization from AWS Organizations - A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network. VPC peering does not facilitate centrally managed VPCs. Moreover, an AWS owner account cannot share the VPC itself with another AWS account. Therefore this option is incorrect.

289
Q

The infrastructure team at a company maintains 5 different VPCs (let’s call these VPCs A, B, C, D, E) for resource isolation. Due to the changed organizational structure, the team wants to interconnect all VPCs together. To facilitate this, the team has set up VPC peering connections between VPC A and all other VPCs in a hub and spoke model with VPC A at the center. However, the team has still failed to establish connectivity between all VPCs.

As a solutions architect, which of the following would you recommend as the MOST resource-efficient and scalable solution?

A) Use a VPC endpoint to interconnect the VPCs

B) Establish VPC peering connections between all VPCs

C) Use a transit gateway to interconnect the VPCs

D) Use an internet gateway to interconnect the VPCs

A

C) Use a transit gateway to interconnect the VPCs

A transit gateway is a network transit hub that you can use to interconnect your virtual private clouds (VPC) and on-premises networks.

A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Transitive Peering does not work for VPC peering connections. So, if you have a VPC peering connection between VPC A and VPC B (pcx-aaaabbbb), and between VPC A and VPC C (pcx-aaaacccc). Then, there is no VPC peering connection between VPC B and VPC C. Instead of using VPC peering, you can use an AWS Transit Gateway that acts as a network transit hub, to interconnect your VPCs or connect your VPCs with on-premises networks. Therefore this is the correct option.
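
A minimal sketch of the hub-and-spoke setup with boto3 (VPC and subnet IDs are hypothetical; in practice you wait for the transit gateway to reach the "available" state before attaching VPCs):

import boto3

ec2 = boto3.client("ec2")

tgw = ec2.create_transit_gateway(Description="hub for VPCs A-E")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach each VPC to the transit gateway (one subnet per AZ is typical).
for vpc_id, subnet_id in [("vpc-aaaa", "subnet-aaaa"), ("vpc-bbbb", "subnet-bbbb")]:
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId=vpc_id,
        SubnetIds=[subnet_id],
    )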

Incorrect options:

D) Use an internet gateway to interconnect the VPCs - An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the internet. It, therefore, imposes no availability risks or bandwidth constraints on your network traffic. You cannot use an internet gateway to interconnect your VPCs and on-premises networks, hence this option is incorrect.

A) Use a VPC endpoint to interconnect the VPCs - A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. You cannot use a VPC endpoint to interconnect your VPCs and on-premises networks, hence this option is incorrect.

B) Establish VPC peering connections between all VPCs - Establishing VPC peering between all VPCs is an inelegant and clumsy way to establish connectivity between all VPCs. Instead, you should use a Transit Gateway that acts as a network transit hub to interconnect your VPCs and on-premises networks.

290
Q

A company has noticed that its application performance has deteriorated after a new Auto Scaling group was deployed a few days back. Upon investigation, the team found out that the Launch Configuration selected for the Auto Scaling group is using the incorrect instance type that is not optimized to handle the application workflow.

As a solutions architect, what would you recommend to provide a long term resolution for this issue?

A) No need to modify the launch configuration. Just modify the Auto Scaling group to use the correct instance type

B) Modify the launch configuration to use the correct instance type and continue to use the existing Auto Scaling group

C) Create a new launch configuration to use the correct instance type. Modify the Auto Scaling group to use this new launch configuration. Delete the old launch configuration as it is no longer needed.

D) No need to modify the launch configuration. Just modify the Auto Scaling group to use a larger number of the existing instance type. More instances may offset the loss of performance.

A

C) Create a new launch configuration to use the correct instance type. Modify the Auto Scaling group to use this new launch configuration. Delete the old launch configuration as it is no longer needed

A launch configuration is an instance configuration template that an Auto Scaling group uses to launch EC2 instances. When you create a launch configuration, you specify information for the instances, including the ID of the Amazon Machine Image (AMI), the instance type, a key pair, one or more security groups, and a block device mapping.

It is not possible to modify a launch configuration once it is created. The correct option is to create a new launch configuration that uses the correct instance type, then modify the Auto Scaling group to use this new launch configuration. Lastly, to clean up, delete the old launch configuration as it is no longer needed.
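
A minimal sketch of the three steps with boto3 (launch configuration names, AMI ID, instance type, and Auto Scaling group name are hypothetical):

import boto3

asg = boto3.client("autoscaling")

# 1. Create a new launch configuration with the correct instance type.
asg.create_launch_configuration(
    LaunchConfigurationName="app-lc-v2",
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5.2xlarge",
)

# 2. Point the Auto Scaling group at the new launch configuration.
asg.update_auto_scaling_group(
    AutoScalingGroupName="app-asg",
    LaunchConfigurationName="app-lc-v2",
)

# 3. Clean up: delete the old launch configuration once nothing references it.
asg.delete_launch_configuration(LaunchConfigurationName="app-lc-v1")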

Incorrect options:

B) Modify the launch configuration to use the correct instance type and continue to use the existing Auto Scaling group - As mentioned earlier, it is not possible to modify a launch configuration once it is created. Hence, this option is incorrect.

A) No need to modify the launch configuration. Just modify the Auto Scaling group to use the correct instance type - You cannot use an Auto Scaling group to directly modify the instance type of the underlying instances. Hence, this option is incorrect.

D) No need to modify the launch configuration. Just modify the Auto Scaling group to use a larger number of the existing instance type. More instances may offset the loss of performance - Using the Auto Scaling group to increase the number of instances to cover up for the performance loss is not recommended, as it does not address the root cause of the problem. The application workflow requires an instance type that is optimized to handle its computations. Hence, this option is incorrect.

291
Q

An IT company wants to review its security best-practices after an incident was reported where a new developer on the team was assigned full access to DynamoDB. The developer accidentally deleted a couple of tables from the production environment while building out a new feature.

Which is the MOST effective way to address this issue so that such incidents do not recur?

A) Only root user should have full database access in the organization

B) The CTO should review the permissions for each new developer’s IAM user so that such incidents don’t recur

C) Use permissions boundary to control the maximum permissions employees can grant to the IAM principals

D) Remove full database access for all IAM users in the organization

A

C) Use permissions boundary to control the maximum permissions employees can grant to the IAM principals

A permissions boundary can be used to control the maximum permissions employees can grant to the IAM principals (that is, users and roles) that they create and manage. As the IAM administrator, you can define one or more permissions boundaries using managed policies and allow your employee to create a principal with this boundary. The employee can then attach a permissions policy to this principal. However, the effective permissions of the principal are the intersection of the permissions boundary and permissions policy. As a result, the new principal cannot exceed the boundary that you defined. Therefore, using the permissions boundary offers the right solution for this use-case.
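
A minimal sketch of attaching a permissions boundary when creating an IAM user (the boundary policy ARN and user name are hypothetical):

import boto3

iam = boto3.client("iam")

# Managed policy used as the boundary; it caps the permissions the user's
# own permission policies can ever grant, regardless of what is attached later.
boundary_arn = "arn:aws:iam::123456789012:policy/DeveloperBoundary"

iam.create_user(UserName="new-developer", PermissionsBoundary=boundary_arn)

# For existing users, the boundary can be applied separately.
iam.put_user_permissions_boundary(UserName="new-developer",
                                  PermissionsBoundary=boundary_arn)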

Incorrect options:

D) Remove full database access for all IAM users in the organization - It is not practical to remove full access for all IAM users in the organization because a select set of users need this access for database administration. So this option is not correct.

B) The CTO should review the permissions for each new developer’s IAM user so that such incidents don’t recur - Likewise the CTO is not expected to review the permissions for each new developer’s IAM user, as this is best done via an automated procedure. This option has been added as a distractor.

A) Only root user should have full database access in the organization - As a best practice, the root user should not access the AWS account to carry out any administrative procedures. So this option is not correct.

292
Q

An engineering team wants to examine the feasibility of the user data feature of Amazon EC2 for an upcoming project.

Which of the following are true about the EC2 user data configuration? (Select two)

A) By default, user data is executed every time an EC2 instance is restarted

B) By default, scripts entered as user data do not have root user privileges for executing

C) By default, scripts entered as user data are executed with root user privileges

D) When an instance is running, you can update user data by using root user credentials

E) By default, user data runs only during the boot cycle when you first launch an instance

A

User Data is generally used to perform common automated configuration tasks and even run scripts after the instance starts. When you launch an instance in Amazon EC2, you can pass two types of user data - shell scripts and cloud-init directives. You can also pass this data into the launch wizard as plain text or as a file.

C) By default, scripts entered as user data are executed with root user privileges - Scripts entered as user data are executed as the root user, hence do not need the sudo command in the script. Any files you create will be owned by root; if you need non-root users to have file access, you should modify the permissions accordingly in the script.

E) By default, user data runs only during the boot cycle when you first launch an instance - By default, user data scripts and cloud-init directives run only during the boot cycle when you first launch an instance. You can update your configuration to ensure that your user data scripts and cloud-init directives run every time you restart your instance.
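
A minimal sketch of passing a user data script at launch with boto3 (the AMI ID and script contents are hypothetical; boto3 base64-encodes UserData for you, and the script runs as root during the first boot by default):

import boto3

ec2 = boto3.client("ec2")

user_data = """#!/bin/bash
yum update -y
echo "bootstrapped" > /var/tmp/bootstrap.log
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)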

Incorrect options:

A) By default, user data is executed every time an EC2 instance is restarted - As discussed above, this is not the default behavior, but it can be achieved by explicitly configuring the instance.

D) When an instance is running, you can update user data by using root user credentials - You can’t change the user data if the instance is running (even by using root user credentials), but you can view it.

B) By default, scripts entered as user data do not have root user privileges for executing - Scripts entered as user data are executed as the root user, hence do not need the sudo command in the script.

293
Q

A leading online gaming company is migrating its flagship application to AWS Cloud for delivering its online games to users across the world. The company would like to use a Network Load Balancer (NLB) to handle millions of requests per second. The engineering team has provisioned multiple instances in a public subnet and specified these instance IDs as the targets for the NLB.

As a solutions architect, can you help the engineering team understand the correct routing mechanism for these target instances?

A) Traffic is routed to instances using the instance ID specified in the primary network interface for the instance

B) Traffic is routed to instances using the primary elastic IP address specified in the primary network interface for the instance

C) Traffic is routed to instances using the primary public IP address specified in the primary network interface for the instance

D) Traffic is routed to instances using the primary private IP address specified in the primary network interface for the instance

A

D) Traffic is routed to instances using the primary private IP address specified in the primary network interface for the instance

A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model. It can handle millions of requests per second. After the load balancer receives a connection request, it selects a target from the target group for the default rule. It attempts to open a TCP connection to the selected target on the port specified in the listener configuration.

Request Routing and IP Addresses -

If you specify targets using an instance ID, traffic is routed to instances using the primary private IP address specified in the primary network interface for the instance. The load balancer rewrites the destination IP address from the data packet before forwarding it to the target instance.

If you specify targets using IP addresses, you can route traffic to an instance using any private IP address from one or more network interfaces. This enables multiple applications on an instance to use the same port. Note that each network interface can have its own security group. The load balancer rewrites the destination IP address before forwarding it to the target.

Incorrect options:

C) Traffic is routed to instances using the primary public IP address specified in the primary network interface for the instance - If you specify targets using an instance ID, traffic is routed to instances using the primary private IP address specified in the primary network interface for the instance. So public IP address cannot be used to route the traffic to the instance.

B) Traffic is routed to instances using the primary elastic IP address specified in the primary network interface for the instance - If you specify targets using an instance ID, traffic is routed to instances using the primary private IP address specified in the primary network interface for the instance. So elastic IP address cannot be used to route the traffic to the instance.

A) Traffic is routed to instances using the instance ID specified in the primary network interface for the instance - You cannot use instance ID to route traffic to the instance. This option is just added as a distractor.

294
Q

A news network uses Amazon S3 to aggregate the raw video footage from its reporting teams across the US. The news network has recently expanded into new geographies in Europe and Asia. The technical teams at the overseas branch offices have reported huge delays in uploading large video files to the destination S3 bucket.

Which of the following are the MOST cost-effective options to improve the file upload speed into S3? (Select two)

A) Use Amazon S3 Transfer Acceleration to enable faster file uploads into the destination S3 bucket

B) Use multipart uploads for faster file uploads into the destination S3 bucket

C) Create multiple AWS direct connect connections between the AWS Cloud and branch offices in Europe and Asia. Use the direct connect connections for faster file uploads into S3

D) Create site-to-site VPN connections between the AWS Cloud and branch offices in Europe and Asia. Use these VPN connections for faster file uploads into S3

E) Use AWS Global Accelerator for faster file uploads into the destination S3 bucket

A

A) Use Amazon S3 Transfer Acceleration to enable faster file uploads into the destination S3 bucket - Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront’s globally distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path.

B) Use multipart uploads for faster file uploads into the destination S3 bucket - Multipart upload allows you to upload a single object as a set of parts. Each part is a contiguous portion of the object’s data. You can upload these object parts independently and in any order. If transmission of any part fails, you can retransmit that part without affecting other parts. After all parts of your object are uploaded, Amazon S3 assembles these parts and creates the object. In general, when your object size reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single operation. Multipart upload provides improved throughput, therefore it facilitates faster file uploads.
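
A minimal sketch of a multipart upload using boto3’s managed transfer (file, bucket, and key names are hypothetical):

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Split the object into 100 MB parts and upload several parts in parallel.
config = TransferConfig(multipart_threshold=100 * 1024 * 1024,
                        multipart_chunksize=100 * 1024 * 1024,
                        max_concurrency=8)

s3.upload_file("raw-footage.mp4", "news-footage-bucket", "raw/raw-footage.mp4",
               Config=config)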

Incorrect options:

C) Create multiple AWS direct connect connections between the AWS Cloud and branch offices in Europe and Asia. Use the direct connect connections for faster file uploads into S3 - AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations. Direct Connect takes significant time (several months) to provision and is overkill for the given use-case.

D) Create multiple site-to-site VPN connections between the AWS Cloud and branch offices in Europe and Asia. Use these VPN connections for faster file uploads into S3 - AWS Site-to-Site VPN enables you to securely connect your on-premises network or branch office site to your Amazon Virtual Private Cloud (Amazon VPC). You can securely extend your data center or branch office network to the cloud with an AWS Site-to-Site VPN connection. A VPC VPN Connection utilizes IPSec to establish encrypted network connectivity between your intranet and Amazon VPC over the Internet. VPN Connections are a good solution if you have low to modest bandwidth requirements and can tolerate the inherent variability in Internet-based connectivity. Site-to-site VPN will not help in accelerating the file transfer speeds into S3 for the given use-case.

E) Use AWS Global Accelerator for faster file uploads into the destination S3 bucket - AWS Global Accelerator is a service that improves the availability and performance of your applications with local or global users. It provides static IP addresses that act as a fixed entry point to your application endpoints in a single or multiple AWS Regions, such as your Application Load Balancers, Network Load Balancers or Amazon EC2 instances. AWS Global Accelerator will not help in accelerating the file transfer speeds into S3 for the given use-case.

295
Q

You have been hired as a Solutions Architect to advise a company on the various authentication/authorization mechanisms that AWS offers to authorize an API call within the API Gateway. The company would prefer a solution that offers built-in user management.

Which of the following solutions would you suggest as the best fit for the given use-case?

A) Use API Gateway Lambda authorizer

B) Use Amazon Cognito User Pools

C) Use Amazon Cognito Identity Pools

D) Use AWS_IAM authorization

A

B) Use Amazon Cognito User Pools - A user pool is a user directory in Amazon Cognito. You can leverage Amazon Cognito User Pools to either provide built-in user management or integrate with external identity providers, such as Facebook, Twitter, Google+, and Amazon. Whether your users sign-in directly or through a third party, all members of the user pool have a directory profile that you can access through a Software Development Kit (SDK).

User pools provide:

1. Sign-up and sign-in services.
2. A built-in, customizable web UI to sign in users.
3. Social sign-in with Facebook, Google, Login with Amazon, and Sign in with Apple, as well as sign-in with SAML identity providers from your user pool.
4. User directory management and user profiles.
5. Security features such as multi-factor authentication (MFA), checks for compromised credentials, account takeover protection, and phone and email verification.
6. Customized workflows and user migration through AWS Lambda triggers.

After creating an Amazon Cognito user pool, in API Gateway, you must then create a COGNITO_USER_POOLS authorizer that uses the user pool.

Incorrect options:

D) Use AWS_IAM authorization - For consumers who currently are located within your AWS environment or have the means to retrieve AWS Identity and Access Management (IAM) temporary credentials to access your environment, you can use AWS_IAM authorization and add least-privileged permissions to the respective IAM role to securely invoke your API. API Gateway API Keys are not a security mechanism and should not be used for authorization unless it’s a public API. They should be used primarily to track a consumer’s usage across your API.

A) Use API Gateway Lambda authorizer - If you have an existing Identity Provider (IdP), you can use an API Gateway Lambda authorizer to invoke a Lambda function to authenticate/validate a given user against your IdP. You can use a Lambda authorizer for custom validation logic based on identity metadata.

A Lambda authorizer can send additional information derived from a bearer token or request context values to your backend service. For example, the authorizer can return a map containing user IDs, user names, and scope. By using Lambda authorizers, your backend does not need to map authorization tokens to user-centric data, allowing you to limit the exposure of such information to just the authorization function.

When using Lambda authorizers, AWS strictly advises against passing credentials or any sort of sensitive data via query string parameters or headers, so this is not as secure as using Cognito User Pools.

In addition, both these options do not offer built-in user management.

C) Use Amazon Cognito Identity Pools - The two main components of Amazon Cognito are user pools and identity pools. Identity pools provide AWS credentials to grant your users access to other AWS services. To enable users in your user pool to access AWS resources, you can configure an identity pool to exchange user pool tokens for AWS credentials. So, identity pools aren’t an authentication mechanism in themselves and hence aren’t a choice for this use case.

296
Q

A financial services firm uses a high-frequency trading system and wants to write the log files into Amazon S3. The system will also read these log files in parallel on a near real-time basis. The engineering team wants to address any data discrepancies that might arise when the trading system overwrites an existing log file and then tries to read that specific log file.

Which of the following options BEST describes the capabilities of Amazon S3 relevant to this scenario?

A) A process replaces an existing object and immediately tries to read it. Until the change is fully propagated, Amazon S3 does not return any data

B) A process replaces an existing object and immediately tries to read it. Until the change is fully propagated, Amazon S3 might return the previous data

C) A process replaces an existing object and immediately tries to read it. Amazon S3 always returns the latest version of the object.

D) A process replaces an existing object and immediately tries to read it. Until the change is fully propagated, Amazon S3 might return the new data

A

C) A process replaces an existing object and immediately tries to read it. Amazon S3 always returns the latest version of the object

Amazon S3 delivers strong read-after-write consistency automatically, without changes to performance or availability, without sacrificing regional isolation for applications, and at no additional cost.

After a successful write of a new object or an overwrite of an existing object, any subsequent read request immediately receives the latest version of the object. S3 also provides strong consistency for list operations, so after a write, you can immediately perform a listing of the objects in a bucket with any changes reflected.

Strong read-after-write consistency helps when you need to immediately read an object after a write. For example, it is useful when you often read and list objects immediately after writing them.

To summarize, all S3 GET, PUT, and LIST operations, as well as operations that change object tags, ACLs, or metadata, are strongly consistent. What you write is what you will read, and the results of a LIST will be an accurate reflection of what’s in the bucket.
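
A minimal boto3 sketch of the behavior described above; the bucket name is a placeholder and the bucket is assumed to already exist.

import boto3

s3 = boto3.client("s3")
BUCKET = "example-trading-logs"  # placeholder bucket name

# Overwrite an existing log file...
s3.put_object(Bucket=BUCKET, Key="logs/latest.log", Body=b"new content")

# ...and read it back immediately. Because S3 is strongly consistent for
# read-after-write, this GET returns the new content, not the previous version.
obj = s3.get_object(Bucket=BUCKET, Key="logs/latest.log")
print(obj["Body"].read())  # b"new content"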

297
Q

A retail company has developed a REST API which is deployed in an Auto Scaling group behind an Application Load Balancer. The API stores the user data in DynamoDB and any static content, such as images, are served via S3. On analyzing the usage trends, it is found that 90% of the read requests are for commonly accessed data across all users.

As a Solutions Architect, which of the following would you suggest as the MOST efficient solution to improve the application performance?

A) Enable ElastiCache Redis for DynamoDB and ElastiCache Memcached for S3

B) Enable DynamoDB Accelerator (DAX) for DynamoDB and ElastiCache Memcached for S3

C) Enable ElastiCache Redis for DynamoDB and CloudFront for S3

D) Enable DynamoDB Accelerator (DAX) for DynamoDB and CloudFront for S3

A

D) Enable DynamoDB Accelerator (DAX) for DynamoDB and CloudFront for S3

DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for Amazon DynamoDB that delivers up to a 10 times performance improvement—from milliseconds to microseconds—even at millions of requests per second.

DAX is tightly integrated with DynamoDB—you simply provision a DAX cluster, use the DAX client SDK to point your existing DynamoDB API calls at the DAX cluster, and let DAX handle the rest. Because DAX is API-compatible with DynamoDB, you don’t have to make any functional application code changes. DAX is used to natively cache DynamoDB reads.
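
For illustration, the sketch below provisions a DAX cluster with boto3; the cluster name, node type, and IAM role ARN are placeholders. The application would then point the DAX client SDK at the cluster endpoint instead of the regular DynamoDB endpoint.

import boto3

dax = boto3.client("dax")

# Provision a DAX cluster in front of DynamoDB (all values are placeholders).
dax.create_cluster(
    ClusterName="user-data-cache",
    NodeType="dax.r5.large",
    ReplicationFactor=3,  # one primary node plus two read replicas
    IamRoleArn="arn:aws:iam::123456789012:role/DaxToDynamoDBRole",
)

# The application keeps using the DynamoDB API; only the client endpoint changes
# (for example via the amazon-dax-client package), so no functional code changes are needed.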

CloudFront is a content delivery network (CDN) service that delivers static and dynamic web content, video streams, and APIs around the world, securely and at scale. By design, delivering data out of CloudFront can be more cost-effective than delivering it from S3 directly to your users.

When a user requests content that you serve with CloudFront, their request is routed to a nearby Edge Location. If CloudFront has a cached copy of the requested file, CloudFront delivers it to the user, providing a fast (low-latency) response. If the file they’ve requested isn’t yet cached, CloudFront retrieves it from your origin – for example, the S3 bucket where you’ve stored your content.

So, you can use CloudFront to improve application performance to serve static content from S3.

Incorrect options:

C) Enable ElastiCache Redis for DynamoDB and CloudFront for S3

Amazon ElastiCache for Redis is a blazing fast in-memory data store that provides sub-millisecond latency to power internet-scale real-time applications. Amazon ElastiCache for Redis is a great choice for real-time transactional and analytical processing use cases such as caching, chat/messaging, gaming leaderboards, geospatial, machine learning, media streaming, queues, real-time analytics, and session store.

Although you can integrate Redis with DynamoDB, it’s much more involved than using DAX which is a much better fit.

B) Enable DynamoDB Accelerator (DAX) for DynamoDB and ElastiCache Memcached for S3

A) Enable ElastiCache Redis for DynamoDB and ElastiCache Memcached for S3

Amazon ElastiCache for Memcached is a Memcached-compatible in-memory key-value store service that can be used as a cache or a data store. Amazon ElastiCache for Memcached is a great choice for implementing an in-memory cache to decrease access latency, increase throughput, and ease the load off your relational or NoSQL database.

ElastiCache Memcached cannot be used as a cache to serve static content from S3, so both these options are incorrect.

298
Q

An IT company has an Access Control Management (ACM) application that uses Amazon RDS for MySQL but is running into performance issues despite using Read Replicas. The company has hired you as a solutions architect to address these performance-related challenges without moving away from the underlying relational database schema. The company has branch offices across the world, and it needs the solution to work on a global scale.

Which of the following will you recommend as the MOST cost-effective and high-performance solution?

A) Use Amazon DynamoDB Global Tables to provide fast, local, read and write performance in each region

B) Use Amazon Aurora Global Database to enable fast local reads with low latency in each region

C) Spin up a Redshift cluster in each AWS region. Migrate the existing data into Redshift clusters

D) Spin up EC2 instances in each AWS region, install MySQL databases and migrate the existing data into these new databases

A

B) Use Amazon Aurora Global Database to enable fast local reads with low latency in each region

Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud, that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora features a distributed, fault-tolerant, self-healing storage system that auto-scales up to 64TB per database instance. Aurora is not an in-memory database.

Amazon Aurora Global Database is designed for globally distributed applications, allowing a single Amazon Aurora database to span multiple AWS regions. It replicates your data with no impact on database performance, enables fast local reads with low latency in each region, and provides disaster recovery from region-wide outages. Amazon Aurora Global Database is the correct choice for the given use-case.
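
A rough boto3 sketch of setting up an Aurora Global Database from an existing primary cluster; the identifiers, engine, and regions are placeholder assumptions, and the secondary region would still need its own DB instances added to the cluster.

import boto3

# Promote an existing regional Aurora cluster into a global database (placeholders throughout).
rds_primary = boto3.client("rds", region_name="us-east-1")
rds_primary.create_global_cluster(
    GlobalClusterIdentifier="acm-global",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:acm-primary",
)

# Add a read-only secondary cluster in another region for low-latency local reads.
rds_secondary = boto3.client("rds", region_name="eu-west-1")
rds_secondary.create_db_cluster(
    DBClusterIdentifier="acm-secondary-eu",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="acm-global",
)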

Incorrect options:

A) Use Amazon DynamoDB Global Tables to provide fast, local, read and write performance in each region - Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It’s a fully managed, multi-region, multi-master, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications.

Global Tables builds upon DynamoDB’s global footprint to provide you with a fully managed, multi-region, and multi-master database that provides fast, local, read, and write performance for massively scaled, global applications. Global Tables replicates your Amazon DynamoDB tables automatically across your choice of AWS regions. Given that the use-case wants you to continue with the underlying schema of the relational database, DynamoDB is not the right choice as it’s a NoSQL database.

C) Spin up a Redshift cluster in each AWS region. Migrate the existing data into Redshift clusters - Amazon Redshift is a fully-managed petabyte-scale cloud-based data warehouse product designed for large scale data set storage and analysis. Redshift is not suited to be used as a transactional relational database, so this option is not correct.

D) Spin up EC2 instances in each AWS region, install MySQL databases and migrate the existing data into these new databases - Setting up EC2 instances in multiple regions with manually managed MySQL databases represents a maintenance nightmare and is not the correct choice for this use-case.

299
Q

A startup’s cloud infrastructure consists of a few Amazon EC2 instances, Amazon RDS instances and Amazon S3 storage. A year into their business operations, the startup is incurring costs that seem too high for their business requirements.

Which of the following options represents a valid cost-optimization solution?

A) Use AWS Trusted Advisor checks on Amazon EC2 Reserved Instances to automatically renew Reserved Instances. Trusted Advisor also suggests Amazon RDS idle DB instances

B) Use Amazon S3 Storage class analysis to get recommendations for transitions of objects to S3 Glacier storage classes to reduce storage costs. You can also automate moving these objects into a lower-cost storage tier using Lifecycle Policies

C) Use AWS Cost Explorer Resource Optimization to get a report of EC2 instances that are either idle or have low utilization and use AWS Compute Optimizer to look at instance type recommendations

D) Use AWS Compute Optimizer recommendations to help you choose the optimal Amazon EC2 purchasing options and help reserve your instance capacities at reduced costs

A

C) Use AWS Cost Explorer Resource Optimization to get a report of EC2 instances that are either idle or have low utilization and use AWS Compute Optimizer to look at instance type recommendations - AWS Cost Explorer helps you identify under-utilized EC2 instances that may be downsized on an instance by instance basis within the same instance family, and also understand the potential impact on your AWS bill by taking into account your Reserved Instances and Savings Plans.

AWS Compute Optimizer recommends optimal AWS Compute resources for your workloads to reduce costs and improve performance by using machine learning to analyze historical utilization metrics. Compute Optimizer helps you choose the optimal Amazon EC2 instance types, including those that are part of an Amazon EC2 Auto Scaling group, based on your utilization data.
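
The sketch below shows, at a high level, how both reports could be pulled with boto3; it assumes Cost Explorer and Compute Optimizer are already enabled for the account, and the printed fields are examples of what the responses contain.

import boto3

# EC2 rightsizing report from Cost Explorer (idle / under-utilized instances).
ce = boto3.client("ce")
rightsizing = ce.get_rightsizing_recommendation(Service="AmazonEC2")
for rec in rightsizing.get("RightsizingRecommendations", []):
    print(rec.get("CurrentInstance", {}).get("ResourceId"), rec.get("RightsizingType"))

# Instance-type recommendations from Compute Optimizer.
co = boto3.client("compute-optimizer")
recs = co.get_ec2_instance_recommendations()
for rec in recs.get("instanceRecommendations", []):
    print(rec.get("instanceArn"), rec.get("finding"))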

Incorrect options:

B) Use Amazon S3 Storage class analysis to get recommendations for transitions of objects to S3 Glacier storage classes to reduce storage costs. You can also automate moving these objects into lower-cost storage tier using Lifecycle Policies -

By using Amazon S3 Analytics Storage Class analysis you can analyze storage access patterns to help you decide when to transition the right data to the right storage class. This new Amazon S3 analytics feature observes data access patterns to help you determine when to transition less frequently accessed STANDARD storage to the STANDARD_IA (IA, for infrequent access) storage class. Storage class analysis does not give recommendations for transitions to the ONEZONE_IA or S3 Glacier storage classes.

A) Use AWS Trusted Advisor checks on Amazon EC2 Reserved Instances to automatically renew Reserved Instances. Trusted Advisor also suggests Amazon RDS idle DB instances - AWS Trusted Advisor checks for Amazon EC2 Reserved Instances that are scheduled to expire within the next 30 days or have expired in the preceding 30 days. Reserved Instances do not renew automatically; you can continue using an EC2 instance covered by the reservation without interruption, but you will be charged On-Demand rates. Trusted Advisor does not have a feature to auto-renew Reserved Instances.

D) Use AWS Compute Optimizer recommendations to help you choose the optimal Amazon EC2 purchasing options and help reserve your instance capacities at reduced costs - AWS Compute Optimizer recommends optimal AWS Compute resources for your workloads to reduce costs and improve performance by using machine learning to analyze historical utilization metrics. Over-provisioning compute can lead to unnecessary infrastructure cost and under-provisioning compute can lead to poor application performance. Compute Optimizer helps you choose the optimal Amazon EC2 instance types, including those that are part of an Amazon EC2 Auto Scaling group, based on your utilization data. It does not recommend instance purchase options.

300
Q

A retail company wants to rollout and test a blue-green deployment for its global application in the next 48 hours. Most of the customers use mobile phones which are prone to DNS caching. The company has only two days left for the annual Thanksgiving sale to commence.

As a Solutions Architect, which of the following options would you recommend to test the deployment on as many users as possible in the given time frame?

A) Use Elastic Load Balancer to distribute traffic across deployments

B) Use AWS CodeDeploy deployment options to choose the right deployment

C) Use Route 53 weighted routing to spread traffic across different deployments

D) Use AWS Global Accelerator to distribute a portion of traffic to a particular deployment

A

Blue/green deployment is a technique for releasing applications by shifting traffic between two identical environments running different versions of the application: “Blue” is the currently running version and “green” is the new version. This type of deployment allows you to test features in the green environment without impacting the currently running version of your application. When you’re satisfied that the green version is working properly, you can gradually reroute the traffic from the old blue environment to the new green environment. Blue/green deployments can mitigate common risks associated with deploying software, such as downtime, and they make it easier to roll back a release if something goes wrong.

D) Use AWS Global Accelerator to distribute a portion of traffic to a particular deployment - AWS Global Accelerator is a network layer service that directs traffic to optimal endpoints over the AWS global network, which improves the availability and performance of your internet applications. It provides two static anycast IP addresses that act as a fixed entry point to your application endpoints in a single or in multiple AWS Regions, such as your Application Load Balancers, Network Load Balancers, Elastic IP addresses, or Amazon EC2 instances.

AWS Global Accelerator uses endpoint weights to determine the proportion of traffic that is directed to endpoints in an endpoint group, and traffic dials to control the percentage of traffic that is directed to an endpoint group (an AWS region where your application is deployed).

While relying on the DNS service is a great option for blue/green deployments, it may not fit use-cases that require a fast and controlled transition of the traffic. Some client devices and internet resolvers cache DNS answers for long periods; this DNS feature improves the efficiency of the DNS service as it reduces the DNS traffic across the Internet, and serves as a resiliency technique by preventing authoritative name-server overloads. The downside of this in blue/green deployments is that you don’t know how long it will take before all of your users receive updated IP addresses when you update a record, change your routing preference or when there is an application failure.

With AWS Global Accelerator, you can shift traffic gradually or all at once between the blue and the green environment and vice-versa without being subject to DNS caching on client devices and internet resolvers, traffic dials and endpoint weights changes are effective within seconds.
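
As a rough sketch of shifting traffic between the blue and green endpoint groups with boto3 (the Global Accelerator control-plane API is served out of us-west-2, and all ARNs, endpoint IDs, and percentages here are placeholders):

import boto3

# The Global Accelerator management API is only available in us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

# Send 10% of traffic to the "green" endpoint group; changes take effect within seconds,
# without waiting for client-side DNS caches to expire. ARNs/IDs are placeholders.
ga.update_endpoint_group(
    EndpointGroupArn="arn:aws:globalaccelerator::123456789012:accelerator/abcd/listener/1234/endpoint-group/green",
    TrafficDialPercentage=10.0,
    EndpointConfigurations=[
        {
            "EndpointId": "arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/app/green-alb/1234567890",
            "Weight": 128,
        },
    ],
)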

Incorrect options:

C) Use Route 53 weighted routing to spread traffic across different deployments - Weighted routing lets you associate multiple resources with a single domain name (example.com) or subdomain name (acme.example.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes, including load balancing and testing new versions of the software. As discussed earlier, DNS caching is a negative behavior for this use case and hence Route 53 is not a good option.

A) Use Elastic Load Balancer to distribute traffic across deployments - An ELB can distribute traffic across healthy instances. You can also use the ALB weighted target groups feature for blue/green deployments as it does not rely on the DNS service, and you don’t need to create new ALBs for the green environment. However, because the use-case refers to a global application, a single ELB cannot provide the multi-Region solution that is needed for the given requirement.

B) Use AWS CodeDeploy deployment options to choose the right deployment - In CodeDeploy, a deployment is the process, and the components involved in the process, of installing content on one or more instances. This content can consist of code, web and configuration files, executables, packages, scripts, and so on. CodeDeploy deploys content that is stored in a source repository, according to the configuration rules you specify. Blue/Green deployment is one of the deployment types that CodeDeploy supports. CodeDeploy is not meant to distribute traffic across instances, so this option is incorrect.

301
Q

A company has grown from a small startup to an enterprise employing over 1000 people. As the team size has grown, the company has recently observed some strange behavior, with S3 buckets settings being changed regularly.

How can you figure out what’s happening without restricting the rights of the users?

A) Use S3 access logs to analyze user access using Athena

B) Implement an IAM policy to forbid users from changing S3 bucket settings

C) Implement a bucket policy requiring MFA for all operations

D) Use CloudTrail to analyze API calls

A

D) Use CloudTrail to analyze API calls - AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command-line tools, and other AWS services.

In general, to analyze any API calls made within an AWS account, CloudTrail is used.
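
A minimal boto3 sketch that pulls recent S3 management events from CloudTrail event history so you can see who changed the bucket settings; the seven-day window is an arbitrary example.

import boto3
from datetime import datetime, timedelta

cloudtrail = boto3.client("cloudtrail")

# Look up recent API calls made against S3 (e.g. PutBucketAcl, PutBucketPolicy)
# to identify which IAM user changed the bucket settings.
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventSource", "AttributeValue": "s3.amazonaws.com"}],
    StartTime=datetime.utcnow() - timedelta(days=7),
    EndTime=datetime.utcnow(),
)
for event in events["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Username"))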

Incorrect options:

B) Implement an IAM policy to forbid users from changing S3 bucket settings - You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when an IAM principal (user or role) makes a request. Permissions in the policies determine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents. AWS supports six types of policies: identity-based policies, resource-based policies, permissions boundaries, AWS Organizations SCPs, ACLs, and session policies.

Implementing an IAM policy to forbid users would be disruptive and wouldn’t go unnoticed.

A) Use S3 access logs to analyze user access using Athena - Amazon S3 server access logging provides detailed records for the requests that are made to a bucket. Server access logs are useful for many applications. For example, access log information can be useful in security and access audits. It can also help you learn about your customer base and understand your Amazon S3 bill. S3 access logs would not provide us the necessary information, so it’s not the correct choice for this use-case.

C) Implement a bucket policy requiring MFA for all operations - Amazon S3 supports MFA-protected API access, a feature that can enforce multi-factor authentication (MFA) for access to your Amazon S3 resources. Multi-factor authentication provides an extra level of security that you can apply to your AWS environment. It is a security feature that requires users to prove the physical possession of an MFA device by providing a valid MFA code. Changing the bucket policy to require MFA would not go unnoticed.

302
Q

A company runs its EC2 servers behind an Application Load Balancer along with an Auto Scaling group. The engineers at the company want to be able to install proprietary tools on each instance and perform a pre-activation status check of these tools whenever an instance is provisioned because of a scale-out event from an auto-scaling policy.

Which of the following options can be used to enable this custom action?

A) Use the Auto Scaling group lifecycle hook to put the instance in a wait state and launch a custom script that installs the proprietary forensic tools and performs a pre-activation status check

B) Use the EC2 instance user data to put the instance in a wait state and launch a custom script that installs the proprietary forensic tools and performs a pre-activation status check

C) Use the EC2 instance meta data to put the instance in a wait state and launch a custom script that installs the proprietary forensic tools and performs a pre-activation status check

D) Use the Auto Scaling group scheduled action to put the instance in a wait state and launch a custom script that installs the proprietary forensic tools and performs a pre-activation status check

A

A) Use the Auto Scaling group lifecycle hook to put the instance in a wait state and launch a custom script that installs the proprietary forensic tools and performs a pre-activation status check

An Auto Scaling group contains a collection of Amazon EC2 instances that are treated as a logical grouping for automatic scaling and management.

Auto Scaling group lifecycle hooks enable you to perform custom actions as the Auto Scaling group launches or terminates instances. Lifecycle hooks enable you to perform custom actions by pausing instances as an Auto Scaling group launches or terminates them. When an instance is paused, it remains in a wait state either until you complete the lifecycle action using the complete-lifecycle-action command or the CompleteLifecycleAction operation, or until the timeout period ends (one hour by default). For example, you could install or configure software on newly launched instances, or download log files from an instance before it terminates.
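
A rough boto3 sketch of that flow; the group name, hook name, timeout, and instance ID are placeholder assumptions, and the instance-side script would call complete_lifecycle_action once the proprietary tools pass their pre-activation status check.

import boto3

autoscaling = boto3.client("autoscaling")

# Pause newly launched instances in a wait state (names and timeout are placeholders).
autoscaling.put_lifecycle_hook(
    LifecycleHookName="install-proprietary-tools",
    AutoScalingGroupName="api-asg",
    LifecycleTransition="autoscaling:EC2_INSTANCE_LAUNCHING",
    HeartbeatTimeout=900,      # seconds to wait for the install and status check
    DefaultResult="ABANDON",   # terminate the instance if the check never completes
)

# After the custom script installs the tools and the status check passes,
# it signals the Auto Scaling group to move the instance to InService.
autoscaling.complete_lifecycle_action(
    LifecycleHookName="install-proprietary-tools",
    AutoScalingGroupName="api-asg",
    LifecycleActionResult="CONTINUE",
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
)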

Incorrect options:

D) Use the Auto Scaling group scheduled action to put the instance in a wait state and launch a custom script that installs the proprietary forensic tools and performs a pre-activation status check - To configure your Auto Scaling group to scale based on a schedule, you create a scheduled action. The scheduled action tells Amazon EC2 Auto Scaling to perform a scaling action at specified times. You cannot use scheduled action to carry out custom actions when the Auto Scaling group launches or terminates an instance.

C) Use the EC2 instance meta data to put the instance in a wait state and launch a custom script that installs the proprietary forensic tools and performs a pre-activation status check - EC2 instance metadata is data about your instance that you can use to configure or manage the running instance. You cannot use EC2 instance metadata to put the instance in wait state.

B) Use the EC2 instance user data to put the instance in a wait state and launch a custom script that installs the proprietary forensic tools and performs a pre-activation status check - EC2 instance user data is the data that you specified in the form of a configuration script while launching your instance. You cannot use EC2 instance user data to put the instance in wait state.

303
Q

An e-commerce application uses an Amazon Aurora Multi-AZ deployment for its database. While analyzing the performance metrics, the engineering team has found that the database reads are causing high I/O and adding latency to the write requests against the database.

As an AWS Certified Solutions Architect Associate, what would you recommend to separate the read requests from the write requests?

A) Configure the application to read from the Multi-AZ standby instance

B) Set up a read replica and modify the application to use the appropriate endpoint

C) Provision another Amazon Aurora database and link it to the primary database as a read replica

D) Activate read-through caching on the Amazon Aurora database

A

B) Set up a read replica and modify the application to use the appropriate endpoint

An Amazon Aurora DB cluster consists of one or more DB instances and a cluster volume that manages the data for those DB instances. An Aurora cluster volume is a virtual database storage volume that spans multiple Availability Zones, with each Availability Zone having a copy of the DB cluster data. Two types of DB instances make up an Aurora DB cluster:

Primary DB instance – Supports read and write operations, and performs all of the data modifications to the cluster volume. Each Aurora DB cluster has one primary DB instance.

Aurora Replica – Connects to the same storage volume as the primary DB instance and supports only read operations. Each Aurora DB cluster can have up to 15 Aurora Replicas in addition to the primary DB instance. Aurora automatically fails over to an Aurora Replica in case the primary DB instance becomes unavailable. You can specify the failover priority for Aurora Replicas. Aurora Replicas can also offload read workloads from the primary DB instance.

Aurora Replicas have two main purposes. You can issue queries to them to scale the read operations for your application. You typically do so by connecting to the reader endpoint of the cluster. That way, Aurora can spread the load for read-only connections across as many Aurora Replicas as you have in the cluster. Aurora Replicas also help to increase availability. If the writer instance in a cluster becomes unavailable, Aurora automatically promotes one of the reader instances to take its place as the new writer.

While setting up a Multi-AZ deployment for Aurora, you create an Aurora replica or reader node in a different AZ.

Multi-AZ for Aurora:

You use the reader endpoint for read-only connections for your Aurora cluster. This endpoint uses a load-balancing mechanism to help your cluster handle a query-intensive workload. The reader endpoint is the endpoint that you supply to applications that do reporting or other read-only operations on the cluster. The reader endpoint load-balances connections to available Aurora Replicas in an Aurora DB cluster.
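
For illustration, the boto3 sketch below looks up the cluster's writer and reader endpoints; the cluster identifier is a placeholder, and the application would direct its read-only connections at the reader endpoint.

import boto3

rds = boto3.client("rds")

# Fetch the endpoints for an existing Aurora cluster (identifier is a placeholder).
cluster = rds.describe_db_clusters(DBClusterIdentifier="ecommerce-aurora")["DBClusters"][0]

writer_endpoint = cluster["Endpoint"]        # use for INSERT/UPDATE/DELETE traffic
reader_endpoint = cluster["ReaderEndpoint"]  # load-balances SELECTs across Aurora Replicas

print("writes ->", writer_endpoint)
print("reads  ->", reader_endpoint)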

Incorrect options:

C) Provision another Amazon Aurora database and link it to the primary database as a read replica - You cannot provision another Aurora database and then link it as a read-replica for the primary database. This option is ruled out.

A) Configure the application to read from the Multi-AZ standby instance - This option has been added as a distractor as Aurora does not have any entity called standby instance. You create a standby instance while setting up a Multi-AZ deployment for RDS and NOT for Aurora.

D) Activate read-through caching on the Amazon Aurora database - Aurora does not have built-in support for read-through caching, so this option just serves as a distractor. To implement caching, you will need to integrate something like ElastiCache and that would need code changes for the application.

304
Q

A medium-sized business has a taxi dispatch application deployed on an EC2 instance. Because of an unknown bug, the application causes the instance to freeze regularly. Then, the instance has to be manually restarted via the AWS management console.

Which of the following is the MOST cost-optimal and resource-efficient way to implement an automated solution until a permanent fix is delivered by the development team?

A) Use CloudWatch events to trigger a Lambda function to reboot the instance every 5 minutes

B) Setup a CloudWatch alarm to monitor the health status of the instance. In case of an Instance Health Check Failure, an EC2 Reboot CloudWatch Alarm Action can be used to reboot the instance

C) Setup a CloudWatch alarm to monitor the health status of the instance. In case of an Instance Health Check failure, the CloudWatch Alarm can publish to an SNS topic which can then trigger a Lambda function. The Lambda function can use the AWS EC2 API to reboot the instance

D) Use CloudWatch events to trigger a Lambda function to check the instance status every 5 minutes. In the case of Instance Health Check failure, the Lambda function can use the AWS EC2 API to reboot the instance

A

B) Setup a CloudWatch alarm to monitor the health status of the instance. In case of an Instance Health Check failure, an EC2 Reboot CloudWatch Alarm Action can be used to reboot the instance

Using Amazon CloudWatch alarm actions, you can create alarms that automatically stop, terminate, reboot, or recover your EC2 instances. You can use the stop or terminate actions to help you save money when you no longer need an instance to be running. You can use the reboot and recover actions to automatically reboot those instances or recover them onto new hardware if a system impairment occurs.

You can create an Amazon CloudWatch alarm that monitors an Amazon EC2 instance and automatically reboots the instance. The reboot alarm action is recommended for Instance Health Check failures (as opposed to the recover alarm action, which is suited for System Health Check failures).
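
A minimal boto3 sketch of such an alarm; the instance ID and region are placeholders, and the reboot action uses the arn:aws:automate:<region>:ec2:reboot pattern for EC2 alarm actions.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # placeholder region

# Reboot the instance when the Instance Health Check fails for 3 consecutive minutes.
cloudwatch.put_metric_alarm(
    AlarmName="reboot-frozen-dispatch-app",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_Instance",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=3,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:reboot"],  # EC2 reboot alarm action
)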

Incorrect options:

C) Setup a CloudWatch alarm to monitor the health status of the instance. In case of an Instance Health Check failure, the CloudWatch Alarm can publish to an SNS topic which can then trigger a Lambda function. The Lambda function can use the AWS EC2 API to reboot the instance

D) Use CloudWatch events to trigger a Lambda function to check the instance status every 5 minutes. In the case of Instance Health Check failure, the Lambda function can use the AWS EC2 API to reboot the instance

A) Use CloudWatch events to trigger a Lambda function to reboot the instance every 5 minutes

Using a CloudWatch event or a CloudWatch alarm to trigger a Lambda function, directly or indirectly, is wasteful of resources. You should just use the EC2 Reboot CloudWatch Alarm Action to reboot the instance. So all the options that trigger a Lambda function are incorrect.

305
Q

An Internet-of-Things (IoT) company would like to have a streaming system that performs real-time analytics on the ingested IoT data. Once the analytics is done, the company would like to send notifications back to the mobile applications of the IoT device owners.

As a solutions architect, which of the following AWS technologies would you recommend to send these notifications to the mobile applications?

A) Amazon SQS with Amazon SNS

B) Amazon Kinesis with Amazon SQS

C) Amazon Kinesis with Amazon SES

D) Amazon Kinesis with Amazon SNS

A

D) Amazon Kinesis with Amazon Simple Notification Service (SNS) - Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application.

With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other applications. Amazon Kinesis enables you to process and analyze data as it arrives and respond instantly instead of having to wait until all your data is collected before the processing can begin.

Kinesis will be great for event streaming from the IoT devices, but not for sending notifications as it doesn’t have such a feature.

Amazon Simple Notification Service (SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications. Amazon SNS provides topics for high-throughput, push-based, many-to-many messaging. SNS is a notification service and will be perfect for this use case.

Streaming data with Kinesis and using SNS to send the response notifications is the optimal solution for the current scenario.
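
A simplified boto3 sketch of the two halves of this pipeline; the stream name and topic ARN are placeholders, and the real-time analytics step in between is omitted.

import boto3
import json

kinesis = boto3.client("kinesis")
sns = boto3.client("sns")

# 1. IoT devices (or an ingestion layer) push telemetry into a Kinesis data stream.
kinesis.put_record(
    StreamName="iot-telemetry",  # placeholder stream name
    Data=json.dumps({"device_id": "sensor-42", "temp_c": 71.3}).encode("utf-8"),
    PartitionKey="sensor-42",
)

# 2. After the real-time analytics detects something notable, publish a notification.
#    SNS mobile push (or an SMS/email subscription) delivers it to the device owner.
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:device-alerts",  # placeholder ARN
    Message="Temperature threshold exceeded on sensor-42",
)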

Incorrect options:

A) Amazon Simple Queue Service (SQS) with Amazon Simple Notification Service (SNS) - Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message-oriented middleware and empowers developers to focus on differentiating work. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available. Kinesis is better for streaming data since queues aren’t meant for real-time streaming of data.

C) Amazon Kinesis with Simple Email Service (Amazon SES) - Amazon Simple Email Service (Amazon SES) is a cloud-based email sending service designed to help digital marketers and application developers send marketing, notification, and transactional emails. It is a reliable, cost-effective service for businesses of all sizes that use email to keep in contact with their customers. It is an email service and not a notification service as is the requirement in the current use case.

B) Amazon Kinesis with Simple Queue Service (SQS) - As explained above, Kinesis works well for streaming real-time data. SQS is a queuing service that helps decouple system architecture by offering flexibility and ease of maintenance. It cannot send notifications. SQS is paired with SNS to provide this functionality.

306
Q

A development team has deployed a microservice on Amazon ECS. The application layer is in a Docker container that provides both static and dynamic content through an Application Load Balancer. With increasing load, the ECS cluster is experiencing higher network usage. The development team has looked into the network usage and found that 90% of it is due to distributing the static content of the application.

As a Solutions Architect, what do you recommend to improve the application’s network usage and decrease costs?

A) Distribute the dynamic content through Amazon S3

B) Distribute the static content through Amazon EFS

C) Distribute the static content through Amazon S3

D) Distribute the dynamic content through Amazon EFS

A

C) Distribute the static content through Amazon S3 -

You can use Amazon S3 to host a static website. On a static website, individual web pages include static content. They might also contain client-side scripts. To host a static website on Amazon S3, you configure an Amazon S3 bucket for website hosting and then upload your website content to the bucket. When you configure a bucket as a static website, you must enable website hosting, set permissions, and create and add an index document. Depending on your website requirements, you can also configure redirects, web traffic logging, and a custom error document.

Distributing the static content through S3 allows us to offload most of the network usage to S3 and free up our applications running on ECS.
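
As a rough sketch, the boto3 calls below turn a bucket into a static website origin and upload an asset that currently ships inside the Docker image; the bucket name and file names are placeholders.

import boto3

s3 = boto3.client("s3")
BUCKET = "app-static-assets"  # placeholder bucket name

# Enable static website hosting so the static content is served by S3
# instead of the ECS containers.
s3.put_bucket_website(
    Bucket=BUCKET,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Upload a static asset (local path is a placeholder).
s3.upload_file("build/index.html", BUCKET, "index.html", ExtraArgs={"ContentType": "text/html"})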

Incorrect options:

A) Distribute the dynamic content through Amazon S3 - By contrast, a dynamic website relies on server-side processing, including server-side scripts such as PHP, JSP, or ASP.NET. Amazon S3 does not support server-side scripting, but AWS has other resources for hosting dynamic websites.

B) Distribute the static content through Amazon EFS

D) Distribute the dynamic content through Amazon EFS

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. Using EFS for static or dynamic content will not change anything as static content on EFS would still have to be distributed by the ECS instances.

307
Q

A cybersecurity company uses a fleet of EC2 instances to run a proprietary application. The infrastructure maintenance group at the company wants to be notified via an email whenever the CPU utilization for any of the EC2 instances breaches a certain threshold.

Which of the following services would you use for building a solution with the LEAST amount of development effort? (Select two)

A) AWS Lambda

B) Amazon CloudWatch

C) Amazon SNS

D) Amazon SQS

E) AWS Step Functions

A

C) Amazon SNS - Amazon Simple Notification Service (SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications. Amazon SNS provides topics for high-throughput, push-based, many-to-many messaging.

B) Amazon CloudWatch - Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. Amazon CloudWatch allows you to monitor AWS cloud resources and the applications you run on AWS.

You can use CloudWatch Alarms to send an email via SNS whenever any of the EC2 instances breaches a certain threshold. Hence both these options are correct.
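
A minimal boto3 sketch wiring these two services together; the topic name, email address, instance ID, and 80% threshold are placeholder assumptions.

import boto3

sns = boto3.client("sns")
cloudwatch = boto3.client("cloudwatch")

# Create a topic and subscribe the maintenance group's email address (placeholders).
topic_arn = sns.create_topic(Name="ec2-cpu-alerts")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="infra-team@example.com")

# Alarm when average CPU utilization stays above 80% for 10 minutes on one instance.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-i-0123456789abcdef0",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[topic_arn],  # CloudWatch publishes to SNS, SNS emails the team
)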

Incorrect options:

A) AWS Lambda - With AWS Lambda, you can run code without provisioning or managing servers. You pay only for the compute time that you consume—there’s no charge when your code isn’t running. You can run code for virtually any type of application or backend service—all with zero administration. You cannot use AWS Lambda to monitor CPU utilization of EC2 instances or send notification emails, hence this option is incorrect.

D) Amazon SQS - Amazon SQS Standard offers a reliable, highly scalable hosted queue for storing messages as they travel between computers. Amazon SQS lets you easily move data between distributed application components and helps you build applications in which messages are processed independently (with message-level ack/fail semantics), such as automated workflows. You cannot use SQS to monitor CPU utilization of EC2 instances or send notification emails, hence this option is incorrect.

E) AWS Step Functions - AWS Step Functions lets you coordinate multiple AWS services into serverless workflows so you can build and update apps quickly. Using Step Functions, you can design and run workflows that stitch together services, such as AWS Lambda, AWS Fargate, and Amazon SageMaker, into feature-rich applications. You cannot use Step Functions to monitor CPU utilization of EC2 instances or send notification emails, hence this option is incorrect.

308
Q

An e-commerce company is planning to migrate their two-tier application from on-premises infrastructure to AWS Cloud. As the engineering team at the company is new to the AWS Cloud, they are planning to use the Amazon VPC console wizard to set up the networking configuration for the two-tier application having public web servers and private database servers.

Can you spot the configuration that is NOT supported by the Amazon VPC console wizard?

A) VPC with public and private subnets (NAT)

B) VPC with a public subnet only and AWS Site-to-Site VPN access

C) VPC with public and private subnets and AWS Site-to-Site VPN access

D) VPC with a single public subnet

A

B) VPC with a public subnet only and AWS Site-to-Site VPN access

The Amazon VPC console wizard provides the following four configurations:

D) VPC with a single public subnet - The configuration for this scenario includes a virtual private cloud (VPC) with a single public subnet, and an internet gateway to enable communication over the internet. We recommend this configuration if you need to run a single-tier, public-facing web application, such as a blog or a simple website.

A) VPC with public and private subnets (NAT) - The configuration for this scenario includes a virtual private cloud (VPC) with a public subnet and a private subnet. We recommend this scenario if you want to run a public-facing web application while maintaining back-end servers that aren’t publicly accessible. A common example is a multi-tier website, with the web servers in a public subnet and the database servers in a private subnet. You can set up security and routing so that the web servers can communicate with the database servers.

C) VPC with public and private subnets and AWS Site-to-Site VPN access - The configuration for this scenario includes a virtual private cloud (VPC) with a public subnet and a private subnet, and a virtual private gateway to enable communication with your network over an IPsec VPN tunnel. We recommend this scenario if you want to extend your network into the cloud and also directly access the Internet from your VPC. This scenario enables you to run a multi-tiered application with a scalable web front end in a public subnet and to house your data in a private subnet that is connected to your network by an IPsec AWS Site-to-Site VPN connection.

VPC with a private subnet only and AWS Site-to-Site VPN access - The configuration for this scenario includes a virtual private cloud (VPC) with a single private subnet, and a virtual private gateway to enable communication with your network over an IPsec VPN tunnel. There is no Internet gateway to enable communication over the Internet. We recommend this scenario if you want to extend your network into the cloud using Amazon’s infrastructure without exposing your network to the Internet.

Therefore, the option “VPC with a public subnet only and AWS Site-to-Site VPN access” is NOT supported by the Amazon VPC console wizard.

309
Q

A healthcare startup needs to enforce compliance and regulatory guidelines for objects stored in Amazon S3. One of the key requirements is to provide adequate protection against accidental deletion of objects.

As a solutions architect, what are your recommendations to address these guidelines? (Select two)

A) Change the configuration on AWS S3 console so that the user needs to provide additional confirmation while deleting any S3 object

B) Establish a process to get managerial approval for deleting S3 objects

C) Create an event trigger on deleting any S3 object. The event invokes an SNS notification via email to the IT manager

D) Enable versioning on the bucket

E) Enable MFA delete on the bucket

A

D) Enable versioning on the bucket - Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. Versioning-enabled buckets enable you to recover objects from accidental deletion or overwrite.

For example:
If you overwrite an object, it results in a new object version in the bucket. You can always restore the previous version. If you delete an object, instead of removing it permanently, Amazon S3 inserts a delete marker, which becomes the current object version. You can always restore the previous version. Hence, this is the correct option.

E) Enable MFA delete on the bucket - To provide additional protection, multi-factor authentication (MFA) delete can be enabled. MFA delete requires secondary authentication to take place before objects can be permanently deleted from an Amazon S3 bucket. Hence, this is the correct option.
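
A rough boto3 sketch covering both of these settings; the bucket name, MFA device serial, and token are placeholders, and enabling MFA Delete must be done with the bucket owner's root credentials.

import boto3

s3 = boto3.client("s3")
BUCKET = "patient-records-archive"  # placeholder bucket name

# Enable versioning and MFA Delete together. The MFA parameter is the MFA device
# serial number and the current token, separated by a space (placeholder values).
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
)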

Incorrect options:

C) Create an event trigger on deleting any S3 object. The event invokes an SNS notification via email to the IT manager - Sending an event trigger after object deletion does not meet the objective of preventing object deletion by mistake because the object has already been deleted. So, this option is incorrect.

B) Establish a process to get managerial approval for deleting S3 objects - This option for getting managerial approval is just a distractor.

A) Change the configuration on AWS S3 console so that the user needs to provide additional confirmation while deleting any S3 object - There is no provision to set up S3 configuration to ask for additional confirmation before deleting an object. This option is incorrect.

310
Q

A pharmaceutical company is considering moving to AWS Cloud to accelerate the research and development process. Most of the daily workflows would be centered around running batch jobs on EC2 instances with storage on EBS volumes. The CTO is concerned about meeting HIPAA compliance norms for sensitive data stored on EBS.

Which of the following options outline the correct capabilities of an encrypted EBS volume? (Select three)

A) Any snapshot created from the volume is encrypted

B) Data moving between the volume and the instance is NOT encrypted

C) Data moving between the volume and the instance is encrypted

D) Data at rest inside the volume is encrypted

E) Data at rest inside the volume is NOT encrypted

F) Any snapshot created from the volume is NOT encrypted

A

D) Data at rest inside the volume is encrypted

A) Any snapshot created from the volume is encrypted

C) Data moving between the volume and the instance is encrypted

Amazon Elastic Block Store (Amazon EBS) provides block-level storage volumes for use with EC2 instances. When you create an encrypted EBS volume and attach it to a supported instance type, data stored at rest on the volume, data moving between the volume and the instance, snapshots created from the volume and volumes created from those snapshots are all encrypted. It uses AWS Key Management Service (AWS KMS) customer master keys (CMK) when creating encrypted volumes and snapshots. Encryption operations occur on the servers that host EC2 instances, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage.
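
For illustration, a minimal boto3 sketch creating an encrypted volume and snapshotting it; the Availability Zone, size, and KMS key alias are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Create an encrypted EBS volume (AZ, size, and KMS key are placeholders).
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,              # GiB
    VolumeType="gp3",
    Encrypted=True,
    KmsKeyId="alias/ebs-research-data",
)

# Any snapshot taken from this volume is automatically encrypted with the same key.
snapshot = ec2.create_snapshot(VolumeId=volume["VolumeId"], Description="nightly batch results")
print(snapshot["Encrypted"])  # True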

Therefore, the incorrect options are:

B) Data moving between the volume and the instance is NOT encrypted

F) Any snapshot created from the volume is NOT encrypted

E) Data at rest inside the volume is NOT encrypted

311
Q

A retail organization is moving some of its on-premises data to AWS Cloud. The DevOps team at the organization has set up an AWS Managed IPSec VPN Connection between their remote on-premises network and their Amazon VPC over the internet.

Which of the following represents the correct configuration for the IPSec VPN Connection?

A) Create a Virtual Private Gateway on both the AWS side of the VPN as well as the on-premises side of the VPN

B) Create a Customer Gateway on both the AWS side of the VPN as well as the on-premises side of the VPN

C) Create a Virtual Private Gateway on the on-premises side of the VPN and a Customer Gateway on the AWS side of the VPN

D) Create a Virtual Private Gateway on the AWS side of the VPN and a Customer Gateway on the on-premises side of the VPN

A

D) Create a Virtual Private Gateway on the AWS side of the VPN and a Customer Gateway on the on-premises side of the VPN

Amazon VPC provides the facility to create an IPsec VPN connection (also known as site-to-site VPN) between remote customer networks and their Amazon VPC over the internet. The following are the key concepts for a site-to-site VPN:

Virtual private gateway: A Virtual Private Gateway (also known as a VPN Gateway) is the endpoint on the AWS VPC side of your VPN connection.

VPN connection: A secure connection between your on-premises equipment and your VPCs.

VPN tunnel: An encrypted link where data can pass from the customer network to or from AWS.

Customer Gateway: An AWS resource that provides information to AWS about your Customer Gateway device.

Customer Gateway device: A physical device or software application on the customer side of the Site-to-Site VPN connection.
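
A rough boto3 sketch of the AWS-side resources for this setup; the VPC ID, on-premises public IP, and BGP ASN are placeholder assumptions.

import boto3

ec2 = boto3.client("ec2")

# Virtual Private Gateway on the AWS side, attached to the VPC (VPC ID is a placeholder).
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpcId="vpc-0123456789abcdef0", VpnGatewayId=vgw["VpnGatewayId"])

# Customer Gateway describing the on-premises device (public IP and ASN are placeholders).
cgw = ec2.create_customer_gateway(
    Type="ipsec.1",
    PublicIp="203.0.113.10",
    BgpAsn=65000,
)["CustomerGateway"]

# Site-to-Site VPN connection tying the two together.
ec2.create_vpn_connection(
    Type="ipsec.1",
    VpnGatewayId=vgw["VpnGatewayId"],
    CustomerGatewayId=cgw["CustomerGatewayId"],
    Options={"StaticRoutesOnly": False},
)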

Incorrect options:

C) Create a Virtual Private Gateway on the on-premises side of the VPN and a Customer Gateway on the AWS side of the VPN - You need to create a Virtual Private Gateway on the AWS side of the VPN and a Customer Gateway on the on-premises side of the VPN. Therefore, this option is wrong.

B) Create a Customer Gateway on both the AWS side of the VPN as well as the on-premises side of the VPN - You need to create a Virtual Private Gateway on the AWS side of the VPN and a Customer Gateway on the on-premises side of the VPN. Therefore, this option is wrong.

A) Create a Virtual Private Gateway on both the AWS side of the VPN as well as the on-premises side of the VPN - You need to create a Virtual Private Gateway on the AWS side of the VPN and a Customer Gateway on the on-premises side of the VPN. Therefore, this option is wrong.

312
Q

The business analytics team at a company has been running ad-hoc queries on Oracle and PostgreSQL services on Amazon RDS to prepare daily reports for senior management. To facilitate the business analytics reporting, the engineering team now wants to continuously replicate this data and consolidate these databases into a petabyte-scale data warehouse by streaming data to Amazon Redshift.

As a solutions architect, which of the following would you recommend as the MOST resource-efficient solution that requires the LEAST amount of development time without the need to manage the underlying infrastructure?

A) Use AWS EMR to replicate the data from the databases into Amazon Redshift

B) Use AWS Glue to replicate the data from the databases into Amazon Redshift

C) Use AWS Database Migration Service to replicate the data from the databases into Amazon Redshift

D) Use Amazon Kinesis Data Streams to replicate the data from the databases into Amazon Redshift

A

C) Use AWS Database Migration Service to replicate the data from the databases into Amazon Redshift

AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. With AWS Database Migration Service, you can continuously replicate your data with high availability and consolidate databases into a petabyte-scale data warehouse by streaming data to Amazon Redshift and Amazon S3.

You can migrate data to Amazon Redshift databases using AWS Database Migration Service. Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. With an Amazon Redshift database as a target, you can migrate data from all of the other supported source databases.

The Amazon Redshift cluster must be in the same AWS account and the same AWS Region as the replication instance. During a database migration to Amazon Redshift, AWS DMS first moves data to an Amazon S3 bucket. When the files reside in an Amazon S3 bucket, AWS DMS then transfers them to the proper tables in the Amazon Redshift data warehouse. AWS DMS creates the S3 bucket in the same AWS Region as the Amazon Redshift database. The AWS DMS replication instance must be located in that same region.
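
As a simplified sketch, the boto3 call below starts a full-load-plus-CDC task against a Redshift target endpoint; all ARNs are placeholders, and the source endpoint, target endpoint, and replication instance are assumed to exist already.

import boto3
import json

dms = boto3.client("dms")

# Continuously replicate (full load + change data capture) into Redshift.
dms.create_replication_task(
    ReplicationTaskIdentifier="rds-to-redshift-analytics",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC-POSTGRES",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT-REDSHIFT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:REPLICATION-INSTANCE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-reporting-schema",
            "object-locator": {"schema-name": "reporting", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)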

Incorrect options:

B) Use AWS Glue to replicate the data from the databases into Amazon Redshift - AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. AWS Glue job is meant to be used for batch ETL data processing.

Using AWS Glue involves significant development efforts to write custom migration scripts to copy the database data into Redshift.

A) Use AWS EMR to replicate the data from the databases into Amazon Redshift - Amazon EMR is the industry-leading cloud big data platform for processing vast amounts of data using open source tools such as Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto. With EMR you can run Petabyte-scale analysis at less than half of the cost of traditional on-premises solutions and over 3x faster than standard Apache Spark. For short-running jobs, you can spin up and spin down clusters and pay per second for the instances used. For long-running workloads, you can create highly available clusters that automatically scale to meet demand. Amazon EMR uses Hadoop, an open-source framework, to distribute your data and processing across a resizable cluster of Amazon EC2 instances.

Using EMR involves significant infrastructure management efforts to set up and maintain the EMR cluster. Additionally this option involves a major development effort to write custom migration jobs to copy the database data into Redshift.

D) Use Amazon Kinesis Data Streams to replicate the data from the databases into Amazon Redshift - Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service. KDS can continuously capture gigabytes of data per second from hundreds of thousands of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events.

However, the user is expected to manually provision an appropriate number of shards to process the expected volume of the incoming data stream. The throughput of an Amazon Kinesis data stream is designed to scale without limits via increasing the number of shards within a data stream. Therefore Kinesis Data Streams is not the right fit for this use-case.

313
Q

An Electronic Design Automation (EDA) application produces massive volumes of data that can be divided into two categories. The ‘hot data’ needs to be both processed and stored quickly in a parallel and distributed fashion. The ‘cold data’ needs to be kept for reference with quick access for reads and updates at a low cost.

Which of the following AWS services is BEST suited to accelerate the aforementioned chip design process?

A) Amazon FSx for Windows File Server

B) AWS Glue

C) Amazon EMR

D) Amazon FSx for Lustre

A

D) Amazon FSx for Lustre

Amazon FSx for Lustre makes it easy and cost-effective to launch and run the world’s most popular high-performance file system. It is used for workloads such as machine learning, high-performance computing (HPC), video processing, and financial modeling. The open-source Lustre file system is designed for applications that require fast storage – where you want your storage to keep up with your compute. FSx for Lustre integrates with Amazon S3, making it easy to process data sets with the Lustre file system. When linked to an S3 bucket, an FSx for Lustre file system transparently presents S3 objects as files and allows you to write changed data back to S3.

FSx for Lustre provides the ability to both process the ‘hot data’ in a parallel and distributed fashion as well as easily store the ‘cold data’ on Amazon S3. Therefore this option is the BEST fit for the given problem statement.
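
A rough boto3 sketch of creating an S3-linked FSx for Lustre file system; the subnet, capacity, deployment type, and bucket paths are placeholder assumptions.

import boto3

fsx = boto3.client("fsx")

# High-performance scratch file system for the 'hot data', linked to an S3 bucket
# that holds the 'cold data' (all identifiers and paths are placeholders).
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                  # GiB
    SubnetIds=["subnet-0123456789abcdef0"],
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://eda-chip-design-data",          # S3 objects appear as files
        "ExportPath": "s3://eda-chip-design-data/results",  # changed data written back to S3
    },
)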

Incorrect options:

A) Amazon FSx for Windows File Server - Amazon FSx for Windows File Server provides fully managed, highly reliable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol. It is built on Windows Server, delivering a wide range of administrative features such as user quotas, end-user file restore, and Microsoft Active Directory (AD) integration. FSx for Windows does not allow you to present S3 objects as files and does not allow you to write changed data back to S3. Therefore you cannot reference the “cold data” with quick access for reads and updates at low cost. Hence this option is not correct.

C) Amazon EMR - Amazon EMR is the industry-leading cloud big data platform for processing vast amounts of data using open source tools such as Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto. Amazon EMR uses Hadoop, an open-source framework, to distribute your data and processing across a resizable cluster of Amazon EC2 instances. EMR does not offer the same storage and processing speed as FSx for Lustre. So it is not the right fit for the given high-performance workflow scenario.

B) AWS Glue - AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. AWS Glue job is meant to be used for batch ETL data processing. AWS Glue does not offer the same storage and processing speed as FSx for Lustre. So it is not the right fit for the given high-performance workflow scenario.

314
Q

A company manages a multi-tier social media application that runs on EC2 instances behind an Application Load Balancer. The instances run in an EC2 Auto Scaling group across multiple Availability Zones and use an Amazon Aurora database. As a solutions architect, you have been tasked to make the application more resilient to periodic spikes in request rates.

Which of the following solutions would you recommend for the given use-case? (Select two)

A) Use AWS Global Accelerator

B) Use AWS Shield

C) Use CloudFront distribution in front of the Application Load Balancer

D) Use AWS Direct Connect

E) Use Aurora Replica

A

E) Use Aurora Replica

Aurora Replicas have two main purposes. You can issue queries to them to scale the read operations for your application. You typically do so by connecting to the reader endpoint of the cluster. That way, Aurora can spread the load for read-only connections across as many Aurora Replicas as you have in the cluster. Aurora Replicas also help to increase availability. If the writer instance in a cluster becomes unavailable, Aurora automatically promotes one of the reader instances to take its place as the new writer. Up to 15 Aurora Replicas can be distributed across the Availability Zones that a DB cluster spans within an AWS Region.
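
As a minimal illustration (not part of the original explanation), the boto3 sketch below adds an Aurora Replica to an existing cluster and looks up the cluster's reader endpoint; the cluster and instance identifiers are hypothetical:

import boto3

rds = boto3.client("rds")

# Add a reader instance (Aurora Replica) to an existing Aurora cluster
rds.create_db_instance(
    DBInstanceIdentifier="social-app-reader-1",
    DBClusterIdentifier="social-app-cluster",
    DBInstanceClass="db.r5.large",
    Engine="aurora-mysql",
)

# Point the application's read-only connections at the cluster's reader endpoint
cluster = rds.describe_db_clusters(DBClusterIdentifier="social-app-cluster")["DBClusters"][0]
print("Reader endpoint:", cluster["ReaderEndpoint"])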

C) Use CloudFront distribution in front of the Application Load Balancer

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment. CloudFront points of presence (POPs) (edge locations) make sure that popular content can be served quickly to your viewers. CloudFront also has regional edge caches that bring more of your content closer to your viewers, even when the content is not popular enough to stay at a POP, to help improve performance for that content.

CloudFront offers an origin failover feature to help support your data resiliency needs. CloudFront is a global service that delivers your content through a worldwide network of data centers called edge locations or points of presence (POPs). If your content is not already cached in an edge location, CloudFront retrieves it from an origin that you’ve identified as the source for the definitive version of the content.

Incorrect options:

B) Use AWS Shield - AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency. There are two tiers of AWS Shield - Standard and Advanced. Shield cannot be used to improve application resiliency to handle spikes in traffic.

A) Use AWS Global Accelerator - AWS Global Accelerator is a service that improves the availability and performance of your applications with local or global users. It provides static IP addresses that act as a fixed entry point to your application endpoints in a single or multiple AWS Regions, such as your Application Load Balancers, Network Load Balancers or Amazon EC2 instances. Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover. Since CloudFront is better suited for improving application resiliency to handle spikes in traffic, this option is ruled out.

D) Use AWS Direct Connect - AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations. Using industry-standard 802.1q VLANs, this dedicated connection can be partitioned into multiple virtual interfaces. AWS Direct Connect does not involve the Internet; instead, it uses dedicated, private network connections between your intranet and Amazon VPC. Direct Connect cannot be used to improve application resiliency to handle spikes in traffic.

315
Q

Your company is deploying a website running on Elastic Beanstalk. The website takes over 45 minutes to install and contains both static and dynamic files that must be generated during the installation process.

As a Solutions Architect, you would like to bring the time to create a new instance in your Elastic Beanstalk deployment to be less than 2 minutes. Which of the following options should be combined to build a solution for this requirement? (Select two)

A) Use EC2 user data to install the application at boot time

B) Create a Golden AMI with the static installation components already setup

C) Use Elastic Beanstalk deployment caching feature

D) Use EC2 user data to customize the dynamic installation parts at boot time

E) Store the installation files in S3 so they can be quickly retrieved

A

AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS.

You can simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring. At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time.

When you create an AWS Elastic Beanstalk environment, you can specify an Amazon Machine Image (AMI) to use instead of the standard Elastic Beanstalk AMI included in your platform version. A custom AMI can improve provisioning times when instances are launched in your environment if you need to install a lot of software that isn’t included in the standard AMIs.

B) Create a Golden AMI with the static installation components already setup - A Golden AMI is an AMI that you standardize through configuration, consistent security patching, and hardening. It also contains agents you approve for logging, security, performance monitoring, etc. For the given use-case, you can have the static installation components already setup via the golden AMI.

D) Use EC2 user data to customize the dynamic installation parts at boot time - EC2 instance user data is the data that you specified in the form of a configuration script while launching your instance. You can use EC2 user data to customize the dynamic installation parts at boot time, rather than installing the application itself at boot time.
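
For illustration only, here is a minimal boto3 sketch of launching an instance from a golden AMI (static components pre-installed) while using EC2 user data for the dynamic parts; the AMI ID and the script path are hypothetical:

import boto3

ec2 = boto3.client("ec2")

# Only the quick, dynamic configuration runs at boot time;
# the lengthy static installation is already baked into the golden AMI.
user_data = """#!/bin/bash
/opt/app/generate_dynamic_config.sh
systemctl start webapp
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical golden AMI
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)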

Incorrect options:

E) Store the installation files in S3 so they can be quickly retrieved - Amazon S3 bucket can be used as a storage location for your source code, logs, and other artifacts that are created when you use Elastic Beanstalk. It cannot be used to run or generate dynamic files since S3 is not an environment but a storage service.

A) Use EC2 user data to install the application at boot time - User data of an instance can be used to perform common automated configuration tasks or run scripts after the instance starts. User data cannot, however, be used to install the application itself, since the installation takes over 45 minutes and includes static as well as dynamic files that must be generated during the installation process.

C) Use Elastic Beanstalk deployment caching feature - Elastic Beanstalk deployment caching is a made-up option. It is just added as a distractor.

316
Q

The DevOps team at an IT company is provisioning a two-tier application in a VPC with a public subnet and a private subnet. The team wants to use either a NAT instance or a NAT gateway in the public subnet to enable instances in the private subnet to initiate outbound IPv4 traffic to the internet but needs some technical assistance in terms of the configuration options available for the NAT instance and the NAT gateway.

As a solutions architect, which of the following options would you identify as CORRECT? (Select three)

A) NAT instance supports port forwarding

B) NAT instance can be used as a bastion server

C) Security Groups can be associated with a NAT gateway

D) Security Groups can be associated with a NAT instance

E) NAT gateway can be used as a bastion server

F) NAT gateway supports port forwarding

A

Correct options:

B) NAT instance can be used as a bastion server

D) Security Groups can be associated with a NAT instance

A) NAT instance supports port forwarding

A NAT instance or a NAT Gateway can be used in a public subnet in your VPC to enable instances in the private subnet to initiate outbound IPv4 traffic to the Internet.

317
Q

A media company has its corporate headquarters in Los Angeles with an on-premises data center using an AWS Direct Connect connection to the AWS VPC. The branch offices in San Francisco and Miami use Site-to-Site VPN connections to connect to the AWS VPC. The company is looking for a solution to have the branch offices send and receive data with each other as well as with their corporate headquarters.

As a solutions architect, which of the following AWS services would you recommend addressing this use-case?

A) VPC Endpoint

B) Software VPN

C) VPN CloudHub

D) VPC Peering

A

C) VPN CloudHub

If you have multiple AWS Site-to-Site VPN connections, you can provide secure communication between sites using the AWS VPN CloudHub. This enables your remote sites to communicate with each other, and not just with the VPC. Sites that use AWS Direct Connect connections to the virtual private gateway can also be part of the AWS VPN CloudHub. The VPN CloudHub operates on a simple hub-and-spoke model that you can use with or without a VPC. This design is suitable if you have multiple branch offices and existing internet connections and would like to implement a convenient, potentially low-cost hub-and-spoke model for primary or backup connectivity between these remote offices.

Per the given use-case, the corporate headquarters has an AWS Direct Connect connection to the VPC and the branch offices have Site-to-Site VPN connections to the VPC. Therefore using the AWS VPN CloudHub, branch offices can send and receive data with each other as well as with their corporate headquarters.

Incorrect options:

A) VPC Endpoint - A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. AWS PrivateLink simplifies the security of data shared with cloud-based applications by eliminating the exposure of data to the public Internet. When you use VPC endpoint, the traffic between your VPC and the other AWS service does not leave the Amazon network, therefore this option cannot be used to send and receive data between the remote branch offices of the company.

D) VPC Peering - A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network. VPC peering facilitates a connection between two VPCs within the AWS network, therefore this option cannot be used to send and receive data between the remote branch offices of the company.

B) Software VPN - Amazon VPC offers you the flexibility to fully manage both sides of your Amazon VPC connectivity by creating a VPN connection between your remote network and a software VPN appliance running in your Amazon VPC network. Since Software VPN just handles connectivity between the remote network and Amazon VPC, therefore it cannot be used to send and receive data between the remote branch offices of the company.

318
Q

A company wants to migrate its on-premises databases to AWS Cloud. The CTO at the company wants a solution that can handle complex database configurations such as secondary indexes, foreign keys, and stored procedures.

As a solutions architect, which of the following AWS services should be combined to handle this use-case? (Select two)

A) AWS Schema Conversion Tool

B) Basic Schema Copy

C) AWS Glue

D) AWS Database Migration Service

E) AWS Snowball Edge

A

A) AWS Schema Conversion Tool

D) AWS Database Migration Service

AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. AWS Database Migration Service supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle or Microsoft SQL Server to Amazon Aurora.

Given the use-case where the CTO at the company wants to move away from expensive, license-based legacy commercial database solutions deployed at the on-premises data center to more efficient, open-source, and cost-effective options on AWS Cloud, this is an example of a heterogeneous database migration.

For such a scenario, the source and target databases engines are different, like in the case of Oracle to Amazon Aurora, Oracle to PostgreSQL, or Microsoft SQL Server to MySQL migrations. In this case, the schema structure, data types, and database code of source and target databases can be quite different, requiring a schema and code transformation before the data migration starts.

That makes heterogeneous migrations a two-step process. First use the AWS Schema Conversion Tool to convert the source schema and code to match that of the target database, and then use the AWS Database Migration Service to migrate data from the source database to the target database. All the required data type conversions will automatically be done by the AWS Database Migration Service during the migration. The source database can be located on your on-premises environment outside of AWS, running on an Amazon EC2 instance, or it can be an Amazon RDS database. The target can be a database in Amazon EC2 or Amazon RDS.
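
As a hedged sketch (assuming the DMS source/target endpoints and the replication instance already exist, and using hypothetical ARNs), a migration task could be created with boto3 like this after the schema has been converted with the AWS Schema Conversion Tool:

import boto3, json

dms = boto3.client("dms")

dms.create_replication_task(
    ReplicationTaskIdentifier="onprem-to-aurora",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
    MigrationType="full-load-and-cdc",          # full load plus ongoing replication
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)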

Incorrect options:

E) AWS Snowball Edge - Snowball Edge Storage Optimized is the optimal choice if you need to securely and quickly transfer dozens of terabytes to petabytes of data to AWS. It provides up to 80 TB of usable HDD storage, 40 vCPUs, 1 TB of SATA SSD storage, and up to 40 Gb network connectivity to address large scale data transfer and pre-processing use cases. As each Snowball Edge Storage Optimized device can handle 80TB of data, you can order 10 such devices to take care of the data transfer for all applications. The original Snowball devices were transitioned out of service and Snowball Edge Storage Optimized are now the primary devices used for data transfer. You may see the Snowball device on the exam, just remember that the original Snowball device had 80TB of storage space. AWS Snowball Edge cannot be used for database migrations.

C) AWS Glue - AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. AWS Glue job is meant to be used for batch ETL data processing. Therefore, it cannot be used for database migrations.

B) Basic Schema Copy - To quickly migrate a database schema to your target instance you can rely on the Basic Schema Copy feature of AWS Database Migration Service. Basic Schema Copy will automatically create tables and primary keys in the target instance if the target does not already contain tables with the same names. Basic Schema Copy is great for doing a test migration, or when you are migrating databases heterogeneously e.g. Oracle to MySQL or SQL Server to Oracle. Basic Schema Copy will not migrate secondary indexes, foreign keys or stored procedures. When you need to use a more customizable schema migration process (e.g. when you are migrating your production database and need to move your stored procedures and secondary database objects), you must use the AWS Schema Conversion Tool.

319
Q

The engineering team at an e-commerce company has been tasked with migrating to a serverless architecture. The team wants to focus on the key points of consideration when using Lambda as a backbone for this architecture.

As a Solutions Architect, which of the following options would you identify as correct for the given requirement? (Select three)

A) The bigger your deployment package, the slower your Lambda function will cold-start. Hence, AWS suggests packaging dependencies as a separate package from the actual Lambda package

B) Serverless architecture and containers complement each other but you cannot package and deploy Lambda functions as container images

C) Since Lambda functions can scale extremely quickly, it's a good idea to deploy a CloudWatch Alarm that notifies your team when function metrics such as ConcurrentExecutions or Invocations exceed the expected threshold

D) Lambda allocates compute power in proportion to the memory you allocate to your function. AWS thus recommends over-provisioning your function timeout settings for the proper performance of Lambda functions

E) If you intend to reuse code in more than one Lambda function, you should consider creating a Lambda Layer for the reusable code

F) By default, Lambda functions always operate from an AWS-owned VPC and hence have access to any public internet address or public AWS APIs. Once a Lambda function is VPC-enabled, it will need a route through a NAT gateway in a public subnet to access public resources

A

F) By default, Lambda functions always operate from an AWS-owned VPC and hence have access to any public internet address or public AWS APIs. Once a Lambda function is VPC-enabled, it will need a route through a NAT gateway in a public subnet to access public resources - Lambda functions always operate from an AWS-owned VPC. By default, your function has the full ability to make network requests to any public internet address — this includes access to any of the public AWS APIs. For example, your function can interact with AWS DynamoDB APIs to PutItem or Query for records. You should only enable your functions for VPC access when you need to interact with a private resource located in a private subnet. An RDS instance is a good example.

Once your function is VPC-enabled, all network traffic from your function is subject to the routing rules of your VPC/Subnet. If your function needs to interact with a public resource, you will need a route through a NAT gateway in a public subnet.

C) Since Lambda functions can scale extremely quickly, it's a good idea to deploy a CloudWatch Alarm that notifies your team when function metrics such as ConcurrentExecutions or Invocations exceed the expected threshold - Since Lambda functions can scale extremely quickly, this means you should have controls in place to notify you when you have a spike in concurrency. A good idea is to deploy a CloudWatch Alarm that notifies your team when function metrics such as ConcurrentExecutions or Invocations exceed your threshold. You should create an AWS Budget so you can monitor costs on a daily basis.
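
For illustration, a minimal boto3 sketch of such an alarm follows; the threshold and the SNS topic ARN are hypothetical:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Notify the team when concurrent Lambda executions spike above the expected level
cloudwatch.put_metric_alarm(
    AlarmName="lambda-concurrency-spike",
    Namespace="AWS/Lambda",
    MetricName="ConcurrentExecutions",
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=500,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)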

E) If you intend to reuse code in more than one Lambda function, you should consider creating a Lambda Layer for the reusable code - You can configure your Lambda function to pull in additional code and content in the form of layers. A layer is a ZIP archive that contains libraries, a custom runtime, or other dependencies. With layers, you can use libraries in your function without needing to include them in your deployment package. Layers let you keep your deployment package small, which makes development easier. A function can use up to 5 layers at a time.

You can create layers, or use layers published by AWS and other AWS customers. Layers support resource-based policies for granting layer usage permissions to specific AWS accounts, AWS Organizations, or all accounts. The total unzipped size of the function and all layers can’t exceed the unzipped deployment package size limit of 250 MB.
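
As an illustrative sketch (the bucket, key, and function name are hypothetical), reusable code could be published as a layer and attached to a function with boto3 like this:

import boto3

lam = boto3.client("lambda")

# Publish the shared code as a layer so it is not bundled into every deployment package
layer = lam.publish_layer_version(
    LayerName="shared-utils",
    Content={"S3Bucket": "example-artifacts", "S3Key": "shared-utils.zip"},
    CompatibleRuntimes=["python3.9"],
)

# Attach the layer to a function that reuses the code
lam.update_function_configuration(
    FunctionName="order-processor",
    Layers=[layer["LayerVersionArn"]],
)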

Incorrect options:

D) Lambda allocates compute power in proportion to the memory you allocate to your function. AWS thus recommends over-provisioning your function timeout settings for the proper performance of Lambda functions - Lambda allocates compute power in proportion to the memory you allocate to your function. This means you can over-provision memory to run your functions faster and potentially reduce your costs. However, AWS recommends that you do not over-provision your function timeout settings. Always understand your code performance and set a function timeout accordingly. Over-provisioning the function timeout often results in Lambda functions running longer than expected and incurring unexpected costs.

A) The bigger your deployment package, the slower your Lambda function will cold-start. Hence, AWS suggests packaging dependencies as a separate package from the actual Lambda package - This statement is incorrect and acts as a distractor. All the dependencies are also packaged into the single Lambda deployment package.

B) Serverless architecture and containers complement each other but you cannot package and deploy Lambda functions as container images - This statement is incorrect. You can now package and deploy Lambda functions as container images.

320
Q

A multi-national retail company has multiple business divisions, with each division having its own AWS account. The engineering team at the company would like to debug and trace data across these AWS accounts and visualize it in a centralized account.

As a Solutions Architect, which of the following solutions would you suggest for the given use-case?

A) VPC Flow Logs

B) X-Ray

C) CloudWatch Events

D) CloudTrail

A

B) X-Ray

AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components.

You can use X-Ray to collect data across AWS Accounts. The X-Ray agent can assume a role to publish data into an account different from the one in which it is running. This enables you to publish data from various components of your application into a central account.

Incorrect options:

A) VPC Flow Logs: VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data is used to analyze network traces and helps with network security. Flow log data can be published to Amazon CloudWatch Logs or Amazon S3. You cannot use VPC Flow Logs to debug and trace data across accounts.

C) CloudWatch Events: Amazon CloudWatch Events delivers a near real-time stream of system events that describe changes in Amazon Web Services (AWS) resources. These help to trigger notifications based on changes happening in AWS services. You cannot use CloudWatch Events to debug and trace data across accounts.

D) CloudTrail: With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. You can use AWS CloudTrail to answer questions such as - “Who made an API call to modify this resource?”. CloudTrail provides event history of your AWS account activity thereby enabling governance, compliance, operational auditing, and risk auditing of your AWS account. You cannot use CloudTrail to debug and trace data across accounts.

321
Q

A financial services company wants to identify any sensitive data stored on its Amazon S3 buckets. The company also wants to monitor and protect all data stored on S3 against any malicious activity.

As a solutions architect, which of the following solutions would you recommend to help address the given requirements?

A) Use Amazon Macie to monitor any malicious activity on data stored in S3. Use Amazon GuardDuty to identify any sensitive data stored on S3

B) Use Amazon GuardDuty to monitor any malicious activity on data stored in S3. Use Amazon Macie to identify any sensitive data stored on S3

C) Use Amazon GuardDuty to monitor any malicious activity on data stored in S3 as well as to identify any sensitive data stored on S3

D) Use Amazon Macie to monitor any malicious activity on data stored in S3 as well as to identify any sensitive data stored on S3

A

B) Use Amazon GuardDuty to monitor any malicious activity on data stored in S3. Use Amazon Macie to identify any sensitive data stored on S3

Amazon GuardDuty offers threat detection that enables you to continuously monitor and protect your AWS accounts, workloads, and data stored in Amazon S3. GuardDuty analyzes continuous streams of meta-data generated from your account and network activity found in AWS CloudTrail Events, Amazon VPC Flow Logs, and DNS Logs. It also uses integrated threat intelligence such as known malicious IP addresses, anomaly detection, and machine learning to identify threats more accurately.

Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data on Amazon S3. Macie automatically detects a large and growing list of sensitive data types, including personally identifiable information (PII) such as names, addresses, and credit card numbers. It also gives you constant visibility of the data security and data privacy of your data stored in Amazon S3.
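
For illustration only (the account ID and bucket name are hypothetical), the two services could be enabled with boto3 roughly as follows: GuardDuty for threat detection and Macie for sensitive-data discovery on S3:

import boto3

# Enable GuardDuty to monitor the account and S3 data for malicious activity
guardduty = boto3.client("guardduty")
detector_id = guardduty.create_detector(Enable=True)["DetectorId"]

# Enable Macie and run a one-time sensitive-data discovery job on a bucket
macie = boto3.client("macie2")
macie.enable_macie()
macie.create_classification_job(
    jobType="ONE_TIME",
    name="find-pii-in-finance-bucket",
    s3JobDefinition={
        "bucketDefinitions": [
            {"accountId": "123456789012", "buckets": ["example-financial-data"]}
        ]
    },
)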

322
Q

An organization wants to delegate access to a set of users from the development environment so that they can access some resources in the production environment which is managed under another AWS account.

As a solutions architect, which of the following steps would you recommend?

A) Both IAM roles and IAM users can be used interchangeably for cross-account access

B) It is not possible to access cross-account resources

C) Create a new IAM role with the required permissions to access the resources in the production environment. The users can then assume this IAM role while accessing the resources from the production environment

D) Create new IAM user credentials for the production environment and share these credentials with the set of users from the development environment

A

C) Create a new IAM role with the required permissions to access the resources in the production environment. The users can then assume this IAM role while accessing the resources from the production environment

IAM roles allow you to delegate access to users or services that normally don’t have access to your organization’s AWS resources. IAM users or AWS services can assume a role to obtain temporary security credentials that can be used to make AWS API calls. Consequently, you don’t have to share long-term credentials for access to a resource. Using IAM roles, it is possible to access cross-account resources.
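
A minimal sketch of this pattern with boto3 is shown below; the production-account role ARN is hypothetical, and the role's trust policy is assumed to allow the development-account users to assume it:

import boto3

sts = boto3.client("sts")

# A development-account user assumes the role defined in the production account
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/ProdResourceAccess",
    RoleSessionName="dev-user-session",
)["Credentials"]

# The temporary credentials are then used to call production-account resources
s3_prod = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print([b["Name"] for b in s3_prod.list_buckets()["Buckets"]])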

Incorrect options:

D) Create new IAM user credentials for the production environment and share these credentials with the set of users from the development environment - There is no need to create new IAM user credentials for the production environment, as you can use IAM roles to access cross-account resources.

B) It is not possible to access cross-account resources - You can use IAM roles to access cross-account resources.

A) Both IAM roles and IAM users can be used interchangeably for cross-account access - IAM roles and IAM users are separate IAM entities and should not be mixed. Only IAM roles can be used to access cross-account resources.

323
Q

A media company wants a low-latency way to distribute live sports results which are delivered via a proprietary application using UDP protocol.

As a solutions architect, which of the following solutions would you recommend such that it offers the BEST performance for this use case?

A) Use Global Accelerator to provide a low latency way to distribute live sports results

B) Use CloudFront to provide a low latency way to distribute live sports results

C) Use Elastic Load Balancer to provide a low latency way to distribute live sports results

D) Use Auto Scaling group to provide a low latency way to distribute live sports results

A

A) Use Global Accelerator to provide a low latency way to distribute live sports results

AWS Global Accelerator is a networking service that helps you improve the availability and performance of the applications that you offer to your global users. AWS Global Accelerator is easy to set up, configure, and manage. It provides static IP addresses that provide a fixed entry point to your applications and eliminate the complexity of managing specific IP addresses for different AWS Regions and Availability Zones. AWS Global Accelerator always routes user traffic to the optimal endpoint based on performance, reacting instantly to changes in application health, your user’s location, and policies that you configure. Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP. Therefore, this option is correct.
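
As an illustrative sketch only (the application port and the Network Load Balancer ARN are hypothetical), a UDP accelerator could be set up with boto3 along these lines; note that the Global Accelerator API is called in the us-west-2 Region:

import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="sports-results", IpAddressType="IPV4", Enabled=True)
accelerator_arn = accelerator["Accelerator"]["AcceleratorArn"]

# UDP listener for the proprietary application protocol
listener = ga.create_listener(
    AcceleratorArn=accelerator_arn,
    Protocol="UDP",
    PortRanges=[{"FromPort": 5000, "ToPort": 5000}],
)

# Endpoint group pointing at the Region hosting the application endpoints
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/sports-nlb/0123456789abcdef"
    }],
)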

Incorrect options:

B) Use CloudFront to provide a low latency way to distribute live sports results - Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment.

CloudFront points of presence (POPs) (edge locations) make sure that popular content can be served quickly to your viewers. CloudFront also has regional edge caches that bring more of your content closer to your viewers, even when the content is not popular enough to stay at a POP, to help improve performance for that content. Regional edge caches help with all types of content, particularly content that tends to become less popular over time. Examples include user-generated content, such as video, photos, or artwork; e-commerce assets such as product photos and videos; and news and event-related content that might suddenly find new popularity. CloudFront supports HTTP/RTMP protocol based requests, therefore this option is incorrect.

C) Use Elastic Load Balancer to provide a low latency way to distribute live sports results - Elastic Load Balancer automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It can handle the varying load of your application traffic in a single Availability Zone or across multiple Availability Zones. Elastic Load Balancer cannot help with decreasing latency of incoming traffic from the source.

D) Use Auto Scaling group to provide a low latency way to distribute live sports results - Amazon EC2 Auto Scaling helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application. You create collections of EC2 instances, called Auto Scaling groups. You can specify the minimum number of instances in each Auto Scaling group, and Amazon EC2 Auto Scaling ensures that your group never goes below this size. Auto Scaling group cannot help with decreasing latency of incoming traffic from the source.

Exam Alert:

Please note the differences between the capabilities of Global Accelerator and CloudFront -

AWS Global Accelerator and Amazon CloudFront are separate services that use the AWS global network and its edge locations around the world. CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (such as API acceleration and dynamic site delivery). Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions.

Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover. Both services integrate with AWS Shield for DDoS protection.

324
Q

A big data analytics company is using Kinesis Data Streams (KDS) to process IoT data from the field devices of an agricultural sciences company. Multiple consumer applications are using the incoming data streams and the engineers have noticed a performance lag for the data delivery speed between producers and consumers of the data streams.

As a solutions architect, which of the following would you recommend for improving the performance for the given use-case?

A) Swap out Kinesis Data Streams with SQS FIFO queues

B) Use Enhanced Fanout feature of Kinesis Data Streams

C) Swap out Kinesis Data Streams with SQS Standard queues

D) Swap out Kinesis Data Streams with Kinesis Data Firehose

A

B) Use Enhanced Fanout feature of Kinesis Data Streams

Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service. KDS can continuously capture gigabytes of data per second from hundreds of thousands of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events.

By default, the 2MB/second/shard output is shared between all of the applications consuming data from the stream. You should use enhanced fan-out if you have multiple consumers retrieving data from a stream in parallel. With enhanced fan-out, developers can register stream consumers that each receive their own 2MB/second pipe of read throughput per shard, and this throughput automatically scales with the number of shards in a stream.
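
A minimal boto3 sketch of registering an enhanced fan-out consumer follows; the stream ARN and consumer name are hypothetical, and the consumer would then read via SubscribeToShard (typically through KCL 2.x):

import boto3

kinesis = boto3.client("kinesis")

# Register a dedicated consumer; each registered consumer gets its own
# 2 MB/second of read throughput per shard, pushed over HTTP/2.
consumer = kinesis.register_stream_consumer(
    StreamARN="arn:aws:kinesis:us-east-1:123456789012:stream/iot-field-data",
    ConsumerName="analytics-app-1",
)["Consumer"]

print(consumer["ConsumerARN"], consumer["ConsumerStatus"])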

Incorrect options:

D) Swap out Kinesis Data Streams with Kinesis Data Firehose - Amazon Kinesis Data Firehose is the easiest way to reliably load streaming data into data lakes, data stores, and analytics tools. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration. It can also batch, compress, transform, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security. Kinesis Data Firehose can only write to S3, Redshift, Elasticsearch or Splunk. You can’t have applications consuming data streams from Kinesis Data Firehose, that’s the job of Kinesis Data Streams. Therefore this option is not correct.

C) Swap out Kinesis Data Streams with SQS Standard queues

A) Swap out Kinesis Data Streams with SQS FIFO queues

Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS offers two types of message queues. Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery. SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent. As multiple applications are consuming the same stream concurrently, both SQS Standard and SQS FIFO are not the right fit for the given use-case.

Exam Alert:

Please understand the differences between the capabilities of Kinesis Data Streams vs SQS, as you may be asked scenario-based questions on this topic in the exam.

325
Q

The development team at an e-commerce startup has set up multiple microservices running on EC2 instances under an Elastic Load Balancer. The team wants to route traffic to multiple back-end services based on the content of the request.

Which of the following types of load balancers would allow routing based on the content of the request?

A) Application Load Balancer

B) Both Application Load Balancer and Network Load Balancer

C) Network Load Balancer

D) Classic Load Balancer

A

A) Application Load Balancer

An Application Load Balancer functions at the application layer, the seventh layer of the Open Systems Interconnection (OSI) model. After the load balancer receives a request, it evaluates the listener rules in priority order to determine which rule to apply and then selects a target from the target group for the rule action. You can configure listener rules to route requests to different target groups based on the content of the application traffic. Each target group can be an independent microservice, therefore this option is correct.

Please review the various rule condition types for Application Load Balancer listeners: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html
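
For illustration (the listener and target group ARNs are hypothetical), a content-based routing rule could be added with boto3 as follows:

import boto3

elbv2 = boto3.client("elbv2")

# Route requests whose path starts with /orders/ to the orders microservice target group
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/shop/0123456789abcdef/fedcba9876543210",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/orders/*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/orders-svc/0123456789abcdef",
    }],
)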

Incorrect options:

C) Network Load Balancer - Network Load Balancer operates at the connection level (Layer 4), routing connections to targets - Amazon EC2 instances, microservices, and containers – within Amazon Virtual Private Cloud (Amazon VPC) based on IP protocol data.

D) Classic Load Balancer - Classic Load Balancer provides basic load balancing across multiple Amazon EC2 instances and operates at both the request level and connection level. Classic Load Balancer is intended for applications that were built within the EC2-Classic network.

Network Load Balancer or Classic Load Balancer cannot be used to route traffic based on the content of the request. So both these options are incorrect.

B) Both Application Load Balancer and Network Load Balancer - Network Load Balancer cannot be used to route traffic based on the content of the request. So this option is also incorrect.

326
Q

A social photo-sharing web application is hosted on EC2 instances behind an Elastic Load Balancer. The app gives the users the ability to upload their photos and also shows a leaderboard on the homepage of the app. The uploaded photos are stored in S3 and the leaderboard data is maintained in DynamoDB. The EC2 instances need to access both S3 and DynamoDB for these features.

As a solutions architect, which of the following solutions would you recommend as the MOST secure option?

A) Save the AWS credentials (access key Id and secret access token) in a configuration file within the application code on the EC2 instances. EC2 instances can use these credentials to access S3 and DynamoDB

B) Configure AWS CLI on the EC2 instances using a valid IAM user’s credentials. The application code can then invoke shell scripts to access S3 and DynamoDB via AWS CLI

C) Attach the appropriate IAM role to the EC2 instance profile so that the instance can access S3 and DynamoDB

D) Encrypt the AWS credentials via a custom encryption library and save it in a secret directory on the EC2 instances. The application code can then safely decrypt the AWS credentials to make the API calls to S3 and DynamoDB

A

C) Attach the appropriate IAM role to the EC2 instance profile so that the instance can access S3 and DynamoDB

Applications that run on an EC2 instance must include AWS credentials in their AWS API requests. You could have your developers store AWS credentials directly within the EC2 instance and allow applications in that instance to use those credentials. But developers would then have to manage the credentials and ensure that they securely pass the credentials to each instance and update each EC2 instance when it’s time to rotate the credentials.

Instead, you should use an IAM role to manage temporary credentials for applications that run on an EC2 instance. When you use a role, you don’t have to distribute long-term credentials (such as a username and password or access keys) to an EC2 instance. The role supplies temporary permissions that applications can use when they make calls to other AWS resources. When you launch an EC2 instance, you specify an IAM role to associate with the instance. Applications that run on the instance can then use the role-supplied temporary credentials to sign API requests. Therefore, this option is correct.
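
A hedged sketch of this setup with boto3 is below; the role name (assumed to already exist with S3 and DynamoDB permissions), profile name, instance ID, and table name are hypothetical:

import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

# Wrap the role in an instance profile and attach it to the running instance
iam.create_instance_profile(InstanceProfileName="PhotoAppProfile")
iam.add_role_to_instance_profile(InstanceProfileName="PhotoAppProfile", RoleName="PhotoAppRole")
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "PhotoAppProfile"},
    InstanceId="i-0123456789abcdef0",
)

# On the instance itself, no credentials are configured anywhere:
# the SDK automatically picks up temporary, auto-rotated credentials
# from the instance metadata service.
s3 = boto3.client("s3")
leaderboard = boto3.resource("dynamodb").Table("leaderboard")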

Incorrect options:

A) Save the AWS credentials (access key Id and secret access token) in a configuration file within the application code on the EC2 instances. EC2 instances can use these credentials to access S3 and DynamoDB

B) Configure AWS CLI on the EC2 instances using a valid IAM user’s credentials. The application code can then invoke shell scripts to access S3 and DynamoDB via AWS CLI

D) Encrypt the AWS credentials via a custom encryption library and save it in a secret directory on the EC2 instances. The application code can then safely decrypt the AWS credentials to make the API calls to S3 and DynamoDB

Keeping the AWS credentials (encrypted or plain text) on the EC2 instance is a bad security practice, therefore these three options using the AWS credentials are incorrect.

327
Q

A Big Data analytics company wants to set up an AWS cloud architecture that throttles requests in case of sudden traffic spikes. The company is looking for AWS services that can be used for buffering or throttling to handle such traffic variations.

Which of the following services can be used to support this requirement?

A) Amazon Gateway Endpoints, Amazon SQS and Amazon Kinesis

B) Amazon SQS, Amazon SNS and AWS Lambda

C) Elastic Load Balancer, Amazon SQS, AWS Lambda

D) Amazon API Gateway, Amazon SQS and Amazon Kinesis

A

Throttling is the process of limiting the number of requests an authorized program can submit to a given operation in a given amount of time.

D) Amazon API Gateway, Amazon SQS and Amazon Kinesis - To prevent your API from being overwhelmed by too many requests, Amazon API Gateway throttles requests to your API using the token bucket algorithm, where a token counts for a request. Specifically, API Gateway sets a limit on a steady-state rate and a burst of request submissions against all APIs in your account. In the token bucket algorithm, the burst is the maximum bucket size.

Amazon SQS - Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. Amazon SQS offers buffer capabilities to smooth out temporary volume spikes without losing messages or increasing latency.

Amazon Kinesis - Amazon Kinesis is a fully managed, scalable service that can ingest, buffer, and process streaming data in real-time.
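
As an illustration only (the REST API ID and the limits are hypothetical), stage-level throttling in API Gateway could be configured with boto3 like this:

import boto3

apigw = boto3.client("apigateway")

# Apply a steady-state rate limit and a token-bucket burst limit to all methods of a stage
apigw.update_stage(
    restApiId="a1b2c3d4e5",
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/*/*/throttling/rateLimit", "value": "100"},
        {"op": "replace", "path": "/*/*/throttling/burstLimit", "value": "200"},
    ],
)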

Incorrect options:

B) Amazon SQS, Amazon SNS and AWS Lambda - Amazon SQS has the ability to buffer its messages. Amazon Simple Notification Service (SNS) cannot buffer messages and is generally used with SQS to provide the buffering facility. When requests come in faster than your Lambda function can scale, or when your function is at maximum concurrency, additional requests fail as Lambda throttles those requests with a 429 error code. So, this combination of services is incorrect.

A) Amazon Gateway Endpoints, Amazon SQS and Amazon Kinesis - A Gateway Endpoint is a gateway that you specify as a target for a route in your route table for traffic destined to a supported AWS service. This cannot help in throttling or buffering of requests. Amazon SQS and Kinesis can buffer incoming data. Since Gateway Endpoint is an incorrect service for throttling or buffering, this option is incorrect.

C) Elastic Load Balancer, Amazon SQS, AWS Lambda - Elastic Load Balancer cannot throttle requests. Amazon SQS can be used to buffer messages. When requests come in faster than your Lambda function can scale, or when your function is at maximum concurrency, additional requests fail as Lambda throttles those requests with a 429 error code. So, this combination of services is incorrect.

328
Q

A Big Data processing company has created a distributed data processing framework that performs best if the network performance between the processing machines is high. The application has to be deployed on AWS, and the company is only looking at performance as the key measure.

As a Solutions Architect, which deployment do you recommend?

A) Optimize the EC2 kernel using EC2 User Data

B) Use a Spread placement group

C) Use a Cluster placement group

D) Use Spot Instances

A

When you launch a new EC2 instance, the EC2 service attempts to place the instance in such a way that all of your instances are spread out across underlying hardware to minimize correlated failures. You can use placement groups to influence the placement of a group of interdependent instances to meet the needs of your workload. Depending on the type of workload, you can create a placement group using one of the following placement strategies:

Cluster – packs instances close together inside an Availability Zone. This strategy enables workloads to achieve the low-latency network performance necessary for tightly-coupled node-to-node communication that is typical of HPC applications.

Partition – spreads your instances across logical partitions such that groups of instances in one partition do not share the underlying hardware with groups of instances in different partitions. This strategy is typically used by large distributed and replicated workloads, such as Hadoop, Cassandra, and Kafka.

Spread – strictly places a small group of instances across distinct underlying hardware to reduce correlated failures.

There is no charge for creating a placement group.

C) Use a Cluster placement group - A cluster placement group is a logical grouping of instances within a single Availability Zone. A cluster placement group can span peered VPCs in the same Region. Instances in the same cluster placement group enjoy a higher per-flow throughput limit of up to 10 Gbps for TCP/IP traffic and are placed in the same high-bisection bandwidth segment of the network.

Cluster placement groups are recommended for applications that benefit from low network latency, high network throughput, or both. They are also recommended when the majority of the network traffic is between the instances in the group. To provide the lowest latency and the highest packet-per-second network performance for your placement group, choose an instance type that supports enhanced networking.
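
For illustration (the AMI ID and instance count are hypothetical), a cluster placement group could be created and used with boto3 as follows; an instance type with enhanced networking is chosen for the best throughput:

import boto3

ec2 = boto3.client("ec2")

ec2.create_placement_group(GroupName="bigdata-cluster-pg", Strategy="cluster")

# Launch the tightly-coupled processing nodes into the placement group
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5n.9xlarge",
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "bigdata-cluster-pg"},
)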

Incorrect options:

D) Use Spot Instances - A Spot Instance is an unused EC2 instance that is available for less than the On-Demand price. Because Spot Instances enable you to request unused EC2 instances at steep discounts, you can lower your Amazon EC2 costs significantly. Spot Instances are a cost-effective choice if you can be flexible about when your applications run and if your applications can be interrupted. Since performance is the key criteria, this is not the right choice.

A) Optimize the EC2 kernel using EC2 User Data - Optimizing the EC2 kernel won’t help with network performance, as network performance is mainly determined by the EC2 instance type. Therefore, this option is incorrect.

B) Use a Spread placement group - A spread placement group is a group of instances that are each placed on distinct racks, with each rack having its own network and power source. The instances are placed across distinct underlying hardware to reduce correlated failures. A spread placement group can span multiple Availability Zones in the same Region. You can have a maximum of seven running instances per Availability Zone per group.

329
Q

An IT company has built a custom data warehousing solution for a retail organization by using Amazon Redshift. As part of the cost optimizations, the company wants to move any historical data (any data older than a year) into S3, as the daily analytical reports consume data for just the last year. However, the analysts want to retain the ability to cross-reference this historical data along with the daily reports.

The company wants to develop a solution with the LEAST amount of effort and MINIMUM cost. As a solutions architect, which option would you recommend to facilitate this use-case?

A) Use Glue ETL job to load the S3 based historical data into Redshift. Once the ad-hoc queries are run for the historic data, it can be removed from Redshift

B) Use Redshift Spectrum to create Redshift cluster tables pointing to the underlying historical data in S3. The analytics team can then query this historical data to cross-reference with the daily reports from Redshift

C) Use the Redshift COPY command to load the S3 based historical data into Redshift. Once the ad-hoc queries are run for the historic data, it can be removed from Redshift

D) Setup access to the historical data via Athena. The analytics team can run historical data queries on Athena and continue the daily reporting on Redshift. In case the reports need to be cross-referenced, the analytics team need to export these in flat files and then do further analysis

A

B) Use Redshift Spectrum to create Redshift cluster tables pointing to the underlying historical data in S3. The analytics team can then query this historical data to cross-reference with the daily reports from Redshift

Amazon Redshift is a fully-managed petabyte-scale cloud-based data warehouse product designed for large scale data set storage and analysis.

Using Amazon Redshift Spectrum, you can efficiently query and retrieve structured and semistructured data from files in Amazon S3 without having to load the data into Amazon Redshift tables.

Amazon Redshift Spectrum resides on dedicated Amazon Redshift servers that are independent of your cluster. Redshift Spectrum pushes many compute-intensive tasks, such as predicate filtering and aggregation, down to the Redshift Spectrum layer. Thus, Redshift Spectrum queries use much less of your cluster’s processing capacity than other queries.
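
As an illustrative sketch only (assuming an external schema, e.g. spectrum_schema, has already been created over the S3 data via the Glue Data Catalog; the cluster, database, user, and table names are hypothetical), a cross-referencing query could be submitted through the Redshift Data API:

import boto3

rsd = boto3.client("redshift-data")

# Join current Redshift tables with the external (S3-backed) historical table
sql = """
SELECT d.order_id, d.amount, h.amount AS last_year_amount
FROM   daily_reports d
JOIN   spectrum_schema.historical_orders h
ON     d.order_id = h.order_id;
"""

rsd.execute_statement(
    ClusterIdentifier="retail-dwh",
    Database="analytics",
    DbUser="analyst",
    Sql=sql,
)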

Incorrect options:

D) Setup access to the historical data via Athena. The analytics team can run historical data queries on Athena and continue the daily reporting on Redshift. In case the reports need to be cross-referenced, the analytics team need to export these in flat files and then do further analysis - Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to set up or manage, and customers pay only for the queries they run. You can use Athena to process logs, perform ad-hoc analysis, and run interactive queries. Providing access to historical data via Athena would mean that historical data reconciliation would become difficult as the daily report would still be produced via Redshift. Such a setup is cumbersome to maintain on a day to day basis. Hence the option to use Athena is ruled out.

C) Use the Redshift COPY command to load the S3 based historical data into Redshift. Once the ad-hoc queries are run for the historic data, it can be removed from Redshift

A) Use Glue ETL job to load the S3 based historical data into Redshift. Once the ad-hoc queries are run for the historic data, it can be removed from Redshift

Loading historical data into Redshift via COPY command or Glue ETL job would be costly for a one-time ad-hoc process. The same result can be achieved more cost-efficiently by using Redshift Spectrum. Therefore, both these options to load historical data into Redshift are also incorrect for the given use-case.

330
Q

A media company wants to get out of the business of owning and maintaining its own IT infrastructure. As part of this digital transformation, the media company wants to archive about 5PB of data in its on-premises data center to durable long term storage.

As a solutions architect, what is your recommendation to migrate this data in the MOST cost-optimal way?

A) Transfer the on-premises data into multiple Snowball Edge Storage Optimized devices. Copy the Snowball Edge data into Amazon S3 and create a lifecycle policy to transition the data into AWS Glacier

B) Transfer the on-premises data into multiple Snowball Edge Storage Optimized devices. Copy the Snowball Edge data into AWS Glacier

C) Setup Site-to-Site VPN connection between the on-premises data center and AWS Cloud. Use this connection to transfer the data into AWS Glacier

D) Setup AWS direct connect between the on-premises data center and AWS Cloud. Use this connection to transfer the data into AWS Glacier

A

A) Transfer the on-premises data into multiple Snowball Edge Storage Optimized devices. Copy the Snowball Edge data into Amazon S3 and create a lifecycle policy to transition the data into AWS Glacier

Snowball Edge Storage Optimized is the optimal choice if you need to securely and quickly transfer dozens of terabytes to petabytes of data to AWS. It provides up to 80 TB of usable HDD storage, 40 vCPUs, 1 TB of SATA SSD storage, and up to 40 Gb network connectivity to address large scale data transfer and pre-processing use cases. The data stored on the Snowball Edge device can be copied into the S3 bucket and later transitioned into AWS Glacier via a lifecycle policy. You can’t directly copy data from Snowball Edge devices into AWS Glacier.
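
For illustration (the bucket name is hypothetical), a lifecycle rule that transitions the imported objects to Glacier could be set with boto3 like this:

import boto3

s3 = boto3.client("s3")

# Transition all objects copied off the Snowball Edge devices to Glacier
s3.put_bucket_lifecycle_configuration(
    Bucket="example-media-archive",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-to-glacier",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 0, "StorageClass": "GLACIER"}],
        }]
    },
)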

Incorrect options:

B) Transfer the on-premises data into multiple Snowball Edge Storage Optimized devices. Copy the Snowball Edge data into AWS Glacier - As mentioned earlier, you can’t directly copy data from Snowball Edge devices into AWS Glacier. Hence, this option is incorrect.

D) Setup AWS direct connect between the on-premises data center and AWS Cloud. Use this connection to transfer the data into AWS Glacier - AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations. Using industry-standard 802.1q VLANs, this dedicated connection can be partitioned into multiple virtual interfaces. Direct Connect involves significant monetary investment and takes more than a month to set up; therefore, it’s not the correct fit for this use-case, where just a one-time data transfer has to be done.

C) Setup Site-to-Site VPN connection between the on-premises data center and AWS Cloud. Use this connection to transfer the data into AWS Glacier - AWS Site-to-Site VPN enables you to securely connect your on-premises network or branch office site to your Amazon Virtual Private Cloud (Amazon VPC). VPN Connections are a good solution if you have an immediate need, and have low to modest bandwidth requirements. Because of the high data volume for the given use-case, Site-to-Site VPN is not the correct choice.

331
Q

Your company runs a website for evaluating coding skills. As a Solutions Architect, you’ve designed the architecture of the website to follow a serverless pattern on the AWS Cloud using API Gateway and AWS Lambda. The backend is using an RDS PostgreSQL database. Caching is implemented using a Redis ElastiCache cluster. You would like to increase the security of your authentication to Redis from the Lambda function, leveraging a username and password combination.

As a solutions architect, which of the following options would you recommend?

A) Create an inbound rule to restrict access to Redis Auth only from the Lambda security group

B) Use Redis Auth

C) Use IAM Auth and attach an IAM role to Lambda

D) Enable KMS Encryption

A

B) Use Redis Auth - Amazon ElastiCache for Redis is a blazing fast in-memory data store that provides sub-millisecond latency to power internet-scale real-time applications.

Amazon ElastiCache for Redis is a great choice for real-time transactional and analytical processing use cases such as caching, chat/messaging, gaming leaderboards, geospatial, machine learning, media streaming, queues, real-time analytics, and session store.

ElastiCache for Redis supports replication, high availability, and cluster sharding right out of the box. IAM Auth is not supported by ElastiCache.

Redis authentication tokens enable Redis to require a token (password) before allowing clients to execute commands, thereby improving data security.
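
For illustration, an AUTH token can be set when the replication group is created; a minimal boto3 sketch follows (all identifiers and the token value are hypothetical, and AUTH requires in-transit encryption to be enabled).

import boto3

elasticache = boto3.client("elasticache")

# Create a Redis replication group that requires an AUTH token (password)
# before clients such as the Lambda function can execute commands.
elasticache.create_replication_group(
    ReplicationGroupId="coding-site-cache",
    ReplicationGroupDescription="Redis cache with AUTH enabled",
    Engine="redis",
    CacheNodeType="cache.t3.micro",
    NumCacheClusters=2,
    TransitEncryptionEnabled=True,           # required for AUTH
    AuthToken="a-long-random-secret-token",  # hypothetical token value
)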

Incorrect options:

C) Use IAM Auth and attach an IAM role to Lambda - As discussed above, IAM Auth is not supported by ElastiCache.

D) Enable KMS Encryption - AWS Key Management Service (KMS) makes it easy for you to create and manage cryptographic keys and control their use across a wide range of AWS services and in your applications. AWS KMS is a secure and resilient service that uses hardware security modules that have been validated under FIPS 140-2. KMS does not support username and password for enabling encryption.

A) Create an inbound rule to restrict access to Redis Auth only from the Lambda security group - A security group acts as a virtual firewall that controls the traffic for one or more instances. You can add rules to each security group that allow traffic to or from its associated instances, and you can modify the rules at any time; the new rules are automatically applied to all instances associated with the security group. However, a security group only restricts network access; it cannot provide the username and password based authentication asked for in the use-case, so this option is incorrect.

332
Q

An IT company provides S3 bucket access to specific users within the same account for completing project-specific work. With changing business requirements, cross-account S3 access requests are also growing every month. The company is looking for a solution that can offer user-level as well as account-level access permissions for the data stored in S3 buckets.

As a Solutions Architect, which of the following would you suggest as the MOST optimized way of controlling access for this use-case?

A) Use Amazon S3 Bucket Policies

B) Use Access Control Lists (ACLs)

C) Use Security Groups

D) Use Identity and Access Management (IAM) Policies

A

A) Use Amazon S3 Bucket Policies

Bucket policies in Amazon S3 can be used to add or deny permissions across some or all of the objects within a single bucket. Policies can be attached to users, groups, or Amazon S3 buckets, enabling centralized management of permissions. With bucket policies, you can grant users within your AWS Account or other AWS Accounts access to your Amazon S3 resources.

You can further restrict access to specific resources based on certain conditions. For example, you can restrict access based on request time (Date Condition), whether the request was sent using SSL (Boolean Conditions), a requester’s IP address (IP Address Condition), or based on the requester’s client application (String Conditions). To identify these conditions, you use policy keys.
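
For illustration, a bucket policy granting a user in another AWS account read access, restricted by an IP Address condition, could look like the boto3 sketch below. The bucket name, account ID, user name, and CIDR range are all hypothetical.

import json
import boto3

s3 = boto3.client("s3")

# Allow a user from another account to read objects, but only when the
# request comes from the given IP range (an IP Address condition).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CrossAccountReadFromOfficeRange",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:user/project-user"},
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::project-data-bucket/*",
            "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
        }
    ],
}

s3.put_bucket_policy(Bucket="project-data-bucket", Policy=json.dumps(policy))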

Incorrect options:

D) Use Identity and Access Management (IAM) policies - AWS IAM enables organizations with many employees to create and manage multiple users under a single AWS account. IAM policies are attached to the users, enabling centralized control of permissions for users under your AWS Account to access buckets or objects. With IAM policies, you can only grant users within your own AWS account permission to access your Amazon S3 resources. So, this is not the right choice for the current requirement.

B) Use Access Control Lists (ACLs) - Within Amazon S3, you can use ACLs to give read or write access on buckets or objects to groups of users. With ACLs, you can only grant other AWS accounts (not specific users) access to your Amazon S3 resources. So, this is not the right choice for the current requirement.

C) Use Security Groups - A security group acts as a virtual firewall for EC2 instances to control incoming and outgoing traffic. S3 does not support Security Groups, this option just acts as a distractor.

333
Q

A retail company uses Amazon EC2 instances, API Gateway, Amazon RDS, Elastic Load Balancer and CloudFront services. To improve the security of these services, the Risk Advisory group has suggested a feasibility check for using the Amazon GuardDuty service.

Which of the following would you identify as data sources supported by GuardDuty?

A) VPC Flow Logs, API Gateway logs, S3 access logs

B) ELB logs, DNS logs, CloudTrail events

C) CloudFront logs, API Gateway logs, CloudTrail events

D) VPC Flow Logs, DNS logs, CloudTrail events

A

D) VPC Flow Logs, DNS logs, CloudTrail events - Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts, workloads, and data stored in Amazon S3. With the cloud, the collection and aggregation of account and network activities is simplified, but it can be time-consuming for security teams to continuously analyze event log data for potential threats. With GuardDuty, you now have an intelligent and cost-effective option for continuous threat detection in AWS. The service uses machine learning, anomaly detection, and integrated threat intelligence to identify and prioritize potential threats.

GuardDuty analyzes tens of billions of events across multiple AWS data sources, such as AWS CloudTrail events, Amazon VPC Flow Logs, and DNS logs.

With a few clicks in the AWS Management Console, GuardDuty can be enabled with no software or hardware to deploy or maintain. By integrating with Amazon CloudWatch Events, GuardDuty alerts are actionable, easy to aggregate across multiple accounts, and straightforward to push into existing event management and workflow systems.
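
A minimal boto3 sketch of enabling GuardDuty for an account (the publishing frequency shown is just one of the allowed values; nothing else needs to be configured for the service to start analyzing CloudTrail events, VPC Flow Logs, and DNS logs):

import boto3

guardduty = boto3.client("guardduty")

# Enable GuardDuty; the service then analyzes its supported data sources
# automatically, with no agents to install or log sources to configure.
response = guardduty.create_detector(
    Enable=True,
    FindingPublishingFrequency="FIFTEEN_MINUTES",
)
print("Detector ID:", response["DetectorId"])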

334
Q

For security purposes, a development team has decided to deploy the EC2 instances in a private subnet. The team plans to use VPC endpoints so that the instances can access some AWS services securely. The members of the team would like to know about the two AWS services that support Gateway Endpoints.

As a solutions architect, which of the following services would you suggest for this requirement? (Select two)

A) DynamoDB

B) Amazon S3

C) Amazon Simple Notification Service (SNS)

D) Amazon Kinesis

E) Amazon Simple Queue Service (SQS)

A

B) Amazon S3

A) DynamoDB

A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.

Endpoints are virtual devices. They are horizontally scaled, redundant, and highly available VPC components. They allow communication between instances in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic.

There are two types of VPC endpoints: Interface Endpoints and Gateway Endpoints. An Interface Endpoint is an Elastic Network Interface with a private IP address from the IP address range of your subnet that serves as an entry point for traffic destined to a supported service.

A Gateway Endpoint is a gateway that you specify as a target for a route in your route table for traffic destined to a supported AWS service. The following AWS services are supported: Amazon S3 and DynamoDB.

You can use two types of VPC endpoints to access Amazon S3: gateway endpoints and interface endpoints. A gateway endpoint is a gateway that you specify in your route table to access Amazon S3 from your VPC over the AWS network. Interface endpoints extend the functionality of gateway endpoints by using private IP addresses to route requests to Amazon S3 from within your VPC, on premises, or from a VPC in another AWS Region using VPC peering or AWS Transit Gateway.

You must remember that these two services use a VPC gateway endpoint. The rest of the AWS services use VPC interface endpoints.
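
For illustration, a Gateway Endpoint for S3 can be created as in the boto3 sketch below (the VPC ID, route table ID, and the Region in the service name are hypothetical; DynamoDB works the same way with the service name com.amazonaws.us-east-1.dynamodb).

import boto3

ec2 = boto3.client("ec2")

# Create a Gateway Endpoint for S3 and add the route for it to a route table
# used by the private subnet.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)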

335
Q

A weather forecast agency collects key weather metrics across multiple cities in the US and sends this data in the form of key-value pairs to AWS Cloud at a one-minute frequency.

As a solutions architect, which of the following AWS services would you use to build a solution for processing and then reliably storing this data with high availability? (Select two)

A) RDS

B) DynamoDB

C) Redshift

D) ElastiCache

E) Lambda

A

E) Lambda - With AWS Lambda, you can run code without provisioning or managing servers. You pay only for the compute time that you consume—there’s no charge when your code isn’t running. You can run code for virtually any type of application or backend service—all with zero administration.

B) DynamoDB - Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It’s a fully managed, multi-region, multi-master, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. DynamoDB is a NoSQL database and it’s best suited to store data in key-value pairs.

AWS Lambda can be combined with DynamoDB to process and capture the key-value data from the weather data sources described in the use-case. So both these options are correct.
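
A minimal sketch of how the two services could fit together: a Lambda function that persists incoming key-value metrics to DynamoDB. The table name, key names, and event shape below are hypothetical.

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("WeatherMetrics")    # hypothetical table name

def lambda_handler(event, context):
    # Each record is assumed to carry a city, a timestamp, and a dict of
    # key-value weather metrics; every record is written to DynamoDB.
    records = event.get("metrics", [])
    for record in records:
        table.put_item(
            Item={
                "city": record["city"],            # partition key (assumed)
                "timestamp": record["timestamp"],  # sort key (assumed)
                **record["values"],
            }
        )
    return {"stored": len(records)}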

Incorrect options:

C) Redshift - Amazon Redshift is a fully managed, petabyte-scale, cloud-based data warehouse product designed for large-scale data set storage and analysis. You cannot use Redshift to capture the key-value data from the sources described in the use-case, so this option is not correct.

D) ElastiCache - Amazon ElastiCache allows you to seamlessly set up, run, and scale popular open-source compatible in-memory data stores in the cloud. Build data-intensive apps or boost the performance of your existing databases by retrieving data from high-throughput and low-latency in-memory data stores. Amazon ElastiCache is a popular choice for real-time use cases like caching, session stores, gaming, geospatial services, real-time analytics, and queuing. ElastiCache is used as a caching layer in front of relational databases; it is not a good fit for reliably storing the key-value data in this use-case, so this option is not correct.

A) RDS - Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups. Relational databases are not a good fit to store data in key-value pairs, so this option is not correct.


336
Q

An e-commerce company has copied 1 PB of data from its on-premises data center to an Amazon S3 bucket in the us-west-1 Region using an AWS Direct Connect link. The company now wants to copy the data to another S3 bucket in the us-east-1 Region. The on-premises data center does not allow the use of AWS Snowball.

As a Solutions Architect, which of the following would you recommend to accomplish this?

A) Set up cross-Region replication to copy objects across S3 buckets in different Regions

B) Copy data from the source S3 bucket to a target S3 bucket using the S3 console

C) Copy data from the source bucket to the destination bucket using the aws s3 sync command

D) Use Snowball Edge device to copy the data from one Region to another Region

A

C) Copy data from the source bucket to the destination bucket using the aws s3 sync command

The aws s3 sync command uses the CopyObject APIs to copy objects between S3 buckets. The sync command lists the source and target buckets to identify objects that are in the source bucket but that aren’t in the target bucket. The command also identifies objects in the source bucket that have different LastModified dates than the objects that are in the target bucket. The sync command on a versioned bucket copies only the current version of the object; previous versions aren’t copied. By default, this preserves object metadata, but the access control lists (ACLs) are set to FULL_CONTROL for your AWS account, which removes any additional ACLs. If the operation fails, you can run the sync command again without duplicating previously copied objects.

You can use the command like so:
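
(A minimal sketch, run here from Python via subprocess; the bucket names are hypothetical, and --source-region and --region simply name the Regions of the two buckets.)

import subprocess

# Server-side copy between Regions: aws s3 sync uses the CopyObject API, so
# the object data is not downloaded to the machine running the command.
subprocess.run(
    [
        "aws", "s3", "sync",
        "s3://source-bucket-us-west-1",    # hypothetical source bucket
        "s3://target-bucket-us-east-1",    # hypothetical destination bucket
        "--source-region", "us-west-1",
        "--region", "us-east-1",
    ],
    check=True,
)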

Incorrect options:

D) Use Snowball Edge device to copy the data from one Region to another Region - As the given requirement is about copying the data from one AWS Region to another AWS Region, so Snowball Edge cannot be used here. Snowball Edge Storage Optimized is the optimal data transfer choice if you need to securely and quickly transfer terabytes to petabytes of data to AWS. You can use Snowball Edge Storage Optimized if you have a large backlog of data to transfer or if you frequently collect data that needs to be transferred to AWS and your storage is in an area where high-bandwidth internet connections are not available or cost-prohibitive. Snowball Edge can operate in remote locations or harsh operating environments, such as factory floors, oil and gas rigs, mining sites, hospitals, and on moving vehicles.

B) Copy data from the source S3 bucket to a target S3 bucket using the S3 console - Copying 1 PB of data through the S3 console is not feasible. You should use the aws s3 sync command for this requirement.

A) Set up cross-Region replication to copy objects across S3 buckets in different Regions - As the data already exists in the source bucket, you cannot use cross-Region replication because, by default, replication only supports copying new Amazon S3 objects after it is enabled. Replication enables automatic, asynchronous copying of objects across Amazon S3 buckets. Buckets that are configured for object replication can be owned by the same AWS account or by different accounts. Objects may be replicated to a single destination bucket or to multiple destination buckets. Destination buckets can be in different AWS Regions or within the same Region as the source bucket.

337
Q

A file sharing application is used to upload files to an Amazon S3 bucket. Once uploaded, the files are processed to extract metadata, which takes less than 3 seconds. The volume and frequency of the uploads vary from a few files each hour to hundreds of uploads per hour. The company has hired you as an AWS Certified Solutions Architect Associate to design a cost-effective architecture that will meet these requirements.

Which of the following will you recommend?

A) Set up an object-created event notification within the S3 bucket to invoke Amazon Kinesis Data Streams to process the uploaded files

B) Set up AWS CloudTrail trails to log S3 API calls. Use AWS AppSync to process the uploaded files

C) Set up an object-created event notification within the S3 bucket to invoke an AWS Lambda function to process the uploaded files

D) Trigger an SNS topic via an S3 event notification when any file is uploaded to Amazon S3. Invoke an AWS Lambda function to process the files

A

C) Set up an object-created event notification within the S3 bucket to invoke an AWS Lambda function to process the uploaded files

You can use the Amazon S3 Event Notifications feature to receive notifications when certain events happen in your S3 bucket. To enable notifications, you must first add a notification configuration that identifies the events you want Amazon S3 to publish and the destinations where you want Amazon S3 to send the notifications.

Currently, Amazon S3 can publish notifications for the following events: New object-created events, Object removal events, Restore object events, Reduced Redundancy Storage (RRS) object lost events, Replication events.

For this use case, we use the New object created event notification to invoke a Lambda function.

Lambda can run custom code in response to Amazon S3 bucket events. When Amazon S3 detects an event of a specific type (for example, an object-created event), it can publish the event to AWS Lambda and invoke your Lambda function.
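
A minimal boto3 sketch of wiring the object-created event to a Lambda function (the bucket name and function ARN are hypothetical; the function must separately grant S3 permission to invoke it via its resource-based policy):

import boto3

s3 = boto3.client("s3")

# Publish all object-created events from the bucket to the Lambda function
# that extracts the metadata.
s3.put_bucket_notification_configuration(
    Bucket="file-sharing-uploads",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:111122223333:function:extract-metadata",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)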

Incorrect options:

A) Set up an object-created event notification within the S3 bucket to invoke Amazon Kinesis Data Streams to process the uploaded files - Amazon Kinesis Data Streams cannot be directly invoked from the S3 event notifications. Amazon S3 supports the following destinations where it can publish events: Amazon Simple Notification Service (Amazon SNS), Amazon Simple Queue Service (Amazon SQS) queue, AWS Lambda.

B) Set up AWS CloudTrail trails to log S3 API calls. Use AWS AppSync to process the uploaded files - AppSync enables you to manage and update mobile app data in real-time between devices and the cloud and allows apps to interact with the data on the mobile device when it is offline. Even if you enable AWS CloudTrail trails to log S3 API calls, you cannot integrate AppSync to provide a solution for the given use case.

D) Trigger an SNS topic via an S3 event notification when any file is uploaded to Amazon S3. Invoke an AWS Lambda function to process the files - As discussed above, SNS, SQS, and AWS Lambda are the only supported destinations for publishing events from S3. So, configuring the Lambda function directly to an S3 event makes more sense than using an SNS topic that will eventually trigger the Lambda function to process the request.

338
Q

A Big Data analytics company writes data and log files in Amazon S3 buckets. The company now wants to stream the existing data files as well as any ongoing file updates from Amazon S3 to Amazon Kinesis Data Streams.

As a Solutions Architect, which of the following would you suggest as the fastest possible way of building a solution for this requirement?

A) Leverage S3 event notification to trigger a Lambda function for the file create event. The Lambda function will then send the necessary data to Amazon Kinesis Data Streams

B) Configure CloudWatch events for the bucket actions on Amazon S3. An AWS Lambda function can then be triggered from the CloudWatch event that will send the necessary data to Amazon Kinesis Data Streams

C) Amazon S3 bucket actions can be directly configured to write data into Amazon Simple Notification Service (SNS). SNS can then be used to send the updates to Amazon Kinesis Data Streams

D) Leverage AWS Database Migration Service (AWS DMS) as a bridge between Amazon S3 and Amazon Kinesis Data Streams

A

D) Leverage AWS Database Migration Service (AWS DMS) as a bridge between Amazon S3 and Amazon Kinesis Data Streams - You can achieve this by using AWS Database Migration Service (AWS DMS). AWS DMS enables you to seamlessly migrate data from supported sources to relational databases, data warehouses, streaming platforms, and other data stores in AWS cloud.

The given requirement needs the functionality to be implemented in the least possible time. You can use AWS DMS for such data-processing requirements. AWS DMS lets you expand the existing application to stream data from Amazon S3 into Amazon Kinesis Data Streams for real-time analytics without writing and maintaining new code. AWS DMS supports specifying Amazon S3 as the source and streaming services like Kinesis and Amazon Managed Streaming for Apache Kafka (Amazon MSK) as the target. AWS DMS allows migration of full and change data capture (CDC) files to these services. AWS DMS performs this task out of the box without any complex configuration or code development. You can also configure an AWS DMS replication instance to scale up or down depending on the workload.

AWS DMS supports Amazon S3 as the source and Kinesis as the target, so data stored in an S3 bucket is streamed to Kinesis. Several consumers, such as AWS Lambda, Amazon Kinesis Data Firehose, Amazon Kinesis Data Analytics, and the Kinesis Consumer Library (KCL), can consume the data concurrently to perform real-time analytics on the dataset. Each AWS service in this architecture can scale independently as needed.

Incorrect options:

B) Configure CloudWatch events for the bucket actions on Amazon S3. An AWS Lambda function can then be triggered from the CloudWatch event that will send the necessary data to Amazon Kinesis Data Streams - You will need to enable a CloudTrail trail to use object-level actions as a trigger for CloudWatch events. Also, using Lambda functions would require significant custom development to write the data into Kinesis Data Streams, so this option is not the right fit.

A) Leverage S3 event notification to trigger a Lambda function for the file create event. The Lambda function will then send the necessary data to Amazon Kinesis Data Streams - Using Lambda functions would require significant custom development to write the data into Kinesis Data Streams, so this option is not the right fit.

C) Amazon S3 bucket actions can be directly configured to write data into Amazon Simple Notification Service (SNS). SNS can then be used to send the updates to Amazon Kinesis Data Streams - S3 cannot directly write data into SNS, although it can certainly use S3 event notifications to send an event to SNS. Also, SNS cannot directly send messages to Kinesis Data Streams. So this option is incorrect.

339
Q

The engineering team at an e-commerce company wants to migrate from SQS Standard queues to FIFO queues with batching.

As a solutions architect, which of the following steps would you have in the migration checklist? (Select three)

A) Make sure that the name of the FIFO queue is the same as the standard queue

B) Make sure that the name of the FIFO queue ends with the .fifo suffix

C) Make sure that the throughput for the target FIFO queue does not exceed 300 messages per second

D) Convert the existing standard queue into a FIFO queue

E) Make sure that the throughput for the target FIFO queue does not exceed 3,000 messages per second

F) Delete the existing standard queue and recreate it as a FIFO queue

A

F) Delete the existing standard queue and recreate it as a FIFO queue

B) Make sure that the name of the FIFO queue ends with the .fifo suffix

E) Make sure that the throughput for the target FIFO queue does not exceed 3,000 messages per second

Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message oriented middleware, and empowers developers to focus on differentiating work. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.

SQS offers two types of message queues. Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery. SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent.

By default, FIFO queues support up to 3,000 messages per second with batching, or up to 300 messages per second (300 send, receive, or delete operations per second) without batching. Therefore, using batching you can meet a throughput requirement of up to 3,000 messages per second.

The name of a FIFO queue must end with the .fifo suffix. The suffix counts towards the 80-character queue name limit. To determine whether a queue is FIFO, you can check whether the queue name ends with the suffix.

If you have an existing application that uses standard queues and you want to take advantage of the ordering or exactly-once processing features of FIFO queues, you need to configure the queue and your application correctly. You can’t convert an existing standard queue into a FIFO queue. To make the move, you must either create a new FIFO queue for your application or delete your existing standard queue and recreate it as a FIFO queue.
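
A minimal boto3 sketch of creating the new FIFO queue (the queue name is hypothetical and must carry the .fifo suffix):

import boto3

sqs = boto3.client("sqs")

# Create the FIFO queue that replaces the standard queue; an existing
# standard queue cannot be converted in place.
sqs.create_queue(
    QueueName="orders.fifo",
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",  # dedupe on a hash of the body
    },
)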

Incorrect options:

D) Convert the existing standard queue into a FIFO queue - You can’t convert an existing standard queue into a FIFO queue.

A) Make sure that the name of the FIFO queue is the same as the standard queue - The name of a FIFO queue must end with the .fifo suffix.

C) Make sure that the throughput for the target FIFO queue does not exceed 300 messages per second - By default, FIFO queues support up to 3,000 messages per second with batching.

340
Q

A company wants to store business-critical data on EBS volumes which provide persistent storage independent of EC2 instances. During a test run, the development team found that on terminating an EC2 instance, the attached EBS volume was also lost, which was contrary to their assumptions.

As a solutions architect, could you explain this issue?

A) The EBS volume was configured as the root volume of the Amazon EC2 instance. On termination of the instance, the default behavior is to also terminate the attached root volume

B) On termination of an EC2 instance, all the attached EBS volumes are always terminated

C) The EBS volumes were not backed up on Amazon S3 storage, resulting in the loss of volume

D) The EBS volumes were not backed up on EFS file system storage, resulting in the loss of volume

A

A) The EBS volume was configured as the root volume of the Amazon EC2 instance. On termination of the instance, the default behavior is to also terminate the attached root volume

Amazon Elastic Block Store (EBS) is an easy to use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction-intensive workloads at any scale.

When you launch an instance, the root device volume contains the image used to boot the instance. You can choose between AMIs backed by Amazon EC2 instance store and AMIs backed by Amazon EBS.

By default, the root volume for an AMI backed by Amazon EBS is deleted when the instance terminates. You can change the default behavior to ensure that the volume persists after the instance terminates. Non-root EBS volumes remain available even after you terminate an instance to which the volumes were attached. Therefore, this option is correct.
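
For illustration, the default can be overridden at launch time by setting DeleteOnTermination to false on the root volume mapping; a minimal boto3 sketch (the AMI ID and device name are hypothetical, and the device name must match the AMI's actual root device):

import boto3

ec2 = boto3.client("ec2")

# Launch an instance whose root EBS volume persists after the instance
# is terminated.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[
        {
            "DeviceName": "/dev/xvda",              # root device of the AMI
            "Ebs": {"DeleteOnTermination": False},
        }
    ],
)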

Incorrect options:

C) The EBS volumes were not backed up on Amazon S3 storage, resulting in the loss of volume

D) The EBS volumes were not backed up on EFS file system storage, resulting in the loss of volume

EBS volumes do not need to be backed up to Amazon S3 or to an EFS file system, and a missing backup is not why the volume was lost. Both these options are added as distractors.

B) On termination of an EC2 instance, all the attached EBS volumes are always terminated - As mentioned earlier, non-root EBS volumes remain available even after you terminate an instance to which the volumes were attached. Hence this option is incorrect.

341
Q

A financial services company is looking to move its on-premises IT infrastructure to AWS Cloud. The company has multiple long-term server-bound licenses across the application stack and the CTO wants to continue to utilize those licenses while moving to AWS.

As a solutions architect, which of the following would you recommend as the MOST cost-effective solution?

A) Use EC2 on-demand instances

B) Use EC2 dedicated hosts

C) Use EC2 reserved instances

D) Use EC2 dedicated instances

A

B) Use EC2 dedicated hosts

You can use Dedicated Hosts to launch Amazon EC2 instances on physical servers that are dedicated for your use. Dedicated Hosts give you additional visibility and control over how instances are placed on a physical server, and you can reliably use the same physical server over time. As a result, Dedicated Hosts enable you to use your existing server-bound software licenses like Windows Server and address corporate compliance and regulatory requirements.
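
A minimal boto3 sketch of allocating a Dedicated Host and launching an instance onto it (the instance type, Availability Zone, and AMI ID are hypothetical):

import boto3

ec2 = boto3.client("ec2")

# Allocate a Dedicated Host, then target it explicitly when launching the
# licensed workload.
host = ec2.allocate_hosts(
    InstanceType="c5.large",
    AvailabilityZone="us-east-1a",
    Quantity=1,
)
host_id = host["HostIds"][0]

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5.large",
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "host", "HostId": host_id},
)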

Incorrect options:

D) Use EC2 dedicated instances - Dedicated instances are Amazon EC2 instances that run in a VPC on hardware that’s dedicated to a single customer. Your dedicated instances are physically isolated at the host hardware level from instances that belong to other AWS accounts. Dedicated instances may share hardware with other instances from the same AWS account that are not dedicated instances. Dedicated instances cannot be used for existing server-bound software licenses.

A) Use EC2 on-demand instances

C) Use EC2 reserved instances

Amazon EC2 presents a virtual computing environment, allowing you to use web service interfaces to launch instances with a variety of operating systems, load them with your custom application environment, manage your network’s access permissions, and run your image using as many or few systems as you desire.

Amazon EC2 provides the following purchasing options to enable you to optimize your costs based on your needs:

On-Demand Instances – Pay, by the second, for the instances that you launch.

Reserved Instances – Reduce your Amazon EC2 costs by making a commitment to a consistent instance configuration, including instance type and Region, for a term of 1 or 3 years.

Neither on-demand instances nor reserved instances can be used for existing server-bound software licenses.

342
Q

The engineering team at an e-commerce company is working on cost optimizations for EC2 instances. The team wants to manage the workload using a mix of on-demand and spot instances across multiple instance types. They would like to create an Auto Scaling group with a mix of these instances.

Which of the following options would allow the engineering team to provision the instances for this use-case?

A) You can only use a launch configuration to provision capacity across multiple instance types using both On-Demand Instances and Spot Instances to achieve the desired scale, performance, and cost

B) You can only use a launch template to provision capacity across multiple instance types using both On-Demand Instances and Spot Instances to achieve the desired scale, performance, and cost

C) You can use a launch configuration or a launch template to provision capacity across multiple instance types using both On-Demand Instances and Spot Instances to achieve the desired scale, performance, and cost

D) You can neither use a launch configuration nor a launch template to provision capacity across multiple instance types using both On-Demand Instances and Spot Instances to achieve the desired scale, performance, and cost

A

B) You can only use a launch template to provision capacity across multiple instance types using both On-Demand Instances and Spot Instances to achieve the desired scale, performance, and cost

A launch template is similar to a launch configuration, in that it specifies instance configuration information such as the ID of the Amazon Machine Image (AMI), the instance type, a key pair, security groups, and the other parameters that you use to launch EC2 instances. Also, defining a launch template instead of a launch configuration allows you to have multiple versions of a template.

With launch templates, you can provision capacity across multiple instance types using both On-Demand Instances and Spot Instances to achieve the desired scale, performance, and cost. Hence this is the correct option.
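
For illustration, a mixed On-Demand/Spot Auto Scaling group built on a launch template could be created as in the boto3 sketch below (all names, IDs, and the instance-type overrides are hypothetical):

import boto3

autoscaling = boto3.client("autoscaling")

# An Auto Scaling group that keeps a small On-Demand base and fills the rest
# with a mix of On-Demand and Spot capacity across several instance types.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="ecommerce-workers",
    MinSize=2,
    MaxSize=20,
    VPCZoneIdentifier="subnet-0123456789abcdef0,subnet-0fedcba9876543210",
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateId": "lt-0123456789abcdef0",
                "Version": "$Latest",
            },
            "Overrides": [
                {"InstanceType": "m5.large"},
                {"InstanceType": "m5a.large"},
                {"InstanceType": "c5.large"},
            ],
        },
        "InstancesDistribution": {
            "OnDemandBaseCapacity": 2,
            "OnDemandPercentageAboveBaseCapacity": 50,
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)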

Incorrect options:

A) You can only use a launch configuration to provision capacity across multiple instance types using both On-Demand Instances and Spot Instances to achieve the desired scale, performance, and cost

C) You can use a launch configuration or a launch template to provision capacity across multiple instance types using both On-Demand Instances and Spot Instances to achieve the desired scale, performance, and cost

A launch configuration is an instance configuration template that an Auto Scaling group uses to launch EC2 instances. When you create a launch configuration, you specify information for the instances such as the ID of the Amazon Machine Image (AMI), the instance type, a key pair, one or more security groups, and a block device mapping.

You cannot use a launch configuration to provision capacity across multiple instance types using both On-Demand Instances and Spot Instances. Therefore both these options are incorrect.

D) You can neither use a launch configuration nor a launch template to provision capacity across multiple instance types using both On-Demand Instances and Spot Instances to achieve the desired scale, performance, and cost - You can use a launch template to provision capacity across multiple instance types using both On-Demand Instances and Spot Instances. So this option is incorrect.

343
Q

The engineering team at a company wants to use Amazon SQS to decouple components of the underlying application architecture. However, the team is concerned about the VPC-bound components accessing SQS over the public internet.

As a solutions architect, which of the following solutions would you recommend to address this use-case?

A) Use VPC endpoint to access Amazon SQS

B) Use VPN connection to access Amazon SQS

C) Use Internet Gateway to access Amazon SQS

D) Use Network Address Translation (NAT) instance to access Amazon SQS

A

A) Use VPC endpoint to access Amazon SQS

AWS customers can access Amazon Simple Queue Service (Amazon SQS) from their Amazon Virtual Private Cloud (Amazon VPC) using VPC endpoints, without using public IPs, and without needing to traverse the public internet. VPC endpoints for Amazon SQS are powered by AWS PrivateLink, a highly available, scalable technology that enables you to privately connect your VPC to supported AWS services.

Amazon VPC endpoints are easy to configure. They also provide reliable connectivity to Amazon SQS without requiring an internet gateway, Network Address Translation (NAT) instance, VPN connection, or AWS Direct Connect connection. With VPC endpoints, the data between your Amazon VPC and Amazon SQS queue is transferred within the Amazon network, helping protect your instances from internet traffic.

AWS PrivateLink simplifies the security of data shared with cloud-based applications by eliminating the exposure of data to the public Internet. AWS PrivateLink provides private connectivity between VPCs, AWS services, and on-premises applications, securely on the Amazon network. AWS PrivateLink makes it easy to connect services across different accounts and VPCs to significantly simplify the network architecture.
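
A minimal boto3 sketch of creating the Interface Endpoint for SQS (the VPC, subnet, and security group IDs, and the Region in the service name, are hypothetical):

import boto3

ec2 = boto3.client("ec2")

# Create an Interface Endpoint (powered by AWS PrivateLink) so that the
# VPC-bound components reach SQS without traversing the public internet.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.sqs",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)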

Incorrect options:

C) Use Internet Gateway to access Amazon SQS - An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the internet. It, therefore, imposes no availability risks or bandwidth constraints on your network traffic. This option is ruled out as the team does not want to use the public internet to access Amazon SQS.

B) Use VPN connection to access Amazon SQS - AWS Site-to-Site VPN (aka VPN Connection) enables you to securely connect your on-premises network or branch office site to your Amazon Virtual Private Cloud (Amazon VPC). You can securely extend your data center or branch office network to the cloud with an AWS Site-to-Site VPN connection. A VPC VPN Connection utilizes IPSec to establish encrypted network connectivity between your intranet and Amazon VPC over the Internet. VPN Connections can be configured in minutes and are a good solution if you have an immediate need, have low to modest bandwidth requirements, and can tolerate the inherent variability in Internet-based connectivity. As the existing infrastructure is within AWS Cloud, therefore a VPN connection is not required.

D) Use Network Address Translation (NAT) instance to access Amazon SQS - You can use a network address translation (NAT) instance in a public subnet in your VPC to enable instances in the private subnet to initiate outbound IPv4 traffic to the Internet or other AWS services, but prevent the instances from receiving inbound traffic initiated by someone on the Internet. Amazon provides Amazon Linux AMIs that are configured to run as NAT instances. These AMIs include the string amzn-ami-vpc-nat in their names, so you can search for them in the Amazon EC2 console. This option is ruled out because NAT instances are used to provide internet access to any instances in a private subnet.

344
Q

A financial services company recently launched an initiative to improve the security of its AWS resources and it had enabled AWS Shield Advanced across multiple AWS accounts owned by the company. Upon analysis, the company has found that the costs incurred are much higher than expected.

Which of the following would you attribute as the underlying reason for the unexpectedly high costs for AWS Shield Advanced service?

A) AWS Shield Advanced is being used for custom servers that are not part of AWS Cloud, thereby resulting in increased costs

B) AWS Shield Advanced also covers AWS Shield Standard plan, thereby resulting in increased costs

C) Consolidated billing has not been enabled. All the AWS accounts should fall under a single consolidated billing for the monthly fee to be charged only once

D) Savings Plans has not been enabled for the AWS Shield Advanced service across all the AWS accounts

A

C) Consolidated billing has not been enabled. All the AWS accounts should fall under a single consolidated billing for the monthly fee to be charged only once - If your organization has multiple AWS accounts, then you can subscribe multiple AWS Accounts to AWS Shield Advanced by individually enabling it on each account using the AWS Management Console or API. You will pay the monthly fee once as long as the AWS accounts are all under a single consolidated billing, and you own all the AWS accounts and resources in those accounts.

Incorrect options:

A) AWS Shield Advanced is being used for custom servers that are not part of AWS Cloud, thereby resulting in increased costs - AWS Shield Advanced does offer protection to resources outside of AWS, so this should not cause an unexpected spike in billing costs.

B) AWS Shield Advanced also covers AWS Shield Standard plan, thereby resulting in increased costs - AWS Shield Standard is automatically enabled for all AWS customers at no additional cost. AWS Shield Advanced is an optional paid service.

D) Savings Plans has not been enabled for the AWS Shield Advanced service across all the AWS accounts - This option has been added as a distractor. Savings Plans is a flexible pricing model that offers low prices on EC2, Lambda, and Fargate usage, in exchange for a commitment to a consistent amount of usage (measured in $/hour) for a 1 or 3 year term. Savings Plans is not applicable for the AWS Shield Advanced service.

345
Q

A media agency stores its re-creatable assets on Amazon S3 buckets. The assets are accessed by a large number of users for the first few days and the frequency of access drops drastically after a week. Although the assets are accessed only occasionally after the first week, they must remain immediately accessible when required. The cost of maintaining all the assets on S3 storage is turning out to be very expensive and the agency is looking at reducing costs as much as possible.

As a Solutions Architect, can you suggest a way to lower the storage costs while fulfilling the business requirements?

A) Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days

B) Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 7 days

C) Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days

D) Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 7 days

A

A) Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days - S3 One Zone-IA is for data that is accessed less frequently, but requires rapid access when needed. Unlike other S3 Storage Classes which store data in a minimum of three Availability Zones (AZs), S3 One Zone-IA stores data in a single AZ and costs 20% less than S3 Standard-IA. S3 One Zone-IA is ideal for customers who want a lower-cost option for infrequently accessed and re-creatable data but do not require the availability and resilience of S3 Standard or S3 Standard-IA. The minimum storage duration is 30 days before you can transition objects from S3 Standard to S3 One Zone-IA.

S3 One Zone-IA offers the same high durability, high throughput, and low latency of S3 Standard, with a low per GB storage price and per GB retrieval fee. S3 Storage Classes can be configured at the object level, and a single bucket can contain objects stored across S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA. You can also use S3 Lifecycle policies to automatically transition objects between storage classes without any application changes.
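
Analogous to the earlier lifecycle example, the 30-day transition to S3 One Zone-IA can be expressed as a single rule; a minimal boto3 sketch (the bucket name and rule ID are hypothetical):

import boto3

s3 = boto3.client("s3")

# Move all objects to S3 One Zone-IA 30 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="media-assets",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-one-zone-ia-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [{"Days": 30, "StorageClass": "ONEZONE_IA"}],
            }
        ]
    },
)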

Incorrect options:

D) Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 7 days

B) Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 7 days

As mentioned earlier, the minimum storage duration is 30 days before you can transition objects from S3 Standard to S3 One Zone-IA or S3 Standard-IA, so both these options are added as distractors.

C) Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days - S3 Standard-IA is for data that is accessed less frequently, but requires rapid access when needed. S3 Standard-IA offers the high durability, high throughput, and low latency of S3 Standard, with a low per GB storage price and per GB retrieval fee. This combination of low cost and high performance makes S3 Standard-IA ideal for long-term storage, backups, and as a data store for disaster recovery files. But, it costs more than S3 One Zone-IA because of the redundant storage across availability zones. As the data is re-creatable, so you don’t need to incur this additional cost.

346
Q

A leading social media analytics company is contemplating moving its dockerized application stack into AWS Cloud. The company is not sure about the pricing for using Elastic Container Service (ECS) with the EC2 launch type compared to the Elastic Container Service (ECS) with the Fargate launch type.

Which of the following is correct regarding the pricing for these two services?

A) ECS with EC2 launch type is charged based on EC2 instances and EBS volumes used. ECS with Fargate launch type is charged based on vCPU and memory resources that the containerized application requests

B) Both ECS with EC2 launch type and ECS with Fargate launch type are charged based on EC2 instances and EBS volumes used

C) Both ECS with EC2 launch type and ECS with Fargate launch type are charged based on vCPU and memory resources that the containerized application requests

D) Both ECS with EC2 launch type and ECS with Fargate launch type are just charged based on Elastic Container Service used per hour

A

A) ECS with EC2 launch type is charged based on EC2 instances and EBS volumes used. ECS with Fargate launch type is charged based on vCPU and memory resources that the containerized application requests

Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service. ECS allows you to easily run, scale, and secure Docker container applications on AWS.

With the Fargate launch type, you pay for the amount of vCPU and memory resources that your containerized application requests. vCPU and memory resources are calculated from the time your container images are pulled until the Amazon ECS task terminates, rounded up to the nearest second. With the EC2 launch type, there is no additional charge for ECS itself; you pay for the AWS resources (e.g. EC2 instances or EBS volumes) you create to store and run your application.

Incorrect options:

C) Both ECS with EC2 launch type and ECS with Fargate launch type are charged based on vCPU and memory resources that the containerized application requests

B) Both ECS with EC2 launch type and ECS with Fargate launch type are charged based on EC2 instances and EBS volumes used

As mentioned above - with the Fargate launch type, you pay for the amount of vCPU and memory resources. With EC2 launch type, you pay for AWS resources (e.g. EC2 instances or EBS volumes). Hence both these options are incorrect.

D) Both ECS with EC2 launch type and ECS with Fargate launch type are just charged based on Elastic Container Service used per hour

This is a made-up option and has been added as a distractor.

347
Q

A pharma company is working on developing a vaccine for the COVID-19 virus. The researchers at the company want to process the reference healthcare data in an in-memory database that is highly available as well as HIPAA compliant.

As a solutions architect, which of the following AWS services would you recommend for this task?

A) ElastiCache for Memcached

B) ElastiCache for Redis

C) DocumentDB

D) DynamoDB

A

B) ElastiCache for Redis

Amazon ElastiCache for Redis is a blazing fast in-memory data store that provides sub-millisecond latency to power internet-scale real-time applications. Amazon ElastiCache for Redis is a great choice for real-time transactional and analytical processing use cases such as caching, chat/messaging, gaming leaderboards, geospatial, machine learning, media streaming, queues, real-time analytics, and session store. ElastiCache for Redis supports replication, high availability, and cluster sharding right out of the box. Amazon ElastiCache for Redis is also HIPAA Eligible Service. Therefore, this is the correct option.

Incorrect Options:

A) ElastiCache for Memcached - Amazon ElastiCache for Memcached is a Memcached-compatible in-memory key-value store service that can be used as a cache or a data store. Amazon ElastiCache for Memcached is a great choice for implementing an in-memory cache to decrease access latency, increase throughput, and ease the load off your relational or NoSQL database. Session stores are easy to create with Amazon ElastiCache for Memcached. ElastiCache for Memcached is not HIPAA eligible, so this option is incorrect.

D) DynamoDB - Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It’s a fully managed, multi-region, multi-master, durable database with built-in security, backup and restore, and in-memory caching (via DAX) for internet-scale applications. DynamoDB is not an in-memory database, so this option is incorrect.

C) DocumentDB - Amazon DocumentDB is a fast, scalable, highly available, and fully managed document database service that supports MongoDB workloads. As a document database, Amazon DocumentDB makes it easy to store, query, and index JSON data. DocumentDB is not an in-memory database, so this option is incorrect.

348
Q

A cyber security company is running a mission critical application using a single Spread placement group of EC2 instances. The company needs 15 Amazon EC2 instances for optimal performance.

How many Availability Zones (AZs) will the company need to deploy these EC2 instances per the given use-case?

A) 14

B) 15

C) 3

D) 7

A

C) 3

When you launch a new EC2 instance, the EC2 service attempts to place the instance in such a way that all of your instances are spread out across underlying hardware to minimize correlated failures. You can use placement groups to influence the placement of a group of interdependent instances to meet the needs of your workload. Depending on the type of workload, you can create a placement group using one of the following placement strategies:

Cluster placement group

Partition placement group

Spread placement group.

A Spread placement group is a group of instances that are each placed on distinct racks, with each rack having its own network and power source.

Spread placement groups are recommended for applications that have a small number of critical instances that should be kept separate from each other. Launching instances in a spread placement group reduces the risk of simultaneous failures that might occur when instances share the same racks.

A spread placement group can span multiple Availability Zones in the same Region. You can have a maximum of seven running instances per Availability Zone per group. Therefore, to deploy 15 EC2 instances in a single Spread placement group, the company needs to use 3 AZs.
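
A minimal boto3 sketch of creating a spread placement group and launching instances into it (the group name, AMI ID, and subnet ID are hypothetical; the launch would be repeated per Availability Zone, with at most seven running instances per AZ per group):

import boto3

ec2 = boto3.client("ec2")

# Create the spread placement group once, then launch up to 7 instances per
# AZ into it; 15 instances therefore require subnets in 3 different AZs.
ec2.create_placement_group(GroupName="mission-critical-spread", Strategy="spread")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5.large",
    MinCount=5,
    MaxCount=5,
    Placement={"GroupName": "mission-critical-spread"},
    SubnetId="subnet-0123456789abcdef0",  # a subnet in one of the three AZs
)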

349
Q

The development team at a social media company wants to handle some complicated queries such as “What are the number of likes on the videos that have been posted by friends of a user A?”.

As a solutions architect, which of the following AWS database services would you suggest as the BEST fit to handle such use cases?

A) Amazon Neptune

B) Amazon Redshift

C) Amazon ElasticSearch

D) Amazon Aurora

A

A) Amazon Neptune - Amazon Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets. The core of Amazon Neptune is a purpose-built, high-performance graph database engine optimized for storing billions of relationships and querying the graph with milliseconds latency. Neptune powers graph use cases such as recommendation engines, fraud detection, knowledge graphs, drug discovery, and network security.

Amazon Neptune is highly available, with read replicas, point-in-time recovery, continuous backup to Amazon S3, and replication across Availability Zones. Neptune is secure with support for HTTPS encrypted client connections and encryption at rest. Neptune is fully managed, so you no longer need to worry about database management tasks such as hardware provisioning, software patching, setup, configuration, or backups.

Amazon Neptune can quickly and easily process large sets of user-profiles and interactions to build social networking applications. Neptune enables highly interactive graph queries with high throughput to bring social features into your applications. For example, if you are building a social feed into your application, you can use Neptune to provide results that prioritize showing your users the latest updates from their family, from friends whose updates they ‘Like,’ and from friends who live close to them.

Incorrect options:

C) Amazon ElasticSearch - Elasticsearch is a search engine based on the Lucene library. Amazon Elasticsearch Service is a fully managed service that makes it easy for you to deploy, secure, and run Elasticsearch cost-effectively at scale. You can build, monitor, and troubleshoot your applications using the tools you love, at the scale you need. The service provides support for open-source Elasticsearch APIs, managed Kibana, integration with Logstash and other AWS services, and built-in alerting and SQL querying.

B) Amazon Redshift - Amazon Redshift is a fully-managed petabyte-scale cloud-based data warehouse product designed for large scale data set storage and analysis. The given use-case is not about data warehousing, so this is not a correct option.

D) Amazon Aurora - Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora features a distributed, fault-tolerant, self-healing storage system that auto-scales up to 64TB per database instance. Aurora is a relational database and is not optimized for traversing highly connected datasets. The use-case calls for a graph database because of the highly connected data and queries, therefore Neptune is the best answer.