Quizzes Flashcards

This deck contains the AWS Udemy course quizzes and their answers.

1
Q

What is a proper definition of an IAM Role?
- IAM Users in multiple User Groups
- An IAM entity that defines a password policy for IAM Users
- An IAM entity that defines a set of permissions for making requests to AWS services, and will be used by an AWS Service
- Permissions assigned to IAM Users to perform actions

A

An IAM entity that defines a set of permissions for making requests to AWS services, and will be used by an AWS Service

Some AWS services need to perform actions on your behalf. To do so, you assign permissions to AWS services with IAM Roles.
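For example, a role meant to be used by EC2 carries a trust policy naming that service as the principal. A minimal sketch (the service shown is just an example):

```python
import json

# Illustrative trust policy: lets the EC2 service assume the role, so
# instances can make AWS requests with the role's permissions.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```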

2
Q

Which of the following is an IAM Security Tool?
- IAM Credentials Report
- IAM Root Account Manager
- IAM Services Report
- IAM Security Advisor

A

IAM Credentials Report

IAM Credentials Report lists all your AWS Account’s IAM Users and the status of their various credentials.

3
Q

Which answer is INCORRECT regarding IAM Users?
- IAM Users can belong to multiple User Groups
- IAM Users don’t have to belong to a User Group
- IAM Policies can be attached directly to IAM Users
- IAM Users access AWS Services using root account credentials

A

IAM Users access AWS Services using root account credentials

IAM Users access AWS services using their own credentials (username & password or Access Keys).

4
Q

Which of the following is an IAM best practice?
- Create several IAM Users for one physical person
- Don’t use the root user account
- Share your AWS account credentials with your colleague, so (s)he can perform a task for you
- Do not enable MFA for easier access

A

Don’t use the root user account

Use the root account only to create your first IAM User and a few account/service management tasks. For everyday tasks, use an IAM User.

5
Q

What are IAM Policies?
- A set of policies that defines how AWS accounts interact with each other
- JSON document that defines a set of permissions for making requests to AWS services, and can be used by AWS Users, User Groups, and IAM Roles
- A set of policies that define a password for IAM Users
- A set of policies defined by AWS that show how customers interact with AWS

A

JSON document that defines a set of permissions for making requests to AWS services, and can be used by AWS Users, User Groups, and IAM Roles
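Such a document might look like the following sketch; the Sid, bucket name, and actions are made up for illustration:

```python
import json

# Hypothetical identity-based policy granting read-only S3 access.
policy = {
    "Version": "2012-10-17",  # policy-level element, not part of a statement
    "Statement": [
        {
            "Sid": "AllowReadOnlyS3",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```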

6
Q

Which principle should you apply regarding IAM Permissions?
- Grant most privilege
- Grant more permissions if your employee asks you to
- Grant least privilege
- Restrict root account permissions

A

Grant least privilege

Don’t give more permissions than the user needs.

7
Q

What should you do to increase your root account security?
- Remove permissions from the root account
- Only access AWS services through AWS Command Line Interface (CLI)
- Don’t create IAM Users, only access your AWS account using the root account
- Enable Multi-Factor Authentication (MFA)

A

Enable Multi-Factor Authentication (MFA)

When you enable MFA, this adds another layer of security. Even if your password is stolen, lost, or hacked, your account is not compromised.

8
Q

IAM User Groups can contain IAM Users and other User Groups
- True
- False

A

False

IAM User Groups can contain only IAM Users.

9
Q

An IAM policy consists of one or more statements. A statement in an IAM Policy consists of the following, EXCEPT:
- Effect
- Principal
- Version
- Action
- Resource

A

Version

A statement in an IAM Policy consists of Sid, Effect, Principal, Action, Resource, and Condition. Version is part of the IAM Policy itself, not the statement.

10
Q

Which EC2 Purchasing Option can provide you the biggest discount, but it is not suitable for critical jobs or databases?
- Convertible Reserved Instances
- Dedicated Hosts
- Spot Instances

A

Spot Instances

Spot Instances are good for short workloads, and they are the cheapest EC2 Purchasing Option. However, they are less reliable because you can lose your EC2 instance at any time.

11
Q

What should you use to control traffic in and out of EC2 instances?
- Network Access Control List (NACL)
- Security Groups
- IAM Policies

A

Security Groups

Security Groups operate at the EC2 instance level and can control traffic.

12
Q

How long can you reserve an EC2 Reserved Instance?
- 1 or 3 years
- 2 or 4 years
- 6 months or 1 year
- Anytime between 1 and 3 years

A

1 or 3 years

EC2 Reserved Instances can be reserved for 1 or 3 years only.

13
Q

You would like to deploy a High-Performance Computing (HPC) application on EC2 instances. Which EC2 instance type should you choose?
- Storage Optimized
- Compute Optimized
- Memory Optimized
- General Purpose

A

Compute Optimized

Compute Optimized EC2 instances are great for compute-intensive workloads requiring high-performance processors (e.g., batch processing, media transcoding, high-performance computing, scientific modeling & machine learning, and dedicated gaming servers).

14
Q

Which EC2 Purchasing Option should you use for an application you plan to run on a server continuously for 1 year?
- Reserved Instances
- Spot Instances
- On-Demand Instances

A

Reserved Instances

Reserved Instances are good for long workloads. You can reserve EC2 instances for 1 or 3 years.

15
Q

You are preparing to launch an application that will be hosted on a set of EC2 instances. This application requires some software to be installed and some OS packages to be updated during the first launch. What is the best way to achieve this when you launch the EC2 instances?
- Connect to each EC2 instance using SSH, then install the required software and update your OS packages manually
- Write a bash script that installs the required software and updates to your OS, then contact AWS Support and provide them with the script. They will run it on your EC2 instances at launch
- Write a bash script that installs the required software and updates to your OS, then use this script in EC2 User Data when you launch your EC2 instances

A

Write a bash script that installs the required software and updates to your OS, then use this script in EC2 User Data when you launch your EC2 instances

EC2 User Data is used to bootstrap your EC2 instances using a bash script. This script can contain commands to install software/packages, download files from the Internet, or anything else you want.
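As a sketch, the bootstrap script might be composed and encoded like this (package names are illustrative; when calling the EC2 API directly, User Data is passed base64-encoded, while the console and most SDK helpers encode it for you):

```python
import base64

# Illustrative bootstrap script -- User Data runs as root on first boot.
user_data = """#!/bin/bash
yum update -y                 # update OS packages
yum install -y httpd          # install required software
systemctl enable --now httpd
"""

# Base64-encode the script as the raw EC2 API expects.
encoded_user_data = base64.b64encode(user_data.encode()).decode()
```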

16
Q

Which EC2 Instance Type should you choose for a critical application that uses an in-memory database?
- Compute Optimized
- Storage Optimized
- Memory Optimized
- General Purpose

A

Memory Optimized

Memory Optimized EC2 instances are great for workloads requiring large data sets in memory.

17
Q

Security Groups can be attached to only one EC2 instance.
- True
- False

A

False

Security Groups can be attached to multiple EC2 instances within the same AWS Region/VPC.

17
Q

You have an e-commerce application with an OLTP database hosted on-premises. The application's popularity results in its database receiving thousands of requests per second. You want to migrate the database to an EC2 instance. Which EC2 Instance Type should you choose to handle this high-frequency OLTP database?
- Compute Optimized
- Storage Optimized
- Memory Optimized
- General Purpose

A

Storage Optimized

Storage Optimized EC2 instances are great for workloads requiring high, sequential read/write access to large data sets on local storage.

18
Q

You’re planning to migrate on-premises applications to AWS. Your company has strict compliance requirements that require your applications to run on dedicated servers. You also need to use your own server-bound software license to reduce costs. Which EC2 Purchasing Option is suitable for you?
- Convertible Reserved Instances
- Dedicated Hosts
- Spot Instances

A

Dedicated Hosts

Dedicated Hosts are good for companies with strong compliance needs or for software that has complicated licensing models. This is the most expensive EC2 Purchasing Option available.

19
Q

You would like to deploy a database technology on an EC2 instance and the vendor license bills you based on the physical cores and underlying network socket visibility. Which EC2 Purchasing Option allows you to get visibility into them?
- Spot Instances
- On-Demand
- Dedicated Hosts
- Reserved Instances

A

Dedicated Hosts

20
Q

Spot Fleet is a set of Spot Instances and optionally ……………
- Reserved Instances
- On-Demand Instances
- Dedicated Hosts
- Dedicated Instances

A

On-Demand Instances

Spot Fleet is a set of Spot Instances and optionally On-Demand Instances. It allows you to automatically request Spot Instances at the lowest price.

21
Q

You have launched an EC2 instance that will host a NodeJS application. After installing all the required software and configuring your application, you noted down the EC2 instance's public IPv4 so you can access it. Then, you stopped and started your EC2 instance to complete the application configuration. After the restart, you couldn't access the EC2 instance, and you found that its public IPv4 had changed. What should you do to assign a fixed public IPv4 to your EC2 instance?
- Allocate an Elastic IP and assign it to your EC2 instance
- From inside your EC2 instance OS, change network configuration from DHCP to static and assign a public IPv4
- Contact AWS Support and request a fixed public IPv4 to your EC2 instance
- This can’t be done, you can only assign a fixed private IPv4 to your EC2 instance

A

Allocate an Elastic IP and assign it to your EC2 instance

An Elastic IP is a public IPv4 address that you keep as long as you want, and you can attach it to one EC2 instance at a time.

22
Q

You have an application performing big data analysis hosted on a fleet of EC2 instances. You want to ensure your EC2 instances have the highest networking performance while communicating with each other. Which EC2 Placement Group should you choose?
- Spread Placement Group
- Cluster Placement Group
- Partition Placement Group

A

Cluster Placement Group

Cluster Placement Groups place your EC2 instances next to each other which gives you high-performance computing and networking.

23
Q

You have a critical application hosted on a fleet of EC2 instances in which you want to achieve maximum availability when there’s an AZ failure. Which EC2 Placement Group should you choose?
- Spread Placement Group
- Cluster Placement Group
- Partition Placement Group

A

Spread Placement Group

Spread Placement Group places your EC2 instances on different physical hardware across different AZs.

24
Q

Elastic Network Interface (ENI) can be attached to EC2 instances in another AZ.
- True
- False

A

False

Elastic Network Interfaces (ENIs) are bound to a specific AZ. You cannot attach an ENI to an EC2 instance in a different AZ.

25
Q

The following are true regarding EC2 Hibernate, EXCEPT:
- EC2 Instance Root Volume must be an Instance Store Volume
- Supports On-Demand and Reserved Instances
- EC2 Instance RAM must be less than 150GB
- EC2 Instance Root Volume type must be an EBS Volume

A

EC2 Instance Root Volume must be an Instance Store Volume

To enable EC2 Hibernate, the EC2 Instance Root Volume type must be an EBS volume and must be encrypted to ensure the protection of sensitive content.

26
Q

You have just terminated an EC2 instance in us-east-1a, and its attached EBS volume is now available. Your teammate tries to attach it to an EC2 instance in us-east-1b but he can’t. What is a possible cause for this?
- He’s missing IAM permissions
- EBS volumes are locked to an AWS Region
- EBS volumes are locked to an Availability Zone

A

EBS volumes are locked to an Availability Zone

EBS Volumes are created for a specific AZ. It is possible to migrate them between different AZs using EBS Snapshots.

27
Q

You have launched an EC2 instance with two EBS volumes: the Root volume and an additional EBS volume to store data. A month later you are planning to terminate the EC2 instance. What's the default behavior for each EBS volume?
- Both the Root volume and the additional EBS volume will be deleted
- The Root volume will be deleted and the additional EBS volume will not be deleted
- The Root volume will not be deleted and the additional EBS volume will be deleted
- Neither the Root volume nor the additional EBS volume will be deleted

A

The Root volume will be deleted and the additional EBS volume will not be deleted

By default, the Root volume is deleted because its “Delete On Termination” attribute is checked by default. Any other EBS volume is not deleted because its “Delete On Termination” attribute is disabled by default.
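These defaults correspond to the `DeleteOnTermination` flag in the launch request's block device mappings; a sketch of the structure (device names are examples):

```python
# Default behavior expressed as EC2 block device mappings:
# the root volume is deleted on termination, the data volume is kept.
block_device_mappings = [
    {
        "DeviceName": "/dev/xvda",              # root volume (example name)
        "Ebs": {"DeleteOnTermination": True},   # checked by default
    },
    {
        "DeviceName": "/dev/sdb",               # additional data volume
        "Ebs": {"DeleteOnTermination": False},  # disabled by default
    },
]
```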

28
Q

You can use an AMI in N.Virginia Region us-east-1 to launch an EC2 instance in any AWS Region.
- True
- False

A

False

AMIs are built for a specific AWS Region; they're unique to each AWS Region. You can't launch an EC2 instance using an AMI from another AWS Region, but you can copy the AMI to the target AWS Region and then use it to create your EC2 instances.

29
Q

Which of the following EBS volume types can be used as boot volumes when you create EC2 instances?
- gp2, gp3, st1, sc1
- gp2, gp3, io1, io2
- io1, io2, st1, sc1

A

gp2, gp3, io1, io2

When creating EC2 instances, you can only use the following EBS volume types as boot volumes: gp2, gp3, io1, io2, and Magnetic (Standard).

30
Q

What is EBS Multi-Attach?
- Attach the same EBS volume to multiple EC2 instances in multiple AZs
- Attach multiple EBS volumes in the same AZ to the same EC2 instance
- Attach the same EBS volume to multiple EC2 instances in the same AZ
- Attach multiple EBS volumes in multiple AZs to the same EC2 instance

A

Attach the same EBS volume to multiple EC2 instances in the same AZ

Using EBS Multi-Attach, you can attach the same EBS volume to multiple EC2 instances in the same AZ. Each EC2 instance has full read/write permissions.

31
Q

You would like to encrypt an unencrypted EBS volume attached to your EC2 instance. What should you do?
- Create an EBS snapshot of your EBS volume. Copy the snapshot and tick the option to encrypt the copied snapshot. Then, use the encrypted snapshot to create a new EBS volume
- Select your EBS volume, choose Edit Attributes, then tick the Encrypt using KMS option
- Create a new encrypted EBS volume, then copy data from your unencrypted EBS volume to the new EBS volume
- Submit a request to AWS Support to encrypt your EBS volume

A

Create an EBS snapshot of your EBS volume. Copy the snapshot and tick the option to encrypt the copied snapshot. Then, use the encrypted snapshot to create a new EBS volume

32
Q

You have a fleet of EC2 instances distributed across AZs that process a large data set. What do you recommend to make the same data accessible as an NFS drive to all of your EC2 instances?
- Use EBS
- Use EFS
- Use an Instance Store

A

Use EFS

EFS is a network file system (NFS) that allows you to mount the same file system on EC2 instances that are in different AZs.

33
Q

You would like to have a high-performance local cache for your application hosted on an EC2 instance. You don’t mind losing the cache upon the termination of your EC2 instance. Which storage mechanism do you recommend as a Solutions Architect?
- EBS
- EFS
- Instance Store

A

Instance Store

EC2 Instance Store provides the best disk I/O performance.

34
Q

You are running a high-performance database that requires an IOPS of 310,000 for its underlying storage. What do you recommend?
- Use an EBS gp2 drive
- Use an EBS io1 drive
- Use an EC2 Instance Store
- Use an EBS io2 Block Express drive

A

Use an EC2 Instance Store

You can run a database on an EC2 instance that uses an Instance Store, but the data will be lost if the EC2 instance is stopped (it can be restarted without problems). One solution is to set up a replication mechanism on another EC2 instance with an Instance Store to keep a standby copy. Another solution is to set up backup mechanisms for your data. How you architect around this trade-off is up to you. In this use case, the requirement is 310,000 IOPS, which is beyond what EBS volumes provide, so we have to choose an EC2 Instance Store.

35
Q

Scaling an EC2 instance from `r4.large` to `r4.4xlarge` is called …………………
- Horizontal Scalability
- Vertical Scalability

A

Vertical Scalability

36
Q

Running an application on an Auto Scaling Group that scales the number of EC2 instances in and out is called …………………
- Horizontal Scalability
- Vertical Scalability

A

Horizontal Scalability

37
Q

Elastic Load Balancers provide a …………………..
- static IPv4 we can use in our application
- static DNS name we can use in our application
- static IPv6 we can use in our application

A

static DNS name we can use in our application

Only Network Load Balancers provide both a static DNS name and static IP addresses, while Application Load Balancers provide a static DNS name but NOT a static IP. The reason is that AWS wants your Elastic Load Balancer to be accessible using a static endpoint, even if the underlying infrastructure that AWS manages changes.

38
Q

You are running a website on 10 EC2 instances fronted by an Elastic Load Balancer. Your users are complaining about the fact that the website always asks them to re-authenticate when they are moving between website pages. You are puzzled because it’s working just fine on your machine and in the Dev environment with 1 EC2 instance. What could be the reason?
- Your website must have an issue when hosted on multiple EC2 instances
- The EC2 instances log out users as they can’t see their IP addresses, instead, they receive ELB IP addresses
- The Elastic Load Balancer does not have Sticky Sessions enabled

A

The Elastic Load Balancer does not have Sticky Sessions enabled

The ELB Sticky Session feature ensures traffic for the same client is always redirected to the same target (e.g., an EC2 instance). This helps ensure the client does not lose their session data.

39
Q

You are using an Application Load Balancer to distribute traffic to your website hosted on EC2 instances. It turns out that your website only sees traffic coming from private IPv4 addresses which are in fact your Application Load Balancer’s IP addresses. What should you do to get the IP address of clients connected to your website?
- Modify your website’s frontend so that users send their IP in every request
- Modify your website’s backend to get the client IP address from the X-Forwarded-For header
- Modify your website’s backend to get the client IP address from the X-Forwarded-Port header
- Modify your website’s backend to get the client IP address from the X-Forwarded-Proto header

A

Modify your website’s backend to get the client IP address from the X-Forwarded-For header

When using an Application Load Balancer to distribute traffic to your EC2 instances, the IP address you'll receive requests from will be the ALB's private IP addresses. To get the client's IP address, the ALB adds an additional header called “X-Forwarded-For” that contains the client's IP address.
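On the backend, extracting the client IP from that header is a one-liner; a small sketch (the header value shown is an example):

```python
def client_ip(x_forwarded_for: str) -> str:
    """Return the original client IP from an X-Forwarded-For header.

    The header is a comma-separated list: the leftmost entry is the
    client, later entries are intermediate proxies/load balancers.
    """
    return x_forwarded_for.split(",")[0].strip()

print(client_ip("203.0.113.7, 10.0.3.25"))  # -> 203.0.113.7
```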

40
Q

You hosted an application on a set of EC2 instances fronted by an Elastic Load Balancer. A week later, users begin complaining that sometimes the application just doesn’t work. You investigate the issue and found that some EC2 instances crash from time to time. What should you do to protect users from connecting to the EC2 instances that are crashing?
- Enable ELB Health Checks
- Enable ELB Stickiness
- Enable SSL Termination
- Enable Cross-Zone Load Balancing

A

Enable ELB Health Checks

When you enable ELB Health Checks, your ELB won’t send traffic to unhealthy (crashed) EC2 instances.

41
Q

You are working as a Solutions Architect for a company and you are required to design an architecture for a high-performance, low-latency application that will receive millions of requests per second. Which type of Elastic Load Balancer should you choose?
- Application Load Balancer
- Classic Load Balancer
- Network Load Balancer

A

Network Load Balancer

Network Load Balancer provides the highest performance and lowest latency if your application needs it.

42
Q

Application Load Balancers support the following protocols, EXCEPT:
- HTTP
- HTTPS
- TCP
- WebSocket

A

TCP

Application Load Balancers support HTTP, HTTPS and WebSocket.

43
Q

Application Load Balancers can route traffic to different Target Groups based on the following, EXCEPT:
- Client’s Location (Geography)
- Hostname
- Request URL Path
- Source IP Address

A

Client’s Location (Geography)

ALBs can route traffic to different Target Groups based on URL Path, Hostname, HTTP Headers, Query Strings, and Source IP Address.

44
Q

Registered targets in a Target Group for an Application Load Balancer can be one of the following, EXCEPT:
- EC2 Instances
- Network Load Balancer
- Private IP Addresses
- Lambda Functions

A

Network Load Balancer

45
Q

For compliance purposes, you would like to expose a fixed static IP address to your end-users so that they can write firewall rules that will be stable and approved by regulators. What type of Elastic Load Balancer would you choose?
- Application Load Balancer with an Elastic IP attached to it
- Network Load Balancer
- Classic Load Balancer

A

Network Load Balancer

Network Load Balancer has one static IP address per AZ and you can attach an Elastic IP address to it. Application Load Balancers and Classic Load Balancers have a static DNS name.

46
Q

You want to create a custom application-based cookie in your Application Load Balancer. Which of the following can you use as a cookie name?
- AWSALBAPP
- APPUSERC
- AWSALBTG
- AWSALB

A

APPUSERC

The following cookie names are reserved by the ELB (AWSALB, AWSALBAPP, AWSALBTG).
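A tiny illustrative helper that rejects the reserved names when validating a custom cookie name:

```python
# Cookie names reserved by the ELB, per the explanation above.
RESERVED_COOKIE_NAMES = {"AWSALB", "AWSALBAPP", "AWSALBTG"}

def is_valid_app_cookie_name(name: str) -> bool:
    """Illustrative check: a custom cookie name must not be reserved."""
    return name not in RESERVED_COOKIE_NAMES

print(is_valid_app_cookie_name("APPUSERC"))   # valid custom name
print(is_valid_app_cookie_name("AWSALBAPP"))  # reserved by the ELB
```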

47
Q

You have a Network Load Balancer that distributes traffic across a set of EC2 instances in us-east-1. You have 2 EC2 instances in us-east-1b AZ and 5 EC2 instances in us-east-1e AZ. You have noticed that the CPU utilization is higher in the EC2 instances in us-east-1b AZ. After more investigation, you noticed that the traffic is equally distributed across the two AZs. How would you solve this problem?
- Enable Cross-Zone Load Balancing
- Enable Sticky Sessions
- Enable ELB Health Checks
- Enable SSL Termination

A

Enable Cross-Zone Load Balancing

When Cross-Zone Load Balancing is enabled, ELB distributes traffic evenly across all registered EC2 instances in all AZs.

48
Q

Which feature in both Application Load Balancers and Network Load Balancers allows you to load multiple SSL certificates on one listener?
- TLS Termination
- Server Name Indication (SNI)
- SSL Security Policies
- Host Headers

A

Server Name Indication (SNI)

49
Q

You have an Application Load Balancer that is configured to route traffic to 3 Target Groups based on the following hostnames: users.example.com, api.external.example.com, and checkout.example.com. You would like to configure HTTPS for each of these hostnames. How do you configure the ALB to make this work?
- Use an HTTP to HTTPS redirect rule
- Use a security group SSL certificate
- Use Server Name Indication (SNI)

A

Use Server Name Indication (SNI)

Server Name Indication (SNI) allows you to expose multiple HTTPS applications each with its own SSL certificate on the same listener. Read more here: https://aws.amazon.com/blogs/aws/new-application-load-balancer-sni/

50
Q

You have an application hosted on a set of EC2 instances managed by an Auto Scaling Group for which you configured both the desired and maximum capacity to 3. You have also created a CloudWatch Alarm configured to scale out your ASG when CPU Utilization reaches 60%. Your application suddenly receives huge traffic and is now running at 80% CPU Utilization. What will happen?
- Nothing
- The desired capacity will go up to 4 and the maximum capacity will stay at 3
- The desired capacity will go up to 4 and the maximum capacity will stay at 4

A

Nothing

The Auto Scaling Group can’t go over the maximum capacity (you configured) during scale-out events.

51
Q

You have an Auto Scaling Group fronted by an Application Load Balancer. You have configured the ASG to use ALB Health Checks, and one EC2 instance has just been reported unhealthy. What will happen to the EC2 instance?
- The ASG will keep the instance running and re-start the application
- The ASG will detach the EC2 instance and leave it running
- The ASG will terminate the EC2 instance

A

The ASG will terminate the EC2 instance

You can configure the Auto Scaling Group to determine EC2 instances' health based on Application Load Balancer Health Checks instead of EC2 Status Checks (the default). When an EC2 instance fails the ALB Health Checks, it is marked unhealthy and terminated, and the ASG launches a replacement EC2 instance.

52
Q

Your boss asked you to scale your Auto Scaling Group based on the number of requests per minute your application makes to your database. What should you do?
- Create a CloudWatch Custom Metric, then create a CloudWatch Alarm on this metric to scale your ASG
- You politely tell him it’s impossible
- Enable Detailed Monitoring then create a CloudWatch alarm to scale your ASG

A

Create a CloudWatch Custom Metric, then create a CloudWatch Alarm on this metric to scale your ASG

There’s no CloudWatch Metric for “requests per minute” for backend-to-database connections. You need to create a CloudWatch Custom Metric, then create a CloudWatch Alarm.
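A sketch of the payload you would publish with CloudWatch's PutMetricData API (the namespace, metric name, and value are made up for illustration):

```python
# Payload shape for publishing a custom metric, e.g. with boto3:
# cloudwatch.put_metric_data(**metric_payload)
metric_payload = {
    "Namespace": "MyApp",  # hypothetical custom namespace
    "MetricData": [
        {
            "MetricName": "DatabaseRequestsPerMinute",  # hypothetical metric
            "Value": 4200.0,
            "Unit": "Count",
        }
    ],
}
```

A CloudWatch Alarm on this metric can then trigger the ASG scaling policy.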

53
Q

An application is deployed with an Application Load Balancer and an Auto Scaling Group. Currently, you manually scale the ASG and you would like to define a Scaling Policy that will ensure the average number of connections to your EC2 instances is around 1000. Which Scaling Policy should you use?
- Simple Scaling Policy
- Step Scaling Policy
- Target Tracking Policy
- Scheduled Scaling Policy

A

Target Tracking Policy
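With Target Tracking you state the goal directly ("keep this metric around 1000") and the ASG computes the scaling adjustments for you. A sketch of the configuration shape used with the Auto Scaling PutScalingPolicy API (the namespace and metric name are hypothetical):

```python
# TargetTrackingConfiguration tracking a customized metric: keep the
# average number of connections per instance around 1000.
target_tracking_configuration = {
    "TargetValue": 1000.0,
    "CustomizedMetricSpecification": {
        "Namespace": "MyApp",                   # hypothetical namespace
        "MetricName": "ActiveConnectionCount",  # hypothetical metric
        "Statistic": "Average",
    },
}
```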

54
Q

You have an ASG and a Network Load Balancer. The application on your ASG supports the HTTP protocol and is integrated with the Load Balancer health checks. You are currently using the TCP health checks. You would like to migrate to using HTTP health checks, what do you do?
- Migrate to an Application Load Balancer
- Migrate the health check to HTTP

A

Migrate the health check to HTTP

The NLB supports HTTP and HTTPS health checks, as well as TCP.

55
Q

You have a website hosted on EC2 instances in an Auto Scaling Group fronted by an Application Load Balancer. Currently, the website is served over HTTP, and you have been tasked to configure it to use HTTPS. You have created a certificate in ACM and attached it to the Application Load Balancer. What can you do to force users to access the website using HTTPS instead of HTTP?
- Send an email to all customers to use HTTPS instead of HTTP
- Configure the Application Load Balancer to redirect HTTP to HTTPS
- Configure the DNS record to redirect HTTP to HTTPS

A

Configure the Application Load Balancer to redirect HTTP to HTTPS
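This is configured as a rule action on the HTTP:80 listener; its shape in the ELBv2 API looks like this sketch:

```python
# Default action for an HTTP:80 listener that redirects every request
# to HTTPS on port 443, preserving host, path, and query string.
redirect_action = {
    "Type": "redirect",
    "RedirectConfig": {
        "Protocol": "HTTPS",
        "Port": "443",
        "Host": "#{host}",
        "Path": "/#{path}",
        "Query": "#{query}",
        "StatusCode": "HTTP_301",  # permanent redirect
    },
}
```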

56
Q

Amazon RDS supports the following databases, EXCEPT:
- MongoDB
- MySQL
- MariaDB
- Microsoft SQL Server

A

MongoDB

RDS supports MySQL, PostgreSQL, MariaDB, Oracle, MS SQL Server, and Amazon Aurora.

57
Q

You’re planning for a new solution that requires a MySQL database that must be available even in case of a disaster in one of the Availability Zones. What should you use?
- Create Read Replicas
- Enable Encryption
- Enable Multi-AZ

A

Enable Multi-AZ

Multi-AZ helps with disaster recovery when an entire AZ goes down. If you plan against an entire AWS Region going down, you should use backups and replication across AWS Regions.

58
Q

We have an RDS database that struggles to keep up with the demand of requests from our website. Our million users mostly read news, and we don’t post news very often. Which solution is NOT adapted to this problem?
- An ElastiCache Cluster
- RDS Multi-AZ
- RDS Read Replicas

A

RDS Multi-AZ

ElastiCache and RDS Read Replicas do indeed help with scaling reads.

59
Q

You have set up read replicas on your RDS database, but users are complaining that upon updating their social media posts, they do not see their updated posts right away. What is a possible cause for this?
- There must be a bug in your application
- Read Replicas have Asynchronous Replication, therefore it’s likely your users will only read Eventual Consistency
- You should have setup Multi-AZ instead

A

Read Replicas have Asynchronous Replication, therefore it’s likely your users will only read Eventual Consistency

60
Q

Which RDS (NOT Aurora) feature, when used, does not require you to change the SQL connection string?
- Multi-AZ
- Read Replicas

A

Multi-AZ

Multi-AZ keeps the same connection string regardless of which database is up.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
61
Q

Your application is running on a fleet of EC2 instances managed by an Auto Scaling Group behind an Application Load Balancer. Users have to constantly log back in and you don’t want to enable Sticky Sessions on your ALB as you fear it will overload some EC2 instances. What should you do?
- Use your own custom Load Balancer on EC2 instances instead of using ALB
- Store session data in RDS
- Store session data in ElastiCache
- Store session data in a shared EBS volume

A

Store session data in ElastiCache

Storing Session Data in ElastiCache is a common pattern to ensure different EC2 instances can retrieve your user’s state if needed.
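
The shared-session pattern can be sketched in plain Python (class names here are hypothetical; a real deployment would use a Redis client such as redis-py pointed at the ElastiCache endpoint):

```python
# Minimal sketch of the shared-session pattern. SharedSessionStore stands in
# for an ElastiCache Redis cluster; WebInstance stands in for one EC2
# instance behind the ALB. All names are illustrative.

class SharedSessionStore:
    def __init__(self):
        self._data = {}

    def set(self, session_id, state):
        self._data[session_id] = state

    def get(self, session_id):
        return self._data.get(session_id)

class WebInstance:
    def __init__(self, store):
        self.store = store

    def handle_login(self, session_id, user):
        # Session state goes to the shared store, not local memory
        self.store.set(session_id, {"user": user})

    def handle_request(self, session_id):
        return self.store.get(session_id)

store = SharedSessionStore()
a, b = WebInstance(store), WebInstance(store)
a.handle_login("sess-1", "alice")
# A different instance serves the next request, yet still sees the session:
print(b.handle_request("sess-1"))  # {'user': 'alice'}
```

Because no instance holds the session locally, the ALB can route any request to any instance without Sticky Sessions.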

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
62
Q

An analytics application is currently performing its queries against your main production RDS database. These queries run at any time of the day and slow down the RDS database which impacts your users’ experience. What should you do to improve the users’ experience?
- Set up a Read Replica
- Set up Multi-AZ
- Run the analytics queries at night

A

Set up a Read Replica

Read Replicas will help as your analytics application can now perform queries against it, and these queries won’t impact the main production RDS database.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
63
Q

You would like to ensure you have a replica of your database available in another AWS Region if a disaster happens to your main AWS Region. Which database do you recommend to implement this easily?
- RDS Read Replicas
- RDS Multi-AZ
- Aurora Read Replicas
- Aurora Global Database

A

Aurora Global Database

Aurora Global Database allows you to have Aurora Replicas in other AWS Regions, with up to 5 secondary Regions.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
64
Q

How can you enhance the security of your ElastiCache Redis Cluster by allowing users to access your ElastiCache Redis Cluster using their IAM Identities (e.g., Users, Roles)?
- Using Redis Authentication
- Using IAM Authentication
- Use Security Groups

A

Using IAM Authentication

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
65
Q

Your company has a production Node.js application that is using RDS MySQL 5.6 as its database. A new application programmed in Java will perform some heavy analytics workload to create a dashboard on an hourly basis. What is the most cost-effective solution you can implement to minimize disruption for the main application?
- Enable Multi-AZ for the RDS database and run the analytics workload on the standby database
- Create a Read Replica in a different AZ and run the analytics on the replica database
- Create a Read Replica in a different AZ and run the analytics workload on the source database

A

Create a Read Replica in a different AZ and run the analytics on the replica database

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
66
Q

You would like to create a disaster recovery strategy for your RDS PostgreSQL database so that in case of a regional outage the database can be quickly made available for both read and write workloads in another AWS Region. The DR database must be highly available. What do you recommend?
- Create a Read Replica in the same region and enable Multi-AZ on the main database
- Create a Read Replica in a different region and enable Multi-AZ on the Read Replica
- Create a Read Replica in the same region and enable Multi-AZ on the Read Replica
- Enable Multi-Region option on the main database

A

Create a Read Replica in a different region and enable Multi-AZ on the Read Replica

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
67
Q

You have migrated the MySQL database from on-premises to RDS. You have a lot of applications and developers interacting with your database. Each developer has an IAM user in the company’s AWS account. What is a suitable approach to give the developers access to the MySQL RDS DB instance without creating a DB user for each one?
- By default IAM users have access to your RDS database
- Use Amazon Cognito
- Enable IAM Database Authentication

A

Enable IAM Database Authentication

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
68
Q

Which of the following statements is true regarding replication in both RDS Read Replicas and Multi-AZ?
- Read Replica uses Asynchronous Replication and Multi-AZ uses Asynchronous Replication
- Read Replica uses Asynchronous Replication and Multi-AZ uses Synchronous Replication
- Read Replica uses Synchronous Replication and Multi-AZ uses Synchronous Replication
- Read Replica uses Synchronous Replication and Multi-AZ uses Asynchronous Replication

A

Read Replica uses Asynchronous Replication and Multi-AZ uses Synchronous Replication

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
69
Q

How do you encrypt an unencrypted RDS DB instance?
- Do it straight from AWS Console, select your RDS DB instance, choose Actions then Encrypt using KMS
- Do it straight from AWS Console, after stopping the RDS DB instance
- Create a snapshot of the unencrypted RDS DB instance, copy the snapshot and tick “Enable encryption”, then restore the RDS DB instance from the encrypted snapshot

A

Create a snapshot of the unencrypted RDS DB instance, copy the snapshot and tick “Enable encryption”, then restore the RDS DB instance from the encrypted snapshot

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
70
Q

For your RDS database, you can have up to ………… Read Replicas.
- 5
- 15
- 7

A

15

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
71
Q

Which RDS database technology does NOT support IAM Database Authentication?
- Oracle
- PostgreSQL
- MySQL

A

Oracle

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
72
Q

You have an un-encrypted RDS DB instance and you want to create Read Replicas. Can you configure the RDS Read Replicas to be encrypted?
- No
- Yes

A

No

You cannot create encrypted Read Replicas from an unencrypted RDS DB instance.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
73
Q

An application running in production is using an Aurora Cluster as its database. Your development team would like to run a scaled-down version of the application with the ability to perform some heavy workloads on an as-needed basis. Most of the time, the application will be unused. Your CIO has tasked you with helping the team achieve this while minimizing costs. What do you suggest?
- Use an Aurora Global Database
- Use an RDS Database
- Use Aurora Serverless
- Run Aurora on EC2, and write script to shut down the EC2 instance at night

A

Use Aurora Serverless

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
74
Q

How many Aurora Read Replicas can you have in a single Aurora DB Cluster?
- 5
- 10
- 15

A

15

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
75
Q

Amazon Aurora supports both …………………….. databases.
- MySQL and MariaDB
- MySQL and PostgreSQL
- Oracle and MariaDB
- Oracle and MS SQL Server

A

MySQL and PostgreSQL

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
76
Q

You work as a Solutions Architect for a gaming company. One of the games mandates that players are ranked in real-time based on their score. Your boss asked you to design then implement an effective and highly available solution to create a gaming leaderboard. What should you use?
- Use RDS for MySQL
- Use Amazon Aurora
- Use ElastiCache for Memcached
- Use ElastiCache for Redis - Sorted Sets

A

Use ElastiCache for Redis - Sorted Sets
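
Redis Sorted Sets keep members ordered by score, which is exactly what a real-time leaderboard needs. A pure-Python sketch of the semantics (in practice you would call ZADD / ZREVRANGE on the ElastiCache for Redis endpoint; the class and names here are illustrative):

```python
# Illustrative model of a Redis Sorted Set leaderboard. In a real system,
# add_score maps to ZADD and top maps to ZREVRANGE 0 n-1 on Redis.

class Leaderboard:
    def __init__(self):
        self.scores = {}  # player -> latest score

    def add_score(self, player, score):
        self.scores[player] = score

    def top(self, n):
        # Highest score first, like ZREVRANGE
        ranked = sorted(self.scores.items(), key=lambda kv: kv[1], reverse=True)
        return [player for player, _ in ranked[:n]]

board = Leaderboard()
board.add_score("alice", 3100)
board.add_score("bob", 4200)
board.add_score("carol", 2500)
print(board.top(2))  # ['bob', 'alice']
```

Redis maintains this ordering incrementally on write, so reads of the top-N are O(log N + N) rather than a full re-sort.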

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
77
Q

You need full customization of an Oracle Database on AWS. You would like to benefit from using the AWS services. What do you recommend?
- RDS for Oracle
- RDS Custom for Oracle
- Deploy Oracle on EC2

A

RDS Custom for Oracle

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
78
Q

You need to store long-term backups for your Aurora database for disaster recovery and audit purposes. What do you recommend?
- Enable Automated Backups
- Perform On Demand Backups
- Use Aurora Database Cloning

A

Perform On Demand Backups

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
79
Q

Your development team would like to perform a suite of read and write tests against your production Aurora database because they need access to production data as soon as possible. What do you advise?
- Create an Aurora Read Replica for them
- Do the test against the production database
- Make a DB Snapshot and Restore it into a new database
- Use the Aurora Cloning Feature

A

Use the Aurora Cloning Feature

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
80
Q

You have 100 EC2 instances connected to your RDS database and you see that during database maintenance, all your applications take a long time to reconnect to RDS due to poor application logic. How do you improve this?
- Fix all the applications
- Disable Multi-AZ
- Enable Multi-AZ
- Use an RDS Proxy

A

Use an RDS Proxy

This reduces the failover time by up to 66% and keeps connections active for your applications.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
81
Q

You have purchased mycoolcompany.com on Amazon Route 53 Registrar and would like the domain to point to your Elastic Load Balancer my-elb-1234567890.us-west-2.elb.amazonaws.com. Which Route 53 Record type must you use here?
- CNAME
- Alias

A

Alias

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
82
Q

You have deployed a new Elastic Beanstalk environment and would like to direct 5% of your production traffic to this new environment. This allows you to monitor CloudWatch metrics and ensure that no bugs exist in your new environment. Which Route 53 Record type allows you to do so?
- Simple
- Weighted
- Latency
- Failover

A

Weighted

Weighted Routing Policy allows you to redirect part of the traffic based on weight (e.g., percentage). It’s a common use case to send part of traffic to a new version of your application.
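
The weight-based selection can be sketched as a small function (a minimal model, not Route 53’s actual implementation; `roll` stands in for the resolver’s random draw in [0, 1)):

```python
# Sketch of how a Weighted Routing Policy resolves a query: each record is
# returned with probability weight / total_weight.

def pick_record(records, roll):
    """records: list of (value, weight). Returns the value whose
    cumulative weight bucket contains `roll` (0 <= roll < 1)."""
    total = sum(w for _, w in records)
    threshold = roll * total
    cumulative = 0
    for value, weight in records:
        cumulative += weight
        if threshold < cumulative:
            return value
    return records[-1][0]

# 95% of traffic to the current environment, 5% to the new one:
records = [("prod-env.example.com", 95), ("new-env.example.com", 5)]
print(pick_record(records, 0.50))  # prod-env.example.com
print(pick_record(records, 0.97))  # new-env.example.com
```

Changing the weights later (e.g., 50/50, then 0/100) gradually shifts traffic without touching the applications.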

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
83
Q

You have updated a Route 53 Record’s myapp.mydomain.com value to point to a new Elastic Load Balancer, but it looks like users are still redirected to the old ELB. What is a possible cause for this behavior?
- Because of the Alias record
- Because of the CNAME record
- Because of the TTL
- Because of Route 53 Health Checks

A

Because of the TTL

Each DNS record has a TTL (Time To Live), which tells clients how long to cache the value so they don’t overload the DNS Resolver with requests. The TTL value should strike a balance between how long the value is cached and how quickly a change propagates to clients.
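
The caching behavior the TTL drives can be sketched like this (a simplified model with an injected clock so the example is deterministic; real resolvers use wall time):

```python
# Minimal model of client-side DNS caching driven by a record's TTL.

class DnsCache:
    def __init__(self, clock):
        self.clock = clock   # callable returning current time in seconds
        self.entries = {}    # name -> (value, expires_at)

    def put(self, name, value, ttl):
        self.entries[name] = (value, self.clock() + ttl)

    def get(self, name):
        entry = self.entries.get(name)
        if entry is None:
            return None              # cache miss: must query the DNS Resolver
        value, expires_at = entry
        if self.clock() >= expires_at:
            del self.entries[name]   # TTL elapsed: stale value discarded
            return None
        return value

now = [0]
cache = DnsCache(lambda: now[0])
cache.put("myapp.mydomain.com", "old-elb-address", ttl=300)
now[0] = 200
print(cache.get("myapp.mydomain.com"))  # old-elb-address (still cached)
now[0] = 301
print(cache.get("myapp.mydomain.com"))  # None (expired; client re-resolves)
```

Until the TTL expires, clients keep using the cached (old ELB) value even though the Route 53 record already points elsewhere.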

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
84
Q

You have an application that’s hosted in two different AWS Regions us-west-1 and eu-west-2. You want your users to get the best possible user experience by minimizing the response time from application servers to your users. Which Route 53 Routing Policy should you choose?
- Multi Value
- Weighted
- Latency
- Geolocation

A

Latency

Latency Routing Policy will evaluate the latency between your users and AWS Regions, and help them get a DNS response that will minimize their latency (e.g. response time)

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
85
Q

You have a legal requirement that people in any country but France should NOT be able to access your website. Which Route 53 Routing Policy helps you in achieving this?
- Latency
- Simple
- Multi Value
- Geolocation

A

Geolocation

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
86
Q

You have purchased a domain on GoDaddy and would like to use Route 53 as the DNS Service Provider. What should you do to make this work?
- Request for a domain transfer
- Create a Private Hosted Zone and update the 3rd party Registrar NS records
- Create a Public Hosted Zone and update the Route 53 NS records
- Create a Public Hosted Zone and update the 3rd party Registrar NS records

A

Create a Public Hosted Zone and update the 3rd party Registrar NS records

Public Hosted Zones are meant to be used for people requesting your website through the Internet. Additionally, the NS records must be updated on the 3rd party Registrar so that DNS resolution is delegated to Route 53.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
87
Q

Which of the following are NOT valid Route 53 Health Checks?
- Health Check that monitors a SQS Queue
- Health Check that monitors an Endpoint
- Health Check that monitors other Health Checks
- Health Check that monitors CloudWatch Alarms

A

Health Check that monitors a SQS Queue

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
88
Q

Your website TriangleSunglasses.com is hosted on a fleet of EC2 instances managed by an Auto Scaling Group and fronted by an Application Load Balancer. To reduce costs, the ASG has been configured to scale on demand based on the traffic going through the ALB. To make the solution highly available, you have set the ASG’s minimum capacity to 2. How can you further reduce costs while respecting these requirements?
- Remove the ALB and use an Elastic IP Instead
- Reserve two EC2 Instances
- Reduce the minimum capacity to 1
- Reduce the minimum capacity to 0

A

Reserve two EC2 Instances

This is the way to save further costs, as the 2 EC2 instances will be running no matter what.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
89
Q

Which of the following will NOT help us while designing a STATELESS application tier?
- Store session data in Amazon RDS
- Store session data in Amazon ElastiCache
- Store session data in the client HTTP cookies
- Store session data on EBS Volumes

A

Store session data on EBS Volumes

EBS volumes are created in a specific AZ and can only be attached to one EC2 instance at a time.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
90
Q

You want to install software updates on 100s of Linux EC2 instances that you manage. You want to store these updates on shared storage that can be dynamically mounted on the EC2 instances without heavy operations. What do you suggest?
- Store the software updates on EBS and sync them using data replication software from one master in each AZ
- Store the software updates on EFS and mount EFS as a network drive at startup
- Package the software updates as an EBS snapshot and create EBS volumes for each new software update
- Store the software updates on Amazon RDS

A

Store the software updates on EFS and mount EFS as a network drive at startup

EFS is a network file system (NFS) that allows you to mount the same file system to 100s of EC2 instances. Storing software updates on an EFS allows each EC2 instance to access them.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
91
Q

As a Solutions Architect, you’re planning to migrate a complex ERP software suite to AWS Cloud. You’re planning to host the software on a set of Linux EC2 instances managed by an Auto Scaling Group. The software traditionally takes over an hour to set up on a Linux machine. How do you recommend speeding up the installation process when there’s a scale-out event?
- Use a Golden AMI
- Bootstrap using EC2 User Data
- Store the application in Amazon RDS
- Retrieve the application setup files from RDS

A

Use a Golden AMI

Golden AMI is an image that contains all your software installed and configured so that future EC2 instances can boot up quickly from that AMI.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
92
Q

You’re developing an application and would like to deploy it to Elastic Beanstalk with minimal cost. You should run it in ………………
- Single Instance Mode
- High Availability Mode

A

Single Instance Mode

The question mentions that you’re still in the development stage and you want to reduce costs. Single Instance Mode will create one EC2 instance and one Elastic IP.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
93
Q

You’re deploying your application to an Elastic Beanstalk environment but you notice that the deployment process is painfully slow. After reviewing the logs, you found that your dependencies are resolved on each EC2 instance each time you deploy. How can you speed up the deployment process with minimal impact?
- Remove some dependencies in your code
- Place the dependencies in Amazon EFS
- Create a Golden AMI that contains the dependencies and use that image to launch the EC2 instances

A

Create a Golden AMI that contains the dependencies and use that image to launch the EC2 instances

Golden AMI is an image that contains all your software, dependencies, and configurations, so that future EC2 instances can boot up quickly from that AMI.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
94
Q

You have a 25 GB file that you’re trying to upload to S3 but you’re getting errors. What is a possible solution for this?
- The file size limit on S3 is 5GB
- Update your bucket policy to allow the larger file
- Use Multi-Part upload when uploading files larger than 5GB
- Encrypt the file

A

Use Multi-Part upload when uploading files larger than 5GB

Multi-Part Upload is recommended as soon as the file is over 100 MB.
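
A quick sanity check of the sizing, using the documented S3 limits (5 GB max for a single PUT and per part, 5 MB minimum part size except the last part, at most 10,000 parts per upload):

```python
import math

# S3 multipart upload limits (documented service limits).
MIB = 1024 ** 2
GIB = 1024 ** 3
MIN_PART, MAX_PART, MAX_PARTS = 5 * MIB, 5 * GIB, 10_000

def part_count(file_size, part_size):
    """Number of parts needed to upload file_size bytes in part_size chunks,
    validated against the S3 multipart limits."""
    if not MIN_PART <= part_size <= MAX_PART:
        raise ValueError("part size out of S3 bounds")
    parts = math.ceil(file_size / part_size)
    if parts > MAX_PARTS:
        raise ValueError("too many parts; increase part size")
    return parts

# A 25 GB file exceeds the 5 GB single-PUT limit, but splits cleanly
# into 100 MiB parts:
print(part_count(25 * GIB, 100 * MIB))  # 256
```

The parts upload independently (and can be retried individually), which is why multipart is also recommended well below 5 GB, from about 100 MB.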

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
95
Q

You’re getting errors while trying to create a new S3 bucket named “dev”. You’re using a new AWS Account with no S3 buckets created before. What is a possible cause for this?
- You’re missing IAM permissions to create an S3 bucket
- S3 bucket names must be globally unique and “dev” is already taken

A

S3 bucket names must be globally unique and “dev” is already taken

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
96
Q

You have enabled versioning in your S3 bucket which already contains a lot of files. Which version will the existing files have?
- 1
- 0
- -1
- null

A

null

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
97
Q

You have updated an S3 bucket policy to allow IAM users to read/write files in the S3 bucket, but one of the users complains that he can’t perform a PutObject API call. What is a possible cause for this?
- The S3 bucket policy must be wrong
- The user is lacking permissions
- The IAM user must have an explicit DENY in the attached IAM Policy
- You need to contact AWS Support to lift this limit

A

The IAM user must have an explicit DENY in the attached IAM Policy

Explicit DENY in an IAM Policy will take precedence over an S3 bucket policy.
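
The precedence rule can be sketched as a tiny evaluation function (a simplified model of IAM policy evaluation, not the full algorithm):

```python
# Simplified IAM evaluation: an explicit Deny anywhere (IAM policy or
# bucket policy) wins over any Allow; with no matching statement the
# result is an implicit deny.

def evaluate(statements):
    """statements: iterable of 'Allow'/'Deny' effects that match the
    request. Returns the final decision."""
    if any(s == "Deny" for s in statements):
        return "Deny"    # explicit deny always takes precedence
    if any(s == "Allow" for s in statements):
        return "Allow"
    return "Deny"        # implicit deny: nothing allowed it

# Bucket policy allows PutObject, but the user's IAM policy explicitly denies:
print(evaluate(["Allow", "Deny"]))  # Deny
```

This is why fixing the bucket policy alone cannot help: the explicit DENY in the user’s IAM policy must be removed.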

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
98
Q

You want the content of an S3 bucket to be fully available in different AWS Regions. That will help your team perform data analysis at the lowest latency and cost possible. What S3 feature should you use?
- Amazon CloudFront Distributions
- S3 Versioning
- S3 Static Website Hosting
- S3 Replication

A

S3 Replication

S3 Replication allows you to replicate data from an S3 bucket to another in the same/different AWS Region.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
99
Q

You have 3 S3 buckets. One source bucket A, and two destination buckets B and C in different AWS Regions. You want to replicate objects from bucket A to both bucket B and C. How would you achieve this?
- Configure replication from bucket A to bucket B, then from bucket A to bucket C
- Configure replication from bucket A to bucket B, then from bucket B to bucket C
- Configure replication from bucket A to bucket C, then from bucket C to bucket B

A

Configure replication from bucket A to bucket B, then from bucket A to bucket C

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
100
Q

Which of the following is NOT a Glacier Deep Archive retrieval mode?
- Expedited (1-5 minutes)
- Standard (12 hours)
- Bulk (48 hours)

A

Expedited (1-5 minutes)

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
101
Q

Which of the following is NOT a Glacier Flexible retrieval mode?
- Instant (10 seconds)
- Expedited (1-5 minutes)
- Standard (3-5 hours)
- Bulk (5-12 hours)

A

Instant (10 seconds)

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
102
Q

How can you be notified when there’s an object uploaded to your S3 bucket?
- S3 Select
- S3 Access Logs
- S3 Event Notifications
- S3 Analytics

A

S3 Event Notifications

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
103
Q

You have an S3 bucket that has S3 Versioning enabled. This S3 bucket has a lot of objects, and you would like to remove old object versions to reduce costs. What’s the best approach to automate the deletion of these old object versions?
- S3 Lifecycle Rules - Transition Actions
- S3 Lifecycle Rules - Expiration Actions
- S3 Access Logs

A

S3 Lifecycle Rules - Expiration Actions

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
104
Q

How can you automate the transition of S3 objects between their different tiers?
- AWS Lambda
- CloudWatch Events
- S3 Lifecycle Rules

A

S3 Lifecycle Rules

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
105
Q

While you’re uploading large files to an S3 bucket using Multi-part Upload, there are a lot of unfinished parts stored in the S3 bucket due to network issues. You are not using these unfinished parts and they cost you money. What is the best approach to remove these unfinished parts?
- Use AWS Lambda to loop on each old/unfinished part and delete them
- Request AWS Support to help you delete old/unfinished parts
- Use an S3 Lifecycle Policy to automate old/unfinished parts deletion

A

Use an S3 Lifecycle Policy to automate old/unfinished parts deletion

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
106
Q

You are looking to get recommendations for S3 Lifecycle Rules. How can you analyze the optimal number of days to move objects between different storage tiers?
- S3 Inventory
- S3 Analytics
- S3 Lifecycle Rules Advisor

A

S3 Analytics

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
107
Q

You are looking to build an index of your files in S3, using Amazon RDS PostgreSQL. To build this index, it is necessary to read the first 250 bytes of each object in S3, which contains some metadata about the content of the file itself. There are over 100,000 files in your S3 bucket, amounting to 50 TB of data. How can you build this index efficiently?
- Use the RDS Import feature to load the data from S3 to PostgreSQL, and run a SQL query to build the index
- Create an application that will traverse the S3 bucket, read all the files one by one, extract the first 250 bytes, and store that information in RDS
- Create an application that will traverse the S3 bucket, issue a Byte Range Fetch for the first 250 bytes, and store that information in RDS
- Create an application that will traverse the S3 bucket, use S3 Select to get the first 250 bytes, and store that information in RDS

A

Create an application that will traverse the S3 bucket, issue a Byte Range Fetch for the first 250 bytes, and store that information in RDS
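
A Byte-Range Fetch is just an S3 GetObject call with an HTTP Range header; with boto3 it would be passed as the `Range` parameter of `get_object`. A small helper to build that header (the helper function is ours, for illustration):

```python
# Build the HTTP Range header used by an S3 Byte-Range Fetch. Per the HTTP
# spec, the end index is inclusive, so the first 250 bytes are 0-249.

def range_header(offset, length):
    """Range header for `length` bytes starting at byte `offset`."""
    return f"bytes={offset}-{offset + length - 1}"

print(range_header(0, 250))  # bytes=0-249
# A hedged sketch of the boto3 call it would accompany:
#   s3.get_object(Bucket="my-bucket", Key=key, Range=range_header(0, 250))
```

Fetching 250 bytes per object instead of whole files turns a 50 TB scan into roughly 25 MB of transfer for 100,000 objects.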

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
108
Q

You have a large dataset stored on-premises that you want to upload to the S3 bucket. The dataset is divided into 10 GB files. You have good bandwidth but your Internet connection isn’t stable. What is the best way to upload this dataset to S3 and ensure that the process is fast and avoid any problems with the Internet connection?
- Use Multi-part Upload Only
- Use S3 Select & Use S3 Transfer Acceleration
- Use S3 Multi-part Upload & S3 Transfer Acceleration

A

Use S3 Multi-part Upload & S3 Transfer Acceleration

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
109
Q

You would like to retrieve a subset of your dataset stored in S3 with the .csv format. You would like to retrieve a month of data and only 3 columns out of 10, to minimize compute and network costs. What should you use?
- S3 Analytics
- S3 Access Logs
- S3 Select
- S3 Inventory

A

S3 Select

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
110
Q

A company is preparing for a compliance and regulatory review on its AWS infrastructure. Currently, their files are stored in S3 buckets that are not encrypted, and they must be encrypted for the review. Which S3 feature allows them to encrypt all files in their S3 buckets in the most efficient and cost-effective way?
- S3 Access Points
- S3 Cross-Region Replication
- S3 Batch Operations
- S3 Lifecycle Rules

A

S3 Batch Operations

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
111
Q

Your client wants to make sure that file encryption is happening in S3, but he wants to fully manage the encryption keys and never store them in AWS. You recommend him to use ……………………….
- SSE-S3
- SSE-KMS
- SSE-C
- Client-Side Encryption

A

SSE-C

With SSE-C, the encryption happens in AWS, but you fully manage the encryption keys: you supply the key with every request, and AWS never stores it.
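
For reference, an SSE-C request carries the customer key in three documented S3 request headers; a sketch of building them (the helper function is illustrative; in production the request must go over HTTPS since the key travels with it):

```python
import base64
import hashlib

# Build the three S3 SSE-C headers: the algorithm, the base64-encoded
# 256-bit key, and the base64-encoded MD5 digest of the key (used by S3
# to verify the key arrived intact).

def sse_c_headers(key: bytes):
    assert len(key) == 32, "SSE-C requires a 256-bit key"
    return {
        "x-amz-server-side-encryption-customer-algorithm": "AES256",
        "x-amz-server-side-encryption-customer-key":
            base64.b64encode(key).decode(),
        "x-amz-server-side-encryption-customer-key-MD5":
            base64.b64encode(hashlib.md5(key).digest()).decode(),
    }

headers = sse_c_headers(b"\x01" * 32)
print(headers["x-amz-server-side-encryption-customer-algorithm"])  # AES256
```

The same headers must also be supplied on every GET, because AWS keeps no copy of the key to decrypt with.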

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
112
Q

A company you’re working for wants their data stored in S3 to be encrypted. They don’t mind the encryption keys stored and managed by AWS, but they want to maintain control over the rotation policy of the encryption keys. You recommend them to use ………………..
- SSE-S3
- SSE-KMS
- SSE-C
- Client-Side Encryption

A

SSE-KMS

With SSE-KMS, the encryption happens in AWS, and the encryption keys are managed and stored by AWS KMS, but you have full control over the rotation policy of the encryption key.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
113
Q

Your company does not trust AWS for the encryption process and wants it to happen on the application. You recommend them to use ………………..
- SSE-S3
- SSE-KMS
- SSE-C
- Client-Side Encryption

A

Client-Side Encryption

With Client-Side Encryption, you perform the encryption yourself, with full control over the encryption keys, and send only the encrypted data to AWS. AWS does not know your encryption keys and cannot decrypt your data.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
114
Q

You have a website that loads files from an S3 bucket. When you try the URL of the files directly in your Chrome browser it works, but when a website with a different domain tries to load these files it doesn’t. What’s the problem?
- The Bucket policy is wrong
- The IAM policy is wrong
- CORS is wrong
- Encryption is wrong

A

CORS is wrong

Cross-Origin Resource Sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. To learn more about CORS, go here: https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
115
Q

An e-commerce company has its customers and orders data stored in an S3 bucket. The company’s CEO wants to generate a report to show the list of customers and the revenue for each customer. Customer data stored in files on the S3 bucket has sensitive information that we don’t want to expose in the report. How do you recommend the report can be created without exposing sensitive information?
- Use S3 Object Lambda to change the objects before they are retrieved by the report generator application
- Create another S3 bucket. Create a lambda function to process each file, remove the sensitive information, and then move them to the new S3 bucket
- Use S3 Object Lock to lock the sensitive information from being fetched by the report generator application

A

Use S3 Object Lambda to change the objects before they are retrieved by the report generator application

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
116
Q

You suspect that some of your employees try to access files in an S3 bucket that they don’t have access to. How can you verify this is indeed the case without them noticing?
- Enable S3 Access Logs and analyze them using Athena
- Restrict their IAM policies and look at CloudTrail logs
- Use a bucket policy

A

Enable S3 Access Logs and analyze them using Athena

S3 Access Logs log all the requests made to S3 buckets and Amazon Athena can then be used to run serverless analytics on top of the log files.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
117
Q

You are looking to provide temporary URLs to a growing list of federated users to allow them to perform a file upload on your S3 bucket to a specific location. What should you use?
- S3 CORS
- S3 Pre-Signed URL
- S3 Bucket Policies

A

S3 Pre-Signed URL

S3 Pre-Signed URLs are temporary URLs that you generate to grant time-limited access to some actions in your S3 bucket.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
118
Q

For compliance reasons, your company has a policy mandating that database backups must be retained for 4 years. It shouldn’t be possible to erase them. What do you recommend?
- Glacier Vaults with Vault Lock Policies
- EFS network drives with restrictive Linux Permissions
- S3 with Bucket Policies

A

Glacier Vaults with Vault Lock Policies

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
119
Q

You would like all your files in an S3 bucket to be encrypted by default. What is the optimal way of achieving this?
- Use a bucket policy that forces HTTPS connections
- Do nothing, Amazon S3 automatically encrypts new objects using Server-Side Encryption with S3-Managed Keys (SSE-S3)
- Enable Versioning

A

Do nothing, Amazon S3 automatically encrypts new objects using Server-Side Encryption with S3-Managed Keys (SSE-S3)

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
120
Q

You have enabled versioning and want to be extra careful when it comes to deleting files on an S3 bucket. What should you enable to prevent accidental permanent deletions?
- Use a bucket policy
- Enable MFA Delete
- Encrypt the files
- Disable Versioning

A

Enable MFA Delete

MFA Delete forces users to use MFA codes before deleting S3 objects. It’s an extra level of security to prevent accidental deletions.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
121
Q

A company has its data and files stored on some S3 buckets. Some of these files need to be kept for a predefined period of time and protected from being overwritten and deletion according to company compliance policy. Which S3 feature helps you in doing this?
- S3 Object Lock - Retention Governance Mode
- S3 Versioning
- S3 Object Lock - Retention Compliance Mode
- S3 Glacier Vault Lock

A

S3 Object Lock - Retention Compliance Mode

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
122
Q

Which of the following S3 Object Lock configurations allows you to prevent an object or its versions from being overwritten or deleted indefinitely and gives you the ability to remove it manually?
- Retention Governance Mode
- Retention Compliance Mode
- Legal Hold

A

Legal Hold

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
123
Q

You have a CloudFront Distribution that serves your website hosted on a fleet of EC2 instances behind an Application Load Balancer. All your clients are from the United States, but you found that some malicious requests are coming from other countries. What should you do to only allow users from the US and block other countries?
- Use CloudFront Geo Restriction
- Use Origin Access Control
- Set up a security group and attach it to your CloudFront Distribution
- Use a Route 53 Latency record and attach it to CloudFront

A

Use CloudFront Geo Restriction

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
124
Q

You have a static website hosted on an S3 bucket. You have created a CloudFront Distribution that points to your S3 bucket to better serve your requests and improve performance. After a while, you noticed that users can still access your website directly from the S3 bucket. You want to force users to access the website only through CloudFront. How would you achieve that?
- Send an email to your clients and tell them not to use the S3 endpoint
- Configure your CloudFront Distribution and create an Origin Access Control (OAC), then update your S3 Bucket Policy to only accept requests from your CloudFront Distribution
- Use S3 Access Points to redirect clients to CloudFront

A

Configure your CloudFront Distribution and create an Origin Access Control (OAC), then update your S3 Bucket Policy to only accept requests from your CloudFront Distribution

125
Q

What does this S3 bucket policy do?

{
    "Version": "2012-10-17",
    "Id": "Mystery policy",
    "Statement": [{
        "Sid": "What could it be?",
        "Effect": "Allow",
        "Principal": {
           "Service": "cloudfront.amazonaws.com"
        },
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::examplebucket/*",
        "Condition": {
            "StringEquals": {
                "AWS:SourceArn": "arn:aws:cloudfront::123456789012:distribution/EDFDVBD6EXAMPLE"
            }
        }
    }]
}
- Forces GetObject request to be encrypted if coming from CloudFront
- Only allows the S3 bucket content to be accessed from your CloudFront Distribution
- Only allows GetObject type of request on the S3 bucket from anybody
A

Only allows the S3 bucket content to be accessed from your CloudFront Distribution

126
Q

A WordPress website is hosted in a set of EC2 instances in an EC2 Auto Scaling Group and fronted by a CloudFront Distribution which is configured to cache the content for 3 days. You have released a new version of the website and want to release it immediately to production without waiting 3 days for the cached content to expire. What is the easiest and most efficient way to solve this?
- Open a support ticket with AWS Support to remove the CloudFront Cache
- CloudFront Cache Invalidation
- EC2 Cache Invalidation

A

CloudFront Cache Invalidation
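
As a sketch of what an invalidation looks like in practice, the request below expires every cached object with the `/*` path pattern. The distribution ID is the example value from the bucket policy card; the actual boto3 call is commented out.

```python
# Sketch of a CloudFront cache invalidation request. Invalidating "/*"
# expires every cached object so the new site version is fetched from
# the origin. Distribution ID is a placeholder.
import time

def invalidation_request(distribution_id, paths):
    return {
        "DistributionId": distribution_id,
        "InvalidationBatch": {
            "Paths": {"Quantity": len(paths), "Items": paths},
            "CallerReference": str(time.time()),  # must be unique per request
        },
    }

req = invalidation_request("EDFDVBD6EXAMPLE", ["/*"])
# boto3.client("cloudfront").create_invalidation(**req)
```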

127
Q

A company is deploying a media-sharing website to AWS. They are going to use CloudFront to deliver the content with low latency to their customers, who are located in the US and Europe only. After a while, they noticed huge CloudFront costs. Which CloudFront feature allows you to decrease costs by targeting only the US and Europe?
- CloudFront Cache Invalidation
- CloudFront Price Classes
- CloudFront Cache Behavior
- Origin Access Control

A

CloudFront Price Classes

128
Q

A company is migrating a web application to AWS Cloud, where it will run on a set of EC2 instances in an EC2 Auto Scaling Group. The web application is made of multiple components, so they need host-based routing to route requests to specific components. The application is used by many customers and therefore must have a static IP address that customers can whitelist in their firewalls. As the customers are distributed around the world, it must also provide low latency to all of them. Which AWS service can help you assign a static IP address and provide low latency across the globe?
- AWS Global Accelerator + Application Load Balancer
- Amazon CloudFront
- Network Load Balancer
- Application Load Balancer

A

AWS Global Accelerator + Application Load Balancer

129
Q

You need to move hundreds of terabytes into Amazon S3, then process the data using a fleet of EC2 instances. You have a 1 Gbit/s broadband connection. You would like to move the data faster and possibly process it while in transit. What do you recommend?
- Use your network
- Use Snowcone
- Use AWS Data Migration
- Use Snowball Edge

A

Use Snowball Edge

Snowball Edge is the right answer as it comes with computing capabilities and allows you to pre-process the data while it’s being moved into Snowball.

130
Q

You want to expose virtually infinite storage for your tape backups. You want to keep the same software you’re using and want an iSCSI compatible interface. What do you use?
- AWS Snowball
- AWS Storage Gateway - Tape Gateway
- AWS Storage Gateway - Volume Gateway
- AWS Storage Gateway - S3 File Gateway

A

AWS Storage Gateway - Tape Gateway

131
Q

Your EC2 Windows Servers need to share some data by having a Network File System mounted on them which respects the Windows security mechanisms and has integration with Microsoft Active Directory. What do you recommend?
- Amazon FSx for Windows (File Server)
- Amazon EFS
- Amazon FSx for Lustre
- S3 File Gateway

A

Amazon FSx for Windows (File Server)

132
Q

You have hundreds of terabytes that you want to migrate to Amazon S3 as soon as possible. You tried using your network bandwidth, and the upload would take around 3 weeks to complete. What is the recommended approach in this situation?
- AWS Storage Gateway - Volume Gateway
- S3 Multi-part Upload
- AWS Snowball Edge
- AWS Data Migration Service

A

AWS Snowball Edge

133
Q

You have a large dataset stored in S3 that you want to access from on-premises servers using the NFS or SMB protocol. Also, you want to authenticate access to these files through on-premises Microsoft AD. What would you use?
- AWS Storage Gateway - Volume Gateway
- AWS Storage Gateway - S3 File Gateway
- AWS Storage Gateway - Tape Gateway
- AWS Data Migration Service

A

AWS Storage Gateway - S3 File Gateway

134
Q

You are planning to migrate your company’s infrastructure from on-premises to AWS Cloud. You have an on-premises Microsoft Windows File Server that you want to migrate. What is the most suitable AWS service you can use?
- Amazon FSx for Windows (File Server)
- AWS Storage Gateway - S3 File Gateway
- AWS Managed Microsoft AD

A

Amazon FSx for Windows (File Server)

135
Q

You would like to have a distributed POSIX compliant file system that will allow you to maximize the IOPS in order to perform some High-Performance Computing (HPC) and genomics computational research. This file system has to easily scale to millions of IOPS. What do you recommend?
- EFS with Max. IO enabled
- Amazon FSx for Lustre
- Amazon S3 mounted on the EC2 instances
- EC2 Instance Store

A

Amazon FSx for Lustre

136
Q

Which deployment option of the FSx for Lustre file system provides long-term storage that's replicated within the same AZ?
- Scratch File System
- Persistent File System

A

Persistent File System

Provides long-term storage where data is replicated within the same AZ. Failed file servers are replaced within minutes.

137
Q

Which of the following protocols is NOT supported by AWS Transfer Family?
- File Transfer Protocol (FTP)
- File Transfer Protocol over SSL (FTPS)
- Transport Layer Security (TLS)
- Secure File Transfer Protocol (SFTP)

A

Transport Layer Security (TLS)

AWS Transfer Family is a managed service for file transfers into and out of S3 or EFS using the SFTP, FTPS, and FTP protocols. TLS on its own is a security protocol, not a file transfer protocol, so it is not supported.

138
Q

A company uses a lot of files and data which is stored in an FSx for Windows File Server storage on AWS. Those files are currently used by the resources hosted on AWS. There’s a requirement for those files to be accessed on-premises with low latency. Which AWS service can help you achieve this?
- S3 File Gateway
- FSx for Windows File Server On-Premises
- FSx File Gateway
- Volume Gateway

A

FSx File Gateway

139
Q

A Solutions Architect is working on planning the migration of a startup company from on-premises to AWS. Currently, their infrastructure consists of many servers and 30 TB of data hosted on a shared NFS storage. He has decided to use Amazon S3 to host the data. Which AWS service can efficiently migrate the data from on-premises to S3?
- AWS Storage Tape Gateway
- Amazon EBS
- AWS Transfer Family
- AWS DataSync

A

AWS DataSync

140
Q

Which AWS service is best suited to migrate a large amount of data from an S3 bucket to an EFS file system?
- AWS Snowball
- AWS DataSync
- AWS Transfer Family
- AWS Backup

A

AWS DataSync

141
Q

A Machine Learning company is working on a set of datasets that are hosted on S3 buckets. The company decided to release those datasets to the public to be useful to others in their research, but they don't want to configure the S3 bucket to be public. Also, those datasets should be exposed over the FTP protocol. How can they meet the requirement efficiently and with the least effort?
- Use AWS Transfer Family
- Create an EC2 instance with an FTP server installed then copy the data from the S3 to the EC2 instance
- Use AWS Storage Gateway
- Copy the data from S3 to an EFS file system, then expose them over the FTP protocol

A

Use AWS Transfer Family

142
Q

Amazon FSx for NetApp ONTAP is compatible with the following protocols, EXCEPT ………………
- NFS
- SMB
- FTP
- iSCSI

A

FTP

143
Q

Which AWS service is best suited when migrating from an on-premises ZFS file system to AWS?
- Amazon FSx for OpenZFS
- Amazon FSx for NetApp ONTAP
- Amazon FSx for Windows File Server
- Amazon FSx for Lustre

A

Amazon FSx for OpenZFS

144
Q

A company is running Amazon S3 File Gateway to host their data on S3 buckets and is able to mount them on-premises using SMB. The data is currently hosted in the S3 Standard storage class and there is a requirement to reduce S3 costs, so they have decided to migrate some of that data to S3 Glacier. What is the most efficient way to move the data to S3 Glacier automatically?
- Create a Lambda function to migrate data to S3 Glacier and periodically trigger it every day using Amazon EventBridge
- Use S3 Batch Operations to loop through S3 files and move them to S3 Glacier every day
- Use S3 Lifecycle Policy
- Use AWS DataSync to replicate data to Glacier every day
- Configure S3 File Gateway to send the data to S3 Glacier directly

A

Use S3 Lifecycle Policy
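
A Lifecycle rule like the sketch below would do the migration automatically. The rule ID, prefix, and 30-day threshold are assumptions for illustration; the call applying it is commented out.

```python
# Sketch of an S3 Lifecycle rule: transition objects under "archive/"
# to the Glacier storage class 30 days after creation. Rule ID, prefix,
# and day count are illustrative assumptions.

lifecycle_config = {
    "Rules": [{
        "ID": "archive-to-glacier",
        "Status": "Enabled",
        "Filter": {"Prefix": "archive/"},
        "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
    }]
}
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="examplebucket", LifecycleConfiguration=lifecycle_config)
```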

145
Q

You have on-premises sensitive files and documents that you want to regularly synchronize to AWS to keep another copy. Which AWS service can help you with that?
- AWS Database Migration Service
- Amazon EFS
- AWS DataSync

A

AWS DataSync

AWS DataSync is an online data transfer service that simplifies, automates, and accelerates moving data between on-premises storage systems and AWS Storage services, as well as between AWS Storage services.

146
Q

You have an e-commerce website and you are preparing for Black Friday, the biggest sale of the year. You expect your traffic to increase by 100x. Your website is already using an SQS Standard Queue, and you're running a fleet of EC2 instances in an Auto Scaling Group to consume SQS messages. What should you do to prepare your SQS Queue?
- Contact AWS Support to pre-warm your SQS Standard Queue
- Enable Auto Scaling in your SQS Queue
- Increase the capacity of the SQS Queue
- Do nothing, SQS Scales automatically

A

Do nothing, SQS Scales automatically

147
Q

You have an SQS Queue where each consumer polls 10 messages at a time and finishes processing them in 1 minute. After a while, you noticed that the same SQS messages are received by different consumers resulting in your messages being processed more than once. What should you do to resolve this issue?
- Enable Long Polling
- Add DelaySeconds parameter to the messages while being produced
- Increase the Visibility Timeout
- Decrease the Visibility Timeout

A

Increase the Visibility Timeout

SQS Visibility Timeout is a period of time during which Amazon SQS prevents other consumers from receiving and processing a message. A message is hidden from other consumers only after it has been received from the queue. Increasing the Visibility Timeout gives the consumer more time to process the message and prevents it from being read more than once. (default: 30 sec., min.: 0 sec., max.: 12 hours)
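
A minimal sketch of the fix, assuming the observed 1-minute batch time from the question and an illustrative 6x safety margin (the queue URL is a placeholder and the API call is commented out):

```python
# Sketch: raise the queue's default Visibility Timeout so a consumer
# has comfortably more time than its 1-minute batch takes. The 6x
# margin is an assumption, not a prescribed value.

processing_seconds = 60                       # time to process 10 messages
visibility_timeout = processing_seconds * 6   # generous safety margin

# SQS allows a Visibility Timeout between 0 seconds and 12 hours
assert 0 <= visibility_timeout <= 12 * 3600

# SQS attribute values are passed as strings
attrs = {"VisibilityTimeout": str(visibility_timeout)}
# boto3.client("sqs").set_queue_attributes(
#     QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/jobs",
#     Attributes=attrs)
```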

148
Q

Which SQS Queue type allows your messages to be processed exactly once and in order?
- SQS Standard Queue
- SQS Dead Letter Queue
- SQS Delay Queue
- SQS FIFO Queue

A

SQS FIFO Queue

SQS FIFO (First-In-First-Out) Queues have all the capabilities of SQS Standard Queues, plus the following two features. First, the order in which messages are sent and received is strictly preserved, and a message is delivered once and remains available until a consumer processes and deletes it. Second, duplicate messages are not introduced into the queue.

149
Q

You have 3 different applications to which you'd like to send the same message. All 3 applications are using SQS. Which approach would you choose?
- Use SQS Replication Feature
- Use SNS + SQS Fan Out Pattern
- Send messages individually to 3 SQS queues

A

Use SNS + SQS Fan Out Pattern

This is a common pattern where only one message is sent to the SNS topic and then “fan-out” to multiple SQS queues. This approach has the following features: it’s fully decoupled, no data loss, and you have the ability to add more SQS queues (more applications) over time.
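
The wiring for the pattern can be sketched as below: one topic, one subscription per queue. The topic and queue ARNs are placeholders; the subscribe calls are commented out.

```python
# Sketch of SNS -> SQS fan-out: a single topic fans one published
# message out to several queue subscriptions. ARNs are placeholders.

topic_arn = "arn:aws:sns:us-east-1:123456789012:orders"
queue_arns = [
    f"arn:aws:sqs:us-east-1:123456789012:{name}"
    for name in ("billing", "shipping", "analytics")
]

subscriptions = [
    {
        "TopicArn": topic_arn,
        "Protocol": "sqs",
        "Endpoint": arn,
        # deliver the raw message body, without the SNS JSON envelope
        "Attributes": {"RawMessageDelivery": "true"},
    }
    for arn in queue_arns
]
# sns = boto3.client("sns")
# for sub in subscriptions:
#     sns.subscribe(**sub)
```

Adding a fourth application later is just one more subscription, which is why the pattern is described as fully decoupled.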

150
Q

You have a Kinesis data stream with 6 shards provisioned. This data stream usually receives 5 MB/s of data and sends out 8 MB/s. Occasionally, your traffic spikes up to 2x and you get a ProvisionedThroughputExceeded exception. What should you do to resolve the issue?
- Add more Shards
- Enable Kinesis Replication
- Use SQS as a buffer to Kinesis

A

Add more Shards

The capacity limits of a Kinesis data stream are defined by the number of shards within the data stream. The limits can be exceeded by either data throughput or the number of reading data calls. Each shard allows for 1 MB/s incoming data and 2 MB/s outgoing data. You should increase the number of shards within your data stream to provide enough capacity.

151
Q

You have a website where you want to analyze clickstream data such as the sequence of clicks a user makes, the amount of time a user spends, and where the navigation begins and how it ends. You decided to use Amazon Kinesis, so you have configured the website to send this clickstream data to a Kinesis data stream. While checking the data sent to your Kinesis data stream, you found that the users' data is not ordered and the data for one individual user is spread across many shards. How would you fix this problem?
- There are too many shards, you should use only 1 shard
- You shouldn’t use multiple consumers, only one and it should re-order data
- For each record sent to Kinesis add a partition key that represents the identity of the user

A

For each record sent to Kinesis add a partition key that represents the identity of the user

Kinesis Data Stream uses the partition key associated with each data record to determine which shard a given data record belongs to. When you use the identity of each user as the partition key, this ensures the data for each user is ordered hence sent to the same shard.
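
A sketch of a producer that keys records by user identity (stream name, user IDs, and event shape are illustrative; the put_record call is commented out):

```python
# Sketch: produce clickstream records keyed by user identity. Records
# with the same PartitionKey hash to the same shard, so each user's
# events stay in order.
import json

def click_record(stream, user_id, event):
    return {
        "StreamName": stream,
        "Data": json.dumps(event).encode(),
        "PartitionKey": user_id,  # same key -> same shard -> ordered per user
    }

r1 = click_record("clickstream", "user-42", {"page": "/home"})
r2 = click_record("clickstream", "user-42", {"page": "/cart"})
assert r1["PartitionKey"] == r2["PartitionKey"]  # both land on one shard
# boto3.client("kinesis").put_record(**r1)
```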

152
Q

You are running an application that produces a large amount of real-time data that you want to load into S3 and Redshift. Also, this data needs to be transformed before being delivered to its destination. Which architecture would you choose?
- SQS + AWS Lambda
- SNS + HTTP Endpoint
- Kinesis Data Streams + Kinesis Data Firehose

A

Kinesis Data Streams + Kinesis Data Firehose

This is a perfect combination of technologies for loading near real-time data into S3 and Redshift. Kinesis Data Firehose supports custom data transformations using AWS Lambda.

153
Q

Which of the following is NOT a supported subscriber for AWS SNS?
- Amazon Kinesis Data Streams
- Amazon SQS
- HTTP(S) Endpoint
- AWS Lambda

A

Amazon Kinesis Data Streams

Note: Kinesis Data Firehose is now supported, but not Kinesis Data Streams.

154
Q

Which AWS service helps you when you want to send email notifications to your users?
- Amazon SQS with AWS Lambda
- Amazon SNS
- Amazon Kinesis

A

Amazon SNS

155
Q

You’re running many micro-services applications on-premises and they communicate using a message broker that supports MQTT protocol. You’re planning to migrate these applications to AWS without re-engineering the applications and modifying the code. Which AWS service allows you to get a managed message broker that supports the MQTT protocol?
- Amazon SQS
- Amazon SNS
- Amazon Kinesis
- Amazon MQ

A

Amazon MQ

Amazon MQ supports industry-standard APIs such as JMS and NMS, and protocols for messaging, including AMQP, STOMP, MQTT, and WebSocket.

156
Q

An e-commerce company is preparing for a big marketing promotion that will bring millions of transactions. Their website is hosted on EC2 instances in an Auto Scaling Group and they are using Amazon Aurora as their database. In the last promotion, Aurora became a bottleneck and many transactions failed because the database wasn't prepared to handle that volume of transactions. What do you recommend to handle those transactions and prevent any failed transactions?
- Use SQS as a buffer to write to Aurora
- Host the website in AWS Fargate instead of EC2 instances
- Migrate Aurora to RDS for SQL Server

A

Use SQS as a buffer to write to Aurora

157
Q

A company is using Amazon Kinesis Data Streams to ingest clickstream data and then run some analytical processes on it. There is a campaign in the next few days and the traffic is unpredictable, possibly growing up to 100x. Which Kinesis Data Stream capacity mode do you recommend?
- Provisioned Mode
- On-demand Mode

A

On-demand Mode

158
Q

You have multiple Docker-based applications hosted on-premises that you want to migrate to AWS. You don’t want to provision or manage any infrastructure; you just want to run your containers on AWS. Which AWS service should you choose?
- Elastic Container Service (ECS) in EC2 Launch Mode
- Elastic Container Registry (ECR)
- AWS Fargate on ECS

A

AWS Fargate on ECS

AWS Fargate allows you to run your containers on AWS without managing any servers.

159
Q

Amazon Elastic Container Service (ECS) has two Launch Types: ……………… and ………………
- Amazon EC2 Launch Type and Fargate Launch Type
- Amazon EC2 Launch Type and EKS Launch Type
- Fargate Launch Type and EKS Launch Type

A

Amazon EC2 Launch Type and Fargate Launch Type

160
Q

You have an application hosted on an ECS Cluster (EC2 Launch Type) where you want your ECS tasks to upload files to an S3 bucket. Which IAM Role for your ECS Tasks should you modify?
- EC2 Instance Profile
- ECS Task Role

A

ECS Task Role

ECS Task Role is the IAM Role used by the ECS task itself. Use when your container wants to call other AWS services like S3, SQS, etc.
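
The distinction shows up directly in the task definition, as in this sketch (role ARNs, family, and image names are placeholders):

```python
# Sketch of the relevant task-definition fields: taskRoleArn is the
# role the running containers assume for AWS API calls (e.g., S3),
# while executionRoleArn is what the ECS agent uses to pull images
# and ship logs. ARNs and names are placeholders.

task_definition = {
    "family": "uploader",
    "taskRoleArn": "arn:aws:iam::123456789012:role/uploader-task-role",
    "executionRoleArn": "arn:aws:iam::123456789012:role/ecs-execution-role",
    "containerDefinitions": [
        {"name": "app", "image": "uploader:latest"},
    ],
}
# The two roles serve different purposes and should not be conflated.
assert task_definition["taskRoleArn"] != task_definition["executionRoleArn"]
# boto3.client("ecs").register_task_definition(**task_definition)
```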

161
Q

You’re planning to migrate a WordPress website running on Docker containers from on-premises to AWS. You have decided to run the application in an ECS Cluster, but you want your Docker containers to access the same WordPress website content, such as website files, images, videos, etc. What do you recommend to achieve this?
- Mount an EFS Volume
- Mount an EBS Volume
- Use an EC2 Instance Store

A

Mount an EFS Volume

EFS volume can be shared between different EC2 instances and different ECS Tasks. It can be used as a persistent multi-AZ shared storage for your containers.

162
Q

You are deploying an application on an ECS Cluster made of EC2 instances. Currently, the cluster is hosting one application that is issuing API calls to DynamoDB successfully. Upon adding a second application, which issues API calls to S3, you are getting authorization issues. What should you do to resolve the problem and ensure proper security?
- Edit the EC2 instance role to add permissions to S3
- Create an IAM Task Role for the new application
- Enable the Fargate mode
- Edit the S3 bucket policy to allow the ECS task

A

Create an IAM Task Role for the new application

163
Q

You are migrating your on-premises Docker-based applications to Amazon ECS. You were using Docker Hub Container Image Library as your container image repository. Which alternative AWS service is fully integrated with Amazon ECS?
- AWS Fargate
- Elastic Container Registry (ECR)
- Elastic Kubernetes Service (EKS)
- Amazon EC2

A

Elastic Container Registry (ECR)

164
Q

Amazon EKS supports the following node types, EXCEPT ………………..
- Managed Node Groups
- Self-Managed Nodes
- AWS Fargate
- AWS Lambda

A

AWS Lambda

165
Q

A developer has a website and APIs running on his local machine using containers, and he wants to deploy both of them on AWS. The developer is new to AWS and doesn’t know much about the different AWS services. Which of the following AWS services allows the developer to build and deploy the website and the APIs in the easiest way, according to AWS best practices?
- AWS App Runner
- EC2 Instances + Application Load Balancer
- Amazon ECS
- AWS Fargate

A

AWS App Runner

166
Q

You have created a Lambda function that typically will take around 1 hour to process some data. The code works fine when you run it locally on your machine, but when you invoke the Lambda function it fails with a “timeout” error after 3 seconds. What should you do?
- Configure your Lambda’s timeout to 25 minutes
- Configure your Lambda’s memory to 10 GB
- Run your code somewhere else (e.g. EC2 instance)

A

Run your code somewhere else (e.g. EC2 instance)

Lambda’s maximum execution time is 15 minutes. You can run your code somewhere else such as an EC2 instance or use Amazon ECS.
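
A quick sanity check of the arithmetic, with the timeout-raising call included as a commented sketch (function name is a placeholder):

```python
# Sketch: the 1-hour job cannot fit inside Lambda, whose timeout is
# configurable only up to 900 seconds (15 minutes).

LAMBDA_MAX_TIMEOUT = 900   # hard service limit, in seconds
job_duration = 60 * 60     # the data-processing job takes ~1 hour

# Too long for Lambda, so run it on EC2/ECS instead.
assert job_duration > LAMBDA_MAX_TIMEOUT

# For jobs that DO fit, you can raise the default 3-second timeout:
# boto3.client("lambda").update_function_configuration(
#     FunctionName="my-function", Timeout=LAMBDA_MAX_TIMEOUT)
```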

167
Q

Before you create a DynamoDB table, you need to provision the EC2 instance the DynamoDB table will be running on.
- True
- False

A

False

DynamoDB is serverless with no servers to provision, patch, or manage and no software to install, maintain or operate. It automatically scales tables up and down to adjust for capacity and maintain performance. It provides both provisioned (specify RCU & WCU) and on-demand (pay for what you use) capacity modes.

168
Q

You have provisioned a DynamoDB table with 10 RCUs and 10 WCUs. A month later you want to increase the RCU to handle more read traffic. What should you do?
- Increase RCU and keep WCU the same
- You need to increase both RCU and WCU
- Increase RCU and decrease WCU

A

Increase RCU and keep WCU the same

RCU and WCU are decoupled, so you can increase/decrease each value separately.

169
Q

You have an e-commerce website where you are using DynamoDB as your database. You are about to enter the Christmas sale and you have a few items which are very popular and which you expect will be read often. Unfortunately, last year, due to the huge traffic, you got the ProvisionedThroughputExceededException exception. What would you do to prevent this error from happening again?
- Increase the RCU to a very high value
- Create a DAX Cluster
- Migrate the database away from DynamoDB for the time of the sale

A

Create a DAX Cluster

DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to 10x performance improvement. It caches the most frequently used data, thus offloading the heavy reads on hot keys off your DynamoDB table, hence preventing the “ProvisionedThroughputExceededException” exception.

170
Q

You have developed a mobile application that uses DynamoDB as its datastore. You want to automate sending welcome emails to new users after they sign up. What is the most efficient way to achieve this?
- Schedule a Lambda function to run every minute using CloudWatch Events, scan the entire table looking for new users
- Enable SNS and DynamoDB integration
- Enable DynamoDB Streams and configure it to invoke a Lambda function to send emails

A

Enable DynamoDB Streams and configure it to invoke a Lambda function to send emails

DynamoDB Streams allows you to capture a time-ordered sequence of item-level modifications in a DynamoDB table. It’s integrated with AWS Lambda so that you create triggers that automatically respond to events in real-time.
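
A sketch of such a trigger is below. The handler filters for INSERT events and reads the email out of the stream record's standard attribute-value format; the actual email-sending step (SES/SNS) and the attribute name `email` are assumptions.

```python
# Sketch of a Lambda handler invoked by a DynamoDB Stream: it collects
# the emails of newly inserted users; the send step is stubbed out.
# The attribute name "email" is an assumption about the table schema.

def handler(event, context):
    welcomed = []
    for record in event.get("Records", []):
        if record.get("eventName") != "INSERT":
            continue  # ignore MODIFY / REMOVE events
        new_image = record["dynamodb"]["NewImage"]
        email = new_image["email"]["S"]  # DynamoDB attribute-value format
        # ses.send_email(...) would go here
        welcomed.append(email)
    return welcomed

# A minimal stream event in the shape Lambda receives:
sample_event = {"Records": [{
    "eventName": "INSERT",
    "dynamodb": {"NewImage": {"email": {"S": "new.user@example.com"}}},
}]}
assert handler(sample_event, None) == ["new.user@example.com"]
```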

171
Q

To create a serverless API, you should integrate Amazon API Gateway with ………………….
- EC2 Instance
- Elastic Load Balancing
- AWS Lambda

A

AWS Lambda

172
Q

When you are using an Edge-Optimized API Gateway, your API Gateway lives in CloudFront Edge Locations across all AWS Regions.
- False
- True

A

False

An Edge-Optimized API Gateway is best for geographically distributed clients. API requests are routed to the nearest CloudFront Edge Location which improves latency. The API Gateway still lives in one AWS Region.

173
Q

You are running an application in production that leverages DynamoDB as its datastore and experiences smooth, sustained usage. There is a need to run the application in development mode as well, where it will experience an unpredictable volume of requests. What is the most cost-effective solution that you recommend?
- Use Provisioned Capacity Mode with Auto Scaling enabled for both development and production
- Use Provisioned Capacity Mode with Auto Scaling enabled for production and use On-Demand Capacity Mode for development
- Use Provisioned Capacity Mode with Auto Scaling enabled for development and use On-Demand Capacity Mode for production
- Use On-Demand Capacity Mode for both Development and Production

A

Use Provisioned Capacity Mode with Auto Scaling enabled for production and use On-Demand Capacity Mode for development

174
Q

You have an application that is served globally using CloudFront Distribution. You want to authenticate users at the CloudFront Edge Locations instead of authentication requests go all the way to your origins. What should you use to satisfy this requirement?
- Lambda@Edge
- API Gateway
- DynamoDB
- AWS Global Accelerator

A

Lambda@Edge

Lambda@Edge is a feature of CloudFront that lets you run code closer to your users, which improves performance and reduces latency.

175
Q

The maximum size of an item in a DynamoDB table is ……………….
- 1MB
- 400KB
- 500KB
- 400MB

A

400KB

176
Q

Which AWS service allows you to build Serverless workflows using AWS services (e.g., Lambda) and supports human approval?
- AWS Lambda
- Amazon ECS
- AWS Step Functions
- AWS Storage Gateway

A

AWS Step Functions

177
Q

A company has a serverless application on AWS which consists of Lambda, DynamoDB, and Step Functions. In the last month, there was an increase in the number of requests against the application, which resulted in increased DynamoDB costs, and requests started to be throttled. Further investigation showed that the majority of requests are read requests against a few queries on the DynamoDB table. What do you recommend to prevent throttling and reduce costs efficiently?
- Use an EC2 Instance with Redis installed and place it between the Lambda function and the DynamoDB table
- Migrate from DynamoDB to Aurora and use ElastiCache to cache the most requested read data
- Migrate from DynamoDB to S3 and use Cloudfront to cache the most requested read data
- Use DynamoDB Accelerator (DAX) to cache the most requested read data

A

Use DynamoDB Accelerator (DAX) to cache the most requested read data

178
Q

You are a DevOps engineer in a football company that has a website backed by a DynamoDB table. The table stores viewers’ feedback for football matches. You have been tasked to work with the analytics team to generate reports on the viewers’ feedback. The analytics team wants the DynamoDB data in JSON format, hosted in an S3 bucket, so they can start working on it and create the reports. What is the best and most cost-effective way to convert the DynamoDB data to JSON files?
- Select DynamoDB and then select export to S3
- Create a Lambda function to read DynamoDB data, convert them to json files, then store the files in S3 bucket
- Use AWS Transfer Family
- Use AWS DataSync

A

Select DynamoDB and then select export to S3

179
Q

A website is currently in development and is going to be hosted on AWS. There is a requirement to store sessions for users logged in to the website, with automatic expiry and deletion of expired user sessions. Which of the following AWS services is best suited for this use case?
- Store users’ sessions in an S3 bucket and enable S3 Lifecycle Policy
- Store users’ sessions locally in an EC2 instance
- Store users’ sessions in a DynamoDB table and enable TTL
- Store users’ sessions in an EFS File System

A

Store users’ sessions in a DynamoDB table and enable TTL
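
A sketch of how the session items would carry their own expiry (table name, attribute names, and the 30-minute lifetime are assumptions; the per-table TTL-enable call is commented out):

```python
# Sketch: a session item with a TTL attribute. DynamoDB deletes the
# item shortly after the epoch timestamp in "expires_at" passes.
# Table/attribute names and the 30-minute lifetime are assumptions.
import time

SESSION_LIFETIME = 30 * 60  # 30-minute sessions

def session_item(session_id, user_id, now=None):
    now = int(now if now is not None else time.time())
    return {
        "session_id": {"S": session_id},
        "user_id": {"S": user_id},
        "expires_at": {"N": str(now + SESSION_LIFETIME)},  # TTL attribute
    }

item = session_item("sess-1", "user-42", now=1_700_000_000)

# TTL is enabled once per table, pointing at the attribute above:
# boto3.client("dynamodb").update_time_to_live(
#     TableName="sessions",
#     TimeToLiveSpecification={"Enabled": True,
#                              "AttributeName": "expires_at"})
```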

180
Q

You have a mobile application and would like to give your users access to their own personal space in the S3 bucket. How do you achieve that?
- Generate IAM user credentials for each of your application’s users
- Use Amazon Cognito Identity Federation
- Use SAML Identity Federation
- Use a Bucket Policy to make your bucket public

A

Use Amazon Cognito Identity Federation

Amazon Cognito can be used to federate mobile user accounts and provide them with their own IAM permissions, so they can be able to access their own personal space in the S3 bucket.

181
Q

You are developing a new web and mobile application that will be hosted on AWS and currently, you are working on developing the login and signup page. The application backend is serverless and you are using Lambda, DynamoDB, and API Gateway. Which of the following is the best and easiest approach to configure the authentication for your backend?
- Store users’ credentials in a DynamoDB table encrypted using KMS
- Store users’ credentials in a S3 bucket encrypted using KMS
- Use Cognito User Pools
- Store users’ credentials in AWS Secrets Manager

A

Use Cognito User Pools

182
Q

You are running a mobile application where you want each registered user to upload/download images to/from their own folder in the S3 bucket. Also, you want your users to be able to sign up and sign in using their social media accounts (e.g., Facebook). Which AWS service should you choose?
- AWS Identity and Access Management (IAM)
- AWS IAM Identity Center
- Amazon Cognito
- Amazon CloudFront

A

Amazon Cognito

Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. Amazon Cognito scales to millions of users and supports sign-in with social identity providers, such as Apple, Facebook, Google, and Amazon, and enterprise identity providers via SAML 2.0 and OpenID Connect.

183
Q

A startup company plans to run its application on AWS. As a solutions architect, the company hired you to design and implement a fully Serverless REST API. Which technology stack do you recommend?
- API Gateway + AWS Lambda
- Application Load Balancer + EC2
- Elastic Container Service (ECS) + Elastic Block Store
- Amazon CloudFront + S3

A

API Gateway + AWS Lambda

184
Q

The following AWS services have an out-of-the-box caching feature, EXCEPT ……………..
- API Gateway
- Lambda
- DynamoDB

A

Lambda

185
Q

You have a lot of static files stored in an S3 bucket that you want to distribute globally to your users. Which AWS service should you use?
- S3 Cross-Region Replication
- Amazon CloudFront
- Amazon Route 53
- API Gateway

A

Amazon CloudFront

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds. This is a perfect use case for Amazon CloudFront.

186
Q

You have created a DynamoDB table in ap-northeast-1 and would like to make it available in eu-west-1, so you decided to create a DynamoDB Global Table. What needs to be enabled first before you create a DynamoDB Global Table?
- DynamoDB Streams
- DynamoDB DAX
- DynamoDB Versioning
- DynamoDB Backups

A

DynamoDB Streams

DynamoDB Streams enable DynamoDB to get a changelog and use that changelog to replicate data across replica tables in other AWS Regions.
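
The two steps can be sketched as a pair of update_table requests (the table name is a placeholder; the calls themselves are commented out):

```python
# Sketch: first enable Streams with new-and-old images (required for
# Global Tables), then add a replica Region. Table name is a placeholder.

enable_streams = {
    "TableName": "users",
    "StreamSpecification": {
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",  # required view type
    },
}
add_replica = {
    "TableName": "users",
    "ReplicaUpdates": [{"Create": {"RegionName": "eu-west-1"}}],
}
# ddb = boto3.client("dynamodb", region_name="ap-northeast-1")
# ddb.update_table(**enable_streams)
# ddb.update_table(**add_replica)
```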

187
Q

You have configured a Lambda function to run each time an item is added to a DynamoDB table using DynamoDB Streams. The function is meant to insert messages into the SQS queue for further long processing jobs. Each time the Lambda function is invoked, it seems able to read from the DynamoDB Stream but it isn’t able to insert the messages into the SQS queue. What do you think the problem is?
- Lambda can’t be used to insert messages into the SQS queue, use an EC2 instance instead
- The Lambda Execution IAM Role is missing permissions
- The Lambda security group must allow outbound access to SQS
- The SQS security group must be edited to allow AWS Lambda

A

The Lambda Execution IAM Role is missing permissions
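
A minimal sketch of the missing statement on the Lambda Execution Role (the queue ARN is a placeholder). The role evidently already has the DynamoDB Streams read permissions, since reading works:

~~~
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "sqs:SendMessage",
    "Resource": "arn:aws:sqs:us-east-1:123456789012:my-queue"
  }]
}
~~~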

188
Q

You would like to create an architecture for a micro-services application whose sole purpose is to encode videos stored in an S3 bucket and store the encoded videos back into an S3 bucket. You would like to make this micro-services application reliable, with the ability to retry upon failure. Each video may take over 25 minutes to process. The services used in the architecture should be asynchronous and should be able to be stopped for a day and resume the next day from the videos that haven’t been encoded yet. Which of the following AWS services would you recommend in this scenario?
- Amazon S3 + AWS Lambda
- Amazon SNS + Amazon EC2
- Amazon SQS + Amazon EC2
- Amazon SQS + AWS Lambda

A

Amazon SQS + Amazon EC2

Amazon SQS allows you to retain messages for days and process them later, while we can take down our EC2 instances.
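
As a sketch, the queue for this workload could be created with a multi-day retention period and a visibility timeout longer than the ~25-minute encoding time (the queue name and exact values are illustrative):

~~~
aws sqs create-queue \
  --queue-name video-encoding-jobs \
  --attributes MessageRetentionPeriod=345600,VisibilityTimeout=1800
~~~

This also explains why Lambda is ruled out here: a Lambda function can run for at most 15 minutes per invocation, which is shorter than the encoding time.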

189
Q

You are running a photo-sharing website where your images are downloaded from all over the world. Every month you publish a master pack of beautiful mountain images that are over 15 GB in size. The content is currently hosted on an Elastic File System (EFS) file system and distributed by an Application Load Balancer and a set of EC2 instances. Each month, you are experiencing very high traffic which increases the load on your EC2 instances and increases network costs. What do you recommend to reduce EC2 load and network costs without refactoring your website?
- Host the master pack in S3
- Enable Application Load Balancer Caching
- Scale up the EC2 instances
- Create a CloudFront Distribution

A

Create a CloudFront Distribution

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds. Amazon CloudFront can be used in front of an Application Load Balancer.

190
Q

Which database helps you store relational datasets, with SQL language compatibility and the capability of processing transactions such as insert, update, and delete?
- Amazon DocumentDB
- Amazon RDS
- Amazon DynamoDB
- Amazon ElastiCache

A

Amazon RDS

191
Q

Which AWS service provides you with caching capability that is compatible with Redis API?
- Amazon RDS
- Amazon DynamoDB
- Amazon OpenSearch
- Amazon ElastiCache

A

Amazon ElastiCache

Amazon ElastiCache is a fully managed in-memory data store, compatible with Redis or Memcached.

192
Q

You want to migrate an on-premises MongoDB NoSQL database to AWS. You don’t want to manage any database servers, so you want to use a managed NoSQL Serverless database, that provides you with high availability, durability, and reliability, and the capability to take your database global. Which database should you choose?
- Amazon RDS
- Amazon DynamoDB
- Amazon DocumentDB
- Amazon Aurora

A

Amazon DynamoDB

Amazon DynamoDB is a key-value, document, NoSQL database.

193
Q

You are looking to perform Online Transaction Processing (OLTP). You would like to use a database that has built-in auto-scaling capabilities and provides you with the maximum number of replicas for its underlying storage. What AWS service do you recommend?
- Amazon ElastiCache
- Amazon Neptune
- Amazon Aurora
- Amazon RDS

A

Amazon Aurora

Amazon Aurora is a MySQL and PostgreSQL-compatible relational database. It features a distributed, fault-tolerant, self-healing storage system that auto-scales up to 128TB per database instance. It delivers high performance and availability with up to 15 low-latency read replicas, point-in-time recovery, continuous backup to Amazon S3, and replication across 3 AZs.

194
Q

A startup company has asked you, as a Solutions Architect, for help with an architecture for a social media website where users can be friends with each other and like each other’s posts. The company plans on performing complicated queries such as “What is the number of likes on the posts posted by the friends of Mike?”. Which database do you recommend?
- Amazon RDS
- Amazon QLDB
- Amazon Neptune
- Amazon OpenSearch

A

Amazon Neptune

Amazon Neptune is a fast, reliable, fully-managed graph database service that makes it easy to build and run applications that work with highly connected datasets.

195
Q

You have a set of files, 100MB each, that you want to store in a reliable and durable key-value store. Which AWS service do you recommend?
- Amazon Aurora
- Amazon S3
- Amazon DynamoDB
- Amazon ElastiCache

A

Amazon S3

196
Q

A company has an on-premises website that uses ReactJS as its frontend, NodeJS as its backend, and MongoDB for the database. The self-hosted MongoDB database requires a lot of maintenance, and the company doesn’t have and can’t afford the resources or experience to handle those issues. So, a decision was made to migrate the website to AWS. They have decided to host the frontend ReactJS application in an S3 bucket and the NodeJS backend on a set of EC2 instances. Which AWS service can they use to migrate the MongoDB database that provides them with high scalability and availability without making any code changes?
- Amazon ElastiCache
- Amazon DocumentDB
- Amazon RDS for MongoDB
- Amazon Neptune

A

Amazon DocumentDB

197
Q

A company is using a self-hosted, on-premises Apache Cassandra database that it wants to migrate to AWS. Which AWS service can they use which provides them with a fully managed, highly available, and scalable Apache Cassandra database?
- Amazon DocumentDB
- Amazon DynamoDB
- Amazon Timestream
- Amazon Keyspaces

A

Amazon Keyspaces

198
Q

An online payment company is using AWS to host its infrastructure. Due to the application’s nature, they have a strict requirement to store an accurate record of financial transactions such as credit and debit transactions. Those transactions must be stored in secured, immutable, encrypted storage which can be cryptographically verified. Which AWS service is best suited for this use case?
- Amazon DocumentDB
- Amazon Aurora
- Amazon QLDB
- Amazon Neptune

A

Amazon QLDB

199
Q

A startup is working on a new project to reduce forest fires due to climate change. The startup is developing sensors that will be spread across the entire forest to take readings such as temperature, humidity, and pressure, which will help detect forest fires before they happen. They are going to have thousands of sensors generating a lot of readings each second. There is a requirement to store those readings and do fast analytics so they can predict if there is a fire. Which AWS service can they use to store those readings?
- Amazon Timestream
- Amazon Neptune
- Amazon S3
- Amazon ElastiCache

A

Amazon Timestream

200
Q

You would like to have a database that is efficient at performing analytical queries on large sets of columnar data. You would like to connect to this Data Warehouse using a reporting and dashboard tool such as Amazon QuickSight. Which AWS technology do you recommend?
- Amazon RDS
- Amazon S3
- Amazon Redshift
- Amazon Neptune

A

Amazon Redshift

201
Q

You have a lot of log files stored in an S3 bucket on which you want to perform a quick analysis, if possible Serverless, to filter the logs and find users who attempted to make an unauthorized action. Which AWS service allows you to do so?
- Amazon DynamoDB
- Amazon Redshift
- S3 Glacier
- Amazon Athena

A

Amazon Athena

202
Q

As a Solutions Architect, you have been instructed to prepare a disaster recovery plan for a Redshift cluster. What should you do?
- Enable Multi-AZ
- Enable Automated Snapshots, then configure your Redshift cluster to automatically copy snapshots to another AWS region
- Take a snapshot then restore to a Redshift Global Cluster

A

Enable Automated Snapshots, then configure your Redshift cluster to automatically copy snapshots to another AWS region
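
Automated snapshots are enabled by default (when the retention period is greater than 0); a sketch of enabling cross-region snapshot copy (cluster name, destination region, and retention are illustrative):

~~~
aws redshift enable-snapshot-copy \
  --cluster-identifier my-cluster \
  --destination-region eu-west-1 \
  --retention-period 7
~~~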

203
Q

Which feature in Redshift forces all COPY and UNLOAD traffic moving between your cluster and data repositories through your VPC?
- Enhanced VPC Routing
- Improved VPC Routing
- Redshift Spectrum

A

Enhanced VPC Routing
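
A sketch of enabling it on an existing cluster (the cluster identifier is a placeholder):

~~~
aws redshift modify-cluster \
  --cluster-identifier my-cluster \
  --enhanced-vpc-routing
~~~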

204
Q

You are running a gaming website that is using DynamoDB as its data store. Users have been asking for a search feature to find other gamers by name, with partial matches if possible. Which AWS technology do you recommend to implement this feature?
- Amazon DynamoDB
- Amazon Redshift
- Amazon OpenSearch Service
- Amazon Neptune

A

Amazon OpenSearch Service

205
Q

An AWS service allows you to create, run, and monitor ETL (extract, transform, and load) jobs in a few clicks.
- AWS Glue
- Amazon Redshift
- Amazon RDS
- Amazon DynamoDB

A

AWS Glue

206
Q

A company is using AWS to host its public websites and internal applications. Those different websites and applications generate a lot of logs and traces. There is a requirement to centrally store those logs and efficiently search and analyze those logs in real-time for detection of any errors and if there is a threat. Which AWS service can help them efficiently store and analyze logs?
- Amazon S3
- Amazon OpenSearch Service
- Amazon ElastiCache
- Amazon QLDB

A

Amazon OpenSearch Service

207
Q

……………………….. makes it easy and cost-effective for data engineers and analysts to run applications built using open source big data frameworks such as Apache Spark, Hive, or Presto without having to operate or manage clusters.
- AWS Lambda
- Amazon EMR
- Amazon Athena
- Amazon OpenSearch Service

A

Amazon EMR

208
Q

An e-commerce company has all its historical data such as orders, customers, revenues, and sales for the previous years hosted on a Redshift cluster. There is a requirement to generate some dashboards and reports indicating the revenues from the previous years and the total sales, so it will be easy to define the requirements for the next year. The DevOps team is assigned to find an AWS service that can help define those dashboards and have native integration with Redshift. Which AWS service is best suited?
- Amazon OpenSearch Service
- Amazon Athena
- Amazon QuickSight
- Amazon EMR

A

Amazon QuickSight

209
Q

Which AWS Glue feature allows you to save and track the data that has already been processed during a previous run of a Glue ETL job?
- Glue Job Bookmarks
- Glue Elastic Views
- Glue Streaming ETL
- Glue DataBrew

A

Glue Job Bookmarks

210
Q

You are a DevOps engineer in a machine learning company with 3 TB of JSON files stored in an S3 bucket. There’s a requirement to do some analytics on those files using Amazon Athena and you have been tasked to find a way to convert those files’ format from JSON to Apache Parquet. Which AWS service is best suited?
- S3 Object Versioning
- Kinesis Data Streams
- Amazon MSK
- AWS Glue

A

AWS Glue

211
Q

You have an on-premises application that is used together with an on-premises Apache Kafka to receive a stream of clickstream events from multiple websites. You have been tasked to migrate this application as soon as possible without any code changes. You decided to host the application on an EC2 instance. What is the best option you recommend to migrate Apache Kafka?
- Kinesis Data Streams
- AWS Glue
- Amazon MSK
- Kinesis Data Analytics

A

Amazon MSK

212
Q

You have data stored in RDS and S3 buckets, and you are using AWS Lake Formation as a data lake to collect, move, and catalog data so you can do some analytics. There are a lot of big data and ML engineers in the company, and you want to control access to part of the data as it might contain sensitive information. What can you use?
- Lake Formation Fine-grained Access Control
- Amazon Cognito
- AWS Shield
- S3 Object Lock

A

Lake Formation Fine-grained Access Control

213
Q

Which AWS service is most appropriate when you want to perform real-time analytics on streams of data?
- Amazon SQS
- Amazon SNS
- Amazon Kinesis Data Analytics
- Amazon Kinesis Data Firehose

A

Amazon Kinesis Data Analytics

Use Kinesis Data Analytics with Kinesis Data Streams as the underlying source of data.

214
Q

You should use Amazon Transcribe to turn text into lifelike speech using deep learning.
- True
- False

A

False

Amazon Transcribe is an AWS service that makes it easy for customers to convert speech-to-text. Amazon Polly is a service that turns text into lifelike speech.

215
Q

A company would like to implement a chatbot that will convert speech-to-text and recognize the customers’ intentions. What service should it use?
- Transcribe
- Rekognition
- Connect
- Lex

A

Lex

Amazon Lex is a service for building conversational interfaces into any application using voice and text. Lex provides the advanced deep learning functionalities of automatic speech recognition (ASR) for converting speech to text, and natural language understanding (NLU) to recognize the intent of the text, to enable you to build applications with highly engaging user experiences and lifelike conversational interactions.

216
Q

Which fully managed service can deliver highly accurate forecasts?
- Personalize
- SageMaker
- Lex
- Forecast

A

Forecast

Amazon Forecast is a fully managed service that uses machine learning to deliver highly accurate forecasts.

217
Q

You would like to find objects, people, text, or scenes in images and videos. What AWS service should you use?
- Rekognition
- Polly
- Kendra
- Lex

A

Rekognition

Amazon Rekognition makes it easy to add image and video analysis to your applications using proven, highly scalable, deep learning technology that requires no machine learning expertise to use.

218
Q

A start-up would like to rapidly create customized user experiences. Which AWS service can help?
- Personalize
- Kendra
- Connect

A

Personalize

Amazon Personalize is a machine learning service that makes it easy for developers to create individualized recommendations for customers using their applications.

219
Q

A research team would like to group articles by topics using Natural Language Processing (NLP). Which service should they use?
- Translate
- Comprehend
- Lex
- Rekognition

A

Comprehend

Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find meaning and insights in text.

220
Q

A company would like to convert its documents into different languages, with natural and accurate wording. What should they use?
- Transcribe
- Polly
- Translate
- WordTranslator

A

Translate

Amazon Translate is a neural machine translation service that delivers fast, high-quality, and affordable language translation.

221
Q

A developer would like to build, train, and deploy a machine learning model quickly. Which service can he use?
- SageMaker
- Polly
- Comprehend
- Personalize

A

SageMaker

Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high quality models.

222
Q

Which AWS service makes it easy to convert speech-to-text?
- Connect
- Translate
- Transcribe
- Polly

A

Transcribe

Amazon Transcribe is an AWS service that makes it easy for customers to convert speech-to-text.

223
Q

Which of the following services is a document search service powered by machine learning?
- Forecast
- Kendra
- Comprehend
- Polly

A

Kendra

Amazon Kendra is a highly accurate and easy to use enterprise search service that’s powered by machine learning.

224
Q

A company is managing an image and video sharing platform used by customers around the globe. The platform runs on AWS, using an S3 bucket to host both images and videos and CloudFront as the CDN to deliver content to customers all over the world with low latency. In the last couple of months, a lot of customers have complained that they have started to see inappropriate content on the platform, which has increased in the last week. It would be very expensive and time-consuming for employees to manually approve those images and videos before they’re published on the platform. There is a requirement for a solution that can automatically detect inappropriate and offensive images and videos, gives you the ability to set a minimum confidence threshold for items that will be flagged, and allows for manual review. Which AWS service fits the requirement?
- Amazon Polly
- Amazon Translate
- Amazon Lex
- Amazon Rekognition

A

Amazon Rekognition

225
Q

An online medical company that allows you to book an appointment with doctors through a phone call is using AWS to host its infrastructure. It is using Amazon Connect and Amazon Lex to receive calls, create a workflow, book an appointment, and take payment. According to the company’s policy, all calls must be recorded for review. But there is a requirement to remove any Personally Identifiable Information (PII) from a call before it’s saved. What do you recommend for removing PII from the calls?
- Amazon Polly
- Amazon Transcribe
- Amazon Rekognition
- Amazon Forecast

A

Amazon Transcribe

226
Q

Amazon Polly allows you to turn text into speech. It has two important features. First is ……………….. which allows you to customize the pronunciation of words (e.g., “Amazon EC2” will be “Amazon Elastic Compute Cloud”). The second is ……………….. which allows you to emphasize words, including breathing sounds, whispering, and more.
- Speech Synthesis Markup Language (SSML), Pronunciation Lexicons
- Pronunciation Lexicons, Security Assertion Markup Language (SAML)
- Pronunciation Lexicons, Speech Synthesis Markup Language (SSML)
- Security Assertion Markup Language (SAML), Pronunciation Lexicons

A

Pronunciation Lexicons, Speech Synthesis Markup Language (SSML)

227
Q

A medical company is in the process of implementing a solution to detect, extract, and analyze information from unstructured medical text like doctors’ notes, clinical trial reports, and radiology reports. Those documents are uploaded and stored in S3 buckets. According to the company’s regulations, the solution must be designed and implemented to keep patients’ privacy by identifying Protected Health Information (PHI), so the solution will be HIPAA-eligible. Which AWS service should you use?
- Amazon Comprehend Medical
- Amazon Rekognition
- Amazon Polly
- Amazon Translate

A

Amazon Comprehend Medical

228
Q

You have an RDS DB instance that’s configured to push its database logs to CloudWatch. You want to create a CloudWatch alarm if there’s an Error found in the logs. How would you do that?
- Create a scheduled CloudWatch Event that triggers an AWS Lambda every 1 hour, scans the logs, and notify you through SNS topic
- Create a CloudWatch Logs Metric Filter that filters the logs for the keyword Error, then create a CloudWatch Alarm based on that Metric Filter
- Create an AWS Config Rule that monitors Error in your database logs and notify you through SNS topic

A

Create a CloudWatch Logs Metric Filter that filters the logs for the keyword Error, then create a CloudWatch Alarm based on that Metric Filter
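
A sketch of the two steps (log group name, namespace, and SNS topic ARN are placeholders):

~~~
# Turn occurrences of "Error" in the RDS log group into a custom metric...
aws logs put-metric-filter \
  --log-group-name /aws/rds/instance/mydb/error \
  --filter-name ErrorCount \
  --filter-pattern "Error" \
  --metric-transformations metricName=ErrorCount,metricNamespace=MyApp,metricValue=1

# ...then create a CloudWatch Alarm on that metric
aws cloudwatch put-metric-alarm \
  --alarm-name rds-error-alarm \
  --metric-name ErrorCount \
  --namespace MyApp \
  --statistic Sum \
  --period 300 \
  --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --evaluation-periods 1 \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:alerts
~~~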

229
Q

You have an application hosted on a fleet of EC2 instances managed by an Auto Scaling Group whose minimum capacity you configured to 2. Also, you have created a CloudWatch Alarm that is configured to scale in your ASG when CPU Utilization is below 60%. Currently, your application runs on 2 EC2 instances, has low traffic, and the CloudWatch Alarm is in the ALARM state. What will happen?
- One EC2 instance will be terminated and the ASG desired and minimum capacity will go to 1
- The CloudWatch Alarm will remain in ALARM state but never decrease the number of EC2 Instances in the ASG
- The CloudWatch Alarm will be detached from my ASG
- The CloudWatch Alarm will go in OK state

A

The CloudWatch Alarm will remain in ALARM state but never decrease the number of EC2 Instances in the ASG

The number of EC2 instances in an ASG cannot go below the minimum capacity, even if the CloudWatch alarm would in theory trigger an EC2 instance termination.

230
Q

How would you monitor your EC2 instance memory usage in CloudWatch?
- Enable EC2 Detailed Monitoring
- By default, the EC2 instance pushes memory usage to CloudWatch
- Use the Unified CloudWatch Agent to push memory usage as a custom metric to CloudWatch

A

Use the Unified CloudWatch Agent to push memory usage as a custom metric to CloudWatch

230
Q

You have made a configuration change and would like to evaluate the impact of it on the performance of your application. Which AWS service should you use?
- Amazon CloudWatch
- AWS CloudTrail

A

Amazon CloudWatch

Amazon CloudWatch is a monitoring service that allows you to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. It is used to monitor your applications’ performance and metrics.

231
Q

Someone has terminated an EC2 instance in your AWS account last week, which was hosting a critical database that contains sensitive data. Which AWS service helps you find who did that and when?
- CloudWatch Metrics
- CloudWatch Alarms
- CloudWatch Events
- AWS CloudTrail

A

AWS CloudTrail

AWS CloudTrail allows you to log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. It provides the event history of your AWS account activity, audit API calls made through the AWS Management Console, AWS SDKs, AWS CLI. So, the EC2 instance termination API call will appear here. You can use CloudTrail to detect unusual activity in your AWS accounts.

232
Q

You have CloudTrail enabled for your AWS Account in all AWS Regions. What should you use to detect unusual activity in your AWS Account?
- CloudTrail Data Events
- CloudTrail Insights
- CloudTrail Management Events

A

CloudTrail Insights

233
Q

One of your teammates terminated an EC2 instance 4 months ago which had critical data. You don’t know who did this, so you are going to review all API calls within this period using CloudTrail. You already have CloudTrail set up and configured to send logs to an S3 bucket. What should you do to find out who did this?
- Use CloudTrail Event History in CloudTrail Console
- Analyze CloudTrail logs in S3 bucket using Amazon Athena

A

Analyze CloudTrail logs in S3 bucket using Amazon Athena

You can use the CloudTrail Console to view the last 90 days of recorded API activity. For events older than 90 days, use Athena to analyze CloudTrail logs stored in S3.
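
Assuming a table (here named `cloudtrail_logs`) has already been defined in Athena over the CloudTrail files in S3, finding who terminated the instance could look like:

~~~
SELECT useridentity.arn, eventtime, awsregion
FROM cloudtrail_logs
WHERE eventname = 'TerminateInstances'
  AND eventsource = 'ec2.amazonaws.com'
ORDER BY eventtime;
~~~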

234
Q

You are running a website on a fleet of EC2 instances with an OS that has a known vulnerability on port 84. You want to continuously monitor your EC2 instances to check whether they have port 84 exposed. How should you do this?
- Setup CloudWatch Metrics
- Setup CloudTrail Trails
- Setup Config Rules
- Schedule a CloudWatch Event to trigger a Lambda function to scan your EC2 instances

A

Setup Config Rules

235
Q

You would like to evaluate the compliance of your resource’s configurations over time. Which AWS service will you choose?
- AWS Config
- Amazon CloudWatch
- AWS CloudTrail

A

AWS Config

236
Q

Someone changed the configuration of a resource and made it non-compliant. Which AWS service is responsible for logging who made modifications to resources?
- Amazon CloudWatch
- AWS CloudTrail
- AWS Config

A

AWS CloudTrail

237
Q

You have enabled AWS Config to monitor Security Groups if there’s unrestricted SSH access to any of your EC2 instances. Which AWS Config feature can you use to automatically re-configure your Security Groups to their correct state?
- AWS Config Remediations
- AWS Config Rules
- AWS Config Notifications

A

AWS Config Remediations

238
Q

You are running a critical website on a set of EC2 instances with a tightened Security Group that has restricted SSH access. You have enabled AWS Config in your AWS Region and you want to be notified via email when someone modified your EC2 instances’ Security Group. Which AWS Config feature helps you do this?
- AWS Config Remediations
- AWS Config Rules
- AWS Config Notifications

A

AWS Config Notifications

239
Q

…………………………. is a CloudWatch feature that allows you to send CloudWatch metrics in near real-time to an S3 bucket (through Kinesis Data Firehose) and to 3rd party destinations (e.g., Splunk, Datadog, …).
- CloudWatch Metric Stream
- CloudWatch Log Stream
- CloudWatch Metric Filter
- CloudWatch Log Group

A

CloudWatch Metric Stream

240
Q

A DevOps engineer is working for a company, managing its infrastructure and resources on AWS. There was a sudden spike in traffic to the company’s main application, which is not normal at this time of the year. The application is hosted on a couple of EC2 instances in private subnets and is fronted by an Application Load Balancer in a public subnet. To detect whether this is normal traffic or an attack, the DevOps engineer enabled VPC Flow Logs for the subnets and stored those logs in a CloudWatch Log Group. The DevOps engineer wants to analyze those logs and find the top IP addresses making requests against the website to check if there is an attack. Which of the following can help the DevOps engineer analyze those logs?
- CloudWatch Metric Stream
- CloudWatch Alarm
- CloudWatch Contributor Insights
- CloudWatch Metric Filter

A

CloudWatch Contributor Insights

241
Q

A company is developing a Serverless application on AWS using Lambda, DynamoDB, and Cognito. A junior developer joined a few weeks ago and accidentally deleted one of the DynamoDB tables in the dev AWS account which contained important data. The CTO asks you to prevent this from happening again and there must be a notification system to monitor if there is an attempt to make such deletion actions for the DynamoDB tables. What would you do?
- Assign developers to a certain IAM group which prevents deletion of DynamoDB tables. Configure EventBridge to capture any DeleteTable API calls through S3 and send a notification using KMS
- Assign developers to a certain IAM group which prevents deletion of DynamoDB tables. Configure EventBridge to capture any DeleteTable API calls through CloudTrail and send a notification using SNS
- Assign developers to a certain IAM group which prevents deletion of DynamoDB tables. Configure EventBridge to capture any DeleteTable API calls through S3 and send a notification using SNS

A

Assign developers to a certain IAM group which prevents deletion of DynamoDB tables. Configure EventBridge to capture any DeleteTable API calls through CloudTrail and send a notification using SNS
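
A sketch of the EventBridge event pattern that matches the `DeleteTable` API call recorded by CloudTrail (the rule’s target would then be the SNS topic):

~~~
{
  "source": ["aws.dynamodb"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["dynamodb.amazonaws.com"],
    "eventName": ["DeleteTable"]
  }
}
~~~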

242
Q

A company has a running Serverless application on AWS which uses EventBridge as an inter-communication channel between different services within the application. There is a requirement to use the events in the prod environment in the dev environment to make some tests. The tests will be done every 6 months, so the events need to be stored and used later on. What is the most efficient and cost-effective way to store EventBridge events and use them later?
- Use EventBridge Archive and Replay feature
- Create a Lambda function to store the EventBridge events in an S3 bucket for later usage
- Configure EventBridge to store events in a DynamoDB Table

A

Use EventBridge Archive and Replay feature

243
Q

You have strong regulatory requirements to only allow fully internally audited AWS services in production. You still want to allow your teams to experiment in a development environment while services are being audited. How can you best set this up?
- Provide the Dev team with a completely independent AWS account
- Apply a global IAM policy on your Prod account
- Create an AWS Organization and create two Prod and Dev OUs, then Apply a SCP on the Prod OU
- Create an AWS Config Rule

A

Create an AWS Organization and create two Prod and Dev OUs, then Apply a SCP on the Prod OU

244
Q

You are managing the AWS account for your company, and you want to give one of the developers access to read files from an S3 bucket. You have updated the bucket policy to this, but he still can’t access the files in the bucket. What is the problem?
~~~
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowsRead",
    "Effect": "Allow",
    "Principal": {
      "AWS": "arn:aws:iam::123456789012:user/Dave"
    },
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::static-files-bucket-xxx"
  }]
}
~~~
- Everything is okay, he just needs to logout and login again
- The bucket does not contain any files yet
- You should change the resource to arn:aws:s3:::static-files-bucket-xxx/*, because this is an object-level permission

A

You should change the resource to arn:aws:s3:::static-files-bucket-xxx/*, because this is an object-level permission
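
The corrected statement — only the Resource changes, because `s3:GetObject` applies to objects, not the bucket itself:

~~~
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowsRead",
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::123456789012:user/Dave" },
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::static-files-bucket-xxx/*"
  }]
}
~~~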

245
Q

You have 5 AWS Accounts that you manage using AWS Organizations. You want to restrict access to certain AWS services in each account. How should you do that?
- Using IAM Roles
- Using AWS Organizations SCP
- Using AWS Config

A

Using AWS Organizations SCP
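
A minimal sketch of an SCP that blocks one service across an account or OU (the denied service is illustrative):

~~~
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Deny",
    "Action": "redshift:*",
    "Resource": "*"
  }]
}
~~~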

246
Q

Which of the following IAM condition keys can you use to only allow API calls to a specified AWS region?
- aws:RequiredRegion
- aws:SourceRegion
- aws:InitialRegion
- aws:RequestedRegion

A

aws:RequestedRegion
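
A sketch of an IAM policy statement using this condition key to deny API calls outside one region (in practice, global services such as IAM usually need to be excluded from such a deny):

~~~
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Deny",
    "Action": "*",
    "Resource": "*",
    "Condition": {
      "StringNotEquals": { "aws:RequestedRegion": "eu-west-1" }
    }
  }]
}
~~~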

247
Q

When configuring permissions for EventBridge to configure a Lambda function as a target you should use ………………….. but when you want to configure a Kinesis Data Streams as a target you should use …………………..
- Identity-Based Policy, Resource-based Policy
- Resource-based Policy, Identity-Based Policy
- Identity-Based Policy, Identity-Based Policy
- Resource-based Policy, Resource-based Policy

A

Resource-based Policy, Identity-Based Policy
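
For the Lambda case, the resource-based policy is attached to the function itself; a sketch (function name, rule ARN, and account ID are placeholders):

~~~
# Resource-based policy: let EventBridge invoke the Lambda function
aws lambda add-permission \
  --function-name my-function \
  --statement-id eventbridge-invoke \
  --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:us-east-1:123456789012:rule/my-rule
~~~

For the Kinesis case, you instead attach an identity-based policy (allowing `kinesis:PutRecord`) to an IAM role that EventBridge assumes when delivering to the stream.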

248
Q

To enable In-flight Encryption (In-Transit Encryption), we need to have ……………………
- an HTTP endpoint with an SSL certificate
- an HTTPS endpoint with an SSL certificate
- a TCP endpoint

A

an HTTPS endpoint with an SSL certificate

In-flight Encryption = HTTPS, and HTTPS can not be enabled without an SSL certificate.

249
Q

Server-Side Encryption means that the data is sent encrypted to the server.
- True
- False

A

False

Server-Side Encryption means the server will encrypt the data for us. We don’t need to encrypt it beforehand.

250
Q

In Server-Side Encryption, where do the encryption and decryption happen?
- Both Encryption and Decryption happen on the server
- Both Encryption and Decryption happen on the client
- Encryption happens on the server and Decryption happens on the client
- Encryption happens on the client and Decryption happens on the server

A

Both Encryption and Decryption happen on the server

In Server-Side Encryption, we can’t do encryption/decryption ourselves as we don’t have access to the corresponding encryption key.

251
Q

In Client-Side Encryption, the server must know our encryption scheme before we can upload the data.
- False
- True

A

False

With Client-Side Encryption, the server doesn’t need to know any information about the encryption scheme being used, as the server will not perform any encryption or decryption operations.

252
Q

You need to create KMS Keys in AWS KMS before you are able to use the encryption features for EBS, S3, RDS …
- True
- False

A

False

You can use the AWS managed keys in KMS, so you don’t need to create your own KMS keys.

253
Q

AWS KMS supports both symmetric and asymmetric KMS keys.
- True
- False

A

True

KMS keys can be symmetric or asymmetric. A symmetric KMS key represents a 256-bit key used for encryption and decryption. An asymmetric KMS key represents an RSA key pair used for encryption and decryption or signing and verification, but not both. Or it represents an elliptic curve (ECC) key pair used for signing and verification.

254
Q

When you enable Automatic Rotation on your KMS Key, the backing key is rotated every ……………..
- 90 days
- 1 year
- 2 years
- 3 years

A

1 year

255
Q

You have an AMI that has an EBS snapshot encrypted using a KMS CMK. You want to share this AMI with another AWS account. You have shared the AMI with the desired AWS account, but the other AWS account still can’t use it. How would you solve this problem?
- The other AWS account needs to logout and login again to refresh its credentials
- You need to share the KMS CMK used to encrypt the AMI with the other AWS account
- You can’t share the AMI that has an encrypted EBS snapshot

A

You need to share the KMS CMK used to encrypt the AMI with the other AWS account
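
A minimal sketch of the kind of key-policy statement that shares the CMK with the other account. The account ID and Sid are placeholders; in practice the full policy would be attached with `kms.put_key_policy`.

```python
import json

# Hedged sketch: a KMS key policy statement granting another AWS account
# (placeholder ID 111122223333) permission to use the key that encrypted
# the shared AMI's snapshot.
OTHER_ACCOUNT = "111122223333"  # hypothetical target account

share_statement = {
    "Sid": "AllowUseOfKeyByOtherAccount",
    "Effect": "Allow",
    "Principal": {"AWS": f"arn:aws:iam::{OTHER_ACCOUNT}:root"},
    "Action": [
        "kms:Decrypt",
        "kms:DescribeKey",
        "kms:CreateGrant",
        "kms:ReEncrypt*",
    ],
    "Resource": "*",  # in a key policy, "*" means this key
}

policy_json = json.dumps({"Version": "2012-10-17", "Statement": [share_statement]})
```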

256
Q

You have created a Customer-managed CMK in KMS that you use to encrypt both S3 buckets and EBS snapshots. Your company policy mandates that your encryption keys be rotated every 3 months. What should you do?
- Re-configure your KMS CMK and enable Automatic Rotation, in the “Period” select 3 months
- Use AWS Managed Keys as they are automatically rotated by AWS every 3 months
- Rotate the KMS CMK manually. Create a new KMS CMK and use Key Aliases to reference the new KMS CMK. Keep the old KMS CMK so you can decrypt the old data

A

Rotate the KMS CMK manually. Create a new KMS CMK and use Key Aliases to reference the new KMS CMK. Keep the old KMS CMK so you can decrypt the old data
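
A minimal sketch of the manual-rotation flow: create a new CMK, repoint the alias at it, and keep the old key around for decrypting existing data. The alias name and key IDs are placeholders; the real calls go through boto3's KMS client.

```python
# Hedged sketch of manual CMK rotation via a Key Alias. Names are placeholders.

ALIAS = "alias/app-data-key"   # hypothetical alias referenced by S3/EBS config
old_key_id = "old-key-id"      # placeholder for the existing CMK
new_key_id = "new-key-id"      # placeholder returned by kms.create_key

def rotate_alias_kwargs(alias_name: str, target_key_id: str) -> dict:
    """kwargs for kms_client.update_alias(**kwargs)."""
    return {"AliasName": alias_name, "TargetKeyId": target_key_id}

kwargs = rotate_alias_kwargs(ALIAS, new_key_id)
# In practice (not executed here):
#   kms.update_alias(**kwargs)  # writers now encrypt under the new key
# Do NOT schedule deletion of old_key_id: it is still needed to decrypt
# data written before the rotation.
```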

257
Q

What should you use to control access to your KMS CMKs?
- KMS Key Policies
- KMS IAM Policy
- AWS GuardDuty
- KMS Access Control List (KMS ACL)

A

KMS Key Policies

258
Q

You have a Lambda function used to process some data in the database. You would like to give your Lambda function access to the database password. Which of the following options is the most secure?
- Embed it in the code
- Have it as a plaintext environment variable
- Have it as an encrypted environment variable and decrypt it at runtime

A

Have it as an encrypted environment variable and decrypt it at runtime

This is the most secure solution amongst these options.
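
A minimal sketch of the runtime-decryption pattern inside a Lambda handler. The environment variable name `DB_PASSWORD` is an assumption; the value is assumed to be KMS-encrypted ciphertext, base64-encoded the way the Lambda console stores it.

```python
import base64
import os

# Hedged sketch: decrypt an encrypted environment variable at runtime.

def decode_ciphertext(b64_value: str) -> bytes:
    """Turn the base64 env-var string back into raw KMS ciphertext bytes."""
    return base64.b64decode(b64_value)

def get_db_password() -> str:
    # Not executed here: requires a Lambda/KMS environment and credentials.
    import boto3
    ciphertext = decode_ciphertext(os.environ["DB_PASSWORD"])
    resp = boto3.client("kms").decrypt(CiphertextBlob=ciphertext)
    return resp["Plaintext"].decode("utf-8")
```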

259
Q

You have a secret value that you use for encryption purposes, and you want to store and track the values of this secret over time. Which AWS service should you use?
- AWS KMS Versioning feature
- SSM Parameter Store
- Amazon S3

A

SSM Parameter Store

SSM Parameter Store can be used to store secrets and has built-in version tracking. Each time you edit the value of a parameter, SSM Parameter Store creates a new version of the parameter and retains the previous versions. You can view the details, including the values, of all versions in a parameter’s history.
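
A minimal sketch of how versioning shows up in the API: each `put_parameter` with `Overwrite=True` creates a new version, and `get_parameter_history` returns all of them. The parameter name is a placeholder; no AWS call is made here.

```python
# Hedged sketch: kwargs for SSM Parameter Store writes and version history.

PARAM_NAME = "/app/encryption-secret"  # hypothetical parameter path

def put_secret_kwargs(name: str, value: str) -> dict:
    """kwargs for ssm_client.put_parameter(**kwargs)."""
    return {
        "Name": name,
        "Value": value,
        "Type": "SecureString",  # encrypted with a KMS key
        "Overwrite": True,       # writes a new version, retains old ones
    }

def history_kwargs(name: str) -> dict:
    """kwargs for ssm_client.get_parameter_history(**kwargs)."""
    return {"Name": name, "WithDecryption": True}
```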

260
Q

Your user-facing website is a high-risk target for DDoS attacks and you would like to get 24/7 support in case they happen and AWS bill reimbursement for the incurred costs during the attack. What AWS service should you use?
- AWS WAF
- AWS Shield Advanced
- AWS Shield
- AWS DDoS OpsTeam

A

AWS Shield Advanced

261
Q

You would like to externally maintain the configuration values of your main database, to be picked up at runtime by your application. What’s the best place to store them to maintain control and version history?
- Amazon DynamoDB
- Amazon S3
- Amazon EBS
- SSM Parameter Store

A

SSM Parameter Store

262
Q

Amazon GuardDuty scans the following data sources, EXCEPT …………….
- CloudTrail Logs
- VPC Flow Logs
- DNS Logs
- CloudWatch Logs

A

CloudWatch Logs

263
Q

You have a website hosted on a fleet of EC2 instances fronted by an Application Load Balancer. What should you use to protect your website from common web application attacks (e.g., SQL Injection)?
- AWS Shield
- AWS WAF
- AWS Security Hub
- AWS GuardDuty

A

AWS WAF

264
Q

You would like to analyze OS vulnerabilities from within EC2 instances. You need these analyses to occur weekly and provide you with concrete recommendations in case vulnerabilities are found. Which AWS service should you use?
- AWS Shield
- Amazon GuardDuty
- Amazon Inspector
- AWS Config

A

Amazon Inspector

265
Q

What is the most suitable AWS service for storing RDS DB passwords which also provides you automatic rotation?
- AWS Secrets Manager
- AWS KMS
- AWS SSM Parameter Store

A

AWS Secrets Manager

266
Q

Which AWS service allows you to centrally manage EC2 Security Groups and AWS Shield Advanced across all AWS accounts in your AWS Organization?
- AWS Shield
- AWS GuardDuty
- AWS Config
- AWS Firewall Manager

A

AWS Firewall Manager

AWS Firewall Manager is a security management service that allows you to centrally configure and manage firewall rules across your accounts and applications in AWS Organizations. It is integrated with AWS Organizations so you can enable AWS WAF rules, AWS Shield Advanced protection, security groups, AWS Network Firewall rules, and Amazon Route 53 Resolver DNS Firewall rules.

267
Q

Which AWS service helps you protect your sensitive data stored in S3 buckets?
- Amazon GuardDuty
- Amazon Shield
- Amazon Macie
- AWS KMS

A

Amazon Macie

Amazon Macie is a fully managed data security service that uses Machine Learning to discover and protect your sensitive data stored in S3 buckets. It automatically provides an inventory of S3 buckets, including a list of unencrypted buckets, publicly accessible buckets, and buckets shared with other AWS accounts. It identifies and alerts you to sensitive data, such as Personally Identifiable Information (PII).

268
Q

An online-payment company is using AWS to host its infrastructure. The frontend is created using VueJS and is hosted on an S3 bucket, and the backend is developed using PHP and is hosted on EC2 instances in an Auto Scaling Group. As their customers are worldwide, they use both CloudFront and Aurora Global Database to implement multi-region deployments that provide the lowest latency, availability, and resiliency. A new feature is required which gives customers the ability to store data encrypted on the database, and this data must not be disclosed even by the company admins. The data should be encrypted on the client side and stored in an encrypted format. What do you recommend to implement this?
- Using Aurora Client-side Encryption and KMS Multi-region Keys
- Using Lambda Client-side Encryption and KMS Multi-region Keys
- Using Aurora Client-side Encryption and CloudHSM
- Using Lambda Client-side Encryption and CloudHSM

A

Using Aurora Client-side Encryption and KMS Multi-region Keys

269
Q

You have an S3 bucket that is encrypted with SSE-KMS. You have been tasked to replicate the objects to a target bucket in the same AWS region but with a different KMS Key. You have configured the S3 replication, the target bucket, and the target KMS key and it is still not working. What is missing to make the S3 replication work?
- This is not a supported feature
- You have to raise a support ticket for AWS to start this replication process for you
- You have to configure permissions for both Source KMS Key kms:Decrypt and Target KMS Key kms:Encrypt to be used by the S3 Replication Service
- The source KMS Key and the target KMS key must be the same

A

You have to configure permissions for both the Source KMS Key (kms:Decrypt) and the Target KMS Key (kms:Encrypt) to be used by the S3 Replication Service
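
A minimal sketch of the two KMS statements the replication role's IAM policy needs, with placeholder key ARNs: decrypt on the source key to read objects, encrypt on the target key to write them.

```python
# Hedged sketch: KMS permissions for the S3 replication role. ARNs are
# placeholders for the real source and target key ARNs.

SOURCE_KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/source-key-id"  # placeholder
TARGET_KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/target-key-id"  # placeholder

replication_kms_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["kms:Decrypt"],   # read SSE-KMS objects in the source bucket
            "Resource": [SOURCE_KEY_ARN],
        },
        {
            "Effect": "Allow",
            "Action": ["kms:Encrypt"],   # write re-encrypted objects to the target
            "Resource": [TARGET_KEY_ARN],
        },
    ],
}
```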

270
Q

You have generated a public certificate using LetsEncrypt and uploaded it to ACM so you can attach it to an Application Load Balancer that forwards traffic to EC2 instances. As this certificate was generated outside of AWS, it does not support the automatic renewal feature. How would you be notified 30 days before this certificate expires so you can manually generate a new one?
- Configure ACM to send notifications by linking it to the 3rd party certificate provider LetsEncrypt
- Configure EventBridge for Daily Expiration Events for ACM to invoke SNS notifications to your email
- Configure EventBridge Monthly Expiration Events from ACM to invoke SNS notifications to your email
- Configure CloudWatch Alarms for Daily Expiration Events from ACM to invoke SNS notifications to your email

A

Configure EventBridge for Daily Expiration Events for ACM to invoke SNS notifications to your email
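
A minimal sketch of the EventBridge event pattern such a rule would match, with the rule then targeting an SNS topic. The `detail-type` string follows ACM's documented expiration event; verify it against the current ACM docs before relying on it.

```python
import json

# Hedged sketch: EventBridge rule for ACM's daily certificate-expiration
# events. The rule name is a placeholder; no AWS call is made here.

acm_expiry_pattern = {
    "source": ["aws.acm"],
    "detail-type": ["ACM Certificate Approaching Expiration"],
}

def rule_kwargs(name: str) -> dict:
    """kwargs for events_client.put_rule(**kwargs)."""
    return {"Name": name, "EventPattern": json.dumps(acm_expiry_pattern)}
```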

271
Q

You have created the main Edge-Optimized API Gateway in us-west-2 AWS region. This main Edge-Optimized API Gateway forwards traffic to the second level API Gateway in ap-southeast-1. You want to secure the main API Gateway by attaching an ACM certificate to it. Which AWS region are you going to create the ACM certificate in?
- us-east-1
- us-west-2
- ap-southeast-1
- Both us-east-1 and us-west-2 works

A

us-east-1

As an Edge-Optimized API Gateway uses a custom AWS-managed CloudFront distribution behind the scenes to route requests across the globe through CloudFront Edge locations, the ACM certificate must be created in us-east-1.

272
Q

You are managing an AWS Organization with multiple AWS accounts. Each account has a separate application with different resources. You want an easy way to manage Security Groups and WAF Rules across those accounts, as there was a security incident last week and you want to tighten up your resources. Which AWS service can help you to do so?
- AWS GuardDuty
- Amazon Shield
- Amazon Inspector
- AWS Firewall Manager

A

AWS Firewall Manager

273
Q

What does this CIDR 10.0.4.0/28 correspond to?
- 10.0.4.0 to 10.0.4.15
- 10.0.4.0 to 10.0.32.0
- 10.0.4.0 to 10.0.4.28
- 10.0.0.0 to 10.0.16.0

A

10.0.4.0 to 10.0.4.15

/28 means 16 IPs (2^(32-28) = 2^4 = 16), so only the last 4 bits of the address can change: 10.0.4.0 through 10.0.4.15.
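
The same arithmetic can be checked with Python's stdlib `ipaddress` module:

```python
import ipaddress

# The /28 range from the card, computed by the stdlib ipaddress module.
net = ipaddress.ip_network("10.0.4.0/28")

first = str(net.network_address)    # "10.0.4.0"
last = str(net.broadcast_address)   # "10.0.4.15"
size = net.num_addresses            # 2 ** (32 - 28) == 16
```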

274
Q

You have a corporate network of size 10.0.0.0/8 and a satellite office of size 192.168.0.0/16. Which CIDR is acceptable for your AWS VPC if you plan on connecting your networks later on?
- 172.16.0.0/12
- 172.16.0.0/16
- 10.0.16.0/16
- 192.168.4.0/18

A

172.16.0.0/16

CIDRs should not overlap, and the largest CIDR block AWS allows for a VPC is /16.

275
Q

You plan on creating a subnet and want it to have at least capacity for 28 EC2 instances. What’s the minimum size you need to have for your subnet?
- /28
- /27
- /26
- /25

A

/26

A /26 provides 64 IPs. AWS reserves 5 IPs per subnet, leaving 59 usable, which covers 28 instances; a /27 (32 IPs) would leave only 27 usable.
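
The sizing rule can be sketched as a small helper, using AWS's 5-IPs-per-subnet reservation:

```python
# Subnet sizing arithmetic: AWS reserves 5 IPs in every subnet, so a /27
# (32 IPs) leaves only 27 usable, while a /26 (64 IPs) leaves 59.

AWS_RESERVED_PER_SUBNET = 5

def usable_ips(prefix_len: int) -> int:
    """Usable addresses in an AWS subnet with the given prefix length."""
    return 2 ** (32 - prefix_len) - AWS_RESERVED_PER_SUBNET

def smallest_subnet_for(instances: int) -> int:
    """Smallest subnet (largest prefix length) that fits the instance count."""
    prefix = 28  # AWS's smallest allowed subnet is /28
    while usable_ips(prefix) < instances:
        prefix -= 1
    return prefix
```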

276
Q

Security Groups operate at the …………….. level while NACLs operate at the …………….. level.
- EC2 instance, Subnet
- Subnet, EC2 instance

A

EC2 instance, Subnet

277
Q

You have attached an Internet Gateway to your VPC, but your EC2 instances still don’t have access to the internet. What is NOT a possible issue?
- Route Tables are missing entries
- The EC2 instances don’t have public IPs
- The Security Group does not allow traffic in
- The NACL does not allow network traffic out

A

The Security Group does not allow traffic in

Security groups are stateful and if traffic can go out, then it can go back in.

277
Q

You would like to provide Internet access to your EC2 instances in private subnets with IPv4 while making sure this solution requires the least amount of administration and scales seamlessly. What should you use?
- NAT Instances with Source/Destination Check flag off
- Egress Only Internet Gateway
- NAT Gateway

A

NAT Gateway

278
Q

VPC Peering has been enabled between VPC A and VPC B, and the route tables have been updated for VPC A. But, the EC2 instances cannot communicate. What is the likely issue?
- Check the NACL
- Check the Route Tables in VPC B
- Check the EC2 instance attached Security Groups
- Check if DNS Resolution is enabled

A

Check the Route Tables in VPC B

Route tables must be updated in both VPCs that are peered.

279
Q

You have set up a Direct Connect connection between your corporate data center and your VPC A in your AWS account. You need to access VPC B in another AWS region from your corporate datacenter as well. What should you do?
- Enable VPC Peering
- Use a Customer Gateway
- Use a Direct Connect Gateway
- Set up a NAT Gateway

A

Use a Direct Connect Gateway

This is the main use case of Direct Connect Gateways.

280
Q

When using VPC Endpoints, what are the only two AWS services that have a Gateway Endpoint available?
- Amazon S3 & Amazon SQS
- Amazon SQS & DynamoDB
- Amazon S3 & DynamoDB

A

Amazon S3 & DynamoDB

These two services have a VPC Gateway Endpoint (remember it), all the other ones have an Interface endpoint (powered by Private Link - means a private IP).

281
Q

AWS reserves 5 IP addresses each time you create a new subnet in a VPC. When you create a subnet with CIDR 10.0.0.0/24, the following IP addresses are reserved, EXCEPT ………………..
- 10.0.0.1
- 10.0.0.2
- 10.0.0.3
- 10.0.0.4

A

10.0.0.4
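
The five reserved addresses in a 10.0.0.0/24 subnet are the network address, the VPC router, the Amazon-provided DNS, one address reserved for future use, and the broadcast address (which AWS reserves even though broadcast is not supported):

```python
import ipaddress

# The five addresses AWS reserves in a 10.0.0.0/24 subnet.
net = ipaddress.ip_network("10.0.0.0/24")

reserved = {
    str(net.network_address),      # 10.0.0.0   - network address
    str(net.network_address + 1),  # 10.0.0.1   - VPC router
    str(net.network_address + 2),  # 10.0.0.2   - Amazon-provided DNS
    str(net.network_address + 3),  # 10.0.0.3   - reserved for future use
    str(net.broadcast_address),    # 10.0.0.255 - broadcast (not supported, still reserved)
}

first_usable = str(net.network_address + 4)  # 10.0.0.4
```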

282
Q

You have 3 VPCs A, B, and C. You want to establish a VPC Peering connection between all the 3 VPCs. What should you do?
- As VPC Peering supports Transitive Peering, you need to establish 2 VPC Peering connections (A-B, B-C)
- Establish 3 VPC Peering connections (A-B, A-C, B-C)

A

Establish 3 VPC Peering connections (A-B, A-C, B-C)

283
Q

How can you capture information about IP traffic inside your VPCs?
- Enable VPC Flow Logs
- Enable VPC Traffic Mirroring
- Enable CloudWatch Traffic Logs

A

Enable VPC Flow Logs

VPC Flow Logs is a VPC feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC.

284
Q

If you want a 500 Mbps Direct Connect connection between your corporate datacenter to AWS, you would choose a ……………… connection.
- Dedicated
- Hosted

A

Hosted

A Hosted Direct Connect connection supports capacities of 50 Mbps, 500 Mbps, and up to 10 Gbps, while Dedicated connections come in 1 Gbps, 10 Gbps, and 100 Gbps.

285
Q

When you set up an AWS Site-to-Site VPN connection between your corporate on-premises datacenter and VPCs in AWS Cloud, what are the two major components you want to configure for this connection?
- Customer Gateway and NAT Gateway
- Internet Gateway and Customer Gateway
- Virtual Private Gateway and Internet Gateway
- Virtual Private Gateway and Customer Gateway

A

Virtual Private Gateway and Customer Gateway

286
Q

Your company has several on-premises sites across the USA. These sites are currently linked using private connections, but your private connections provider has recently been quite unstable, taking your IT architecture partially offline. You would like to create a backup connection over the public Internet to link your on-premises sites, which you can fail over to in case of issues with your provider. What do you recommend?
- VPC Peering
- AWS VPN CloudHub
- Direct Connect
- AWS PrivateLink

A

AWS VPN CloudHub

AWS VPN CloudHub allows you to securely communicate with multiple sites using AWS VPN. It operates on a simple hub-and-spoke model that you can use with or without a VPC.

287
Q

You need to set up a dedicated connection between your on-premises corporate datacenter and AWS Cloud. This connection must be private, consistent, and traffic must not travel through the Internet. Which AWS service should you use?
- Site-to-Site VPN
- AWS PrivateLink
- AWS Direct Connect
- Amazon EventBridge

A

AWS Direct Connect

288
Q

Using a Direct Connect connection, you can access both public and private AWS resources.
- True
- False

A

True

289
Q

You want to scale an AWS Site-to-Site VPN connection’s throughput, established between your on-premises data center and the AWS Cloud, beyond a single IPsec tunnel’s maximum limit of 1.25 Gbps. What should you do?
- Use 2 Virtual Private Gateways
- Use Direct Connect Gateway
- Use Transit Gateway

A

Use Transit Gateway

290
Q

You have a VPC in your AWS account that runs in dual-stack mode. You repeatedly try to launch an EC2 instance, but it fails. After further investigation, you find that you no longer have IPv4 addresses available. What should you do?
- Modify your VPC to run in IPv6 mode only
- Modify your VPC to run in IPv4 mode only
- Add an additional IPv4 CIDR to your VPC

A

Add an additional IPv4 CIDR to your VPC

291
Q

A web application backend is hosted on EC2 instances in private subnets fronted by an Application Load Balancer in public subnets. There is a requirement to give some of the developers access to the backend EC2 instances but without exposing them to the Internet. You have created a bastion host EC2 instance in a public subnet and configured the backend EC2 instances’ Security Group to allow traffic from the bastion host. Which of the following is the best configuration for the bastion host’s Security Group to make it secure?
- Allow traffic only on port 80 from the company’s public CIDR
- Allow traffic only on port 22 from the company’s public CIDR
- Allow traffic only on port 22 from the company’s private CIDR
- Allow traffic only on port 80 from the company’s private CIDR

A

Allow traffic only on port 22 from the company’s public CIDR

292
Q

A company has set up a Direct Connect connection between their corporate data center and AWS. There is a requirement to prepare a cost-effective, secure backup connection in case there are issues with this Direct Connect connection. What is the most cost-effective and secure solution you recommend?
- Setup another Direct Connect connection to the same AWS region
- Setup another Direct Connect connection to a different AWS region
- Setup a Site-to-Site VPN connection as a backup

A

Setup a Site-to-Site VPN connection as a backup

293
Q

Which AWS service allows you to protect and control traffic in your VPC from layer 3 to layer 7?
- AWS Network Firewall
- Amazon GuardDuty
- Amazon Inspector
- Amazon Shield

A

AWS Network Firewall

294
Q

A web application is hosted on a fleet of EC2 instances managed by an Auto Scaling Group. You are exposing this application through an Application Load Balancer. Both the EC2 instances and the ALB are deployed in a VPC with the CIDR 192.168.0.0/18. How do you configure the EC2 instances’ security group to ensure only the ALB can access them on port 80?
- Add an Inbound Rule with port 80 and 0.0.0.0/0 as the source
- Add an Inbound Rule with port 80 and 192.168.0.0/18 as the source
- Add an Inbound Rule with port 80 and ALB’s Security Group as the source
- Load an SSL certificate on the ALB

A

Add an Inbound Rule with port 80 and ALB’s Security Group as the source

This is the most secure way of ensuring only the ALB can access the EC2 instances. Referencing a security group as the source of a rule is an extremely powerful pattern, and many exam questions rely on it.
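
A minimal sketch of what that rule looks like as an `authorize_security_group_ingress` request: the source is the ALB's security group (a `UserIdGroupPairs` entry), not a CIDR. Group IDs are placeholders.

```python
# Hedged sketch: inbound rule referencing the ALB's security group instead
# of a CIDR range. Group IDs are placeholders; in practice this dict is
# passed to ec2_client.authorize_security_group_ingress(**ingress_kwargs).

ALB_SG_ID = "sg-alb-placeholder"        # hypothetical ALB security group
INSTANCE_SG_ID = "sg-ec2-placeholder"   # hypothetical instance security group

ingress_kwargs = {
    "GroupId": INSTANCE_SG_ID,
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            # Source is a security group, not a CIDR: only members of the
            # ALB's security group can reach the instances on port 80.
            "UserIdGroupPairs": [{"GroupId": ALB_SG_ID}],
        }
    ],
}
```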

295
Q

As part of your Disaster Recovery plan, you would like to have only the critical infrastructure up and running in AWS. You don’t mind a longer Recovery Time Objective (RTO). Which DR strategy do you recommend?
- Backup and Restore
- Pilot Light
- Warm Standby
- Multi-Site

A

Pilot Light

If you’re interested, read more about Disaster Recovery options in AWS here: https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html

296
Q

You would like to get the Disaster Recovery strategy with the lowest Recovery Time Objective (RTO) and Recovery Point Objective (RPO), regardless of the cost. Which DR should you choose?
- Backup and Restore
- Pilot Light
- Warm Standby
- Multi-Site

A

Multi-Site

If you’re interested, read more about Disaster Recovery options in AWS here: https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html

297
Q

Which of the following Disaster Recovery strategies has a potentially high Recovery Point Objective (RPO) and Recovery Time Objective (RTO)?
- Backup and Restore
- Pilot Light
- Warm Standby
- Multi-Site

A

Backup and Restore

If you’re interested, read more about Disaster Recovery options in AWS here: https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html

298
Q

You want to make a Disaster Recovery plan where you have a scaled-down version of your system up and running, and when a disaster happens, it scales up quickly. Which DR strategy should you choose?
- Backup and Restore
- Pilot Light
- Warm Standby
- Multi-Site

A

Warm Standby

If you’re interested, read more about Disaster Recovery options in AWS here: https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html

299
Q

You have an on-premises Oracle database that you want to migrate to AWS, specifically to Amazon Aurora. How would you do the migration?
- Use AWS Schema Conversion Tool (AWS SCT) to convert the database schema, then use AWS Database Migration Service (AWS DMS) to migrate the data
- Use AWS Database Migration Service (AWS DMS) to convert the database schema, then use AWS Schema Conversion Tool (AWS SCT) to migrate the data

A

Use AWS Schema Conversion Tool (AWS SCT) to convert the database schema, then use AWS Database Migration Service (AWS DMS) to migrate the data

300
Q

AWS DataSync supports the following locations, EXCEPT ………………..
- Amazon S3
- Amazon EBS
- Amazon EFS
- Amazon FSx for Windows File Server

A

Amazon EBS

301
Q

You are running many resources in AWS such as EC2 instances, EBS volumes, DynamoDB tables… You want an easy way to manage backups across all these AWS services from a single place. Which AWS offering makes this process easy?
- Amazon S3
- AWS Storage Gateway
- AWS Backup
- EC2 Snapshots

A

AWS Backup

AWS Backup enables you to centralize and automate data protection across AWS services. It helps you support your regulatory compliance or business policies for data protection.

302
Q

A company is planning to migrate its existing websites, applications, servers, virtual machines, and data to AWS. They want to do a lift-and-shift migration with minimum downtime and reduced costs. Which AWS service can help in this scenario?
- AWS Database Migration Service
- AWS Application Migration Service
- AWS Backup
- AWS Schema Conversion Tool

A

AWS Application Migration Service

303
Q

A company is using VMware in its on-premises data center to manage its infrastructure. There is a requirement to extend their data center and infrastructure to AWS while keeping the VMware technology stack they already use. Which AWS service can they use?
- VMware Cloud on AWS
- AWS DataSync
- AWS Application Migration Service
- AWS Application Discovery Service

A

VMware Cloud on AWS

304
Q

A company is using RDS for MySQL as their main database, but lately they have been facing issues with managing the database, its performance, and its scalability. They have decided to use Aurora for MySQL instead, for better performance, less complexity, and fewer administrative tasks. What is the best and most cost-effective way to migrate from RDS for MySQL to Aurora for MySQL?
- Raise an AWS support ticket to do the migration as it is not supported
- Create a database dump from RDS for MySQL, store it in a S3 bucket, then restore it to Aurora for MySQL
- You cannot migrate directly to Aurora for MySQL; you have to create a custom application to insert the data manually
- Create a snapshot from RDS for MySQL and restore it to Aurora for MySQL

A

Create a snapshot from RDS for MySQL and restore it to Aurora for MySQL

305
Q

Which AWS service can you use to automate backups across different AWS services such as RDS, DynamoDB, Aurora, EFS file systems, and EBS volumes?
- Amazon S3 Lifecycle Policy
- AWS DataSync
- AWS Backup
- Amazon Glacier

A

AWS Backup

306
Q

You are working on a Serverless application where you want to process objects uploaded to an S3 bucket. You have configured S3 Events on your S3 bucket to invoke a Lambda function every time an object has been uploaded. You want to ensure that events that can’t be processed are sent to a Dead Letter Queue (DLQ) for further processing. Which AWS service should you use to set up the DLQ?
- S3 Events
- SNS Topic
- Lambda Function

A

Lambda Function

The Lambda function’s invocation is “asynchronous”, so the DLQ has to be set on the Lambda function side.
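
A minimal sketch of attaching the DLQ on the Lambda side: an SQS queue ARN is set as the function's `DeadLetterConfig`, so failed asynchronous invocations (like S3 events) land there. The function name and ARN are placeholders.

```python
# Hedged sketch: kwargs for lambda_client.update_function_configuration
# attaching an SQS queue as the function's Dead Letter Queue. Names and
# ARNs are placeholders; no AWS call is made here.

FUNCTION_NAME = "process-uploads"  # hypothetical S3-triggered function
DLQ_ARN = "arn:aws:sqs:us-east-1:111122223333:upload-dlq"  # placeholder queue

dlq_kwargs = {
    "FunctionName": FUNCTION_NAME,
    # Events that fail all async retries are sent to this target.
    "DeadLetterConfig": {"TargetArn": DLQ_ARN},
}
```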

307
Q

As a Solutions Architect, you have created an architecture for a company that includes the following AWS services: CloudFront, Web Application Firewall (AWS WAF), AWS Shield, Application Load Balancer, and EC2 instances managed by an Auto Scaling Group. Sometimes the company receives malicious requests and wants to block these IP addresses. According to your architecture, where should you do it?
- CloudFront
- AWS WAF
- AWS Shield
- ALB Security Group
- EC2 Security Group
- NACL

A

AWS WAF

308
Q

Your EC2 instances are deployed in a Cluster Placement Group in order to perform High-Performance Computing (HPC). You would like to maximize network performance between your EC2 instances. What should you use?
- Elastic Fabric Adapter
- Elastic Network Interface
- Elastic Network Adapter
- FSx for Lustre

A

Elastic Fabric Adapter

309
Q

Which AWS Service analyzes your AWS account and gives recommendations for cost optimization, performance, security, fault tolerance, and service limits?
- AWS Trusted Advisor
- AWS CloudTrail
- AWS Identity Access Manager (IAM)
- AWS CloudFormation

A

AWS Trusted Advisor

AWS Trusted Advisor provides recommendations that help you follow AWS best practices. It evaluates your account by using checks. These checks identify ways to optimize your AWS infrastructure, improve security and performance, reduce costs, and monitor service quotas.