AWS Solutions Architect Flashcards

AWS architecture (465 cards)

1
Q

What is the difference between containers and virtual machines?

A) Containers share the underlying host system’s OS kernel
B) Every Container goes through a full OS boot-up cycle
C) Containers can take a long time to start
D) All of the above

A

A) Containers share the underlying host system’s OS kernel

2
Q

Which of the following is true about Docker?

A) Provides tools to build, manage, and deploy containers
B) Leverages file system layers to be lightweight and fast
C) Creates container images that can be modified by running containers
D) Both A and B

A

D) Both A and B

3
Q

What are microservices?

A

Microservices are an architectural and organizational approach that speeds up the scalability of application services

4
Q

What are the Characteristics of microservices?

A

1) Decentralized, evolutionary design
2) Smart endpoints, dumb pipes
3) Independent products, not projects
4) Designed for failure
5) Disposability
6) Development and production parity

5
Q

Which of the following is NOT TRUE of microservices architectures?

A) Decompose monolithic applications into smaller pieces
B) Create faster development and test cycles
C) Work well within container-based workloads
D) Require that all applications be developed in the same programming language

A

D) Require that all applications be developed in the same programming language

6
Q

Which of the following can be public or private storage for Docker images?

A) Image holder
B) Binder
C) Registry
D) Container box

A

C) Registry

7
Q

What are the three Cloud computing deployment models?

A

1) On-premises
2) Cloud
3) Hybrid

8
Q

What are the Six advantages of cloud computing?

A

1) Pay-as-you-go
2) Benefit from massive economies of scale
3) Stop guessing capacity
4) Increase speed and agility
5) Realize cost savings
6) Go global in minutes

Pay-as-you-go
The cloud computing model is based on paying only for the resources that you use. This is in contrast to on-premises models of investing in data centers and hardware that might not be fully used.

Benefit from massive economies of scale
By using cloud computing, you can achieve a lower cost than you can get on your own. Because usage from hundreds of thousands of customers is aggregated in the cloud, AWS can achieve higher economies of scale, which translates into lower pay-as-you-go prices.

Increase speed and agility
IT resources are only a click away, which means that you reduce the time to make resources available to developers from weeks to minutes. This results in a dramatic increase in agility for the organization, because the cost and time it takes to experiment and develop is significantly lower.

9
Q

What are Regions?

A

Regions are geographic locations worldwide where AWS hosts its data centers. AWS Regions are named after the location where they reside. For example, in the United States, the Region in Northern Virginia is called the Northern Virginia Region, and the Region in Oregon is called the Oregon Region. AWS has Regions in Asia Pacific, China, Europe, the Middle East, North America, and South America. And we continue to expand to meet our customers’ needs.

Choosing the right AWS Region

AWS Regions are independent from one another. Without explicit customer consent and authorization, data is not replicated from one Region to another. When you decide which AWS Region to host your applications and workloads, consider four main aspects: latency, price, service availability, and compliance.

10
Q

What is an Availability Zone (AZ)?

A

Availability Zones

Inside every Region is a cluster of Availability Zones. An Availability Zone consists of one or more data centers with redundant power, networking, and connectivity. These data centers operate in discrete facilities in undisclosed locations. They are connected using redundant high-speed and low-latency links.

11
Q

What is the Scope of AWS services?

A

Scope of AWS services

Depending on the AWS service that you use, your resources are either deployed at the Availability Zone, Region, or Global level. Each service is different, so you must understand how the scope of a service might affect your application architecture.

When you operate a Region-scoped service, you only need to select the Region that you want to use. If you are not asked to specify an individual Availability Zone to deploy the service in, this is an indicator that the service operates on a Region-scope level. For Region-scoped services, AWS automatically performs actions to increase data durability and availability.

On the other hand, some services ask you to specify an Availability Zone. With these services, you are often responsible for increasing the data durability and high availability of these resources.

12
Q

What is Maintaining resiliency?

A

Maintaining resiliency

To keep your application available, you must maintain high availability and resiliency. A well-known best practice for cloud architecture is to use Region-scoped, managed services. These services come with availability and resiliency built in. When that is not possible, make sure your workload is replicated across multiple Availability Zones. At a minimum, you should use two Availability Zones. That way, if an Availability Zone fails, your application will have infrastructure up and running in a second Availability Zone to take over the traffic.

13
Q

What is an Edge location?

A

Edge locations

Edge locations are global locations where content is cached. For example, if your media content is in London and you want to share video files with your customers in Sydney, you could have the videos cached in an edge location closest to Sydney. This would make it possible for your customers to access the cached videos more quickly than accessing them from London. Currently, there are over 400 edge locations globally.

14
Q

What are the three ways you can interact with AWS when creating objects?

A

1) Console
2) Command Line Interface
3) AWS SDKs

AWS Management Console

One way to manage cloud resources is through the web-based console, where you log in and choose the desired service.

AWS CLI
The AWS CLI is a unified tool that you can use to manage AWS services. You can download and configure one tool that you can use to control multiple AWS services from the command line, and automate them with scripts. The AWS CLI is open source, and installers are available for Windows, Linux, and macOS.

AWS SDKs
API calls to AWS can also be performed by running code with programming languages. You can do this by using AWS SDKs. SDKs are open source and maintained by AWS for the most popular programming languages, such as C++, Go, Java, JavaScript, .NET, Node.js, PHP, Python, Ruby, Rust, and Swift.

15
Q

What are the security responsibilities of AWS vs. the customer?

A

AWS responsibility

AWS is responsible for security of the cloud. This means that AWS protects and secures the infrastructure that runs the services offered in the AWS Cloud. AWS is responsible for the following:

Protecting and securing AWS Regions, Availability Zones, and data centers, down to the physical security of the buildings
Managing the hardware, software, and networking components that run AWS services, such as the physical servers, host operating systems, virtualization layers, and AWS networking components

Customer responsibility

Customers are responsible for security in the cloud. When using any AWS service, the customer is responsible for properly configuring the service and their applications, in addition to ensuring that their data is secure.

The customers’ level of responsibility depends on the AWS service. Some services require the customer to perform all the necessary security configuration and management tasks. Other more abstracted services require customers to only manage the data and control access to their resources. Using the two categories of AWS services, customers can determine their level of responsibility for each AWS service that they use.

16
Q

What is the AWS root user?

A

AWS root user
When you first create an AWS account, you begin with a single sign-in identity that has complete access to all AWS services and resources in the account. This identity is called the AWS root user and is accessed by signing in with the email address and password that were used to create the account.

AWS root user credentials

The AWS root user has two sets of credentials associated with it. One set of credentials is the email address and password that were used to create the account. This allows you to access the AWS Management Console. The second set of credentials is called access keys, which allow you to make programmatic requests from the AWS Command Line Interface (AWS CLI) or AWS API.

17
Q

What are the root user best practices?

A

To ensure the safety of the root user, follow these best practices:

1) Choose a strong password for the root user.

2) Enable multi-factor authentication (MFA) for the root user.

3) Never share your root user password or access keys with anyone.

4) Disable or delete the access keys associated with the root user.

5) Create an Identity and Access Management (IAM) user for administrative tasks or everyday tasks.

18
Q

What is MFA (Multi-factor authentication)?

A

When you create an AWS account and first log in to the account, you use single-factor authentication. Single-factor authentication is the simplest and most common form of authentication. It only requires one authentication method. In this case, you use a user name and password to authenticate as the AWS root user. Other forms of single-factor authentication include a security pin or a security token. Multi-factor authentication (MFA) builds on this by requiring two or more authentication methods to verify your identity, so a compromised password alone is not enough to access your account.

19
Q

What THREE categories does MFA pull from?

A

1) Something you know
2) Something you have
3) Something you are

Something you know, such as a user name and password or pin number

Something you have, such as a one-time passcode from a hardware device or mobile app

Something you are, such as a fingerprint or face scanning technology

20
Q

What are the supported MFA devices?

A

1) Virtual MFA
2) Hardware TOTP token
3) FIDO security keys

Virtual MFA
A software app that runs on a phone or other device that provides a one-time passcode. These applications can run on unsecured mobile devices, and because of that, they might not provide the same level of security as hardware or FIDO security keys.

Hardware TOTP token
A hardware device, generally a key fob or display card device, that generates a one-time, six-digit numeric code based on the time-based one-time password (TOTP) algorithm.

FIDO security keys
FIDO-certified hardware security keys are provided by third-party providers such as Yubico. You can plug your FIDO security key into a USB port on your computer and enable it for your account.
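The TOTP algorithm behind both virtual MFA apps and hardware tokens is standardized (RFC 6238, built on the HOTP algorithm from RFC 4226). As a rough sketch of how those one-time six-digit codes are produced — not AWS’s implementation — the following uses only Python’s standard library:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret: bytes, step: int = 30, t=None) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a time-step counter."""
    if t is None:
        t = time.time()
    return hotp(secret, int(t // step))

# The published RFC 4226 test vector for this shared secret
secret = b"12345678901234567890"
print(hotp(secret, 0))  # → 755224
```

The printed value matches the RFC 4226 test vector for that shared secret, which is how TOTP implementations are typically validated.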

21
Q

True or False: You can apply IAM policies to the root user.

A

False; you can only apply IAM policies to IAM users, groups, and roles

22
Q

What is IAM?

A

What is IAM?

AWS Identity and Access Management (IAM) is an AWS service that helps you manage access to your AWS account and resources. It also provides a centralized view of who and what are allowed inside your AWS account (authentication), and who and what have permissions to use and work with your AWS resources (authorization).

With IAM, you can share access to an AWS account and resources without sharing your set of access keys or password. You can also provide granular access to those working in your account, so people and services only have permissions to the resources that they need. For example, to provide a user of your AWS account with read-only access to a particular AWS service, you can granularly select which actions and which resources in that service that they can access.

23
Q

What are the features of IAM?

A

Global
IAM is global and not specific to any one Region. You can see and use your IAM configurations from any Region in the AWS Management Console.

Integrated with AWS services
IAM is integrated with many AWS services by default.

Shared access
You can grant other identities permission to administer and use resources in your AWS account without having to share your password and key.

Multi-factor authentication
IAM supports MFA. You can add MFA to your account and to individual users for extra security.

Identity federation
IAM supports identity federation, which allows users with passwords elsewhere—like your corporate network or internet identity provider—to get temporary access to your AWS account.

Free to use
Any AWS customer can use IAM; the service is offered at no additional charge.

24
Q

What are IAM user credentials?

A

IAM user credentials

An IAM user consists of a name and a set of credentials. When you create a user, you can provide them with the following types of access:

Access to the AWS Management Console
Programmatic access to the AWS CLI and AWS API
25
What are IAM groups?
An IAM group is a collection of users. All users in the group inherit the permissions assigned to the group. This makes it possible to give permissions to multiple users at once. It’s a more convenient and scalable way of managing permissions for users in your AWS account. This is why using IAM groups is a best practice.
26
What are IAM policies?
To manage access and provide permissions to AWS services and resources, you create IAM policies and attach them to an IAM identity. Whenever an IAM identity makes a request, AWS evaluates the policies associated with it. For example, if a developer inside the developers group makes a request to an AWS service, AWS evaluates any policies attached to the developers group and any policies attached to the developer user to determine whether the request should be allowed or denied. Example:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "*",
    "Resource": "*"
  }]
}
27
What are the different parts of an IAM JSON policy?
This policy has four major JSON elements: Version, Effect, Action, and Resource.

1) The Version element defines the version of the policy language. It specifies the language syntax rules that are needed by AWS to process a policy. To use all the available policy features, include "Version": "2012-10-17" before the "Statement" element in your policies.

2) The Effect element specifies whether the policy will allow or deny access. In this policy, the Effect is "Allow", which means you’re providing access to a particular resource.

3) The Action element describes the type of action that should be allowed or denied. In the example policy, the action is "*". This is called a wildcard, and it is used to symbolize every action inside your AWS account.

4) The Resource element specifies the object or objects that the policy statement covers. In the policy example, the resource is the wildcard "*". This represents all resources inside your AWS console.
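To see how the four elements interact, here is a deliberately simplified sketch of policy evaluation in Python. It only handles the bare Effect/Action/Resource matching described above; real IAM evaluation also covers explicit denies, conditions, wildcard patterns, and principals:

```python
import json

# The example policy from the card, as a complete JSON document
policy_json = """
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "*",
    "Resource": "*"
  }]
}
"""

def is_allowed(policy: dict, action: str, resource: str) -> bool:
    """Simplified check: does any Allow statement match the requested
    action and resource? (Not AWS's real evaluation engine.)"""
    for stmt in policy["Statement"]:
        matches_action = stmt["Action"] in ("*", action)
        matches_resource = stmt["Resource"] in ("*", resource)
        if stmt["Effect"] == "Allow" and matches_action and matches_resource:
            return True
    return False

policy = json.loads(policy_json)
print(is_allowed(policy, "s3:GetObject", "arn:aws:s3:::my-bucket/*"))  # → True
```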
28
What are the four main factors that you should take into consideration when choosing a Region?

A) Latency, high availability, taxes, and compliance
B) Latency, price, service availability, and compliance
C) Latency, taxes, speed, and compliance
D) Latency, security, high availability, and resiliency
B) Latency, price, service availability, and compliance
29
Which of the following best describes the relationship between Regions, Availability Zones, and data centers?

A) Regions are a grouping of Availability Zones. Data centers are one or more discrete Availability Zones.
B) Data centers are a grouping of Regions. Regions are one or more discrete Availability Zones.
C) Regions are a grouping of Availability Zones. Availability Zones are one or more discrete data centers.
D) Availability Zones are a grouping of Regions. Regions are one or more discrete data centers.
C) Regions are a grouping of Availability Zones. Availability Zones are one or more discrete data centers.
30
Which of the following is a benefit of cloud computing?

A) Run and maintain your own data centers.
B) Increase time to market.
C) Overprovision for scale.
D) Pay as you go.
D) Pay as you go.
31
What is a client?
A client is a person or computer that sends a request.
32
What are the THREE AWS types of compute options that are available?
1) Virtual machines (instances)
2) Containers
3) Serverless
33
What is Amazon Elastic Compute Cloud (Amazon EC2)?
Amazon EC2 is a web service that provides secure and resizable compute capacity in the cloud
34
What are THREE things that you can do with Amazon EC2?
1) Provision and launch one or more EC2 instances in minutes.
2) Stop or shut down EC2 instances when you finish running a workload.
3) Pay by the hour or second for each instance type (minimum of 60 seconds).
35
How can you manage AWS EC2 instances?
Through the:
1) AWS Management Console
2) AWS CLI
3) AWS SDKs
4) Automation tools
5) Infrastructure orchestration services
36
What does AMI stand for?
Amazon Machine Image (AMI). In the AWS Cloud, the operating system installation is not your responsibility. Instead, it's built into the AMI that you choose. An AMI includes the operating system, storage mapping, architecture type, launch permissions, and any additional preinstalled software applications.
37
What are the multiple ways to start an AMI?
1) Quick Start AMIs
Quick Start AMIs are commonly used AMIs created by AWS that you can select to get started quickly.

2) AWS Marketplace AMIs
AWS Marketplace AMIs provide popular open-source and commercial software from third-party vendors.

3) My AMIs
My AMIs are created from your EC2 instances.

4) Community AMIs
Community AMIs are provided by the AWS user community.

5) Custom image
Build your own custom image with EC2 Image Builder.
38
What does the "c5n.xlarge" instance type name indicate?
First position – The first position, c, indicates the instance family. This indicates that this instance belongs to the compute optimized family.

Second position – The second position, 5, indicates the generation of the instance. This instance belongs to the fifth generation of instances.

Remaining letters before the period – In this case, n indicates additional attributes, such as local NVMe storage.

After the period – After the period, xlarge indicates the instance size. In this example, it's xlarge.
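The naming convention above is regular enough to split mechanically. A small illustrative parser (assuming a single generation digit, which holds for names like c5n.xlarge and m5.2xlarge):

```python
import re

def parse_instance_type(name: str) -> dict:
    """Split an EC2 instance type such as 'c5n.xlarge' into its parts:
    family letter(s), generation digit, extra attribute letters, and size."""
    match = re.fullmatch(r"([a-z]+?)(\d)([a-z-]*)\.(\w+)", name)
    if match is None:
        raise ValueError(f"unrecognized instance type: {name}")
    family, generation, attributes, size = match.groups()
    return {
        "family": family,
        "generation": int(generation),
        "attributes": attributes,
        "size": size,
    }

print(parse_instance_type("c5n.xlarge"))
# → {'family': 'c', 'generation': 5, 'attributes': 'n', 'size': 'xlarge'}
```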
39
What is meant by Elastic?
Being able to scale in or out based on need
40
What is the difference between stop and stop-hibernate of an EC2 instance?
When you stop an instance, it enters the stopping state until it reaches the stopped state. AWS does not charge usage or data transfer fees for your instance after you stop it, but storage for any Amazon EBS volumes is still charged. While your instance is in the stopped state, you can modify some attributes, like the instance type. When you stop your instance, the data from the instance memory (RAM) is lost.

When you stop-hibernate an instance, Amazon EC2 signals the operating system to perform hibernation (suspend-to-disk), which saves the contents from the instance memory (RAM) to the EBS root volume. You can hibernate an instance only if hibernation is turned on and the instance meets the hibernation prerequisites.
41
What is the On-Demand Instance?
With On-Demand Instances, you pay for compute capacity per hour or per second, depending on which instances that you run. There are no long-term commitments or upfront payments required. Billing begins whenever the instance is running, and billing stops when the instance is in a stopped or terminated state. You can increase or decrease your compute capacity to meet the demands of your application and only pay the specified hourly rates for the instance that you use.
42
What is a Spot Instance?
For applications that have flexible start and end times, Amazon EC2 offers the Spot Instances option. With Amazon EC2 Spot Instances, you can request spare Amazon EC2 computing capacity for up to 90 percent off the On-Demand price.

Spot Instances are recommended for the following use cases:
* Applications that have flexible start and end times
* Applications that are only feasible at very low compute prices
* Users with fault-tolerant or stateless workloads

With Spot Instances, you set a limit on how much you want to pay for the instance hour. This is compared against the current Spot price that AWS determines. Spot Instance prices adjust gradually based on long-term trends in supply and demand for Spot Instance capacity. If the amount that you pay is more than the current Spot price and there is capacity, you will receive an instance.
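The fulfillment rule in the last sentence can be stated as a one-line predicate. The prices below are hypothetical and for illustration only:

```python
def spot_request_fulfilled(max_price: float, current_spot_price: float,
                           capacity_available: bool) -> bool:
    """Sketch of the rule above: a Spot request is fulfilled when your
    maximum price meets the current Spot price and spare capacity exists."""
    return capacity_available and max_price >= current_spot_price

# Hypothetical prices, not real AWS rates
print(spot_request_fulfilled(max_price=0.05, current_spot_price=0.03,
                             capacity_available=True))   # → True
print(spot_request_fulfilled(max_price=0.02, current_spot_price=0.03,
                             capacity_available=True))   # → False
```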
43
What is a Savings Plan?
Savings Plans are a flexible pricing model that offers low usage prices for a 1-year or 3-year term commitment to a consistent amount of usage. Savings Plans apply to Amazon EC2, AWS Lambda, and AWS Fargate usage and provide up to 72 percent savings on AWS compute usage. For workloads that have predictable and consistent usage, Savings Plans can provide significant savings compared to On-Demand Instances.
44
What are Reserved Instances?
For applications with steady-state usage that might require reserved capacity, Amazon EC2 offers the Reserved Instances option. With this option, you save up to 72 percent compared to On-Demand Instance pricing. You can choose between three payment options: All Upfront, Partial Upfront, or No Upfront. You can select either a 1-year or 3-year term for each of these options. With Reserved Instances, you can choose the type that best fits your application's needs.

1) Standard Reserved Instances: These provide the most significant discount (up to 72 percent off On-Demand pricing) and are best suited for steady-state usage.

2) Convertible Reserved Instances: These provide a discount (up to 54 percent off On-Demand pricing) and the capability to change the attributes of the Reserved Instance if the exchange results in the creation of Reserved Instances of equal or greater value. Like Standard Reserved Instances, Convertible Reserved Instances are best suited for steady-state usage.

3) Scheduled Reserved Instances: These are available to launch within the time windows that you reserve. With this option, you can match your capacity reservation to a predictable recurring schedule that only requires a fraction of a day, a week, or a month.
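Comparing a Reserved Instance to On-Demand pricing is a matter of amortizing any upfront payment over the term. A sketch with hypothetical prices (not real AWS rates):

```python
def effective_hourly_rate(upfront: float, hourly: float, term_years: int) -> float:
    """Amortize an upfront payment across the term and add the recurring
    hourly charge, giving a single comparable hourly rate."""
    hours = term_years * 365 * 24
    return upfront / hours + hourly

# Hypothetical 1-year pricing for a single instance
on_demand = 0.10                                                        # $/hour
partial_upfront = effective_hourly_rate(upfront=300.0, hourly=0.02, term_years=1)
savings = 1 - partial_upfront / on_demand
print(f"effective RI rate: ${partial_upfront:.4f}/hr, savings: {savings:.0%}")
# → effective RI rate: $0.0542/hr, savings: 46%
```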
45
What are Dedicated Hosts?
A Dedicated Host is a physical Amazon EC2 server that is dedicated for your use. Dedicated Hosts can help you reduce costs because you can use your existing server-bound software licenses, such as Windows Server, SQL Server, and Oracle licenses. They can also help you meet compliance requirements.

Amazon EC2 Dedicated Host is also integrated with AWS License Manager, a service that helps you manage your software licenses, including Microsoft Windows Server and Microsoft SQL Server licenses.

Dedicated Hosts can be purchased on demand (hourly) or as a Reservation for up to 70 percent off the On-Demand price.
46
What is AMI?
Amazon Machine Image
47
What is a container?
A container is a standardized unit that packages your code and its dependencies. This package is designed to run reliably on any platform, because the container creates its own independent environment. With containers, workloads can be carried from one place to another, such as from development to production or from on-premises environments to the cloud.
48
What is the difference between VMs and containers?
Compared to virtual machines (VMs), containers share the same operating system and kernel as the host that they are deployed on. But virtual machines contain their own operating system. Each virtual machine must maintain a copy of an operating system, which results in a degree of wasted resources.
49
What is container orchestration?
In AWS, containers can run on EC2 instances. For example, you might have a large instance and run a few containers on that instance. Although running one instance is uncomplicated to manage, it lacks high availability and scalability. Most companies and organizations run many containers on many EC2 instances across several Availability Zones.

If you’re trying to manage your compute at a large scale, you should consider the following:
* How to place your containers on your instances
* What happens if your container fails
* What happens if your instance fails
* How to monitor deployments of your containers

This coordination is handled by a container orchestration service. AWS offers two container orchestration services: Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS).
50
What are the two types of container orchestration services?
1) Amazon ECS
2) Amazon EKS
51
What is Amazon ECS?
Amazon ECS is an end-to-end container orchestration service that helps you spin up new containers. With Amazon ECS, your containers are defined in a task definition that you use to run an individual task or a task within a service. You have the option to run your tasks and services on a serverless infrastructure that's managed by another AWS service called AWS Fargate. Alternatively, for more control over your infrastructure, you can run your tasks and services on a cluster of EC2 instances that you manage.
52
When the Amazon ECS container instances are up and running what are some of the actions you can perform?
1) Launching and stopping containers
2) Getting cluster state
3) Scaling in and out
4) Scheduling the placement of containers across your cluster
5) Assigning permissions
6) Meeting availability requirements
53
What is Kubernetes?
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services.
54
What is Amazon EKS?
Amazon EKS is a managed service that you can use to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes. Amazon EKS is conceptually similar to Amazon ECS.
55
What are the differences between EKS and ECS?
1) In Amazon ECS, the machine that runs the containers is an EC2 instance with an ECS agent installed and configured to run and manage your containers. This instance is called a container instance. In Amazon EKS, the machine that runs the containers is called a worker node or Kubernetes node.
2) An ECS container is called a task. An EKS container is called a pod.
3) Amazon ECS runs on AWS native technology. Amazon EKS runs on Kubernetes.
56
How do you remove the undifferentiated heavy lifting of maintaining security and patching on EC2 instances?
By using serverless compute
57
What is serverless compute?
With serverless compute, you can spend time on the things that differentiate your application, rather than spending time on ensuring availability, scaling, and managing servers.

Every definition of serverless mentions the following four aspects:
* There are no servers to provision or manage.
* It scales with usage.
* You never pay for idle resources.
* Availability and fault tolerance are built in.
58
What are the TWO serverless services AWS provides?
* AWS Fargate
* AWS Lambda
59
What is Amazon ECR?
Amazon Elastic Container Registry (Amazon ECR): a fully managed container registry used to store ECS and EKS container images. Like an image library.
60
What are some of the advantages of using AWS Fargate?
Fargate abstracts the EC2 instance so that you’re not required to manage the underlying compute infrastructure. However, with Fargate, you can use all the same Amazon ECS concepts, APIs, and AWS integrations. It natively integrates with IAM and Amazon Virtual Private Cloud (Amazon VPC). With native integration with Amazon VPC, you can launch Fargate containers inside your network and control connectivity to your applications.

AWS Fargate is a purpose-built serverless compute engine for containers. AWS Fargate scales and manages the infrastructure, so developers can work on what they do best: application development. It achieves this by allocating the right amount of compute, eliminating the need to choose and manage EC2 instances, cluster capacity, and scaling. Fargate supports both Amazon ECS and Amazon EKS architectures and provides workload isolation and improved security by design.
61
What can you use if you want to deploy your workloads and applications without having to manage any EC2 instances or containers?
Use Lambda.

With Lambda, you can run code without provisioning or managing servers. You can run code for virtually any type of application or backend service. This includes data processing, real-time stream processing, machine learning, WebSockets, IoT backends, mobile backends, and web applications like your employee directory application!

Lambda runs your code on a high availability compute infrastructure and requires no administration from the user. You upload your source code in one of the languages that Lambda supports, and Lambda takes care of everything required to run and scale your code with high availability. There are no servers to manage. You get continuous scaling with subsecond metering and consistent performance.
62
How does Lambda work?
You have the option of configuring your Lambda functions using the Lambda console, Lambda API, AWS CloudFormation, or AWS Serverless Application Model (AWS SAM). You can invoke your function directly by using the Lambda API, or you can configure an AWS service or resource to invoke your function in response to an event.
63
What are the SEVEN Lambda concepts?
1) Function
A function is a resource that you can invoke to run your code in Lambda. Lambda runs instances of your function to process events. When you create the Lambda function, it can be authored in several ways: * You can create the function from scratch. * You can use a blueprint that AWS provides. * You can select a container image to deploy for your function. * You can browse the AWS Serverless Application Repository.
2) Trigger
Triggers describe when a Lambda function should run. A trigger integrates your Lambda function with other AWS services and event source mappings, so you can run your Lambda function in response to certain API calls or by reading items from a stream or queue. This increases your ability to respond to events without having to perform manual actions.
3) Event
An event is a JSON-formatted document that contains data for a Lambda function to process. The runtime converts the event to an object and passes it to your function code. When you invoke a function, you determine the structure and contents of the event.
4) Application environment
An application environment provides a secure and isolated runtime environment for your Lambda function. It manages the processes and resources that are required to run the function.
5) Deployment package
You deploy your Lambda function code using a deployment package. Lambda supports two types of deployment packages: * A .zip file archive – This contains your function code and its dependencies. Lambda provides the operating system and runtime for your function. * A container image – This is compatible with the Open Container Initiative (OCI) specification. You add your function code and dependencies to the image. You must also include the operating system and a Lambda runtime.
6) Runtime
The runtime provides a language-specific environment that runs in an application environment. When you create your Lambda function, you specify the runtime that you want your code to run in. You can use built-in runtimes, such as Python, Node.js, Ruby, Go, Java, or .NET Core. Or you can implement your Lambda functions to run on a custom runtime.
7) Lambda function handler
The AWS Lambda function handler is the method in your function code that processes events. When your function is invoked, Lambda runs the handler method. When the handler exits or returns a response, it becomes available to handle another event. You can use the following general syntax when creating a function handler in Python: def handler_name (event, context): ... return some_value
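The general Python handler syntax above can be sketched as a small runnable example. The event shape (a JSON document with a "name" key) is a hypothetical assumption for illustration; the real event structure depends on whatever invokes the function.

```python
import json

def handler(event, context):
    # Lambda passes the parsed JSON event as a dict; context carries
    # runtime metadata (function name, remaining time, and so on).
    # The "name" key is a hypothetical example field.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Locally you can exercise the same code path by calling `handler({"name": "Ada"}, None)` yourself; in Lambda, the service calls the handler for you on each event.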
64
What does an Amazon EC2 instance type indicate? A) Instance placement and instance size B) Instance family and instance size C) Instance tenancy and instance billing D) Instance AMI and networking speed
B) Instance family and instance size
65
Which of the following is true about serverless? A) You must provision and manage servers. B) You must manually scale serverless resources. C) You must manage availability and fault tolerance. D) You never pay for idle resources.
D) You never pay for idle resources.
66
What is Amazon VPC?

67
What is Networking?
Networking is how you connect computers around the world and allow them to communicate with one another. In this course, you’ve already seen a few examples of networking. One is the AWS Global Infrastructure. AWS has built a network of resources using data centers, Availability Zones, and Regions.
68
What is the purpose of the IP Address?
To properly route your messages to a location, you need an address. Just like each home has a mailing address, each computer has an IP address. However, instead of using the combination of street, city, state, zip code, and country, the IP address uses a combination of bits, 0s and 1s.
69
In which format do we most often see IP addresses?
IPv4 notation Typically, you don’t see an IP address in its binary format. Instead, it’s converted into decimal format and noted as an IPv4 address.
70
What does CIDR stand for?
Classless Inter-Domain Routing
CIDR notation: 192.168.1.30 is a single IP address. If you want to express IP addresses in the range of 192.168.1.0 to 192.168.1.255, how can you do that? One way is to use CIDR notation. CIDR notation is a compressed way of representing a range of IP addresses. Specifying a range determines how many IP addresses are available to you. In CIDR notation (for example, 192.168.1.0/24), the number after the forward slash "/" denotes how many bits in the IP address are fixed.
71
In the CIDR notation 192.168.1.0/24, which bits are fixed?
192.168.1.0 because the 24 specifies the first 24 bits where each number before the period is 8 bits; the other 8 are considered flexible. 32 total bits subtracted by 24 fixed bits leaves 8 flexible bits. Each of these flexible bits can be either 0 or 1, because they are binary. That means that you have two choices for each of the 8 bits, providing 256 IP addresses in that IP range. The higher the number after the /, the smaller the number of IP addresses in your network. For example, a range of 192.168.1.0/24 is smaller than 192.168.1.0/16.
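The arithmetic above can be checked with Python's standard-library ipaddress module, a minimal sketch:

```python
import ipaddress

# A /24 fixes 24 of the 32 bits, leaving 8 flexible bits.
block24 = ipaddress.ip_network("192.168.1.0/24")
block16 = ipaddress.ip_network("192.168.0.0/16")

print(block24.num_addresses)  # 2**(32-24) = 256 addresses
print(block16.num_addresses)  # 2**(32-16) = 65536 addresses

# The higher the number after the "/", the smaller the range:
print(block24.num_addresses < block16.num_addresses)  # True
```

This confirms that 192.168.1.0/24 contains 256 addresses and is a smaller network than a /16.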
72
What is an Amazon VPC?
A virtual private cloud (VPC) is an isolated network that you create in the AWS Cloud, similar to a traditional network in a data center. When you create an Amazon VPC, you must choose three main factors: * Name of the VPC * Region where the VPC will live – A VPC spans all the Availability Zones within the selected Region. * IP range for the VPC in CIDR notation – This determines the size of your network. Each VPC can have up to five CIDRs: one primary and four secondaries for IPv4. Each of these ranges can be between /28 (in CIDR notation) and /16 in size.
73
What should you create after creating the VPC?
The subnet. After you create your VPC, you must create subnets inside the network. Think of subnets as smaller networks inside your base network, or virtual local area networks (VLANs) in a traditional, on-premises network. In an on-premises network, the typical use case for subnets is to isolate or optimize network traffic. In AWS, subnets are used to provide high availability and connectivity options for your resources. Use a public subnet for resources that must be connected to the internet and a private subnet for resources that won't be connected to the internet.
74
When creating a subnet what do you need to specify?
1) VPC that you want your subnet to live in—in this case: VPC (10.0.0.0/16) 2) Availability Zone that you want your subnet to live in—in this case: Availability Zone 1 3) IPv4 CIDR block for your subnet, which must be a subset of the VPC CIDR block—in this case: 10.0.0.0/24
75
After creating the VPC and subnet(s), what should you consider?
High availability with a VPC When you create your subnets, keep high availability in mind. To maintain redundancy and fault tolerance, create at least two subnets configured in two Availability Zones. As you learned earlier, remember that “everything fails all of the time.” With the example network, if one of the Availability Zones fails, you will still have your resources available in another Availability Zone as backup.
76
How many reserved IP addresses does AWS configure in each subnet when you create a VPC?
AWS reserves five IP addresses in each subnet. These IP addresses are used for routing, Domain Name System (DNS), and network management.
77
What is needed to connect your VPC to the internet?
Internet gateway To activate internet connectivity for your VPC, you must create an internet gateway. Think of the gateway as similar to a modem. Just as a modem connects your computer to the internet, the internet gateway connects your VPC to the internet. Unlike your modem at home, which sometimes goes down or offline, an internet gateway is highly available and scalable. After you create an internet gateway, you attach it to your VPC.
78
What type of connectivity is needed to prevent people from the internet from accessing your subnet?
Virtual private gateway A virtual private gateway connects your VPC to another private network. When you create and attach a virtual private gateway to a VPC, the gateway acts as anchor on the AWS side of the connection. On the other side of the connection, you will need to connect a customer gateway to the other private network. A customer gateway device is a physical device or software application on your side of the connection. When you have both gateways, you can then establish an encrypted virtual private network (VPN) connection between the two sides.
79
What is AWS Direct Connect?
AWS Direct Connect To establish a secure physical connection between your on-premises data center and your Amazon VPC, you can use AWS Direct Connect. With AWS Direct Connect, your internal network is linked to an AWS Direct Connect location over a standard Ethernet fiber-optic cable. This connection allows you to create virtual interfaces directly to public AWS services or to your VPC.
80
What is the Main route table?
When you create a VPC, AWS creates a route table called the main route table. A route table contains a set of rules, called routes, that are used to determine where network traffic is directed. AWS assumes that when you create a new VPC with subnets, you want traffic to flow between them. Therefore, the default configuration of the main route table is to allow traffic between all subnets in the local network. The following rules apply to the main route table: * You cannot delete the main route table. * You cannot set a gateway route table as the main route table. * You can replace the main route table with a custom subnet route table. * You can add, remove, and modify routes in the main route table. * You can explicitly associate a subnet with the main route table, even if it's already implicitly associated.
81
What are Custom route tables?
Custom route tables The main route table is used implicitly by subnets that do not have an explicit route table association. However, you might want to provide different routes on a per-subnet basis for traffic to access resources outside of the VPC. For example, your application might consist of a frontend and a database. You can create separate subnets for the resources and provide different routes for each of them. If you associate a subnet with a custom route table, the subnet will use it instead of the main route table. Each custom route table that you create will have the local route already inside it, allowing communication to flow between all resources and subnets inside the VPC. You can protect your VPC by explicitly associating each new subnet with a custom route table and leaving the main route table in its original default state.
82
What are Network ACLs?
Firewalls at the subnet level
83
What are Security Groups?
Firewalls at the EC2 instance level
84
Do security groups allow all outbound traffic by default? True or False

85
With security groups, all inbound traffic is blocked by default? True or False
86
Which of the following can a route table be attached to? A) AWS accounts B) Availability Zones C) Subnets D) Regions
C) Subnets
87
Which of the following is true for a security group's default setting? A) It allows all inbound traffic and blocks all outbound traffic. B) It blocks all inbound traffic and allows all outbound traffic. C) It allows all inbound and outbound traffic. D) It blocks all inbound and outbound traffic.
B) It blocks all inbound traffic and allows all outbound traffic.
88
A network access control list (network ACL) filters traffic at the Amazon EC2 instance level? A) True B) False
B) False
89
What are the TWO AWS storage types?
90
What THREE categories is AWS storage services grouped into?
1) File storage 2) Block storage 3) Object storage
91
What is File storage?
File storage You might be familiar with file storage if you have interacted with file storage systems like Windows File Explorer or Finder on macOS. Files are organized in a tree-like hierarchy that consist of folders and subfolders. For example, if you have hundreds of cat photos on your laptop, you might want to create a folder called Cat photos, and place the images inside that folder to organize them. Because you know that these images will be used in an application, you might want to place the Cat photos folder inside another folder called Application files.
92
What is Block storage?
Block storage File storage treats files as a singular unit, but block storage splits files into fixed-size chunks of data called blocks that have their own addresses. Each block is an individual piece of data storage. Because each block is addressable, blocks can be retrieved efficiently. Think of block storage as a more direct route to access the data. When data is requested, the addresses are used by the storage system to organize the blocks in the correct order to form a complete file to present back to the requestor. Besides the address, no additional metadata is associated with each block. If you want to change one character in a file, you just change the block, or the piece of the file, that contains the character. This ease of access is why block storage solutions are fast and use less bandwidth.
93
What are the Use cases for Block storage?
Because block storage is optimized for low-latency operations, it is a preferred storage choice for high-performance enterprise workloads and transactional, mission-critical, and I/O-intensive applications.
Transactional workloads
Organizations that process time-sensitive and mission-critical transactions store such workloads into a low-latency, high-capacity, and fault-tolerant database. Block storage allows developers to set up a robust, scalable, and highly efficient transactional database. Because each block is a self-contained unit, the database performs optimally, even when the stored data grows.
Containers
Developers use block storage to store containerized applications on the cloud. Containers are software packages that contain the application and its resource files for deployment in any computing environment. Like containers, block storage is equally flexible, scalable, and efficient. With block storage, developers can migrate the containers seamlessly between servers, locations, and operating environments.
Virtual machines
Block storage supports popular virtual machine (VM) hypervisors. Users can install the operating system, file system, and other computing resources on a block storage volume. They do so by formatting the block storage volume and turning it into a VM file system. So they can readily increase or decrease the virtual drive size and transfer the virtualized storage from one host to another.
94
What are Use cases for File storage?
File storage is ideal when you require centralized access to files that must be easily shared and managed by multiple host computers. Typically, this storage is mounted onto multiple hosts, and requires file locking and integration with existing file system communication protocols.
Web serving
Cloud file storage solutions follow common file-level protocols, file naming conventions, and permissions that developers are familiar with. Therefore, file storage can be integrated into web applications.
Analytics
Many analytics workloads interact with data through a file interface and rely on features such as file lock or writing to portions of a file. Cloud-based file storage supports common file-level protocols and has the ability to scale capacity and performance. Therefore, file storage can be conveniently integrated into analytics workflows.
Media and entertainment
Many businesses use a hybrid cloud deployment and need standardized access using file system protocols (NFS or SMB) or concurrent protocol access. Cloud file storage follows existing file system semantics. Therefore, storage of rich media content for processing and collaboration can be integrated for content production, digital supply chains, media streaming, broadcast playout, analytics, and archive.
Home directories
Businesses wanting to take advantage of the scalability and cost benefits of the cloud are extending access to home directories for many of their users. Cloud file storage systems adhere to common file-level protocols and standard permissions models. Therefore, customers can lift and shift applications that need this capability to the cloud.
95
What is Object storage?
Object storage In object storage, files are stored as objects. Objects, much like files, are treated as a single, distinct unit of data when stored. However, unlike file storage, these objects are stored in a bucket using a flat structure, meaning there are no folders, directories, or complex hierarchies. Each object contains a unique identifier. This identifier, along with any additional metadata, is bundled with the data and stored. Changing just one character in an object is more difficult than with block storage. When you want to change one character in an object, the entire object must be updated.
96
What are Use cases for Object storage?
With object storage, you can store almost any type of data, and there is no limit to the number of objects stored, which makes it readily scalable. Object storage is generally useful when storing large or unstructured data sets.
Data archiving
Cloud object storage is excellent for long-term data retention. You can cost-effectively archive large amounts of rich media content and retain mandated regulatory data for extended periods of time. You can also use cloud object storage to replace on-premises tape and disk archive infrastructure. This storage solution provides enhanced data durability, immediate retrieval times, better security and compliance, and greater data accessibility.
Backup and recovery
You can configure object storage systems to replicate content so that if a physical device fails, duplicate object storage devices become available. This ensures that your systems and applications continue to run without interruption. You can also replicate data across multiple data centers and geographical regions.
Rich media
With object storage, you can accelerate applications and reduce the cost of storing rich media files such as videos, digital images, and music. By using storage classes and replication features, you can create cost-effective, globally replicated architecture to deliver media to distributed users.
97
How are block and file storage similar to on-premises storage methods?
* Block storage in the cloud is analogous to direct-attached storage (DAS) or a storage area network (SAN). * File storage systems are often supported with a network-attached storage (NAS) server.
98
What is Amazon EFS?
Amazon Elastic File System (Amazon EFS) is a set-and-forget file system that automatically grows and shrinks as you add and remove files. There is no need for provisioning or managing storage capacity and performance. Amazon EFS can be used with AWS compute services and on-premises resources. You can connect tens, hundreds, and even thousands of compute instances to an Amazon EFS file system at the same time, and Amazon EFS can provide consistent performance to each compute instance. With the Amazon EFS simple web interface, you can create and configure file systems quickly without any minimum fee or setup cost. You pay only for the storage used and you can choose from a range of storage classes designed to fit your use case.
99
What are the TWO types of Amazon EFS storage classes?
100
What is Amazon FSx?
Amazon FSx is a fully managed service that offers reliability, security, scalability, and a broad set of capabilities that make it convenient and cost effective to launch, run, and scale high-performance file systems in the cloud. With Amazon FSx, you can choose between four widely used file systems: Lustre, NetApp ONTAP, OpenZFS, and Windows File Server. You can choose based on your familiarity with a file system or based on your workload requirements for feature sets, performance profiles, and data management capabilities.
101
What are EC2 internal and external storage called?
102
What happens to the data in the Amazon EC2 instance store once you stop or terminate the instance?
103
TRUE or FALSE: an EC2 instance is required to access an EBS volume?
True
104
TRUE or FALSE, you need some form of Block storage when you launch an EC2 instance?
True
105
TRUE or FALSE EBS volumes are separate from the EC2 instance?
True, the EBS volume is separate. If the EC2 instance is stopped, the data is still retained on the EBS volume for access.
106
TRUE or FALSE, you cannot directly access the EBS volume?
True, you are not able to directly access the EBS volume. You will need to attach it to an EC2 instance in order to access the data.
107
TRUE or FALSE, you can attach multiple EBS volumes to an EC2?
True
108
Why is EBS considered to be persistent storage?
109
What are the TWO different EBS storage types?
SSD and HDD volume types. EBS volumes are organized into two main categories: solid-state drives (SSDs) and hard-disk drives (HDDs). SSDs are used for transactional workloads with frequent read/write operations with small I/O size. HDDs are used for large streaming workloads that need high throughput performance. AWS offers two types of each.
110
What is Amazon EC2 instance store?
Amazon Elastic Compute Cloud (Amazon EC2) instance store provides temporary block-level storage for an instance. This storage is located on disks that are physically attached to the host computer. This ties the lifecycle of the data to the lifecycle of the EC2 instance. If you delete the instance, the instance store is also deleted. Because of this, instance store is considered ephemeral storage. Instance store is ideal if you host applications that replicate data to other EC2 instances, such as Hadoop clusters. For these cluster-based workloads, having the speed of locally attached volumes and the resiliency of replicated data helps you achieve data distribution at high performance. It’s also ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content.
111
What is Amazon EBS?
As the name implies, Amazon Elastic Block Store (Amazon EBS) is block-level storage that you can attach to an Amazon EC2 instance. You can compare this to how you must attach an external drive to your laptop. This attachable storage is called an EBS volume. EBS volumes act similarly to external drives in more than one way. * Detachable: You can detach an EBS volume from one EC2 instance and attach it to another EC2 instance in the same Availability Zone to access the data on it. * Distinct: The external drive is separate from the computer. That means that if an accident occurs and the computer goes down, you still have your data on your external drive. The same is true for EBS volumes. * Size-limited: You’re limited to the size of the external drive, because it has a fixed limit to how scalable it can be. For example, you might have a 2 TB external drive, which means you can only have 2 TB of content on it. This also relates to Amazon EBS, because a volume also has a max limitation of how much content you can store on it. * 1-to-1 connection: Most external drives can only be connected with one computer at a time. Most EBS volumes have a one-to-one relationship with EC2 instances, so they cannot be shared by or attached to multiple instances at one time.
112
What TWO ways can you scale EBS volumes?
1) Increase volume size 2) Attach multiple volumes Increase the volume size only if it doesn’t increase above the maximum size limit. Depending on the volume selected, Amazon EBS currently supports a maximum volume size of 64 tebibytes (TiB). For example, if you provision a 5-TiB io2 Block Express volume, you can choose to increase the size of your volume until you get to 64 TiB. Attach multiple volumes to a single EC2 instance. Amazon EC2 has a one-to-many relationship with EBS volumes. You can add these additional volumes during or after EC2 instance creation to provide more storage capacity for your hosts.
113
What are some of the use cases for EBS?
Amazon EBS is useful when you must retrieve data quickly and have data persist long term. Volumes are commonly used in the following scenarios.
Operating systems
Boot and root volumes can be used to store an operating system. The root device for an instance launched from an Amazon Machine Image (AMI) is typically an EBS volume. These are commonly referred to as EBS-backed AMIs.
Databases
As a storage layer for databases running on Amazon EC2 that will scale with your performance needs and provide consistent and low-latency performance.
Enterprise applications
Amazon EBS provides high availability and high durability block storage to run business-critical applications.
Big data analytics engines
Amazon EBS offers data persistence, dynamic performance adjustments, and the ability to detach and reattach volumes, so you can resize clusters for big data analytics.
114
What are some of the benefits of EBS?
* High availability - When you create an EBS volume, it is automatically replicated in its Availability Zone to prevent data loss from single points of failure. * Data persistence - Storage persists even when your instance doesn’t * Data encryption - When activated by the user, all EBS volumes support encryption. * Flexibility - EBS volumes support on-the-fly changes. Modify volume type, volume size, and input/output operations per second (IOPS) capacity without stopping your instance. * Backups - Amazon EBS provides the ability to create backups of any EBS volume.
115
How do you backup EBS volumes?
Using snapshots EBS snapshots are incremental backups that only save the blocks on the volume that have changed after your most recent snapshot. For example, if you have 10 GB of data on a volume and only 2 GB of data have been modified since your last snapshot, only the 2 GB that have been changed are written to Amazon S3. When you take a snapshot of any of your EBS volumes, the backups are stored redundantly in multiple Availability Zones using Amazon S3. This aspect of storing the backup in Amazon S3 is handled by AWS, so you won’t need to interact with Amazon S3 to work with your EBS snapshots. You manage them in the Amazon EBS console, which is part of the Amazon EC2 console.
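The incremental behavior described above can be illustrated with a small sketch. This is not the AWS API, just a simulation of the idea that only new or changed blocks are written on each snapshot; the 5-block volume and block contents are hypothetical.

```python
def incremental_snapshot(volume_blocks, last_snapshot):
    """Return only the blocks that are new or changed since last_snapshot."""
    return {
        addr: data
        for addr, data in volume_blocks.items()
        if last_snapshot.get(addr) != data
    }

# A hypothetical 5-block volume; suppose two blocks change after the
# first (full) snapshot is taken.
snapshot1 = {0: "a", 1: "b", 2: "c", 3: "d", 4: "e"}
volume = {**snapshot1, 1: "B", 4: "E"}  # blocks 1 and 4 modified

delta = incremental_snapshot(volume, snapshot1)
print(delta)  # only the 2 changed blocks are written, not all 5
```

The same idea underlies the 10 GB / 2 GB example in the answer: an unchanged volume would produce an empty delta, so you pay to store only what changed.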
116
TRUE or FALSE, everything in S3 is private by default?
True; access is restricted by default to the user or AWS account that created the S3 resource. Security in Amazon S3: everything in Amazon S3 is private by default. This means that all Amazon S3 resources, such as buckets and objects, can only be viewed by the user or AWS account that created that resource. Amazon S3 resources are all private and protected to begin with. If you decide that you want everyone on the internet to see your photos, you can choose to make your buckets and objects public. A public resource means that everyone on the internet can see it. Most of the time, you don't want your permissions to be all or nothing. Typically, you want to be more granular about the way that you provide access to your resources.
117
What is Amazon S3?
Amazon S3 Unlike Amazon EBS, Amazon Simple Storage Service (Amazon S3) is a standalone storage solution that isn’t tied to compute. With Amazon S3, you can retrieve your data from anywhere on the web. If you have used an online storage service to back up the data from your local machine, you most likely have used a service similar to Amazon S3. The big difference between those online storage services and Amazon S3 is the storage type. Amazon S3 is an object storage service. Object storage stores data in a flat structure. An object is a file combined with metadata. You can store as many of these objects as you want. All the characteristics of object storage are also characteristics of Amazon S3. In Amazon S3, you store your objects in containers called buckets. You can’t upload an object, not even a single photo, to Amazon S3 without creating a bucket first. When you store an object in a bucket, the combination of a bucket name, key, and version ID uniquely identifies the object.
118
True or False: bucket names must be unique across all AWS accounts in all AWS Regions within a partition.
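Beyond uniqueness, bucket names must follow a naming convention. The sketch below checks only the general rules (3-63 characters; lowercase letters, digits, hyphens, and dots; must begin and end with a letter or digit) — the full rule set in the AWS documentation has additional constraints, so treat this as an approximation:

```python
import re

# Approximate check of the general S3 bucket-naming rules; the real
# rules include more constraints (no IP-address-like names, no "xn--"
# prefix, and so on) -- see the AWS documentation.
BUCKET_NAME = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def looks_like_valid_bucket_name(name):
    return bool(BUCKET_NAME.match(name))

print(looks_like_valid_bucket_name("my-company-photos-2024"))  # True
print(looks_like_valid_bucket_name("MyBucket"))                # False (uppercase)
print(looks_like_valid_bucket_name("ab"))                      # False (too short)
```

A name that passes this check can still be rejected if another account already owns it, since uniqueness is global within the partition.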
119
What are Object key names?
Object key names The object key (key name) uniquely identifies the object in an Amazon S3 bucket. When you create an object, you specify the key name. As described earlier, the Amazon S3 model is a flat structure, meaning there is no hierarchy of subbuckets or subfolders. However, the Amazon S3 console does support the concept of folders. By using key name prefixes and delimiters, you can imply a logical hierarchy. For example, suppose your bucket called testbucket has two objects with the following object keys: 2022-03-01/AmazonS3.html and 2022-03-01/Cats.jpg. The console uses the key name prefix, 2022-03-01, and delimiter (/) to present a folder structure.
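The prefix-and-delimiter idea above can be shown in a few lines: grouping flat object keys by the text before the "/" delimiter yields the folder view the console presents. The key names come from the example in the answer, plus one hypothetical root-level key:

```python
# Group flat S3 object keys by their prefix up to the "/" delimiter,
# as the S3 console does. "notes.txt" is a hypothetical root-level key.
keys = ["2022-03-01/AmazonS3.html", "2022-03-01/Cats.jpg", "notes.txt"]

folders = {}
for key in keys:
    prefix, _, name = key.rpartition("/")
    folders.setdefault(prefix or "(root)", []).append(name)

print(folders)
# {'2022-03-01': ['AmazonS3.html', 'Cats.jpg'], '(root)': ['notes.txt']}
```

Nothing about the bucket changes; the hierarchy is purely an interpretation of the key names.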
120
What are some of the use cases for Amazon S3?
Backup and storage
Amazon S3 is a natural place to back up files because it is highly redundant. As mentioned in the last lesson, AWS stores your EBS snapshots in Amazon S3 to take advantage of its high availability.
Media hosting
Because you can store unlimited objects, and each individual object can be up to 5 TB, Amazon S3 is an ideal location to host video, photo, and music uploads.
Software delivery
You can use Amazon S3 to host your software applications that customers can download.
Data lakes
Amazon S3 is an optimal foundation for a data lake because of its virtually unlimited scalability. You can increase storage from gigabytes to petabytes of content, paying only for what you use.
Static websites
You can configure your S3 bucket to host a static website of HTML, CSS, and client-side scripts.
Static content
Because of the limitless scaling, the support for large files, and the fact that you can access any object over the web at any time, Amazon S3 is the perfect place to store static content.
121
What are the Security Management features of Amazon S3 that are used to manage access?
To be more specific about who can do what with your Amazon S3 resources, Amazon S3 provides several security management features: * IAM policies * S3 bucket policies * Encryption. You can use these features to develop and implement your own security policies.
122
What are Access policies that are attached to resources called?
resource-based policies
123
What are Access policies that are attached to users called?
user policies
124
What are the benefits of using IAM polices to manage access to an Amazon S3?
* You have many buckets with different permission requirements. Instead of defining many different S3 bucket policies, you can use IAM policies. * You want all policies to be in a centralized location. By using IAM policies, you can manage all policy information in one location.
125
How are Amazon S3 bucket policies created?
Like IAM policies, S3 bucket policies are defined in a JSON format. Unlike IAM policies, which are attached to resources and users, S3 bucket policies can only be attached to S3 buckets. The policy that is placed on the bucket applies to every object in that bucket. S3 bucket policies specify what actions are allowed or denied on the bucket.
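A bucket policy in the JSON format described above can be sketched as follows. The bucket name "example-bucket" and the broad public-read principal are hypothetical; the field names (Version, Statement, Effect, Principal, Action, Resource) are the standard policy-document schema:

```python
import json

# Example shape of an S3 bucket policy, built as a Python dict and
# serialized to the JSON a bucket policy is written in. A real policy
# should be scoped to your actual bucket and principals.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Note the `/*` in the Resource ARN: consistent with the answer above, the policy applies to every object in the bucket, not to the objects individually.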
126
In what scenarios would you consider using an Amazon S3 bucket policy?
* You need a simple way to do cross-account access to Amazon S3, without using IAM roles. * Your IAM policies bump up against the defined size limit. S3 bucket policies have a larger size limit.
127
TRUE or FALSE, By default all Amazon S3 data is encrypted in transit and at rest?
True Amazon S3 enforces encryption in transit (as data travels to and from Amazon S3) and at rest. To protect data, Amazon S3 automatically encrypts all objects on upload and applies server-side encryption with S3-managed keys as the base level of encryption for every bucket in Amazon S3 at no additional cost.
128
What is the Amazon S3 Standard storage class?
When you upload an object to Amazon S3 and you don’t specify the storage class, you upload it to the default storage class, often referred to as standard storage. In previous lessons, you learned about the default Amazon S3 standard storage class.
129
What is S3 Standard?
This is considered general-purpose storage for cloud applications, dynamic websites, content distribution, mobile and gaming applications, and big data analytics.
130
What is S3 Intelligent-Tiering?
This tier is useful if your data has unknown or changing access patterns. S3 Intelligent-Tiering stores objects in three tiers: a frequent access tier, an infrequent access tier, and an archive instant access tier. Amazon S3 monitors access patterns of your data and automatically moves your data to the most cost-effective storage tier based on frequency of access.
131
What is S3 Standard-Infrequent Access (S3 Standard-IA)?
This tier is for data that is accessed less frequently but requires rapid access when needed. S3 Standard-IA offers the high durability, high throughput, and low latency of S3 Standard, with a low per-GB storage price and per-GB retrieval fee. This storage tier is ideal if you want to store long-term backups, disaster recovery files, and so on.
132
What is S3 One Zone-Infrequent Access (S3 One Zone-IA)?
Unlike other S3 storage classes that store data in a minimum of three Availability Zones, S3 One Zone-IA stores data in a single Availability Zone, which makes it less expensive than S3 Standard-IA. S3 One Zone-IA is ideal for customers who want a lower-cost option for infrequently accessed data, but do not require the availability and resilience of S3 Standard or S3 Standard-IA. It's a good choice for storing secondary backup copies of on-premises data or easily recreatable data.
133
What is S3 Glacier Instant Retrieval?
Use S3 Glacier Instant Retrieval for archiving data that is rarely accessed and requires millisecond retrieval. Data stored in this storage class offers a cost savings of up to 68 percent compared to the S3 Standard-IA storage class, with the same latency and throughput performance.
134
What is S3 Glacier Flexible Retrieval?
S3 Glacier Flexible Retrieval offers low-cost storage for archived data that is accessed 1–2 times per year. With S3 Glacier Flexible Retrieval, your data can be accessed in as little as 1–5 minutes using an expedited retrieval. You can also request free bulk retrievals in up to 5–12 hours. It is an ideal solution for backup, disaster recovery, offsite data storage needs, and for when some data occasionally must be retrieved in minutes.
135
What is S3 Glacier Deep Archive?
S3 Glacier Deep Archive is the lowest-cost Amazon S3 storage class. It supports long-term retention and digital preservation for data that might be accessed once or twice a year. Data stored in the S3 Glacier Deep Archive storage class has a default retrieval time of 12 hours. It is designed for customers that retain data sets for 7–10 years or longer, to meet regulatory compliance requirements. Examples include those in highly regulated industries, such as the financial services, healthcare, and public sectors.
136
What is S3 on Outposts?
Amazon S3 on Outposts delivers object storage to your on-premises AWS Outposts environment using S3 APIs and features. For workloads that must satisfy local data residency requirements, or that need to keep data close to on-premises applications for performance reasons, the S3 on Outposts storage class is the ideal option.
137
What is Amazon S3 versioning?
Versioning keeps multiple versions of a single object in the same bucket. This preserves old versions of an object without using different names, which helps with object recovery from accidental deletions, accidental overwrites, or application failures. As described earlier, Amazon S3 identifies objects in part by using the object name. For example, when you upload an employee photo to Amazon S3, you might name the object employee.jpg and store it in a bucket called employees. Without Amazon S3 versioning, every time you upload an object called employee.jpg to the employees bucket, it overwrites the original object. This can be an issue for several reasons, including the following: * Common names: The employee.jpg object name is a common name for an employee photo object. You or someone else who has access to the bucket might not have intended to overwrite it, but once it is overwritten, the original object can't be accessed. * Version preservation: You might want to preserve different versions of employee.jpg. Without versioning, you would need to upload each new version of the object under a different name, and having several objects with slight naming variations can cause confusion and clutter in S3 buckets. Versioning counteracts these issues by keeping multiple versions of a single object in the same bucket.
138
How does versioning help in recovering objects from accidental deletion or overwrite?
By using versioning-enabled buckets, you can recover objects from accidental deletion or overwrite. The following are examples: * Deleting an object does not remove the object permanently. Instead, Amazon S3 puts a marker on the object that shows that you tried to delete it. If you want to restore the object, you can remove the marker and the object is reinstated. * If you overwrite an object, it results in a new object version in the bucket. You still have access to previous versions of the object.
139
What are the THREE versioning states?
1) Unversioned (default): Neither new nor existing objects in the bucket have a version. 2) Versioning-enabled: Versioning is enabled for all objects in the bucket. After you version-enable a bucket, it can never return to an unversioned state. However, you can suspend versioning on that bucket. 3) Versioning-suspended: Versioning is suspended for new objects. New objects in the bucket will not have a version, but all existing objects keep their object versions.
140
How do you Manage your storage lifecycle?
If you keep manually changing your objects, such as your employee photos, from storage tier to storage tier, you might want to automate the process by configuring an Amazon S3 lifecycle. When you define a lifecycle configuration for an object or group of objects, you can choose to automate between two types of actions: transition and expiration. * Transition actions define when objects should transition to another storage class. * Expiration actions define when objects expire and should be permanently deleted. For example, you might transition objects to the S3 Standard-IA storage class 30 days after you create them, or archive objects to the S3 Glacier Deep Archive storage class 1 year after creating them.
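A lifecycle rule combining both action types can be sketched as the configuration structure that the S3 API accepts (for example, via boto3's `put_bucket_lifecycle_configuration`); the rule ID and the `logs/` prefix here are hypothetical:

```python
# One lifecycle rule with a transition action (move to S3 Standard-IA
# after 30 days) and an expiration action (permanently delete after
# 365 days), scoped to objects under the "logs/" prefix.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-then-expire-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            # Transition action: change the storage class after 30 days.
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            # Expiration action: delete the objects after one year.
            "Expiration": {"Days": 365},
        }
    ]
}
print(lifecycle_configuration["Rules"][0]["ID"])
```

This matches the periodic-logs use case from the next card: keep recent logs in a cheaper tier, then delete them when they are no longer needed.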
141
What are a couple of good use cases for using lifecycle configuration rules?
* Periodic logs: If you upload periodic logs to a bucket, your application might need them for a week or a month. After that, you might want to delete them. * Data that changes in access frequency: Some documents are frequently accessed for a limited period of time. After that, they are infrequently accessed. At some point, you might not need real-time access to them. But your organization or regulations might require you to archive them for a specific period. After that, you can delete them.
142
What is the underlying storage type for S3?
Object storage
143
What is the size limit of an individual S3 file object?
5 TB
144
What are the rules for Amazon S3 bucket names?
Amazon S3 supports global buckets. Therefore, each bucket name must be unique across all AWS accounts in all AWS Regions within a partition. A partition is a grouping of Regions, of which AWS currently has three: Standard Regions, China Regions, and AWS GovCloud (US). When naming a bucket, choose a name that is relevant to you or your business, but avoid using AWS or Amazon in your bucket name. The following are some examples of the rules that apply for naming buckets in Amazon S3. For a full list of rules, see the link in the resources section. * Bucket names must be between 3 (min) and 63 (max) characters long. * Bucket names can consist only of lowercase letters, numbers, dots (.), and hyphens (-). * Bucket names must begin and end with a letter or number. * Bucket names must not be formatted as an IP address. * A bucket name cannot be used by another AWS account in the same partition until the bucket is deleted. If your application automatically creates buckets, choose a naming scheme that is unlikely to cause conflicts, and have the application choose a different bucket name if the first choice is unavailable.
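The naming rules listed above can be sketched as a small validator; this covers only the subset of rules on this card (the full rule set in the S3 documentation is longer), and the bucket names used are examples:

```python
import re

def is_valid_bucket_name(name: str) -> bool:
    """Check a name against the S3 bucket naming rules on this card:
    3-63 characters; only lowercase letters, numbers, dots, and hyphens;
    must begin and end with a letter or number; must not be formatted
    as an IP address."""
    if not 3 <= len(name) <= 63:
        return False
    if not re.fullmatch(r"[a-z0-9][a-z0-9.-]*[a-z0-9]", name):
        return False
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name):  # looks like an IP address
        return False
    return True

print(is_valid_bucket_name("my-example-bucket"))  # True
print(is_valid_bucket_name("MyBucket"))           # False: uppercase letters
print(is_valid_bucket_name("192.168.5.4"))        # False: IP-address format
```

Uniqueness within a partition cannot be checked locally, which is why an application that auto-creates buckets still needs a fallback naming scheme.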
145
What are Amazon Object key names?
The object key (key name) uniquely identifies the object in an Amazon S3 bucket. When you create an object, you specify the key name. As described earlier, the Amazon S3 model is a flat structure, meaning there is no hierarchy of subbuckets or subfolders. However, the Amazon S3 console does support the concept of folders. By using key name prefixes and delimiters, you can imply a logical hierarchy. For example, suppose your bucket called testbucket has two objects with the following object keys: 2022-03-01/AmazonS3.html and 2022-03-01/Cats.jpg. The console uses the key name prefix, 2022-03-01, and the delimiter (/) to present a folder structure.
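The folder illusion above is just string manipulation over flat keys; a minimal sketch using the example keys from this card:

```python
# The flat object keys from the example above; the console infers a
# folder hierarchy from the key name prefix and the "/" delimiter.
keys = ["2022-03-01/AmazonS3.html", "2022-03-01/Cats.jpg"]

# Group object keys by prefix, the way the console presents "folders".
folders: dict[str, list[str]] = {}
for key in keys:
    prefix, _, name = key.partition("/")
    folders.setdefault(prefix, []).append(name)

print(folders)  # {'2022-03-01': ['AmazonS3.html', 'Cats.jpg']}
```

The S3 ListObjectsV2 API offers the same idea server-side through its `Prefix` and `Delimiter` parameters.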
146
What are some Amazon S3 use cases?
Backup and storage Amazon S3 is a natural place to back up files because it is highly redundant. As mentioned in the last lesson, AWS stores your EBS snapshots in Amazon S3 to take advantage of its high availability. Media hosting Because you can store unlimited objects, and each individual object can be up to 5 TB, Amazon S3 is an ideal location to host video, photo, and music uploads. Software delivery You can use Amazon S3 to host your software applications that customers can download. Data Lakes Amazon S3 is an optimal foundation for a data lake because of its virtually unlimited scalability. You can increase storage from gigabytes to petabytes of content, paying only for what you use. Static websites You can configure your S3 bucket to host a static website of HTML, CSS, and client-side scripts. Static content Because of the limitless scaling, the support for large files, and the fact that you can access any object over the web at any time, Amazon S3 is the perfect place to store static content.
147
How is security handled in Amazon S3?
Everything in Amazon S3 is private by default. This means that all Amazon S3 resources, such as buckets and objects, can only be viewed by the user or AWS account that created that resource. If you decide that you want everyone on the internet to see your photos, you can choose to make your buckets and objects public. A public resource means that everyone on the internet can see it. Most of the time, though, you don't want your permissions to be all or nothing.
148
What type of policies can you use in Amazon S3?
Previously, you learned about creating and using AWS Identity and Access Management (IAM) policies. Now you can apply that knowledge to Amazon S3. When IAM policies are attached to your resources (buckets and objects) or to IAM users, groups, and roles, the policies define which actions those principals can perform. Access policies that you attach to your resources are referred to as resource-based policies, and access policies attached to users in your account are called user policies.
149
When should you use IAM for private buckets?
* You have many buckets with different permission requirements. Instead of defining many different S3 bucket policies, you can use IAM policies. * You want all policies to be in a centralized location. By using IAM policies, you can manage all policy information in one location.
150
What format are Amazon S3 buckets policies defined in?
JSON format Unlike IAM policies, which are attached to resources and users, S3 bucket policies can only be attached to S3 buckets. The policy that is placed on the bucket applies to every object in that bucket. S3 bucket policies specify what actions are allowed or denied on the bucket.
151
When should you use Amazon S3 bucket policies?
* You need a simple way to do cross-account access to Amazon S3, without using IAM roles. * Your IAM policies bump up against the defined size limit. S3 bucket policies have a larger size limit.
152
Which of the following is a typical use case for Amazon S3? A) Object storage for media hosting B) Object storage for a boot drive C) Block storage for an Amazon EC2 instance D) File storage for multiple Amazon EC2 instances
A) Object storage for media hosting
153
A company that works with customers around the globe in multiple Regions hosts a static website in an Amazon S3 bucket. The company has decided that they want to reduce latency and increase data transfer speed by storing cache. Which solution should they choose to make their content more accessible? A) Configure Amazon CloudFront to deliver the content in the S3 bucket. B) Create multiple S3 buckets and put Amazon EC2 and Amazon S3 in the same AWS Region. C) Enable cross-Region replication to several AWS Regions to serve customers from different locations. D) Use S3 Intelligent-Tiering to automatically move their website to a bucket that would reduce their latency.
A) Configure Amazon CloudFront to deliver the content in the S3 bucket.
154
Which of the following storage services is recommended if a customer needs a storage layer for a high-transaction relational database on an Amazon EC2 instance? A) Amazon S3 B) Amazon Elastic File System (Amazon EFS) C) Amazon Elastic Block Store (Amazon EBS) D) Amazon S3 Glacier Deep Archive
C) Amazon Elastic Block Store (Amazon EBS)
155
What is a Relational database?
A relational database organizes data into tables. Data in one table can link to data in other tables to create relationships—hence the relational part of the name. A table stores data in rows and columns. A row, often called a record, contains all information about a specific entry. Columns describe attributes of an entry. The following image is an example of three tables in a relational database.
156
What are some common Relational databases?
* MySQL * PostgreSQL * Oracle * Microsoft SQL Server * Amazon Aurora
157
What commonly used language do you use to communicate with an RDBMS?
SQL (Structured Query Language). For example, SELECT * FROM table_name selects all the data from a particular table. However, the power of SQL is in creating more complex queries that pull data from several tables to identify patterns and answers to business problems, such as querying the sales table and the books table together to see sales in relation to an author's books. Querying tables together to better understand their relationships is made possible by a "join".
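The books/sales join described above can be sketched with Python's built-in sqlite3 module using an in-memory database; the table and column names are illustrative, not from any real schema:

```python
import sqlite3

# Two related tables: books, and sales that reference books by book_id.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE books (book_id INTEGER PRIMARY KEY, title TEXT, author TEXT);
    CREATE TABLE sales (sale_id INTEGER PRIMARY KEY, book_id INTEGER, amount REAL);
    INSERT INTO books VALUES (1, 'Book A', 'Author X'), (2, 'Book B', 'Author Y');
    INSERT INTO sales VALUES (1, 1, 9.99), (2, 1, 9.99), (3, 2, 14.99);
""")

# Join the two tables to see sales in relation to each author's books.
rows = conn.execute("""
    SELECT b.author, COUNT(s.sale_id) AS units, SUM(s.amount) AS revenue
    FROM books b JOIN sales s ON s.book_id = b.book_id
    GROUP BY b.author ORDER BY b.author
""").fetchall()
print(rows)
```

The `ON s.book_id = b.book_id` clause is the relationship: each sale row links back to exactly one book row, which is what makes the per-author aggregation possible.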
158
What are the benefits of a Relational database?
Complex SQL With SQL, you can join multiple tables so you can better understand relationships between your data. Reduced redundancy You can store data in one table and reference it from other tables instead of saving the same data in different places. Familiarity Because relational databases have been a popular choice since the 1970s, technical professionals often have familiarity and experience with them. Accuracy Relational databases ensure that your data has high integrity and adheres to the atomicity, consistency, isolation, and durability (ACID) principle.
159
What are some Relational database use cases?
Applications that have a fixed schema These are applications that have a fixed schema and don't change often. An example is a lift-and-shift application that lifts an app from on-premises and shifts it to the cloud, with little or no modifications. Applications that need persistent storage These are applications that need persistent storage and follow the ACID principle, such as: * Enterprise resource planning (ERP) applications * Customer relationship management (CRM) applications * Commerce and financial applications
160
What is the Amazon Unmanaged database option?
If you host a database on Amazon EC2, AWS implements and maintains the physical infrastructure and hardware and installs the EC2 instance operating system (OS). However, you are still responsible for managing the EC2 instance, managing the database on that host, optimizing queries, and managing customer data. You are still responsible for the following: * App optimization * Scaling * High availability * Database backups * DB software patches * DB software installs * OS patches
161
What is the Amazon Managed database option?
To shift more of the work to AWS, you can use a managed database service. These services provide the setup of both the EC2 instance and the database, and they provide systems for high availability, scalability, patching, and backups. However, in this model, you're still responsible for database tuning, query optimization, and ensuring that your customer data is secure. This option provides the most convenience but the least amount of control compared to the unmanaged options. You're responsible only for: * App optimization
162
What is Amazon RDS?
Amazon RDS is a managed database service customers can use to create and manage relational databases in the cloud without the operational burden of traditional database management. With Amazon RDS, you can offload some of the unrelated work of creating and managing a database. You can focus on the tasks that differentiate your application, instead of focusing on infrastructure-related tasks, like provisioning, patching, scaling, and restoring.
163
What are some of the RDBMSs supported by Amazon RDS?
* Commercial: Oracle, SQL Server * Open source: MySQL, PostgreSQL, MariaDB * Cloud native: Aurora
164
What is the compute portion of Amazon RDS?
The compute portion is called the database (DB) instance, which runs the DB engine. Depending on the engine selected, the instance will have different supported features and configurations. A DB instance can contain multiple databases with the same engine, and each DB can contain multiple tables. Underneath the DB instance is an EC2 instance. However, this instance is managed through the Amazon RDS console instead of the Amazon EC2 console. When you create your DB instance, you choose the instance type and size. The DB instance class you choose affects how much processing power and memory it has.
165
What is the storage portion of Amazon RDS?
The storage portion of DB instances for Amazon RDS uses Amazon Elastic Block Store (Amazon EBS) volumes for database and log storage. This applies to MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server. When using Aurora, data is stored in cluster volumes, which are single, virtual volumes that use solid-state drives (SSDs). A cluster volume contains copies of your data across three Availability Zones in a single AWS Region. For nonpersistent, temporary files, Aurora uses local storage.
166
TRUE or FALSE, Amazon RDS is in an Amazon Virtual Private Cloud?
TRUE When you create a DB instance, you select the Amazon Virtual Private Cloud (Amazon VPC) your databases will live in. Then, you select the subnets that will be designated for your DB. This is called a DB subnet group, and it has at least two Availability Zones in its Region. The subnets in a DB subnet group should be private, so they don’t have a route to the internet gateway. This ensures that your DB instance, and the data inside it, can be reached only by the application backend. Access to the DB instance can be restricted further by using network access control lists (network ACLs) and security groups. With these firewalls, you can control, at a granular level, the type of traffic you want to provide access into your database. Using these controls provides layers of security for your infrastructure. It reinforces that only the backend instances have access to the database.
167
What are the TWO ways you can back up your Amazon RDS database?
1) Automated Backups 2) Manual Snapshots Automated Backups Automated backups are turned on by default. This backs up your entire DB instance (not just individual databases on the instance) and your transaction logs. When you create your DB instance, you set a backup window, which is the period of time when automatic backups occur. Typically, you want to set the window during a time when your database experiences little activity, because backups can cause increased latency and downtime. Retaining backups: Automated backups are retained between 0 and 35 days. You might ask yourself, "Why set automated backups for 0 days?" The 0 days setting stops automated backups from happening, and if you set it to 0, it also deletes all existing automated backups. This is not ideal, because the benefit of automated backups is that you can perform point-in-time recovery. Point-in-time recovery: This creates a new DB instance using data restored from a specific point in time. This restoration method provides more granularity by restoring the full backup and rolling back transactions up to the specified time range. Manual Snapshots If you want to keep your automated backups longer than 35 days, use manual snapshots. Manual snapshots are similar to taking Amazon EBS snapshots, except you manage them in the Amazon RDS console. These are backups that you can initiate at any time, and they exist until you delete them. For example, to meet a compliance requirement that mandates you to keep database backups for a year, you need to use manual snapshots. If you restore data from a manual snapshot, it creates a new DB instance using the data from the snapshot. Choosing a backup option: It is advisable to deploy both backup options. Automated backups are beneficial for point-in-time recovery. With manual snapshots, you can retain backups for longer than 35 days.
168
TRUE or FALSE, Amazon Multi-AZ deployment creates a redundant copy of your database in another Availability Zone?
TRUE In an Amazon RDS Multi-AZ deployment, Amazon RDS creates a redundant copy of your database in another Availability Zone. You end up with two copies of your database—a primary copy in a subnet in one Availability Zone and a standby copy in a subnet in a second Availability Zone. The primary copy of your database provides access to your data so that applications can query and display the information. The data in the primary copy is synchronously replicated to the standby copy. The standby copy is not considered an active database, and it does not get queried by applications. To improve availability, Amazon RDS Multi-AZ ensures that you have two copies of your database running and that one of them is in the primary role. If an availability issue arises, such as the primary database losing connectivity, Amazon RDS initiates an automatic failover.
169
TRUE or FALSE, when you create a DB instance, a Domain Name System (DNS) name is provided?
TRUE When you create a DB instance, a Domain Name System (DNS) name is provided. AWS uses that DNS name to fail over to the standby database. In an automatic failover, the standby database is promoted to the primary role, and queries are redirected to the new primary database.
170
TRUE or FALSE, when it comes to security in Amazon RDS, you don't have control over managing access to your Amazon RDS resources?
FALSE When it comes to security in Amazon RDS, you have control over managing access to your Amazon RDS resources, such as your databases on a DB instance. How you manage access will depend on the tasks you or other users need to perform in Amazon RDS. Network ACLs and security groups help users dictate the flow of traffic. If you want to restrict the actions and resources others can access, you can use AWS Identity and Access Management (IAM) policies.
171
When should you use IAM?
Use IAM policies to assign permissions that determine who can manage Amazon RDS resources. For example, you can use IAM to determine who can create, describe, modify, and delete DB instances, tag resources, or modify security groups.
172
When should you use Security groups?
Use security groups to control which IP addresses or Amazon EC2 instances can connect to your databases on a DB instance. When you first create a DB instance, all database access is prevented except through rules specified by an associated security group.
173
When should you use Amazon RDS encryption?
Use Amazon RDS encryption to secure your DB instances and snapshots at rest.
174
When should you use SSL or TLS?
Use Secure Sockets Layer (SSL) or Transport Layer Security (TLS) connections with DB instances running the MySQL, MariaDB, PostgreSQL, Oracle, or SQL Server database engines.
175
What is a Purpose-built database methodology?
Purpose-built databases for all application needs We covered Amazon RDS and relational databases in the previous lesson, and for a long time, relational databases were the default option. They were widely used in nearly all applications. A relational database is like a multi-tool. It can do many things, but it is not perfectly suited to any one particular task. It might not always be the best choice for your business needs. The one-size-fits-all approach of using a relational database for everything no longer works. Over the past few decades, there has been a shift in the database landscape, and this shift has led to the rise of purpose-built databases. Developers can consider the needs of their application and choose a database that will fit those needs. AWS offers a broad and deep portfolio of purpose-built databases that support diverse data models. Customers can use them to build data-driven, highly scalable, distributed applications. You can pick the best database to solve a specific problem and break away from restrictive commercial databases. You can focus on building applications that meet the needs of your organization.
176
What is Amazon DynamoDB?
Amazon DynamoDB DynamoDB is a fully managed NoSQL database that provides fast, consistent performance at any scale. It has a flexible billing model, tight integration with infrastructure as code (IaC), and a hands-off operational model. DynamoDB has become the database of choice for two categories of applications: high-scale applications and serverless applications. Although DynamoDB is the database of choice for high-scale and serverless applications, it can work for nearly all online transaction processing (OLTP) application workloads.
177
What is Amazon ElastiCache?
Amazon ElastiCache ElastiCache is a fully managed, in-memory caching solution. It provides support for two open-source, in-memory cache engines: Redis and Memcached. You aren’t responsible for instance failovers, backups and restores, or software upgrades.
178
What is Amazon MemoryDB for Redis?
Amazon MemoryDB for Redis MemoryDB is a Redis-compatible, durable, in-memory database service that delivers ultra-fast performance. With MemoryDB, you can achieve microsecond read latency, single-digit millisecond write latency, high throughput, and Multi-AZ durability for modern applications, like those built with microservices architectures. You can use MemoryDB as a fully managed, primary database to build high-performance applications. You do not need to separately manage a cache, durable database, or the required underlying infrastructure.
179
What is Amazon DocumentDB?
Amazon DocumentDB (with MongoDB compatibility) Amazon DocumentDB is a fully managed document database from AWS. A document database is a type of NoSQL database you can use to store and query rich documents in your application. These types of databases work well for the following use cases: content management systems, profile management, and web and mobile applications. Amazon DocumentDB has API compatibility with MongoDB. This means you can use popular open-source libraries to interact with Amazon DocumentDB, or you can migrate existing databases to Amazon DocumentDB with minimal hassle.
180
What is Amazon Keyspaces?
Amazon Keyspaces (for Apache Cassandra) Amazon Keyspaces is a scalable, highly available, and managed Apache Cassandra compatible database service. Apache Cassandra is a popular option for high-scale applications that need top-tier performance. Amazon Keyspaces is a good option for high-volume applications with straightforward access patterns. With Amazon Keyspaces, you can run your Cassandra workloads on AWS using the same Cassandra Query Language (CQL) code, Apache 2.0 licensed drivers, and tools that you use today.
181
What is Amazon Neptune?
Amazon Neptune Neptune is a fully managed graph database offered by AWS. A graph database is a good choice for highly connected data with a rich variety of relationships. Companies often use graph databases for recommendation engines, fraud detection, and knowledge graphs.
182
What is Amazon Timestream?
Amazon Timestream Timestream is a fast, scalable, and serverless time series database service for Internet of Things (IoT) and operational applications. It makes it easy to store and analyze trillions of events per day up to 1,000 times faster and for as little as one-tenth of the cost of relational databases. Time series data is a sequence of data points recorded over a time interval. It is used for measuring events that change over time, such as stock prices over time or temperature measurements over time.
183
What is Amazon Quantum Ledger Database?
Amazon Quantum Ledger Database (Amazon QLDB) With traditional databases, you can overwrite or delete data, so developers use techniques, such as audit tables and audit trails to help track data lineage. These approaches can be difficult to scale and put the burden of ensuring that all data is recorded on the application developer. Amazon QLDB is a purpose-built ledger database that provides a complete and cryptographically verifiable history of all changes made to your application data.
184
What is DynamoDB?
DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. With DynamoDB, you can offload the administrative burdens of operating and scaling a distributed database. You don't need to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling. With DynamoDB, you can do the following: * Create database tables that can store and retrieve any amount of data and serve any level of request traffic. * Scale up or scale down your tables' throughput capacity without downtime or performance degradation. * Monitor resource usage and performance metrics using the AWS Management Console.
185
What are the core components of DynamoDB?
Tables, items, and attributes are the core components that you work with. DynamoDB uses primary keys to uniquely identify each item in a table and secondary indexes to provide more querying flexibility.
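These components can be sketched as plain Python data; the table name, key attribute names, and item contents below are hypothetical, and the shapes mirror what the DynamoDB API uses for a key schema and an item:

```python
# A table definition: the key schema names the attributes that form the
# primary key — a partition (HASH) key plus an optional sort (RANGE) key.
table = {
    "TableName": "Employees",
    "KeySchema": [
        {"AttributeName": "pk", "KeyType": "HASH"},   # partition key
        {"AttributeName": "sk", "KeyType": "RANGE"},  # sort key
    ],
}

# An item: a collection of attributes. Beyond the primary key, items are
# schemaless — different items in the same table can carry different
# attributes.
item = {
    "pk": "EMPLOYEE#1234",
    "sk": "PROFILE",
    "name": "Jane Doe",
    "department": "Engineering",
}

# The primary key (partition key + sort key) uniquely identifies the item.
key = (item["pk"], item["sk"])
print(key)
```

Secondary indexes work the same way conceptually: they define an alternate key schema over the same items to support additional query patterns.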
186
What are some use cases for DynamoDB?
You might want to consider using DynamoDB in the following circumstances: * You are experiencing scalability problems with other traditional database systems. * You are actively engaged in developing an application or service. * You are working with an OLTP workload. * You are deploying a mission-critical application that must be highly available at all times without manual intervention. * You require a high level of data durability, regardless of your backup-and-restore strategy.
187
What are other DynamoDB use cases?
Develop software applications Build internet-scale applications supporting user-content metadata and caches that require high concurrency and connections for millions of users and millions of requests per second. Create media metadata stores Scale throughput and concurrency for analysis of media and entertainment workloads, such as real-time video streaming and interactive content. Deliver lower latency with multi-Region replication across Regions. Scale gaming platforms Focus on driving innovation with no operational overhead. Build out your game platform with player data, session history, and leaderboards for millions of concurrent users. Deliver seamless retail experiences Use design patterns for deploying shopping carts, workflow engines, inventory tracking, and customer profiles. DynamoDB supports high-traffic, extreme-scaled events and can handle millions of queries per second.
188
What are some of DynamoDB security best practices that you can implement?
1) Use AWS CloudTrail to monitor AWS managed key usage. If you are using an AWS managed key for encryption at rest, usage of the key is recorded in AWS CloudTrail. CloudTrail can tell you who made the request, the services used, actions performed, parameters for the action, and response elements returned. 2) Use IAM roles to authenticate access to DynamoDB. For users, applications, and other AWS services to access DynamoDB, they must include valid AWS credentials in their AWS API requests. Use IAM roles to obtain temporary access keys. 3) Use IAM policy conditions for fine-grained access control. When you grant permissions in DynamoDB, you can specify conditions that determine how a permissions policy takes effect. Implementing least privilege is key in reducing security risk and the impact that can result from errors or malicious intent. 4) Monitor DynamoDB operations using CloudTrail. When activity occurs in DynamoDB, that activity is recorded in a CloudTrail event. For an ongoing record of events in DynamoDB and in your AWS account, create a trail to deliver log files to an Amazon Simple Storage Service (Amazon S3) bucket.
189
What are some Amazon "Relational" database type offerings?
Amazon RDS, Aurora, Amazon Redshift Used for: Traditional applications, ERP, CRM, ecommerce
190
What are some Amazon "Key-value" database type offerings?
DynamoDB Used for: High-traffic web applications, ecommerce systems, gaming applications
191
What are some Amazon "in-memory" database type offerings?
Amazon ElastiCache for Memcached, Amazon ElastiCache for Redis Used for: Caching, session management, gaming leaderboards, geospatial applications
192
What are some Amazon "Document" database type offerings?
Amazon DocumentDB Used for: Content management, catalogs, user profiles
193
What are some Amazon "Wide column" database type offerings?
Amazon Keyspaces Used for: High-scale industrial applications for equipment maintenance, fleet management, route optimization
194
What are some Amazon "Graph" database type offerings?
Neptune Used for: Fraud detection, social networking, recommendation engines
195
What are some Amazon "Time series" database type offerings?
Timestream Used for: IoT applications, Development Operations (DevOps), industrial telemetry
196
What are some Amazon "Ledger" database type offerings?
Amazon QLDB Used for: Systems of record, supply chain, registrations, banking transactions
197
What does the term "breaking up applications and databases" refer to?
Breaking up applications and databases As the industry changes, applications and databases change too. Today, with larger applications, you no longer see just one database supporting them. Instead, applications are broken into smaller services, each with its own purpose-built database supporting it. This shift removes the idea of a one-size-fits-all database and replaces it with a complementary database strategy. You can give each database the appropriate functionality, performance, and scale the workload requires.
198
With Amazon RDS, you can scale components of the service. What does this mean? A) For major database (DB) updates, you can activate automatic version upgrades. B) You can upgrade your DB instance at any time. C) You can increase or decrease specific database configurations independently. D) Amazon RDS components are coupled. When you modify any component, such as storage, memory, or processor size, all other components are also modified.
C) You can increase or decrease specific database configurations independently.
199
An organization needs a fully managed database service to build an application that requires high concurrency and connections for millions of users and millions of requests per second. Which AWS database service should the organization use? A) Amazon Redshift B) Amazon RDS C) Amazon DynamoDB D) Amazon Aurora
C) Amazon DynamoDB
200
What is the Statistics equation?
Data points generated from resources --> Metrics; Metrics analyzed over Time --> Statistics. Use metrics to solve problems The AWS resources that host your solutions create various forms of data that you might be interested in collecting. Each individual data point that a resource creates is a metric. Metrics that are collected and analyzed over time become statistics, such as average CPU utilization over time showing a spike.
201
What are some examples of Amazon metrics?
Amazon EC2 metrics examples: * CPUUtilization * NetworkIn * NetworkOut Amazon RDS metric example: * DatabaseConnections
202
What is Amazon CloudWatch?
A monitoring solution which allows you to monitor everything in one centralized place. Visibility using CloudWatch AWS resources create data that you can monitor through metrics, logs, network traffic, events, and more. This data comes from components that are distributed in nature. This can lead to difficulty in collecting the data you need if you don’t have a centralized place to review it all. AWS has taken care of centralizing the data collection for you with a service called CloudWatch. CloudWatch is a monitoring and observability service that collects your resource data and provides actionable insights into your applications. With CloudWatch, you can respond to system-wide performance changes, optimize resource usage, and get a unified view of operational health.
203
How can you monitor an EC2 in Amazon?
One way to evaluate the health of an EC2 instance is through CPU utilization. Generally speaking, if an EC2 instance has a high CPU utilization, it can mean a flood of requests. Or it can reflect a process that has encountered an error and is consuming too much of the CPU. When analyzing CPU utilization, look for a process that exceeds a specific threshold for an unusual length of time. Use that abnormal event as a cue to either manually or automatically resolve the issue through actions like scaling the instance. CPU utilization is one example of a metric. Other examples of metrics that EC2 instances have are network utilization, disk performance, memory utilization, and the logs created by the applications running on top of Amazon EC2.
204
What type of metrics can be gathered from Amazon Simple Storage Service (Amazon S3) metrics?
* Size of objects stored in a bucket * Number of objects stored in a bucket * Number of HTTP requests made to a bucket
205
What type of metrics can be gathered from Amazon Relational Database Service (Amazon RDS) metrics?
* Database connections * CPU utilization of an instance * Disk space consumption
206
What type of metrics can be gathered from Amazon EC2 metrics?
* CPU utilization * Network utilization * Disk performance * Status checks
207
What are some of the benefits of monitoring?
Respond proactively: Respond to operational issues proactively before your end users are aware of them. Waiting for end users to let you know when your application is experiencing an outage is a bad practice. Through monitoring, you can keep tabs on metrics like error response rate and request latency. Over time, the metrics help signal when an outage is going to occur. You can automatically or manually perform actions to prevent the outage from happening and fix the problem before your end users are aware of it.

Improve performance and reliability: Monitoring can improve the performance and reliability of your resources. Monitoring the various resources that comprise your application provides you with a full picture of how your solution behaves as a system. Monitoring, if done well, can illuminate bottlenecks and inefficient architectures. This helps you drive performance and improve reliability.

Recognize security threats and events: By monitoring, you can recognize security threats and events. When you monitor resources, events, and systems over time, you create what is called a baseline. A baseline defines normal activity. Using a baseline, you can spot anomalies like unusual traffic spikes or unusual IP addresses accessing your resources. When an anomaly occurs, an alert can be sent out or an action can be taken to investigate the event.

Make data-driven decisions: Monitoring helps you make data-driven decisions for your business. Monitoring keeps an eye on IT operational health and drives business decisions. For example, suppose you launched a new feature for your cat photo app and now you want to know if it's being used. You can collect application-level metrics and view the number of users who use the new feature. With your findings, you can decide whether to invest more time into improving the new feature.

Create cost-effective solutions: Through monitoring, you can create more cost-effective solutions. You can view resources that are underused and rightsize your resources to your usage. This helps you optimize cost and make sure you aren't spending more money than necessary.
208
What is a baseline in Amazon?
When you monitor resources, events, and systems over time, you create what is called a baseline. A baseline defines normal activity. Using a baseline, you can spot anomalies like unusual traffic spikes or unusual IP addresses accessing your resources.
209
What can CloudWatch be used for?
* Detect anomalous behavior in your environments. * Set alarms to alert you when something is not right. * Visualize logs and metrics with the AWS Management Console. * Take automated actions like scaling. * Troubleshoot issues. * Discover insights to keep your applications healthy.
210
What do you need to get started with Amazon CloudWatch?
With CloudWatch, all you need to get started is an AWS account. It is a managed service that you can use for monitoring without managing the underlying infrastructure.
211
What is basic monitoring?
Many AWS services automatically send metrics to CloudWatch for free at a rate of 1 data point per metric per 5-minute interval. This is called basic monitoring, and it gives you visibility into your systems without any extra cost. For many applications, basic monitoring is adequate. For applications running on EC2 instances, you can get more granularity by posting metrics every minute instead of every 5 minutes using a feature called detailed monitoring. Detailed monitoring incurs a fee.
212
What are metrics in Amazon CloudWatch?
Metrics are the fundamental concept in CloudWatch. A metric represents a time-ordered set of data points that are published to CloudWatch. Think of a metric as a variable to monitor and the data points as representing the values of that variable over time. Every metric data point must be associated with a timestamp.
213
What are dimensions in Amazon CloudWatch?
AWS services that send data to CloudWatch attach dimensions to each metric. A dimension is a name and value pair that is part of the metric’s identity. You can use dimensions to filter the results that CloudWatch returns. For example, many Amazon EC2 metrics publish InstanceId as a dimension name and the actual instance ID as the value for that dimension.
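As a small sketch, a dimension is just a name-value pair attached to a metric query (the instance ID below is a hypothetical placeholder):

```python
# A dimension narrows a metric to a specific resource. Here, filtering
# CPUUtilization by InstanceId scopes results to one EC2 instance.
# The instance ID is a hypothetical example.
dimensions = [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}]

# With credentials, these dimensions could go into a boto3 call such as:
# cloudwatch.get_metric_statistics(Namespace="AWS/EC2",
#     MetricName="CPUUtilization", Dimensions=dimensions, ...)
print(dimensions[0]["Name"])  # InstanceId
```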
214
Is monitoring AWS services free?
By default, many AWS services provide metrics at no charge for resources such as EC2 instances, Amazon Elastic Block Store (Amazon EBS) volumes, and Amazon RDS database (DB) instances. For a charge, you can activate features such as detailed monitoring or publishing your own application metrics on resources such as your EC2 instances.
215
What are Custom metrics in CloudWatch?
Suppose you have an application, and you want to record the number of page views your website gets. How would you record this metric with CloudWatch? First, it's an application-level metric. That means it’s not something the EC2 instance would post to CloudWatch by default. This is where custom metrics come in. With custom metrics, you can publish your own metrics to CloudWatch. If you want to gain more granular visibility, you can use high-resolution custom metrics, which make it possible for you to collect custom metrics down to a 1-second resolution. This means you can send 1 data point per second per custom metric. Some examples of custom metrics include the following: * Webpage load times * Request error rates * Number of processes or threads on your instance * Amount of work performed by your application
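A hedged sketch of publishing such a custom metric follows; "MyApp" and "PageViewCount" are hypothetical names, and the commented boto3 call would need AWS credentials:

```python
import datetime

# Parameters for publishing a custom, high-resolution metric. With
# credentials, this dict could be passed to boto3's
# cloudwatch.put_metric_data(**put_params).
put_params = {
    "Namespace": "MyApp",  # custom namespace (not an AWS/* namespace)
    "MetricData": [
        {
            "MetricName": "PageViewCount",
            "Timestamp": datetime.datetime.now(datetime.timezone.utc),
            "Value": 1.0,
            "Unit": "Count",
            "StorageResolution": 1,  # 1-second (high-resolution) metric
        }
    ],
}
print(put_params["MetricData"][0]["MetricName"])  # PageViewCount
```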
216
What are CloudWatch dashboards?
Once you provision your AWS resources and they are sending metrics to CloudWatch, you can visualize and review that data using CloudWatch dashboards. Dashboards are customizable home pages you can configure for data visualization for one or more metrics through widgets, such as a graph or text. You can build many custom dashboards, each one focusing on a distinct view of your environment. You can even pull data from different AWS Regions into a single dashboard to create a global view of your architecture. For example, a single dashboard might combine metrics from Amazon EC2 and Amazon EBS.
217
How does Amazon CloudWatch aggregate statistics?
CloudWatch aggregates statistics according to the period of time that you specify when creating your graph or requesting your metrics. You can also choose whether your metric widgets display live data. Live data is data published within the last minute that has not been fully aggregated. You are not bound to using CloudWatch exclusively for all your visualization needs. You can use external or custom tools to ingest and analyze CloudWatch metrics using the GetMetricData API. As far as security is concerned, with AWS Identity and Access Management (IAM) policies, you control who has access to view or manage your CloudWatch dashboards.
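The aggregation idea can be illustrated with a small pure-Python sketch (an assumption-level model, not the CloudWatch implementation): raw data points are collapsed into one average per period.

```python
# CloudWatch-style aggregation sketch: collapse raw (timestamp, value)
# data points into one average per period. 300 seconds matches the
# 5-minute basic monitoring interval.
def aggregate(points, period=300):
    """points: list of (timestamp_seconds, value); returns {bucket_start: avg}."""
    buckets = {}
    for ts, value in points:
        start = ts - (ts % period)          # align timestamp to period start
        buckets.setdefault(start, []).append(value)
    return {start: sum(vs) / len(vs) for start, vs in buckets.items()}

samples = [(0, 10.0), (60, 30.0), (300, 50.0)]
print(aggregate(samples))  # {0: 20.0, 300: 50.0}
```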
218
What is Amazon CloudWatch Logs?
CloudWatch Logs is a centralized place for logs to be stored and analyzed. With this service, you can monitor, store, and access your log files from applications running on EC2 instances, AWS Lambda functions, and other sources.
219
What can you do with Amazon CloudWatch Logs?
With CloudWatch Logs, you can query and filter your log data. For example, suppose you’re looking into an application logic error for your application. You know that when this error occurs, it will log the stack trace. Because you know it logs the error, you query your logs in CloudWatch Logs to find the stack trace. You also set up metric filters on logs, which turn log data into numerical CloudWatch metrics that you can graph and use on your dashboards. Some services, like Lambda, are set up to send log data to CloudWatch Logs with minimal effort. With Lambda, all you need to do is give the Lambda function the correct IAM permissions to post logs to CloudWatch Logs. Other services require more configuration. For example, to send your application logs from an EC2 instance into CloudWatch Logs, you need to install and configure the CloudWatch Logs agent on the EC2 instance. With the CloudWatch Logs agent, EC2 instances can automatically send log data to CloudWatch Logs.
220
What is the CloudWatch Logs terminology?
Log event: A log event is a record of activity recorded by the application or resource being monitored. It has a timestamp and an event message.

Log stream: Log events are grouped into log streams, which are sequences of log events that all belong to the same resource being monitored. For example, logs for an EC2 instance are grouped together into a log stream that you can filter or query for insights.

Log group: A log group is composed of log streams that all share the same retention and permissions settings. For example, suppose you have multiple EC2 instances hosting your application and you send application log data to CloudWatch Logs. You can group the log streams from each instance into one log group.
221
What are CloudWatch alarms?
You can create CloudWatch alarms to automatically initiate actions based on sustained state changes of your metrics. You configure when alarms are invoked and the action that is performed. First, you must decide which metric you want to set up an alarm for, and then you define the threshold that will invoke the alarm. Next, you define the threshold's time period. For example, suppose you want to set up an alarm for an EC2 instance to invoke when the CPU utilization goes over a threshold of 80 percent. You also must specify the time period the CPU utilization is over the threshold. You don’t want to invoke an alarm based on short, temporary spikes in the CPU. You only want to invoke an alarm if the CPU is elevated for a sustained amount of time. For example, if CPU utilization exceeds 80 percent for 5 minutes or longer, there might be a resource issue. To set up an alarm you need to choose the metric, threshold, and time period.
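A hedged sketch of the CPU example above follows; the alarm name is hypothetical, and the commented boto3 call would require AWS credentials:

```python
# Alarm parameters for "CPUUtilization over 80% sustained for 5 minutes"
# (one 300-second evaluation period). With credentials, this dict could be
# passed to boto3's cloudwatch.put_metric_alarm(**alarm_params).
alarm_params = {
    "AlarmName": "HighCPU",              # hypothetical name
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Statistic": "Average",
    "Period": 300,                       # seconds per evaluation period
    "EvaluationPeriods": 1,              # periods the breach must persist
    "Threshold": 80.0,                   # percent CPU utilization
    "ComparisonOperator": "GreaterThanThreshold",
}
print(alarm_params["AlarmName"])  # HighCPU
```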
222
When can an alarm be invoked?
An alarm can be invoked when it transitions from one state to another. After an alarm is invoked, it can initiate an action. Actions can be an Amazon EC2 action, an automatic scaling action, or a notification sent to Amazon Simple Notification Service (Amazon SNS).
223
What are the THREE possible states of an alarm?
OK: The metric is within the defined threshold. Everything appears to be operating like normal.

ALARM: The metric is outside the defined threshold. This might be an operational issue.

INSUFFICIENT_DATA: The alarm has just started, the metric is not available, or not enough data is available for the metric to determine the alarm state.
224
What does CloudWatch use to turn the log data into metrics that you can graph or set an alarm on?
CloudWatch Logs uses metric filters.

1) Set up a metric filter: For the employee directory application, suppose you set up a metric filter for HTTP 500 error response codes.

2) Define an alarm: Then, you define which metric alarm state should be invoked based on the threshold. With this example, the alarm state is invoked if HTTP 500 error responses are sustained for a specified period of time.

3) Define an action: Next, you define an action that you want to take place when the alarm is invoked. Here, it makes sense to send an email or text alert to you so you can start troubleshooting the website. Hopefully, you can fix it before it becomes a bigger issue. After the alarm is set up, you know that if the error happens again, you will be notified promptly.
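The filter-count-alarm flow can be sketched in a few lines of pure Python (the log lines are made-up examples, and the threshold is an assumption):

```python
# Sketch of the metric-filter-to-alarm flow: count HTTP 500 responses in
# log events, then compare the count against an alarm threshold.
log_events = [
    "GET /home 200",
    "GET /directory 500",
    "POST /photo 500",
    "GET /home 200",
]

# Step 1: the "metric filter" — extract a numeric metric from log data.
error_count = sum(1 for event in log_events if " 500" in event)

# Steps 2-3: evaluate the threshold and decide whether to alarm.
ALARM_THRESHOLD = 2  # hypothetical threshold
state = "ALARM" if error_count >= ALARM_THRESHOLD else "OK"
print(error_count, state)  # 2 ALARM
```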
225
What is Amazon EC2 Auto Scaling service?
A service that automatically increases or decreases the number of EC2 instances based on the rules that you define.
226
When using the Amazon EC2 Auto Scaling service, how do you access the multiple EC2 instances?
Using Elastic Load Balancing (ELB), which distributes incoming requests across the instances.
227
What is System availability?
The availability of a system is typically expressed as a percentage of uptime in a given year or as a number of nines. The following list shows availability percentages with the corresponding downtime per year:

90% (one nine of availability): 36.53 days
99% (two nines of availability): 3.65 days
99.9% (three nines of availability): 8.77 hours
99.95% (three and a half nines of availability): 4.38 hours
99.99% (four nines of availability): 52.60 minutes
99.995% (four and a half nines of availability): 26.30 minutes
99.999% (five nines of availability): 5.26 minutes
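The downtime figures follow directly from the availability percentage; a quick sketch of the arithmetic (using 365.25 days per year, matching the table):

```python
# Downtime per year implied by an availability percentage.
# 365.25 days/year * 24 hours/day = 8766 hours/year.
def downtime_hours_per_year(availability_pct):
    return (1 - availability_pct / 100) * 365.25 * 24

print(round(downtime_hours_per_year(99.9), 2))        # 8.77 (hours)
print(round(downtime_hours_per_year(99.99) * 60, 1))  # 52.6 (minutes)
```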
228
What is needed in order to increase Availability?
To increase availability, you need redundancy. This typically means more infrastructure—more data centers, more servers, more databases, and more replication of data. You can imagine that adding more of this infrastructure means a higher cost. Customers want the application to always be available, but you need to draw a line where adding redundancy is no longer viable in terms of revenue.
229
Why improve application availability?
In the current application, one EC2 instance hosts the application. The photos are served from Amazon S3, and the structured data is stored in Amazon DynamoDB. That single EC2 instance is a single point of failure for the application. Even if the database and Amazon S3 are highly available, customers have no way to connect if the single instance becomes unavailable. One way to solve this single point of failure issue is to add one more server in a second Availability Zone.
230
Why should we add an additional Availability Zone?
The physical location of a server is important. In addition to potential software issues at the operating system (OS) or application level, you must also consider hardware issues. They might be in the physical server, the rack, the data center, or even the Availability Zone hosting the virtual machine. To remedy the physical location issue, you can deploy a second EC2 instance in a second Availability Zone. This second instance might also solve issues with the OS and the application.
231
What are some of the challenges of having more than one EC2 instance?
* Replication process – The first challenge with multiple EC2 instances is that you need to create a process to replicate the configuration files, software patches, and application across instances. The best method is to automate where you can. * Customer redirection – The second challenge is how to notify the clients—the computers sending requests to your server—about the different servers. You can use various tools here. The most common is using a Domain Name System (DNS) where the client uses one record that points to the IP address of all available servers. However, this method isn't always used because of propagation — the time frame it takes for DNS changes to be updated across the Internet. Another option is to use a load balancer, which takes care of health checks and distributing the load across each server. Situated between the client and the server, a load balancer avoids propagation time issues. You will learn more about load balancers in the next section. * Types of high availability – The last challenge to address when there is more than one server is the type of availability you need: active-passive or active-active.
232
What is the difference between "Active-passive" and "Active-active" systems?
Active-passive systems With an active-passive system, only one of the two instances is available at a time. One advantage of this method is that for stateful applications (where data about the client’s session is stored on the server), there won’t be any issues. This is because the customers are always sent to the server where their session is stored. Active-active systems A disadvantage of an active-passive system is scalability. This is where an active-active system shines. With both servers available, the second server can take some load for the application, and the entire system can take more load. However, if the application is stateful, there would be an issue if the customer’s session isn’t available on both servers. Stateless applications work better for active-active systems.
233
What is a Stateless application?
A stateless application is one where each request from a client is treated as an independent transaction, and the server doesn't retain any information about previous interactions. This means the server doesn't maintain any session or state data between requests. Instead, all necessary information for processing the request must be included in each request itself.
234
What components do you need to set up for an Amazon Application Load Balancer (ALB)?
* Listener * Target group * Rule
235
What are Load balancers?
Load balancing refers to the process of distributing tasks across a set of resources. In the case of the Employee Directory application, the resources are EC2 instances that host the application, and the tasks are the requests being sent. You can use a load balancer to distribute the requests across all the servers hosting the application. To do this, the load balancer needs to take all the traffic and redirect it to the backend servers based on an algorithm. The most popular algorithm is round robin, which sends the traffic to each server one after the other. A typical request for an application starts from a client's browser. The request is sent to a load balancer. Then, it’s sent to one of the EC2 instances that hosts the application. The return traffic goes back through the load balancer and back to the client's browser. Although it is possible to install your own software load balancing solution on EC2 instances, AWS provides the ELB service for you.
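The round robin algorithm mentioned above can be sketched in a couple of lines (the server names are hypothetical):

```python
import itertools

# Minimal round-robin sketch: each incoming request is assigned to the
# next server in turn, wrapping around at the end of the list.
servers = ["ec2-a", "ec2-b", "ec2-c"]
rotation = itertools.cycle(servers)

assigned = [next(rotation) for _ in range(5)]
print(assigned)  # ['ec2-a', 'ec2-b', 'ec2-c', 'ec2-a', 'ec2-b']
```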
236
What are some of the benefits of using Amazon Elastic Load Balancer (ELB)?
ELB features The ELB service provides a major advantage over using your own solution to do load balancing. Mainly, you don’t need to manage or operate ELB. It can distribute incoming application traffic across EC2 instances, containers, IP addresses, and Lambda functions. Other key features include the following: * Hybrid mode – Because ELB can load balance to IP addresses, it can work in a hybrid mode, which means it also load balances to on-premises servers. * High availability – ELB is highly available. The only option you must ensure is that the load balancer's targets are deployed across multiple Availability Zones. * Scalability – In terms of scalability, ELB automatically scales to meet the demand of the incoming traffic. It handles the incoming traffic and sends it to your backend application.
237
How does ELB ensure reliability?
Health checks Monitoring is an important part of load balancers because they should route traffic to only healthy EC2 instances. That’s why ELB supports two types of health checks as follows: * Establishing a connection to a backend EC2 instance using TCP and marking the instance as available if the connection is successful. * Making an HTTP or HTTPS request to a webpage that you specify and validating that an HTTP response code is returned. Taking time to define an appropriate health check is critical. Verifying only that an application's port is open doesn't mean that the application is working, and making a call to an application's home page isn't necessarily the right check either.
238
What THREE components make up the ELB?
1) Rule
2) Listener
3) Target groups

Rule: To associate a target group to a listener, you must use a rule. Rules are made up of two conditions. The first condition is the source IP address of the client. The second condition decides which target group to send the traffic to.

Listener: The client connects to the listener. This is often called the client side. To define a listener, a port must be provided in addition to the protocol, depending on the load balancer type. There can be many listeners for a single load balancer.

Target group: The backend servers, or server side, are defined in one or more target groups. This is where you define the type of backend you want to direct traffic to, such as EC2 instances, Lambda functions, or IP addresses. Also, a health check must be defined for each target group.
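As a hedged, partial sketch of how a listener and a target group might be expressed as API parameters (names and ports are hypothetical; required fields such as the load balancer ARN and the listener's actions are omitted, as comments note):

```python
# Listener: client-side protocol and port. In a real boto3 elbv2
# create_listener call, LoadBalancerArn and DefaultActions are also required.
listener_params = {
    "Protocol": "HTTPS",
    "Port": 443,  # port the listener accepts client connections on
}

# Target group: server-side definition, including its health check.
# "app-targets" and "/health" are hypothetical example values.
target_group_params = {
    "Name": "app-targets",
    "Protocol": "HTTP",
    "Port": 80,
    "TargetType": "instance",      # backends are EC2 instances
    "HealthCheckPath": "/health",  # health check defined per target group
}
print(listener_params["Port"], target_group_params["Name"])  # 443 app-targets
```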
239
What are the THREE types of ELB (Elastic Load Balancers)?
1) Application Load Balancer (ALB) 2) Network Load Balancer (NLB) 3) Gateway Load Balancer (GLB) Application Load Balancer (ALB): * User authorization * Rich metrics and logging * Redirects * Fixed response Network Load Balancer (NLB): * TCP and User Datagram Protocol (UDP) connection based * Source IP preservation * Low latency Gateway Load Balancer (GLB): * Health checks * Gateway Load Balancer Endpoints * Higher availability for third-party virtual appliances
240
What is an Application Load Balancer?
Application Load Balancer For our Employee Directory application, we are using an Application Load Balancer. An Application Load Balancer functions at Layer 7 of the Open Systems Interconnection (OSI) model. It is ideal for load balancing HTTP and HTTPS traffic. After the load balancer receives a request, it evaluates the listener rules in priority order to determine which rule to apply. It then routes traffic to targets based on the request content.

FEATURES

Routes traffic based on request data: An Application Load Balancer makes routing decisions based on the HTTP and HTTPS protocol. For example, the ALB could use the URL path (/upload) and host, HTTP headers and method, or the source IP address of the client. This facilitates granular routing to target groups.

Sends responses directly to the client: An Application Load Balancer can reply directly to the client with a fixed response, such as a custom HTML page. It can also send a redirect to the client. This is useful when you must redirect to a specific website or redirect a request from HTTP to HTTPS. It removes that work from your backend servers.

Uses TLS offloading: An Application Load Balancer understands HTTPS traffic. To pass HTTPS traffic through an Application Load Balancer, an SSL/TLS certificate is provided in one of the following ways: * Importing a certificate by way of IAM or ACM services * Creating a certificate for free using ACM This ensures that the traffic between the client and Application Load Balancer is encrypted.

Authenticates users: An Application Load Balancer can authenticate users before they can pass through the load balancer. 
The Application Load Balancer uses the OpenID Connect (OIDC) protocol and integrates with other AWS services to support protocols, such as the following: * SAML * Lightweight Directory Access Protocol (LDAP) * Microsoft Active Directory * Others Secures traffic: To prevent traffic from reaching the load balancer, you configure a security group to specify the supported IP address ranges. Supports sticky sessions: If requests must be sent to the same backend server because the application is stateful, use the sticky session feature. This feature uses an HTTP cookie to remember which server to send the traffic to across connections.
241
What is a Network Load Balancer?
A Network Load Balancer is ideal for load balancing TCP and UDP traffic. It functions at Layer 4 of the OSI model, routing connections to targets in the target group based on IP protocol data. FEATURES Sticky sessions Routes requests from the same client to the same target. Low latency Offers low latency for latency-sensitive applications. Source IP address Preserves the client-side source IP address. Static IP support Automatically provides a static IP address per Availability Zone (subnet). Elastic IP address support Lets users assign a custom, fixed IP address per Availability Zone (subnet). DNS failover Uses Amazon Route 53 to direct traffic to load balancer nodes in other zones.
242
What is a Gateway Load Balancer?
Gateway Load Balancer A Gateway Load Balancer helps you to deploy, scale, and manage your third-party appliances, such as firewalls, intrusion detection and prevention systems, and deep packet inspection systems. It provides a gateway for distributing traffic across multiple virtual appliances while scaling them up and down based on demand. FEATURES High availability Ensures high availability and reliability by routing traffic through healthy virtual appliances. Monitoring Can be monitored using CloudWatch metrics. Streamlined deployments Can deploy a new virtual appliance by selecting it in the AWS Marketplace. Private connectivity Connects internet gateways, virtual private clouds (VPCs), and other network resources over a private network.
243
What are the TWO types of AWS scaling?
1) Vertical
2) Horizontal

Vertical scaling: increase the instance size. If too many requests are sent to a single active-passive system, the active server becomes unavailable and, hopefully, fails over to the passive server. But this doesn't solve anything on its own. With active-passive systems, you need vertical scaling: increasing the size of the server. With EC2 instances, you select either a larger size or a different instance type, which can be done only while the instance is in a stopped state. In this scenario, the following steps occur:
* Stop the passive instance. This doesn't impact the application because it isn't taking any traffic.
* Change the instance size or type, and then start the instance again.
* Shift the traffic to the passive instance, turning it active.
* Stop, resize, and restart the previously active instance, because both instances should match.

Horizontal scaling: add additional instances. For the application to work in an active-active system, it must already be stateless, not storing any client sessions on the server. This means that having two or four servers requires no application changes; it's only a matter of creating more instances when required and shutting them down when traffic decreases. The Amazon EC2 Auto Scaling service can take care of that task by automatically creating and removing EC2 instances based on metrics from Amazon CloudWatch.

As you can see, an active-active system has many more advantages than an active-passive system. Modifying your application to become stateless is what provides the scalability.
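The vertical-scaling steps above map onto a short sequence of Amazon EC2 API actions. Below is a minimal Python sketch that builds that sequence as data; the instance ID and target type are hypothetical placeholders, and in practice each action would be issued through an EC2 client such as boto3.

```python
# Vertical scaling: the resize sequence for the passive instance, expressed
# as ordered (API action, parameters) pairs. The instance ID and the target
# type "m5.xlarge" are hypothetical example values.

def resize_plan(instance_id, new_type):
    """Return the ordered EC2 API calls needed to resize a stopped instance."""
    return [
        # 1. Stop the passive instance (no traffic impact).
        ("StopInstances", {"InstanceIds": [instance_id]}),
        # 2. Change the instance type while the instance is stopped.
        ("ModifyInstanceAttribute",
         {"InstanceId": instance_id, "InstanceType": {"Value": new_type}}),
        # 3. Start it again; traffic can then be shifted to it.
        ("StartInstances", {"InstanceIds": [instance_id]}),
    ]

plan = resize_plan("i-0123456789abcdef0", "m5.xlarge")
for action, params in plan:
    print(action, params)
```

The same plan would then be repeated for the formerly active instance so both ends up matching.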
244
What is the difference between traditional scaling and auto scaling?
With a traditional approach to scaling, you buy and provision enough servers to handle traffic at its peak. At nighttime, for example, you might then have more capacity than traffic, which means you're wasting money; turning servers off at times when traffic is lower only saves on electricity. The cloud works differently, with a pay-as-you-go model: you should turn off unused resources, especially EC2 instances that you pay for on demand. You can manually add and remove servers at predicted times, but with unusual spikes in traffic, this approach leads either to wasted resources from over-provisioning or to lost customers from under-provisioning.
245
What is Automatic scaling?
Automatically scales in and out based on demand.
246
What is Scheduled scaling?
Scales based on user-defined schedules.
247
What is Fleet management?
Automatically replaces unhealthy EC2 instances.
248
What is Predictive scaling?
Uses machine learning (ML) to help schedule the optimum number of EC2 instances.
249
What are Purchase options?
Includes multiple purchase models, instance types, and Availability Zones.
250
What is Amazon EC2 availability?
Comes with the Amazon EC2 service.
251
What are the characteristics of the Elastic Load Balancer (ELB) with Amazon EC2 Auto Scaling?
The ELB service integrates seamlessly with Amazon EC2 Auto Scaling. As soon as a new EC2 instance is added to or removed from the Amazon EC2 Auto Scaling group, ELB is notified. However, before ELB can send traffic to a new EC2 instance, it needs to validate that the application running on the EC2 instance is available.
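The validation mentioned above is driven by the health-check settings on the load balancer's target group. A minimal sketch of those settings, shaped like the corresponding ELB target-group fields; the path and threshold numbers are hypothetical example values, not prescriptions.

```python
# Health-check settings ELB uses to decide whether a new EC2 instance is
# ready to receive traffic. Path and threshold values are hypothetical.
health_check = {
    "HealthCheckProtocol": "HTTP",
    "HealthCheckPath": "/health",      # endpoint the application must answer
    "HealthCheckIntervalSeconds": 30,  # how often each target is probed
    "HealthyThresholdCount": 3,        # consecutive successes before "healthy"
    "UnhealthyThresholdCount": 2,      # consecutive failures before "unhealthy"
}
print(health_check)
```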
252
What are the THREE components of Amazon EC2 Auto Scaling?
* Launch template or configuration: Which resources should be automatically scaled? * Amazon EC2 Auto Scaling groups: Where should the resources be deployed? * Scaling policies: When should the resources be added or removed?
253
What is used to launch the new EC2 instance used by Auto Scaling to ensure that they have the same configuration?
Launch templates and configurations

Multiple parameters are required to create an EC2 instance: the Amazon Machine Image (AMI) ID, instance type, security group, additional Amazon EBS volumes, and more. Amazon EC2 Auto Scaling also needs this information to create EC2 instances on your behalf when there is a need to scale, and it is stored in a launch template. You can use a launch template to launch EC2 instances manually or with Amazon EC2 Auto Scaling. Launch templates also support versioning, which you can use to roll back quickly if there's an issue, or to specify a default version of the template. That way, while you iterate on a new version, other users can continue launching EC2 instances from the default version until you make the necessary changes.
254
What are the THREE different ways you can create a launch template?
* Use an existing EC2 instance: all the settings are already defined.
* Create one from an existing template or from a previous version of a launch template.
* Create a template from scratch, defining these parameters: AMI ID, instance type, key pair, security group, storage, and resource tags.

Another way to define what Amazon EC2 Auto Scaling should scale is a launch configuration. It's similar to a launch template, but you cannot use a previously created launch configuration as a template, and you cannot create a launch configuration from an existing EC2 instance. For these reasons, and to ensure that you get the latest Amazon EC2 features, AWS recommends using launch templates instead of launch configurations.
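As a rough sketch, the from-scratch parameters listed above line up with the LaunchTemplateData payload of the EC2 CreateLaunchTemplate API. Every identifier below (AMI ID, key pair, security group ID) is a hypothetical placeholder.

```python
# Sketch of a launch template request. All identifiers are hypothetical
# placeholders; in practice this dict would be passed to an EC2 client's
# create_launch_template call.

def build_launch_template(name, description):
    return {
        "LaunchTemplateName": name,
        "VersionDescription": description,  # versioning allows quick rollback
        "LaunchTemplateData": {
            "ImageId": "ami-0123456789abcdef0",        # AMI ID
            "InstanceType": "t3.micro",                # instance type
            "KeyName": "my-key-pair",                  # key pair
            "SecurityGroupIds": ["sg-0123456789abcdef0"],
            "BlockDeviceMappings": [{                  # storage
                "DeviceName": "/dev/xvda",
                "Ebs": {"VolumeSize": 20, "VolumeType": "gp3"},
            }],
            "TagSpecifications": [{                    # resource tags
                "ResourceType": "instance",
                "Tags": [{"Key": "env", "Value": "dev"}],
            }],
        },
    }

template = build_launch_template("web-tier", "initial version")
print(template["LaunchTemplateName"])
```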
255
What are EC2 Auto Scaling groups?
An Auto Scaling group defines where Amazon EC2 Auto Scaling deploys your resources: you specify the Amazon Virtual Private Cloud (Amazon VPC) and the subnets in which EC2 instances should be launched. Amazon EC2 Auto Scaling creates the EC2 instances across those subnets, so select at least two subnets in different Availability Zones. With Auto Scaling groups, you can also specify the type of purchase for the EC2 instances: On-Demand Instances, Spot Instances, or a combination of the two, which lets you take advantage of Spot Instances with minimal administrative overhead.
256
To specify how many instances Amazon EC2 Auto Scaling should launch, what are the THREE capacity settings to configure for the group size?
1) Minimum capacity
2) Desired capacity
3) Maximum capacity

Minimum capacity: The minimum number of instances running in your Auto Scaling group, even if the threshold for lowering the number of instances is reached. When traffic is minimal, Amazon EC2 Auto Scaling keeps removing EC2 instances until it reaches the minimum capacity; after that, even if it is instructed to remove an instance, it does not. This ensures the minimum is kept.

Desired capacity: The number of EC2 instances that Amazon EC2 Auto Scaling creates at the time the group is created. This number must lie between the minimum and maximum, inclusive. If it decreases, Amazon EC2 Auto Scaling removes the oldest instance by default; if it increases, Amazon EC2 Auto Scaling creates new instances from the launch template.

Maximum capacity: The maximum number of instances running in your Auto Scaling group, even if the threshold for adding new instances is reached. When traffic keeps growing, Amazon EC2 Auto Scaling keeps adding EC2 instances, which means the cost of your application keeps growing too. Set a maximum to ensure it doesn't go above your budget.
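The three capacity settings can be pictured as fields of an Auto Scaling group request. A minimal sketch follows; the group name, template name, subnet IDs, and numbers are all hypothetical example values.

```python
# Sketch of an Auto Scaling group request showing the three capacity
# settings. Names, subnet IDs, and capacity numbers are hypothetical.

def build_asg(name, template_name):
    min_cap, desired_cap, max_cap = 2, 2, 6
    # Desired capacity must lie between minimum and maximum (inclusive).
    assert min_cap <= desired_cap <= max_cap
    return {
        "AutoScalingGroupName": name,
        "LaunchTemplate": {"LaunchTemplateName": template_name,
                           "Version": "$Default"},
        "MinSize": min_cap,              # never scale in below this
        "DesiredCapacity": desired_cap,  # instances created at group creation
        "MaxSize": max_cap,              # never scale out above this (cost cap)
        # At least two subnets across different Availability Zones.
        "VPCZoneIdentifier": "subnet-0aaa1111,subnet-0bbb2222",
    }

asg = build_asg("web-tier-asg", "web-tier")
print(asg["MinSize"], asg["DesiredCapacity"], asg["MaxSize"])
```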
257
What are Scaling policies used for?
Scaling policies

By default, an Auto Scaling group keeps its initial desired capacity. While you can change the desired capacity manually, you can also use scaling policies. As covered in the Monitoring lesson, CloudWatch metrics track attributes of your EC2 instances, such as CPU utilization, and CloudWatch alarms specify an action when a threshold is reached. Metrics and alarms are what scaling policies use to know when to act. For example, you can set up an alarm that fires when CPU utilization is above 70 percent across the entire fleet of EC2 instances, and have it invoke a scaling policy that adds an EC2 instance.
258
What are the THREE types of available Scaling Policies?
1) Simple scaling policy
2) Step scaling policy
3) Target tracking scaling policy

Simple scaling policy: You use a CloudWatch alarm and specify what to do when it is invoked: add or remove a number of EC2 instances, or set the desired capacity to a specific number. You can also specify a percentage of the group instead of a number of instances, which makes the group grow or shrink more quickly. After the policy is invoked, it enters a cooldown period before taking any other action. This is important because EC2 instances take time to start, and the CloudWatch alarm might still be firing while a new instance boots. For example, you might add an EC2 instance when CPU utilization across all instances is above 65 percent; you don't want to add more instances until that new instance is accepting traffic. But what if CPU utilization is now above 85 percent across the Auto Scaling group? Adding one instance might not be the right move, and a simple scaling policy can't add another step. This is where a step scaling policy helps.

Step scaling policy: Step scaling policies respond to additional alarms even while a scaling activity or health-check replacement is in progress. Continuing the example, you might add two more instances when CPU utilization reaches 85 percent and four more when it reaches 95 percent.

Target tracking scaling policy: Deciding when to add and remove instances based on CloudWatch alarms can be a difficult task, which is why this third type exists. If your application scales based on average CPU utilization, average network utilization (in or out), or request count, use a target tracking scaling policy: you provide only the target value to track, and it automatically creates the required CloudWatch alarms.
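For instance, a target tracking policy that holds average CPU near 70 percent comes down to a single payload, shaped like the Auto Scaling PutScalingPolicy API; the group and policy names below are hypothetical placeholders.

```python
# Sketch of a target tracking scaling policy: you supply only the target
# value, and the required CloudWatch alarms are created automatically.
# Group and policy names are hypothetical placeholders.

def build_target_tracking_policy(group, target_cpu):
    return {
        "AutoScalingGroupName": group,
        "PolicyName": "keep-cpu-near-target",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingConfiguration": {
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization",
            },
            "TargetValue": target_cpu,  # e.g. keep average CPU near 70%
        },
    }

policy = build_target_tracking_policy("web-tier-asg", 70.0)
print(policy["PolicyType"])
```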
259
1. Which AWS service is used to decouple components of a microservices architecture? A. EC2 B. S3 C. SQS D. RDS
C. SQS ✅
260
2. What does Amazon S3 provide? A. Structured data storage B. Object storage C. Block storage D. File system storage
B. Object storage ✅
261
3. Which AWS database service is best suited for storing key-value pairs? A. Amazon Aurora B. Amazon RDS C. Amazon Redshift D. Amazon DynamoDB
D. Amazon DynamoDB ✅
262
4. What does an AWS IAM policy define? A. Who can manage billing B. Permissions for users and roles C. EC2 pricing D. Data replication strategy
B. Permissions for users and roles ✅
263
5. Which AWS service provides a content delivery network (CDN)? A. Route 53 B. CloudWatch C. CloudFront D. Elastic Beanstalk
C. CloudFront ✅
264
6. How can you automatically scale EC2 instances? A. CloudTrail B. Auto Scaling C. CloudWatch D. ECS
B. Auto Scaling ✅
265
7. Which AWS service enables infrastructure as code? A. CloudTrail B. CloudFormation C. OpsWorks D. Systems Manager
B. CloudFormation ✅
266
8. What is the default limit for IAM users per AWS account? A. 100 B. 500 C. Unlimited D. 5000
D. 5000 ✅
267
9. Which feature of Amazon RDS supports automatic failover? A. Read Replica B. Multi-AZ Deployment C. RDS Snapshot D. Backups
B. Multi-AZ Deployment ✅
268
10. What is the maximum object size allowed in S3? A. 1 GB B. 5 GB C. 5 TB D. Unlimited
C. 5 TB ✅
269
11. Which service lets you run Docker containers without managing servers? A. Amazon EC2 B. AWS Lambda C. AWS Fargate D. Amazon EBS
C. AWS Fargate ✅
270
12. What AWS service can be used to perform real-time log analysis? A. CloudTrail B. CloudWatch Logs C. S3 D. CloudFormation
B. CloudWatch Logs ✅
271
13. What service allows you to register domain names in AWS? A. Route 53 B. CloudFront C. ACM D. ElastiCache
A. Route 53 ✅
272
14. Which AWS storage class offers the lowest cost for rarely accessed data? A. S3 Standard B. S3 Intelligent-Tiering C. S3 Glacier Deep Archive D. S3 One Zone-IA
C. S3 Glacier Deep Archive ✅
273
15. What is the purpose of Amazon EC2 Spot Instances? A. Long-term consistent workloads B. Temporary capacity with lower cost C. Reserved pricing model D. Persistent data backups
B. Temporary capacity with lower cost ✅
274
16. Which AWS service allows hosting relational databases? A. S3 B. DynamoDB C. RDS D. Redshift
C. RDS ✅
275
17. What does AWS Shield protect against? A. Malware B. Network intrusions C. DDoS attacks D. SQL injections
C. DDoS attacks ✅
276
18. What AWS feature ensures even distribution of traffic across multiple resources? A. CloudTrail B. Load Balancer C. CloudWatch D. Auto Scaling
B. Load Balancer ✅
277
19. What is the durability of S3 Standard storage class? A. 99.9% B. 99.99% C. 99.999999999% (11 9's) D. 100%
C. 99.999999999% (11 9's) ✅
278
20. Which database is best for data warehousing on AWS? A. RDS B. DynamoDB C. ElastiCache D. Redshift
D. Redshift ✅
279
21. How can you provide temporary access to an S3 bucket? A. IAM policy B. ACL C. Pre-signed URL D. S3 Lifecycle Policy
C. Pre-signed URL ✅
280
22. Which AWS service provides global DNS? A. CloudFront B. Route 53 C. ElastiCache D. CloudWatch
B. Route 53 ✅
281
23. What does Amazon CloudWatch do? A. Monitors AWS resources and applications B. Manages EC2 instances C. Provides DNS D. Manages S3 buckets
A. Monitors AWS resources and applications ✅
282
24. Which service allows event-driven code execution? A. EC2 B. Lambda C. SQS D. CloudFormation
B. Lambda ✅
283
25. Which tool helps estimate AWS costs? A. Billing Console B. Cost Explorer C. Pricing Calculator D. Budgets
C. Pricing Calculator ✅
284
26. What kind of storage is Amazon EBS? A. File B. Object C. Block D. Relational
C. Block ✅
285
27. What does an EC2 security group act as? A. VPN B. Firewall C. Load balancer D. Encryption key
B. Firewall ✅
286
28. What service provides push-based messaging to users? A. SNS B. SQS C. Lambda D. CloudTrail
A. SNS ✅
287
29. How can you make EC2 instance failover automatic? A. Elastic IP B. Placement Group C. Auto Scaling with health checks D. EBS snapshot
C. Auto Scaling with health checks ✅
288
30. Which type of RDS storage is best for high IOPS? A. Magnetic B. General Purpose C. Provisioned IOPS D. Cold HDD
C. Provisioned IOPS ✅
289
31. What can you use to automate patching in AWS? A. CloudTrail B. Systems Manager C. OpsWorks D. CloudWatch
B. Systems Manager ✅
290
32. What feature allows cross-region S3 replication? A. Bucket Policy B. Lifecycle Policy C. Versioning + Replication D. Static Website Hosting
C. Versioning + Replication ✅
291
33. Which service is used for automated backups of on-prem servers? A. Snowball B. Backup C. CloudFormation D. DMS
B. Backup ✅
292
34. Which is a managed NoSQL database? A. RDS B. DynamoDB C. Aurora D. Redshift
B. DynamoDB ✅
293
35. What is required for cross-account S3 access? A. IAM Role with trust policy B. S3 policy C. EC2 metadata D. Glacier Vault
A. IAM Role with trust policy ✅
294
36. Which service allows orchestration of container services? A. Lambda B. ECS C. EC2 D. Elastic Beanstalk
B. ECS ✅
295
37. What is the role of NAT Gateway? A. Translate private IPs for outbound access B. Provide DNS C. Allow VPN D. Provide TLS encryption
A. Translate private IPs for outbound access ✅
296
38. What service allows secure access to EC2 instances without SSH? A. Systems Manager Session Manager B. Bastion Host C. VPN D. Direct Connect
A. Systems Manager Session Manager ✅
297
39. What does AWS Artifact provide? A. Container registry B. Compliance reports C. Code deployment D. Encryption
B. Compliance reports ✅
298
40. What is a VPC peering connection? A. Public network B. VPN C. Private connection between VPCs D. IAM role
C. Private connection between VPCs ✅
299
41. Which service allows you to deploy applications quickly? A. CloudFormation B. Elastic Beanstalk C. EC2 D. CloudFront
B. Elastic Beanstalk ✅
300
42. What is an Availability Zone? A. A region B. A data center C. An S3 bucket D. A VPC
B. A data center ✅
301
43. How can you isolate resources in AWS? A. By Region B. By Availability Zone C. By VPC D. By Subnet
C. By VPC ✅
302
44. What service can automatically restart EC2 instances? A. Auto Scaling B. Lambda C. Systems Manager D. CloudWatch
A. Auto Scaling ✅
303
45. What does AWS WAF protect against? A. DDoS B. SQL injection C. Data loss D. Packet sniffing
B. SQL injection ✅
304
46. Which service supports Blue/Green deployments? A. EC2 B. CodeDeploy C. CloudWatch D. S3
B. CodeDeploy ✅
305
47. Which service is ideal for long-term cold storage? A. S3 Standard B. EBS C. S3 Glacier D. RDS
C. S3 Glacier ✅
306
48. What helps you identify unused resources in AWS? A. Trusted Advisor B. CloudWatch C. CloudTrail D. Billing Dashboard
A. Trusted Advisor ✅
307
49. How can you encrypt data at rest in RDS? A. Using IAM B. KMS C. ACL D. CloudTrail
B. KMS ✅
308
50. Which AWS feature reduces latency by using edge locations? A. Route 53 B. VPC C. CloudFront D. EC2 Placement Group
C. CloudFront ✅
309
51. What helps in disaster recovery for EC2 instances? A. Elastic IP B. EBS Snapshot C. Auto Scaling D. NAT Gateway
B. EBS Snapshot ✅
310
52. Which service provides event-driven serverless architecture? A. Lambda B. EC2 C. SQS D. VPC
A. Lambda ✅
311
53. What allows private access to AWS services from a VPC? A. NAT Gateway B. Internet Gateway C. VPC Endpoint D. Route 53
C. VPC Endpoint ✅
312
54. What helps ensure compliance and governance in AWS? A. CloudTrail B. Config C. CloudWatch D. Lambda
B. Config ✅
313
55. What defines a virtual network in AWS? A. EC2 B. VPC C. Subnet D. Route Table
B. VPC ✅
314
56. What is the maximum number of VPCs per region per account (default)? A. 5 B. 10 C. 20 D. 100
A. 5 ✅
315
57. What is a benefit of Reserved Instances? A. Flexibility B. Pay-as-you-go C. Cost savings for steady-state usage D. On-demand elasticity
C. Cost savings for steady-state usage ✅
316
58. How can you restrict S3 access to specific IPs? A. IAM Role B. S3 ACL C. Bucket Policy D. Glacier Vault
C. Bucket Policy ✅
317
59. Which tool helps visualize AWS architecture? A. CloudFormation B. Trusted Advisor C. AWS Architecture Diagram Tool D. Systems Manager
C. AWS Architecture Diagram Tool ✅
318
60. What AWS service simplifies deployment of ML models? A. SageMaker B. Lambda C. Redshift D. Athena
A. SageMaker ✅
319
61. What is AWS Direct Connect used for? A. Internet access B. Data replication C. Private network connection to AWS D. Route optimization
C. Private network connection to AWS ✅
320
62. What AWS service can run SQL queries on S3 data? A. DynamoDB B. Athena C. Redshift D. RDS
B. Athena ✅
321
63. What is a key benefit of cloud computing? A. Manual scaling B. Fixed pricing C. On-demand scalability D. Single-tenant isolation
C. On-demand scalability ✅
322
64. What does AWS DMS do? A. Data lake analysis B. Migrate databases C. Monitor RDS D. Move EC2s
B. Migrate databases ✅
323
65. How can you prevent accidental S3 deletion? A. Versioning B. Encryption C. ACL D. Lifecycle policy
A. Versioning ✅
324
66. Which AWS service allows real-time analytics of streaming data? A. Kinesis B. Athena C. Redshift D. SQS
A. Kinesis ✅
325
67. What is an Elastic IP? A. Static IP for EC2 instance B. Private IP in VPC C. Public DNS name D. IPv6 address
A. Static IP for EC2 instance ✅
326
68. What is the purpose of Route 53 health checks? A. Monitor application health B. Manage S3 lifecycle C. Manage IAM policies D. Enable CloudWatch alarms
A. Monitor application health ✅
327
69. Which storage is best suited for databases requiring low latency? A. S3 B. EBS C. Glacier D. Snowball
B. EBS ✅
328
70. What is an Amazon Machine Image (AMI)? A. Backup snapshot B. Template for launching EC2 instances C. DNS record D. IAM role
B. Template for launching EC2 instances ✅
329
71. Which AWS service offers a managed Kubernetes service? A. ECS B. EKS C. Lambda D. Elastic Beanstalk
B. EKS ✅
330
72. Which AWS service provides centralized logging? A. CloudTrail B. CloudFormation C. S3 D. EC2
A. CloudTrail ✅
331
73. What protocol does AWS VPN support? A. FTP B. IPSec C. HTTP D. SSH
B. IPSec ✅
332
74. Which service helps protect AWS accounts from unintended usage? A. IAM B. AWS Organizations C. CloudWatch D. Config
B. AWS Organizations ✅
333
75. What is a CloudFormation stack? A. Set of IAM policies B. Collection of AWS resources managed as a single unit C. EC2 cluster D. S3 bucket group
B. Collection of AWS resources managed as a single unit ✅
334
76. What does Amazon Inspector do? A. Network monitoring B. Security vulnerability assessment C. Cost optimization D. Deployment automation
B. Security vulnerability assessment ✅
335
77. What is the minimum size of an S3 object? A. 0 bytes (empty object allowed) B. 1 KB C. 1 MB D. 5 MB
A. 0 bytes (empty object allowed) ✅
336
78. What type of database is Amazon Aurora? A. NoSQL B. Relational (MySQL/PostgreSQL compatible) C. Key-value D. Data warehouse
B. Relational (MySQL/PostgreSQL compatible) ✅
337
79. What is AWS CloudTrail used for? A. Monitoring resource health B. Auditing API calls and user activity C. Data migration D. Cost management
B. Auditing API calls and user activity ✅
338
80. What is an AWS IAM role? A. User account B. Set of permissions that can be assumed by entities C. EC2 instance type D. Billing policy
B. Set of permissions that can be assumed by entities ✅
339
81. Which AWS service allows automatic detection and response to security threats? A. GuardDuty B. Shield C. WAF D. Inspector
A. GuardDuty ✅
340
82. What is the difference between Security Groups and Network ACLs? A. Security Groups are stateful; Network ACLs are stateless B. Both are stateful C. Security Groups work at subnet level D. Network ACLs work only with VPC endpoints
A. Security Groups are stateful; Network ACLs are stateless ✅
341
83. Which AWS service helps migrate databases? A. Database Migration Service (DMS) B. Data Pipeline C. Athena D. Glue
A. Database Migration Service (DMS) ✅
342
84. How do you restrict access to S3 bucket only from a specific VPC? A. IAM policy B. Bucket policy with VPC condition C. Security Group D. Network ACL
B. Bucket policy with VPC condition ✅
343
85. Which AWS service can you use to manage SSL/TLS certificates? A. AWS Certificate Manager (ACM) B. CloudFront C. IAM D. WAF
A. AWS Certificate Manager (ACM) ✅
344
86. What type of load balancer supports WebSocket? A. Classic Load Balancer B. Network Load Balancer C. Application Load Balancer D. Gateway Load Balancer
C. Application Load Balancer ✅
345
87. What is an AWS Lambda execution environment? A. Physical server B. Managed runtime environment where Lambda functions run C. EC2 instance D. Container registry
B. Managed runtime environment where Lambda functions run ✅
346
88. Which AWS service provides managed Redis and Memcached? A. ElastiCache B. DynamoDB C. RDS D. S3
A. ElastiCache ✅
347
89. What is a placement group in AWS? A. Logical grouping of instances to optimize network performance B. Load balancer C. VPC subnet D. IAM role group
A. Logical grouping of instances to optimize network performance ✅
348
90. What is the default VPC in each AWS region? A. Custom VPC created by user B. Default network where instances can be launched without explicit VPC configuration C. Private network only D. Region-wide subnet
B. Default network where instances can be launched without explicit VPC configuration ✅
349
91. What is the maximum retention period of CloudWatch Logs? A. 7 days B. 30 days C. 365 days D. Indefinite (configurable)
D. Indefinite (configurable) ✅
350
92. What does Amazon Athena use to query data? A. SQL B. NoSQL C. Java
A. SQL ✅
351
You need to host a static website with low latency worldwide. Which AWS service should you use? A) Amazon EC2 B) Amazon S3 + CloudFront C) AWS Lambda D) Amazon RDS
Answer: B) Amazon S3 + CloudFront
352
Your application requires a relational database with high availability across multiple Availability Zones. Which AWS service is best? A) Amazon RDS Multi-AZ Deployment B) Amazon DynamoDB C) Amazon Redshift D) Amazon Aurora Global Database
Answer: A) Amazon RDS Multi-AZ Deployment
353
You have an application that needs to process messages asynchronously with guaranteed delivery and high throughput. Which service should you use? A) Amazon SQS B) AWS Lambda C) Amazon SNS D) Amazon Kinesis
A) Amazon SQS
354
Your web application must scale automatically based on traffic. Which service and feature help you achieve this? A) Amazon RDS Multi-AZ B) AWS CloudTrail C) Amazon EC2 with Auto Scaling Groups D) Amazon S3 Lifecycle Policies
C) Amazon EC2 with Auto Scaling Groups
355
You want to restrict access to your Amazon S3 bucket so that only users from a specific VPC can access it. Which feature should you use? A) S3 Access Control List (ACL) B) IAM User Policies C) AWS WAF D) S3 Bucket Policy with VPC Endpoint Condition
D) S3 Bucket Policy with VPC Endpoint Condition
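A bucket policy with a VPC endpoint condition looks roughly like the following; the bucket name and endpoint ID are hypothetical placeholders. It denies all S3 actions unless the request arrives through the named VPC endpoint (`aws:SourceVpce` is the relevant condition key).

```python
# Sketch of an S3 bucket policy that denies access unless the request
# arrives through a specific VPC endpoint. The bucket name and the
# endpoint ID are hypothetical placeholders.
import json

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnlessFromVpcEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:::example-bucket",
                     "arn:aws:s3:::example-bucket/*"],
        "Condition": {
            "StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}
        },
    }],
}

print(json.dumps(bucket_policy, indent=2))
```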
356
A company wants to migrate a large amount of data (>100 TB) to AWS securely and with minimal internet usage. Which option is best? A) AWS DataSync B) AWS Snowball C) AWS Direct Connect D) Amazon S3 Transfer Acceleration
B) AWS Snowball
357
You need to deploy an application in a VPC with private subnets and allow outbound internet access without exposing instances to inbound internet traffic. What should you configure? A) Internet Gateway B) VPN Connection C) NAT Gateway in public subnet D) VPC Peering
C) NAT Gateway in public subnet
358
Which storage service is best for a highly durable, low-cost data archive? A) Amazon EBS B) Amazon EFS C) Amazon S3 Glacier Deep Archive D) Amazon S3 Standard
C) Amazon S3 Glacier Deep Archive
359
You want to monitor API activity and log all calls made to your AWS account for security auditing. Which service should you use? A) AWS CloudTrail B) AWS Config C) Amazon CloudWatch D) AWS Trusted Advisor
A) AWS CloudTrail
360
Which AWS service provides serverless compute for running code in response to events without provisioning servers? A) Amazon EC2 B) AWS Lambda C) AWS Elastic Beanstalk D) Amazon Lightsail
B) AWS Lambda
361
You have a web app behind an Application Load Balancer (ALB) and want to block IP addresses that show malicious behavior. What service do you use? A) AWS WAF B) Amazon GuardDuty C) AWS Shield D) AWS Firewall Manager
A) AWS WAF
362
Which database service is best for a highly scalable NoSQL key-value store with single-digit millisecond latency? A) Amazon DynamoDB B) Amazon RDS C) Amazon Redshift D) Amazon ElastiCache
A) Amazon DynamoDB
363
You want to ensure your data at rest in S3 is encrypted using keys you manage yourself. What option should you select? A) SSE-S3 B) SSE-KMS C) Client-Side Encryption D) SSE-C (Server-Side Encryption with Customer-Provided Keys)
D) SSE-C
364
An application requires sub-millisecond latency for caching frequently accessed data. Which AWS service fits best? A) Amazon ElastiCache (Redis or Memcached) B) Amazon RDS C) Amazon S3 D) AWS Glue
A) Amazon ElastiCache
365
You need to manage infrastructure as code on AWS with easy provisioning and updates. What tool should you use? A) AWS CloudFormation B) AWS Config C) AWS Systems Manager D) AWS Trusted Advisor
A) AWS CloudFormation
366
Which AWS service allows you to run containerized applications without managing servers or clusters? A) Amazon EC2 B) Amazon EKS C) AWS Fargate D) Amazon ECS with EC2 launch type
C) AWS Fargate
367
You want to implement multi-factor authentication (MFA) for all users in your AWS account. How do you enforce this? A) Use IAM policies with MFA conditions B) Use AWS Shield C) Use AWS Organizations Service Control Policies D) Use Amazon Cognito
A) Use IAM policies with MFA conditions
368
Your application needs to analyze streaming data in real-time. Which AWS service would you use? A) AWS Glue B) Amazon Redshift C) Amazon Kinesis Data Analytics D) AWS Data Pipeline
C) Amazon Kinesis Data Analytics
369
You want to secure sensitive data in transit between EC2 instances in different AZs. What should you do? A) Use TLS/SSL for application communication B) Use Amazon VPC Peering C) Use IAM roles D) Use security groups
A) Use TLS/SSL for application communication
370
Your application runs on EC2 instances and needs to fetch secrets securely without hardcoding them. What should you use? A) AWS Secrets Manager B) Amazon S3 C) AWS Systems Manager Parameter Store without encryption D) Environment variables
A) AWS Secrets Manager
371
You need to transfer data over a private, dedicated network connection with consistent performance. Which AWS service is ideal? A) VPN over the internet B) AWS Direct Connect C) AWS Snowball D) Amazon CloudFront
B) AWS Direct Connect
372
Your company wants to deploy a fault-tolerant web application across multiple regions. What AWS service helps route users to the nearest healthy endpoint? A) Amazon Route 53 with latency-based routing B) Amazon CloudFront C) Elastic Load Balancer D) AWS Global Accelerator
A) Amazon Route 53 with latency-based routing
373
Which AWS service provides data warehousing and fast querying for petabyte-scale datasets? A) Amazon Redshift B) Amazon RDS C) Amazon DynamoDB D) Amazon Athena
A) Amazon Redshift
374
You want to monitor CPU utilization and memory usage of your EC2 instances in real-time. Which service(s) do you use? A) Amazon CloudWatch (with custom metrics for memory) B) AWS CloudTrail C) AWS Config D) AWS Trusted Advisor
A) Amazon CloudWatch (with custom metrics for memory)
375
Your application needs to run in a private subnet and communicate securely with Amazon S3 without using the internet. How can you enable this? A) Use NAT Gateway B) Create a VPC Endpoint for S3 C) Use an Internet Gateway D) Use VPN
B) Create a VPC Endpoint for S3
376
You need a database with fast read replicas and automatic failover for a globally distributed application. Which AWS service fits best? A) Amazon Aurora Global Database B) Amazon RDS Multi-AZ C) Amazon DynamoDB Global Tables D) Amazon Redshift
A) Amazon Aurora Global Database
377
You want to automate patch management for EC2 instances running Windows and Linux. What AWS service should you use? A) AWS Config B) AWS CloudTrail C) AWS Systems Manager Patch Manager D) AWS OpsWorks
C) AWS Systems Manager Patch Manager
378
Which AWS service provides a centralized view of security findings across multiple accounts? A) AWS Security Hub B) Amazon GuardDuty C) AWS WAF D) AWS Shield
A) AWS Security Hub
379
You want to reduce data transfer costs between AWS services within the same region. Which is the best practice? A) Use Internet Gateway B) Use NAT Gateway C) Use VPN D) Use VPC Endpoints
D) Use VPC Endpoints
380
You want to build a highly available, fault-tolerant web application using serverless components. Which architecture pattern should you choose? A) Amazon API Gateway + AWS Lambda + Amazon DynamoDB B) EC2 instances behind a load balancer C) Elastic Beanstalk with a relational database D) EC2 instances with Auto Scaling
A) Amazon API Gateway + AWS Lambda + Amazon DynamoDB
381
Your application needs to store files and have them accessible from multiple EC2 instances simultaneously with shared file system semantics. What should you use? A) Amazon S3 B) Amazon EFS C) Amazon EBS D) AWS Storage Gateway
B) Amazon EFS
382
Which IAM policy type is used to apply permissions directly to an AWS resource? A) Resource-based policy B) Identity-based policy C) Service Control Policy D) Session policy
A) Resource-based policy
383
Which of the following best describes an AWS Availability Zone? A) A geographical region B) An internet gateway C) A virtual private cloud D) A physically isolated data center within a region
D) A physically isolated data center within a region
384
Which AWS service allows you to centrally manage and enforce policies across multiple AWS accounts? A) AWS IAM B) AWS Organizations C) AWS CloudTrail D) AWS Trusted Advisor
B) AWS Organizations
385
How can you protect your EC2 instances from DDoS attacks? A) Enable CloudTrail B) Use AWS Shield and AWS WAF C) Enable Auto Scaling D) Use Amazon CloudWatch alarms
B) Use AWS Shield and AWS WAF
386
Your application requires a relational database with serverless compute and auto-scaling storage. Which AWS service should you choose? A) Amazon Aurora Serverless B) Amazon RDS Multi-AZ C) Amazon DynamoDB D) Amazon Redshift
A) Amazon Aurora Serverless
387
What is the best practice to secure data in transit between your VPC and your on-premises data center? A) Use public internet with SSL B) Use an IPsec VPN or AWS Direct Connect with MACsec C) Use AWS WAF D) Use AWS Shield
B) Use an IPsec VPN or AWS Direct Connect with MACsec
388
You want to configure an Amazon RDS instance to be encrypted at rest. Which option is necessary? A) Enable encryption when creating the RDS instance (cannot be enabled after creation) B) Enable encryption in the operating system C) Use client-side encryption only D) Enable encryption in Amazon S3
A) Enable encryption when creating the RDS instance (cannot be enabled after creation)
389
You need to ensure a Lambda function can access private resources in your VPC. What should you do? A) Configure the Lambda function to access the VPC subnets and security groups B) Attach an IAM role with S3 access C) Place the Lambda function in a public subnet D) Use a NAT Gateway with the Lambda function
A) Configure the Lambda function to access the VPC subnets and security groups
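Study note: in CloudFormation this is the `VpcConfig` property on the function. A rough sketch with placeholder subnet, security group, and role values — the execution role must also be able to create elastic network interfaces (the permissions in the AWS-managed `AWSLambdaVPCAccessExecutionRole` policy):

```json
{
  "Type": "AWS::Lambda::Function",
  "Properties": {
    "FunctionName": "private-resource-reader",
    "Runtime": "python3.12",
    "Handler": "index.handler",
    "Role": "arn:aws:iam::111122223333:role/lambda-vpc-role",
    "Code": {"ZipFile": "def handler(event, context): return 'ok'"},
    "VpcConfig": {
      "SubnetIds": ["subnet-0abc1234", "subnet-0def5678"],
      "SecurityGroupIds": ["sg-0aaa1111"]
    }
  }
}
```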
390
Which AWS service is best for long-term log archival and compliance? A) Amazon S3 Standard B) Amazon EBS C) AWS CloudTrail D) Amazon S3 Glacier
D) Amazon S3 Glacier
391
You want to manage software configurations and run commands across your fleet of EC2 instances. Which AWS service should you use? A) AWS Systems Manager B) AWS Config C) AWS CloudFormation D) AWS Lambda
A) AWS Systems Manager
392
Your web application needs to handle sudden spikes of traffic without manual intervention. What combination do you choose? A) Auto Scaling Group + Elastic Load Balancer B) Single EC2 instance C) Elastic Beanstalk without Auto Scaling D) Amazon S3 static hosting
A) Auto Scaling Group + Elastic Load Balancer
393
Which AWS service enables you to centrally audit and manage compliance of your AWS resources? A) AWS Config B) AWS CloudTrail C) AWS CloudWatch D) AWS Trusted Advisor
A) AWS Config
394
How can you ensure that objects uploaded to an S3 bucket are automatically encrypted? A) Use IAM Policies B) Use VPC Endpoint C) Use S3 Bucket Default Encryption D) Use AWS KMS only
C) Use S3 Bucket Default Encryption
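Study note: default encryption is a bucket-level configuration document (the payload of the S3 `PutBucketEncryption` API). A minimal sketch for SSE-S3 with AES-256:

```json
{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "AES256"
      }
    }
  ]
}
```

With this in place, any object uploaded without its own encryption header is encrypted automatically; swapping `AES256` for `aws:kms` (plus a key ID) switches the default to SSE-KMS.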
395
Your application needs a relational database that supports horizontal scaling with no downtime during scaling. Which AWS service? A) Amazon Aurora with Aurora Replicas B) Amazon RDS Multi-AZ C) Amazon DynamoDB D) Amazon Redshift
A) Amazon Aurora with Aurora Replicas
396
What AWS feature allows you to control what actions users and roles can perform on AWS resources? A) IAM Policies B) Security Groups C) VPC ACLs D) Route Tables
A) IAM Policies
397
Which of the following is a global AWS service? A) Amazon EC2 B) IAM C) Amazon RDS D) Amazon VPC
B) IAM
398
Which service can you use to migrate databases to AWS with minimal downtime? A) AWS Database Migration Service (DMS) B) AWS DataSync C) AWS Snowball D) Amazon S3 Transfer Acceleration
A) AWS Database Migration Service (DMS)
399
You need to enable server-side encryption with AWS managed keys for objects in an S3 bucket. What should you configure? A) SSE-KMS B) SSE-S3 (AES-256) C) SSE-C D) Client-side encryption
B) SSE-S3 (AES-256)
400
You want to enforce encryption on data in transit between your on-premises network and your AWS VPC. Which option would you choose? A) VPN connection using IPsec B) Public internet connection C) AWS Direct Connect without encryption D) AWS Snowball
A) VPN connection using IPsec
401
Which AWS service helps you analyze and visualize your logs and metrics? A) AWS Config B) AWS CloudTrail C) Amazon CloudWatch Logs and CloudWatch Dashboards D) AWS Trusted Advisor
C) Amazon CloudWatch Logs and CloudWatch Dashboards
402
Your application requires a distributed message bus for event-driven architecture. Which service should you select? A) Amazon EventBridge B) Amazon S3 C) AWS Lambda D) Amazon RDS
A) Amazon EventBridge
403
How can you ensure that EC2 instances automatically receive the latest security patches? A) Use AWS Systems Manager Patch Manager B) Use AWS Config C) Use AWS CloudTrail D) Use AWS Trusted Advisor
A) Use AWS Systems Manager Patch Manager
404
Which AWS service allows you to deploy infrastructure in multiple AWS accounts and regions with governance? A) AWS Control Tower B) AWS CloudFormation C) AWS Organizations D) AWS Systems Manager
A) AWS Control Tower
405
You want to reduce latency for your global users accessing your S3 bucket. What is the best practice? A) Use S3 Transfer Acceleration only B) Use Amazon CloudFront CDN in front of the S3 bucket C) Use Multi-AZ RDS D) Use AWS Direct Connect
B) Use Amazon CloudFront CDN in front of the S3 bucket
406
You want to provide temporary AWS credentials to mobile app users to access AWS resources securely. Which AWS service helps with this? A) Amazon Cognito B) IAM Users C) AWS Organizations D) AWS KMS
A) Amazon Cognito
407
You want to run a containerized application that requires control over the underlying EC2 instances. Which service should you use? A) Amazon ECS with EC2 launch type B) AWS Fargate C) AWS Lambda D) Amazon S3
A) Amazon ECS with EC2 launch type
408
You want to analyze data stored in S3 using SQL without moving data. Which AWS service should you use? A) Amazon Redshift B) Amazon EMR C) AWS Glue D) Amazon Athena
D) Amazon Athena
409
How can you ensure high availability for an application deployed in a single AWS region? A) Deploy instances in multiple Availability Zones B) Use a single EC2 instance C) Use a single Availability Zone with RDS Multi-AZ D) Use AWS Snowball
A) Deploy instances in multiple Availability Zones
410
Which AWS service allows you to manage encryption keys centrally? A) AWS Key Management Service (KMS) B) AWS Secrets Manager C) AWS IAM D) AWS CloudHSM
A) AWS Key Management Service (KMS)
411
You want to automatically replicate data across regions for disaster recovery. Which feature supports this in S3? A) S3 Cross-Region Replication (CRR) B) S3 Transfer Acceleration C) AWS DataSync D) AWS Snowball
A) S3 Cross-Region Replication (CRR)
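Study note: CRR is configured as a replication document on the source bucket; versioning must be enabled on both buckets. A rough sketch — the role ARN and destination bucket are placeholders:

```json
{
  "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
  "Rules": [
    {
      "Status": "Enabled",
      "Priority": 1,
      "Filter": {},
      "DeleteMarkerReplication": {"Status": "Disabled"},
      "Destination": {"Bucket": "arn:aws:s3:::dr-backup-bucket"}
    }
  ]
}
```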
412
Which AWS service provides a virtual network isolated from other AWS customers? A) AWS Direct Connect B) Amazon Route 53 C) Amazon VPC D) Amazon CloudFront
C) Amazon VPC
413
You want to restrict an IAM role’s access to only when MFA is used. How do you implement this? A) Add an MFA condition in the IAM policy B) Create a new IAM user with MFA C) Use AWS Organizations D) Use AWS WAF
A) Add an MFA condition in the IAM policy
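Study note: the common pattern is a Deny statement that fires when MFA is absent. A sketch of the condition block (this deny-all-without-MFA form follows the pattern AWS documents; scope `Action`/`Resource` down in practice):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "BoolIfExists": {
          "aws:MultiFactorAuthPresent": "false"
        }
      }
    }
  ]
}
```

`BoolIfExists` makes the deny apply both when the key is `false` and when it is missing entirely (e.g. long-term access keys).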
414
You want to simplify the deployment of your application code along with infrastructure. Which AWS service should you use? A) AWS Lambda B) Amazon EC2 Auto Scaling C) AWS Elastic Beanstalk D) AWS CloudFormation
C) AWS Elastic Beanstalk
415
Which AWS service allows you to inspect and filter HTTP requests to your application? A) AWS WAF B) AWS Shield C) Amazon GuardDuty D) AWS Config
A) AWS WAF
416
A company needs to design a fault-tolerant architecture for a web application. Select 3 answers. A. Deploy application servers in multiple Availability Zones B. Store static assets in Amazon S3 C. Use an Application Load Balancer D. Use EC2 Spot Instances only E. Use a single EC2 instance with an Elastic IP
✅ Answers: A, B, C
417
Which actions help reduce AWS costs? Select 3 answers. A. Use Auto Scaling B. Purchase Reserved Instances C. Store infrequently accessed data in S3 Standard D. Use Spot Instances for batch jobs E. Use S3 lifecycle policies
✅ Answers: A, B, D
418
How can a company ensure high availability for a database? Select 3 answers. A. Use Amazon RDS Multi-AZ B. Use EC2 with EBS C. Use Route 53 for DNS failover D. Enable automatic backups E. Use Aurora Global Database
✅ Answers: A, C, E
419
Which S3 features protect data? Select 3 answers. A. Versioning B. Multi-AZ replication C. Object Lock D. Server-side encryption E. Lifecycle configuration
✅ Answers: A, C, D
420
Which AWS services support a scalable mobile app backend? Select 3 answers. A. Amazon API Gateway B. AWS Lambda C. Amazon CloudFront D. AWS WAF E. AWS Snowball
✅ Answers: A, B, C
421
How can access to S3 data be restricted? Select 3 answers. A. S3 Bucket Policies B. S3 Access Points C. S3 Transfer Acceleration D. IAM policies E. AWS CloudTrail
✅ Answers: A, B, D
422
Which services help build decoupled architectures? Select 3 answers. A. Amazon SQS B. Amazon SNS C. AWS Lambda D. Amazon EC2 E. AWS CodeBuild
✅ Answers: A, B, C
423
Which services are serverless? Select 3 answers. A. Aurora B. Lambda C. S3 D. EC2 E. DynamoDB
✅ Answers: B, C, E
424
Which services are suitable for highly available web hosting? Select 3 answers. A. Amazon EC2 B. Application Load Balancer C. Amazon RDS D. Route 53 E. SQS
✅ Answers: A, B, D
425
How can you prevent deletion/modification of audit logs for 7 years? Select 3 answers. A. S3 Object Lock B. Glacier Vault Lock C. S3 Lifecycle Policies D. S3 Versioning E. AWS Backup
✅ Answers: A, B, D
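Study note: for the S3 Object Lock part of this answer, the bucket-level configuration (the payload of `PutObjectLockConfiguration`) looks roughly like the sketch below. Object Lock requires versioning and, for new buckets, must be enabled at bucket creation; `COMPLIANCE` mode prevents deletion or modification by any user, including root, until retention expires.

```json
{
  "ObjectLockEnabled": "Enabled",
  "Rule": {
    "DefaultRetention": {
      "Mode": "COMPLIANCE",
      "Years": 7
    }
  }
}
```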
426
Which services can trigger AWS Lambda? Select 3 answers. A. Amazon S3 B. Amazon RDS C. Amazon SQS D. API Gateway E. EC2 Auto Scaling
✅ Answers: A, C, D
427
Which AWS services provide global content delivery? Select 3 answers. A. Amazon CloudFront B. Route 53 C. AWS Global Accelerator D. S3 Transfer Acceleration E. AWS Direct Connect
✅ Answers: A, C, D
428
Which features help secure EC2 instances? Select 3 answers. A. Security Groups B. Network ACLs C. IAM roles D. CloudWatch E. VPC Peering
✅ Answers: A, B, C
429
Which AWS services support containerized applications? Select 3 answers. A. Amazon ECS B. Amazon EKS C. AWS Lambda D. AWS Fargate E. Amazon MQ
✅ Answers: A, B, D
430
Which AWS services are ideal for analytics? Select 3 answers. A. Amazon Athena B. AWS Glue C. Amazon Redshift D. AWS Inspector E. Amazon Aurora
✅ Answers: A, B, C
431
Which services support infrastructure automation? Select 3 answers. A. AWS CloudFormation B. AWS Systems Manager C. AWS CodePipeline D. AWS CloudTrail E. IAM
✅ Answers: A, B, C
432
Which services improve VPC network security? Select 3 answers. A. Network ACLs B. Security Groups C. AWS WAF D. VPC Flow Logs E. AWS Budgets
✅ Answers: A, B, D
433
Which services offer encryption at rest? Select 3 answers. A. S3 B. RDS C. Lambda D. EBS E. CloudTrail
✅ Answers: A, B, D
434
Which options are best for temporary credentials? Select 3 answers. A. IAM roles B. AWS STS C. IAM groups D. EC2 Instance Profiles E. Access keys
✅ Answers: A, B, D
435
Which are managed database services? Select 3 answers. A. Amazon Aurora B. Amazon DynamoDB C. Amazon RDS D. Amazon ElastiCache E. EC2 + MySQL
✅ Answers: A, B, C
436
Which services support horizontal scaling? Select 3 answers. A. EC2 with Auto Scaling B. Amazon DynamoDB C. Amazon RDS Multi-AZ D. AWS Lambda E. AWS Backup
✅ Answers: A, B, D
437
Which options help ensure secure API access? Select 3 answers. A. API Gateway with IAM authorization B. API Gateway with Lambda authorizers C. S3 signed URLs D. AWS CodeDeploy E. API Gateway with usage plans
✅ Answers: A, B, E
438
Which features improve read performance in Amazon RDS? Select 3 answers. A. Read replicas B. Multi-AZ deployments C. Caching with ElastiCache D. Aurora Global Database E. Increase IOPS on EBS
✅ Answers: A, C, D
439
Which services allow real-time processing of data streams? Select 3 answers. A. Amazon Kinesis B. AWS Lambda C. Amazon Athena D. Amazon MSK E. Amazon EC2
✅ Answers: A, B, D
440
Which services can distribute incoming traffic across resources? Select 3 answers. A. Application Load Balancer B. Network Load Balancer C. Route 53 D. CloudFront E. Amazon SNS
✅ Answers: A, B, C
441
Which AWS services support monitoring and observability? Select 3 answers. A. Amazon CloudWatch B. AWS X-Ray C. AWS CloudTrail D. Amazon Macie E. AWS Firewall Manager
✅ Answers: A, B, C
442
Which AWS features are commonly used for cost optimization? Select 3 answers. A. AWS Budgets B. AWS Cost Explorer C. AWS Trusted Advisor D. IAM Access Analyzer E. AWS Direct Connect
✅ Answers: A, B, C
443
Which AWS services support DevOps automation? Select 3 answers. A. AWS CodePipeline B. AWS CodeDeploy C. AWS CloudFormation D. AWS Glue E. Amazon MQ
✅ Answers: A, B, C
444
Which solutions help implement a hybrid cloud? Select 3 answers. A. AWS Direct Connect B. AWS Storage Gateway C. EC2 Auto Scaling D. Amazon VPC E. AWS VPN
✅ Answers: A, B, E
445
Which storage classes are ideal for infrequently accessed data? Select 3 answers. A. S3 Standard-IA B. S3 Glacier C. S3 Glacier Deep Archive D. S3 One Zone-IA E. S3 Standard
✅ Answers: A, B, C
446
Which AWS services help manage secrets and credentials? Select 3 answers. A. AWS Secrets Manager B. AWS Systems Manager Parameter Store C. IAM Access Analyzer D. AWS KMS E. AWS Config
✅ Answers: A, B, D
447
Which services can be used to store unstructured data? Select 3 answers. A. Amazon S3 B. Amazon DynamoDB C. Amazon EFS D. Amazon RDS E. Amazon EBS
✅ Answers: A, B, C
448
Which services offer automated backups? Select 3 answers. A. Amazon RDS B. Amazon DynamoDB C. Amazon EC2 with AWS Backup D. CloudTrail E. CloudWatch
✅ Answers: A, B, C
449
Which services can host containerized applications? Select 3 answers. A. Amazon ECS B. AWS Lambda C. AWS Fargate D. Amazon EKS E. Amazon SQS
✅ Answers: A, C, D
450
Which AWS services are global by default? Select 3 answers. A. IAM B. Amazon Route 53 C. Amazon CloudFront D. Amazon VPC E. EC2
✅ Answers: A, B, C
451
Which services provide managed message queuing? Select 3 answers. A. Amazon SQS B. Amazon SNS C. Amazon MQ D. Amazon RDS E. AWS KMS
✅ Answers: A, B, C
452
Which AWS services are commonly used for serverless web applications? Select 3 answers. A. AWS Lambda B. Amazon API Gateway C. Amazon S3 D. EC2 E. Amazon RDS
✅ Answers: A, B, C
453
Which AWS services support data warehousing? Select 3 answers. A. Amazon Redshift B. Amazon RDS C. AWS Glue D. Amazon Athena E. Amazon Aurora
✅ Answers: A, C, D
454
Which services offer data encryption in transit and at rest? Select 3 answers. A. Amazon S3 B. Amazon RDS C. AWS Lambda D. Amazon EC2 E. Amazon DynamoDB
✅ Answers: A, B, E
455
Which services are useful for edge computing? Select 3 answers. A. AWS Greengrass B. AWS IoT Core C. Amazon CloudFront D. AWS DataSync E. AWS Glue
✅ Answers: A, B, C
456
Which services support automatic scaling? Select 3 answers. A. AWS Lambda B. EC2 Auto Scaling C. DynamoDB D. Amazon EBS E. Amazon RDS Multi-AZ
✅ Answers: A, B, C
457
Which services support event-driven architectures? Select 3 answers. A. AWS Lambda B. Amazon SNS C. Amazon SQS D. Amazon CloudFront E. Amazon Route 53
✅ Answers: A, B, C
458
Which AWS services support data archiving? Select 3 answers. A. S3 Glacier B. S3 Glacier Deep Archive C. AWS Snowball D. Amazon EC2 E. Amazon CloudFront
✅ Answers: A, B, C
459
Which AWS tools assist with security auditing? Select 3 answers. A. AWS CloudTrail B. AWS Config C. Amazon Inspector D. AWS AppSync E. AWS Glue
✅ Answers: A, B, C
460
Which features help meet compliance requirements? Select 3 answers. A. AWS Artifact B. AWS Organizations C. AWS Config D. IAM Policies E. Amazon CloudWatch
✅ Answers: A, B, C
461
Which AWS services are managed compute options? Select 3 answers. A. AWS Fargate B. AWS Lambda C. Amazon EC2 D. Amazon EKS (with Fargate) E. Amazon CloudFront
✅ Answers: A, B, D
462
Which services can help secure a VPC? Select 3 answers. A. AWS Network Firewall B. AWS WAF C. Security Groups D. Amazon CloudFront E. Amazon Inspector
✅ Answers: A, B, C
463
Which options support multi-region high availability? Select 3 answers. A. Route 53 with health checks B. S3 Cross-Region Replication C. Aurora Global Database D. EC2 Placement Groups E. Amazon VPC
✅ Answers: A, B, C
464
Which services support infrastructure as code (IaC)? Select 3 answers. A. AWS CloudFormation B. AWS CDK C. Terraform D. AWS Shield E. AWS Config
✅ Answers: A, B, C
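Study note: the smallest useful CloudFormation template shows what IaC means in practice — the whole stack is a declarative document. A minimal sketch (the bucket's logical name is arbitrary):

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Minimal IaC example: one versioned S3 bucket",
  "Resources": {
    "LogArchiveBucket": {
      "Type": "AWS::S3::Bucket",
      "Properties": {
        "VersioningConfiguration": {"Status": "Enabled"}
      }
    }
  }
}
```

The CDK (answer B) generates templates like this from code; Terraform (answer C) achieves the same outcome with its own HCL format and state engine.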
465
Which AWS services can be used to migrate data? Select 3 answers. A. AWS Snowball B. AWS DataSync C. AWS Transfer Family D. AWS Glue E. AWS CloudTrail
✅ Answers: A, B, C