Study Guide--AWS Boot Camp Flashcards

1
Q

Why do customers move to AWS?

A

Customers move to AWS to increase agility.

  • Accelerate time to market – By spending less time acquiring and managing infrastructure, you can focus on developing features that deliver value to your customers.
  • Increase innovation – You can speed up your digital transformation by using AWS, which provides tools to more easily access the latest technologies and best practices. For example, you can use AWS to develop automations, adopt containerization, and use machine learning.
  • Scale seamlessly – You can provision additional resources to support new features and scale existing resources up or down to match demand.

Customers also move to AWS to reduce complexity and risk.

  • Optimize costs – You can reduce costs by paying for only what you use. Instead of paying for on-premises hardware, which you might not use at full capacity, you can pay for compute resources only while you’re using them.
  • Minimize security vulnerabilities – Moving to AWS puts your applications and data behind the advanced physical security of the AWS data centers. With AWS, you have many tools to manage access to your resources.
  • Reduce management complexity – Using AWS services can reduce the need to maintain physical data centers, perform hardware maintenance, and manage physical infrastructure.
2
Q

What are key test concerns for Lambda?

A

Lambda is the lightest-weight compute option, but it has limits: a function can't run longer than 15 minutes or use more than 10 GB of memory. If Lambda is in the answer, check the question for any time or resource limitations.

3
Q

A group of one or more data centers is called _________?

A

an Availability Zone.
An Availability Zone is one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region.

4
Q

What 4 factors do you use to determine the right Region for your services, applications, and data?

A

Governance and legal requirements – Consider any legal requirements based on data governance, sovereignty, or privacy laws.
Latency – Close proximity to customers means better performance.
Service availability – Not all AWS services are available in all Regions.
Cost – Different Regions have different costs. Research the pricing for the services that you plan to use and compare costs to make the best decision for your workloads.

5
Q

When should you consider using Local Zones?

A

You can use AWS Local Zones for highly demanding applications that require single-digit millisecond latency to end users. Examples include:
Media and entertainment content creation – Includes live production, video editing, and graphics-intensive virtual workstations for artists in geographic proximity
Real-time multiplayer gaming – Includes real-time multiplayer game sessions, to maintain a reliable gameplay experience
Machine learning hosting and training – For high-performance, low latency inferencing
Augmented reality (AR) and virtual reality (VR) – Includes immersive entertainment, data driven insights, and engaging virtual training experiences

NOTE: on the exam, if low latency is the driver, a Local Zone might be the best option; it lets you deploy subnets close to your users and resources.

6
Q

What are edge locations used for?

A

Edge locations are in major cities around the world. They receive requests and cache copies of your content for faster delivery.
To deliver content to end users with lower latency, you use a global network of edge locations that support AWS services. CloudFront delivers customer content through a worldwide network of point of presence (PoP) locations, which consists of edge locations and Regional edge cache servers.
Regional edge caches, used by default with CloudFront, are used when you have content that is not accessed frequently enough to remain in an edge location. Regional edge caches absorb this content and provide an alternative to the need to retrieve that content from the origin server.

exam – edge locations are associated with caching, while Local Zones offer compute, storage, databases, and other services close to users; both improve performance, but edge locations do it by caching content.

7
Q

One common use for edge locations is to ___________________.

A

serve content closer to your customers

exam – any question mentioning caching implies an edge location; watch for keywords on the exam! Use Local Zones for low-latency (single-digit millisecond) access.

8
Q

The ______________ helps cloud architects build secure, high-performing, resilient, and efficient application infrastructures.

A

AWS Well-Architected Framework

With the AWS Well-Architected Tool, you can gather data and get recommendations to:
* Minimize system failures and operational costs.
* Dive deep into business and infrastructure processes.
* Provide best practice guidance.
* Deliver on the cloud computing value proposition.

9
Q

What are the 6 well architected framework pillars?

A
  • Security – Use AWS security best practices to build policies and processes to protect data and assets. Allow auditing and traceability. Monitor, alert, and audit actions and changes to your environment in real time.
  • Cost optimization – Achieve cost efficiency while considering fluctuating resource needs.
  • Reliability – Meet well-defined operational thresholds for applications. This includes support to recover from failures, handling increased demand, and mitigating disruption.
  • Performance efficiency – Deliver efficient performance for a set of resources like instances, storage, databases, space, and time.
  • Operational excellence – Run and monitor systems that deliver business value. Continually improve supporting processes and procedures.
  • Sustainability – Minimize and understand your environmental impact when running cloud workloads.
10
Q

As a best practice, what should you require for your root user?

A
  1. Require multi-factor authentication (MFA).
  2. Set up an administrative IAM user for everyday tasks instead of signing in as the root user.
11
Q

___________ is a web service that helps you securely control access to AWS resources.

And what is it used for?

A

AWS Identity and Access Management (IAM)

Use IAM to control who is authenticated (signed in) and authorized (has permissions)

exam - IAM users sign in with an account ID or alias, a user name, and a password; only the root user signs in with an email address.

12
Q

A ___________ is an entity that can request an action or operation on an AWS resource

A

principal

Exam: users and principals don't have any privileges by default; it's best to grant permissions to groups and assign users to groups; use IAM roles for short-lived needs

Exam - if you see “temporary permissions” then it’s a ROLE

Exam- set up users for “long term” needs
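To make the "temporary permissions = role" point concrete, here is a minimal sketch (assuming the boto3 SDK; the role ARN is a placeholder, not from this guide) of assuming a role with AWS STS to obtain short-lived credentials:

import boto3

# Hypothetical role ARN for illustration only.
ROLE_ARN = "arn:aws:iam::111122223333:role/example-read-only"

sts = boto3.client("sts")

# AssumeRole returns temporary credentials (access key, secret key, session token)
# that expire after the requested duration.
response = sts.assume_role(
    RoleArn=ROLE_ARN,
    RoleSessionName="temporary-access-demo",
    DurationSeconds=900,  # 15 minutes
)

creds = response["Credentials"]
print("Temporary credentials expire at:", creds["Expiration"])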

13
Q

With IAM, Each user has their own ___________.

A

credentials

NOTE: by default, no access until granted

14
Q

Programmatic access gives your IAM user the credentials to make API calls in the AWS CLI or AWS SDKs. AWS provides an SDK for programming languages such as Java, Python, and .NET.
When programmatic access is granted to your IAM user, it creates _______________________ ?

A

a unique key pair that comprises an access key ID and a secret access key. Use your key pair to configure the AWS CLI, or make API calls through an AWS SDK.
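As a hedged illustration (the key values below are placeholders, not real credentials), this is how an access key pair might be supplied to an SDK session; in practice you would normally let the SDK pick up credentials from the AWS CLI configuration or an IAM role:

import boto3

# Placeholder key pair for illustration only; never hard-code real keys.
session = boto3.Session(
    aws_access_key_id="AKIAEXAMPLEACCESSKEY",
    aws_secret_access_key="exampleSecretAccessKeyValue",
    region_name="us-east-1",
)

# Any client created from this session signs API calls with that key pair.
s3 = session.client("s3")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])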

15
Q

An IAM _____________ is a collection of IAM users.

A

An IAM user group

NOTE: minimizes administrative load; permissions are cumulative: a user can be a member of more than one user group. For example, if Richard is a member of the Analysts group and the Billing group, he gets the permissions from both IAM user groups.

16
Q

IAM _________ deliver temporary AWS credentials.

A

roles; Use roles to delegate access to users, applications, or services that don’t normally have access to your AWS resources.

Exam - Roles are temporary; when a user assumes a role, they only have the permissions that are granted to the role and do not follow their group’s inherited permissions.

17
Q

What is used to give roles access to resources?

A

IAM Policy assignments

18
Q

_______ are attached to an identity or resource to define its permissions. AWS evaluates these when a principal, such as a user, makes a request.

A

policies

19
Q

What are the 4 security policy types?

A

Policy types
* Identity-based policies – Attach managed and inline policies to IAM identities. These identities include users, groups to which users belong, and roles.
* Resource-based policies – Attach inline policies to resources. The most common examples of resource-based policies are Amazon S3 bucket policies and IAM role trust policies.
* AWS Organizations service control policies (SCPs) – Use Organizations SCPs to define the maximum permissions for account members of an organization or organizational unit (OU).
* IAM permissions boundaries – AWS supports permissions boundaries for IAM entities (users or roles). Use IAM permissions boundaries to set the maximum permissions that an IAM entity can receive

Exam - know the difference: resource-based policies are attached to AWS resources; permissions boundaries are guardrails; SCPs don't grant permissions. To give access, you grant permissions (IAM identity-based policy and IAM resource-based policy); you set maximum permissions through IAM permissions boundaries and AWS Organizations service control policies (SCPs).

20
Q

_____________ policies are JSON permissions policy documents that control:
* Which actions an IAM identity (users, groups of users, and roles) can perform
* On which resources they can perform these actions
* Under what conditions they can perform these actions

A

Identity-based

Exam: know permissions boundaries; know that roles provide short-lived credentials; permissions are granted through two options: identity-based and resource-based policies

21
Q

When granting permissions:

A
  • Identity-based policies are assigned to users, groups, and roles.
  • Resource-based policies are assigned to resources.

NOTE:
* Resource-based policies are checked when someone tries to access the resource.

22
Q

Given the following Identity-based policy example, what access would you have?

A

you can attach the example policy statement to your IAM user. Then, that user is allowed to stop and start EC2 instances in your account if the condition is met. Here, the EC2 instances that your IAM user can control must have a tag with key Owner and value equal to the IAM user name.
In the Resource element, the policy lists an Amazon Resource Name (ARN) with a wildcard (asterisk) character. Wildcards are used to apply a policy element to more than one resource or action. This policy applies for resources in any account number and Region with any resource ID. It can be reused in multiple accounts without having to rewrite the policy with your AWS account ID.
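The policy JSON itself did not survive this export; the sketch below is a reconstruction that matches the description above (allow ec2:StartInstances and ec2:StopInstances on any instance ARN, conditioned on an Owner tag equal to the caller's user name). The user name, policy name, and exact condition key are assumptions for illustration.

import json
import boto3

# Reconstruction of the described policy; names and condition key are illustrative.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",                         # E - Effect
            "Action": ["ec2:StartInstances",           # A - Action
                       "ec2:StopInstances"],
            "Resource": "arn:aws:ec2:*:*:instance/*",  # R - Resource (wildcard ARN)
            "Condition": {
                "StringEquals": {"aws:ResourceTag/Owner": "${aws:username}"}
            },
        }
    ],
}

iam = boto3.client("iam")
iam.put_user_policy(
    UserName="richard",                        # hypothetical IAM user
    PolicyName="StartStopOwnedInstances",
    PolicyDocument=json.dumps(policy_document),
)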

Exam - remember the policy "EAR": Effect, Action, Resource … know the resource can be a bucket or ??? … know how to recognize the EAR elements in JSON for the exam, but you don't need to write JSON

23
Q

How are IAM policies evaluated?

A

AWS evaluates all policies that are applicable to the request context. The following list summarizes the AWS evaluation logic for policies within a single account:
* By default, all requests are implicitly denied with the exception of the AWS account root user, which has full access. This policy is called an implicit deny.
* An explicit allow in an identity-based policy or resource-based policy overrides this default. There are additional security controls that can override an explicit allow with an implicit deny, such as permissions boundaries and SCPs.
* An explicit deny in any policy overrides any allows

24
Q

___________ is a strategy that is focused on creating multiple layers of security.

A

Defense in depth

Apply a defense-in-depth approach with multiple security controls to all layers.

25
Q

A ______________ is an advanced feature for using a managed policy to set the maximum permissions that an identity-based policy can grant to an IAM entity and act as a filter.

A

permissions boundary

NOTE: the boundary alone doesn't grant access; you still need an explicit allow in an identity-based policy, otherwise the implicit deny applies

26
Q

AWS supports permissions boundaries for which IAM entities?

A

users or roles.

27
Q

What are several reasons that you might want to create a multi-account structure in your organization?

A

NOTE: reduces management overhead and simplifies billing

  • To group resources for categorization and discovery
  • To improve your security posture with a logical boundary
  • To limit potential impact in case of unauthorized access
  • To simplify management of user access to different environments
28
Q

What are benefits of using AWS Organizations?

A

AWS Organizations provides these key features:
* Centralized management of all your AWS accounts
* Consolidated billing for all member accounts

Consolidated billing aggregates usage across accounts, which helps you reach volume-discount tiers.

Create a hierarchy by grouping accounts into organizational units (OUs). Apply service control policies (SCPs) to control maximum permissions in every account under an organization unit (OU).

NOTE: on the exam, if an answer option involves manual work, it's usually incorrect; always go with automation

29
Q

_______ is a type of organization policy that you can use to manage permissions in your organization.

A

SCP (service control policy)

Attaching an SCP to an Organizations entity (root, OU, or account) defines a guardrail. SCPs set limits on the actions that the IAM users and roles in the affected accounts can perform. To grant permissions, you must attach identity-based or resource-based policies to IAM users, or to the resources in your organization’s accounts. When an IAM user or role belongs to an account that is a member of an organization, the SCPs limit the user’s or role’s effective permissions.

An SCP doesn't grant permissions; it only controls (limits) them

Keyword: guardrail
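A hedged sketch of what an SCP guardrail might look like and how it could be attached with the Organizations API; the denied action and the OU ID are made-up examples, not from this guide.

import json
import boto3

# Example guardrail: deny member accounts the ability to leave the organization.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Deny", "Action": "organizations:LeaveOrganization", "Resource": "*"}
    ],
}

org = boto3.client("organizations")
policy = org.create_policy(
    Name="DenyLeaveOrganization",
    Description="Guardrail: member accounts cannot leave the organization",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attach the SCP to an OU (hypothetical OU ID); it limits, but never grants, permissions.
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId="ou-exampleid")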

30
Q

Are all IPv6 addresses public only?

A

yes

31
Q

What is the smallest supported CIDR? & why?

A

/28. AWS supports CIDR blocks from /16 down to /28, and 5 addresses are reserved in every subnet, so a /28 (16 addresses) leaves only 11 usable.

32
Q

Can a VPC be in more than one region? Can a subnet be in more than one AZ?

A

No

No

33
Q

What are the 5 reserved IPs that you can’t use from every subnet that you create?

A

The first four IP addresses and the last IP address in each subnet CIDR block are not available and cannot be assigned to an instance. For example, in a subnet with CIDR block 10.0.0.0/24, the following five IP addresses are reserved:
* 10.0.0.0: Network address.
* 10.0.0.1: Reserved by AWS for the VPC router.
* 10.0.0.2: Reserved by AWS. The IP address of the DNS server is always the base of the VPC network range plus 2.
* 10.0.0.3: Reserved by AWS for future use.
* 10.0.0.255: Network broadcast address. AWS does not support broadcast in a VPC; therefore, we reserve this address.

NOTE: Consider larger subnets over smaller ones (/24 and larger). You are less likely to waste or run out of IPs if you distribute your workload into larger subnets.
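A small sketch with Python's standard ipaddress module that shows the arithmetic: a /24 has 256 addresses, and removing the 5 reserved ones leaves 251 usable, while a /28 leaves only 11.

import ipaddress

def usable_addresses(cidr: str) -> int:
    """Addresses left in an AWS subnet after the 5 reserved ones."""
    subnet = ipaddress.ip_network(cidr)
    return subnet.num_addresses - 5  # network, router, DNS, future use, broadcast

print(usable_addresses("10.0.0.0/24"))  # 251
print(usable_addresses("10.0.0.0/28"))  # 11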

34
Q

What are the 3 required items for each public subnet?

A

A public subnet requires the following:
* Internet gateway: The internet gateway allows communication between resources in your VPC and the internet.
* Route table: A route table contains a set of rules (routes) that are used to determine where network traffic is directed. It can direct traffic to the internet gateway.
* Public IP addresses

Associate a route table with a subnet; a route table can be associated with multiple subnets, but a subnet can be associated with only one route table.
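A minimal boto3 sketch (the VPC and subnet IDs are placeholders) of wiring the three pieces together: attach an internet gateway to the VPC, create a route table with a 0.0.0.0/0 route to it, and associate the route table with the subnet.

import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"       # placeholder IDs for illustration
subnet_id = "subnet-0123456789abcdef0"

# 1. Internet gateway: allows communication between the VPC and the internet.
igw = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw, VpcId=vpc_id)

# 2. Route table: send internet-bound traffic (0.0.0.0/0) to the internet gateway.
rtb = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rtb, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw)
ec2.associate_route_table(RouteTableId=rtb, SubnetId=subnet_id)

# 3. Public IP addresses: auto-assign public IPv4 addresses to instances in the subnet.
ec2.modify_subnet_attribute(SubnetId=subnet_id, MapPublicIpOnLaunch={"Value": True})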

35
Q

T/F: IGW can only be associated with one VPC at a time

A

T

36
Q

When you create a VPC, it automatically has a ______________.

A

main route table

Every route table is associated with a VPC and includes a local route for the VPC's CIDR range; this local route can't be deleted and provides connectivity within the VPC.

This local route permits communication for all the resources within the VPC. You can’t modify the local route in a route table

A subnet can be associated with only one route table at a time, but you can associate multiple subnets with the same route table. Use custom route tables for each subnet to permit granular routing for destinations.

Each AWS account comes with a default Amazon VPC

37
Q

______________ is a static, public IPv4 address that is designed for dynamic cloud computing.

A

Elastic IP address

By default, you are limited to five Elastic IP addresses per Region. To help conserve them, you can use a NAT device. We encourage you to use an Elastic IP address primarily to be able to remap the address to another instance in the case of instance failure.

38
Q

You can associate a/an _________ with any instance or network interface for any VPC in your account.

A

Elastic IP address

39
Q

With a/an ____________, you can mask the failure of an instance by rapidly remapping the address to another instance in your VPC.

A

Elastic IP address

You can move an Elastic IP address from one instance to another. The instance can be in the same VPC or another VPC. An Elastic IP address is accessed through the internet gateway of a VPC. If you set up a VPN connection between your VPC and your network, the VPN traffic traverses a virtual private gateway, not an internet gateway. Therefore, it cannot access the Elastic IP address.

40
Q

________________ is a logical networking component in a VPC that represents a virtual network card

A

elastic network interface

When moved to a new instance, the network interface keeps its private IP addresses, Elastic IP addresses, and MAC address. The attributes of a network interface follow it.

When you move a network interface from one instance to another, network traffic is redirected to the new instance. Each instance in a VPC has a default network interface (the primary network interface).

41
Q

What port does a bastion host typically use?

A

A bastion host is typically accessed over SSH on port 22.

42
Q

______________ enable communication between instances in your VPC and the internet. They are horizontally scaled, redundant, and highly available by default, and they provide a target in your subnet route tables for internet-routable traffic.

A

NAT gateways

NAT gateways come in two types: public and private. Use a private NAT gateway for traffic between VPCs or to on-premises networks; it can't reach the internet.

43
Q

You can use _____________ for a one-way connection between private subnet instances and the internet or other AWS services. This type of connection prevents external traffic from connecting with your private instances.

A

a NAT gateway

exam: to remove a single point of failure (SPOF), deploy a NAT gateway in each Availability Zone

44
Q

Regarding VPCs and HA, what’s a key consideration?

A

Deploying a VPC across multiple Availability Zones creates an architecture that achieves high availability

45
Q

___________ receives inbound traffic and routes it to the application servers in the private subnets of both Availability Zones.

A

Elastic Load Balancing

Load balancers can be used for internal and external (internet-facing) workloads

46
Q

___________ is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. Every VPC automatically comes with a default one that allows all inbound and outbound IPv4 traffic.

A

network ACL

You can create a custom network ACL and associate it with a subnet. By default, custom network ACLs deny all inbound and outbound traffic until you add rules.

exam - ports of interest are 80, 443, & 22 (SSH)

A network ACL is a firewall at the subnet level; custom network ACLs deny all traffic by default. A subnet can have only one network ACL, but a network ACL can be associated with multiple subnets. Network ACLs are stateless, so you need a matching return rule to allow response traffic.
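To illustrate the stateless point, a hedged boto3 sketch (the network ACL ID is a placeholder) that allows inbound HTTPS and then has to add a separate outbound rule for the return traffic on ephemeral ports:

import boto3

ec2 = boto3.client("ec2")
acl_id = "acl-0123456789abcdef0"  # placeholder network ACL ID

# Inbound rule 100: allow HTTPS (TCP 443) from anywhere.
ec2.create_network_acl_entry(
    NetworkAclId=acl_id, RuleNumber=100, Protocol="6", RuleAction="allow",
    Egress=False, CidrBlock="0.0.0.0/0", PortRange={"From": 443, "To": 443},
)

# Because network ACLs are stateless, the response traffic needs its own
# outbound rule (ephemeral ports) or it will be dropped.
ec2.create_network_acl_entry(
    NetworkAclId=acl_id, RuleNumber=100, Protocol="6", RuleAction="allow",
    Egress=True, CidrBlock="0.0.0.0/0", PortRange={"From": 1024, "To": 65535},
)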

47
Q

_______________ acts as a virtual firewall for your instance to control inbound and outbound traffic.

A

A security group

The default group allows inbound communication from other members of the same group and outbound communication to any destination. Traffic can be restricted by any IP protocol, by service port, and by source or destination IP address (individual IP address or CIDR block).

exam – security groups sit at the boundary of the instance (instance-level firewalls); rules have no numeric priority, and there is no deny rule: anything not explicitly allowed is blocked
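A short boto3 sketch (the group IDs are placeholders) showing that security group rules are allow-only and can reference another security group as the source, which is the basis of security group chaining:

import boto3

ec2 = boto3.client("ec2")
web_sg = "sg-0123456789abcdef0"  # placeholder security group IDs
alb_sg = "sg-0fedcba9876543210"

# Allow inbound HTTP to the web tier only from the load balancer's security group.
# There is no deny rule type; anything not allowed is simply blocked.
ec2.authorize_security_group_ingress(
    GroupId=web_sg,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "UserIdGroupPairs": [{"GroupId": alb_sg}],
    }],
)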

48
Q

_______________ act at the network interface level, not the subnet level, and they support Allow rules only.

A

Security groups

The default group allows inbound communication from other members of the same group and outbound communication to any destination. Traffic can be restricted by any IP protocol, by service port, and by source or destination IP address (individual IP address or CIDR block).

exam – security groups sit at the boundary of the instance (instance-level firewalls); rules have no numeric priority, and there is no deny rule: anything not explicitly allowed is blocked

49
Q

______________ contains a numbered list of rules, which are evaluated in order, starting with the lowest numbered rule. If a rule matches traffic, the rule is applied even if any higher-numbered rule contradicts it.

A

A network ACL

Each network ACL has a rule whose number is an asterisk. This rule denies a packet that doesn’t match any of the numbered rules.

50
Q

What are 2 key properties of security groups?

A

Security groups in default VPCs allow all outbound traffic.

Custom security groups have no inbound rules, and they allow outbound traffic.

51
Q

AWS customers typically use ______________ as their primary method of network packet filtering.

A

security groups

They are more versatile than network ACLs because of their ability to perform stateful packet filtering and to use rules that reference other security groups. However, network ACLs can be effective as a secondary control for denying a specific subset of traffic or providing high-level guard rails for a subnet.
By implementing both network ACLs and security groups as a defense-in-depth means of controlling traffic, a mistake in the configuration of one of these controls will not expose the host to unwanted traffic.

52
Q

Why do you use security group chaining?

A

to provide depth of protection… only allow the minimum required to pass a boundary

The inbound and outbound rules are set up so that traffic can only flow from the top tier to the bottom tier and back up again. The security groups act as firewalls so that a security breach in one tier does not automatically give the compromised client subnet-wide access to all resources.

53
Q

________________ acts as a firewall for associated EC2 instances, controlling both inbound and outbound traffic at the instance level. ____________ act as a firewall for associated subnets, controlling both inbound and outbound traffic at the subnet level.

A

A security group

Network ACLs

A security group acts as a firewall for associated EC2 instances, controlling both inbound and outbound traffic at the instance level. Network ACLs act as a firewall for associated subnets, controlling both inbound and outbound traffic at the subnet level.
Both can have different default configurations depending on how they are created.
Security groups
* Security groups in default VPCs allow all traffic (inbound from members of the same group and all outbound).
* New security groups have no inbound rules and allow outbound traffic.

Network ACLs
* Network ACLs in default VPCs allow all inbound and outbound IPv4 traffic.
* Custom network ACLs deny all inbound and outbound traffic, until you add rules.

54
Q

SSD-backed volumes are optimized for transactional workloads that involve frequent read/write operations with small I/O size, where the dominant performance attribute is IOPS. Which EBS volume type is used for these workloads?

A

Provisioned IOPS SSD (io1/io2) volumes

Specific use cases for io2 Block Express include:
* Sub-millisecond latency
* Sustained IOPS performance
* More than 64,000 IOPS or 1,000 MiB/s of throughput

exam – the exact numbers aren't needed for this exam (they are for the SysOps exam); just know that io volumes are for high-I/O needs

Exam: anywhere you see throughput and low cost, think st1 (Throughput Optimized HDD)

55
Q

An ______________ provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer

A

instance store

An instance store is ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content. It is also good for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers.

exam – instance store is attached to the host and very fast, but not persistent; use it where processing is fast and the workload is fault tolerant and can retry. Storage is reclaimed when the instance is stopped or terminated, so the data is lost.

56
Q

What are 3 EC2 purchase options?

A
  1. On-demand
  2. Savings plans
  3. Spot instances

The most flexible option is the most expensive, so On-Demand is the most expensive; watch for keywords such as spiky or temporary

57
Q

With ____________, you can run code without provisioning or managing servers. The service runs your code on a high-availability compute infrastructure and performs all administration of the compute resources.

A

Lambda

These administrative tasks include:
* Server and OS maintenance
* Capacity provisioning and automatic scaling
* Code monitoring and logging

serverless (Lambda) is always a great answer for “saving money” questions

58
Q

What are the three types of cloud storage? Each storage option has a unique combination of performance, durability, cost, and interface

A

object, file, and block

Block storage – Enterprise applications like databases or enterprise resource planning (ERP) systems often require dedicated, low-latency storage for each host. This storage is similar to direct-attached storage (DAS) or a storage area network (SAN). Block-based cloud storage solutions like Amazon Elastic Block Store (Amazon EBS) are provisioned with each virtual server and offer the ultra-low latency required for high-performance workloads.
File storage – Many applications must access shared files and require a file system. This type of storage is often supported with a Network Attached Storage (NAS) server. File storage solutions like Amazon Elastic File System (Amazon EFS) are ideal for use cases such as large content repositories, development environments, media stores, or user home directories.
Object storage – Applications developed in the cloud need the vast scalability and metadata of object storage. Object storage solutions like Amazon Simple Storage Service (Amazon S3) are ideal for building modern applications. Amazon S3 provides scale and flexibility. You can use it to import existing data stores for analytics, backup, or archive.

  • Amazon EBS for block storage
  • Amazon EFS and Amazon FSx for file storage
  • Amazon S3 and Amazon S3 Glacier for object storage

EXAM: S3 with cloudfront is always a great answer

59
Q

What type of storage would be used for WORM to provide legal/hold needs?

A

Amazon S3 with S3 Object Lock

60
Q

What storage is typically used with Linux systems to provide NFS file sharing?

A

EFS

61
Q

What storage is typically used with Windows systems to provide SMB file sharing?

A

Amazon FSx for Windows File Server

62
Q

What storage provides SMB, NFS, and iSCSI for Windows, Linux, & MacOS?

A

Amazon FSx for NetApp ONTAP

63
Q

What storage is used with high performance computing?

A

Amazon FSx for Lustre

64
Q

what is Amazon S3?

A

Amazon S3 is object-level storage. An object includes file data, metadata, and a unique identifier. Object storage does not use a traditional file and folder structure.

Great for static content such as a website providing documents, videos, etc.

exam – bucket names must be globally unique; a bucket lives in one Region and isn't replicated to other Regions by AWS, but the customer can configure replication; S3 is the cheapest storage, which is why logs are stored there

65
Q

Name 5 use cases for S3 object storage.

A
  • Backup and restore – You can use Amazon S3 to store and retrieve any amount of data, at any time. You can use Amazon S3 as the durable store for your application data and file-level backup and restore processes. Amazon S3 is designed for 99.999999999 percent durability, or 11 9’s of durability.
  • Data lakes for analytics – Run big data analytics, artificial intelligence (AI), machine learning (ML), and high-performance computing (HPC) applications to unlock data insights.
  • Media storage and streaming – You can use Amazon S3 with Amazon CloudFront’s edge locations to host videos for on-demand viewing in a secure and scalable way. Video on demand (VOD) streaming means that your video content is stored on a server, and viewers can watch it at any time. You’ll learn more about Amazon CloudFront later in this course.
  • Static website – You can use Amazon S3 to host a static website. On a static website, individual webpages include static content. They might also contain client-side scripts. Amazon S3’s object storage makes it easier to manage data access, replications, and data protection for static files.
  • Archiving and compliance – Replace your tape with low-cost cloud backup workflows, while maintaining corporate, contractual, and regulatory compliance requirements.
66
Q

_________ are resource-based policies for your S3 buckets.

A

Bucket policies

Access control for your data is based on policies, such as IAM policies, S3 bucket policies, and AWS Organizations service control policies (SCPs).

“EAR” in the JSON –
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::doc-example-bucket",
        "arn:aws:s3:::doc-example-bucket/*"
      ]
    }
  ]
}

exam – be able to recognize the "Principal" element; it's what distinguishes a resource-based policy from an identity-based policy, which is attached to an identity in the account

67
Q

___________ are used to encrypt your data at rest.

A

Cryptographic keys

Amazon S3 offers three server-side encryption options, plus a customer-provided-key option, for encrypting your objects:
* Server-side encryption (SSE) with Amazon S3-managed keys (SSE-S3) – When you use SSE-S3, each object is encrypted with a unique key. As an additional safeguard, it encrypts the key itself with a primary key that it regularly rotates. Amazon S3 server-side encryption uses 256-bit Advanced Encryption Standard (AES-256) to encrypt your data.
* Server-side encryption with AWS KMS keys stored in AWS Key Management Service (AWS KMS) (SSE-KMS) – KMS keys stored in SSE-KMS are similar to SSE-S3, but with some additional benefits and charges. There are
separate permissions for the use of a KMS key that provides added protection against unauthorized access of your objects in Amazon S3. SSE-KMS also provides you an audit trail that shows when your KMS key was used, and by whom.
* Dual-layer server-side encryption with AWS KMS keys (DSSE-KMS) – Using DSSE-KMS applies two individual layers of object-level encryption instead of one layer. Each layer of encryption uses a separate cryptographic implementation library with individual data encryption keys.
* Server-side Encryption with Customer-Provided Keys (SSE-C) – With SSE-C, you manage the encryption keys and Amazon S3 manages the encryption as it writes to disks. Also, Amazon S3 manages decryption when you access your objects.
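A hedged example (the bucket, key, and KMS key alias are placeholders) of requesting SSE-KMS on a single PUT; omitting the ServerSideEncryption argument falls back to the bucket's default encryption.

import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="doc-example-bucket",        # placeholder bucket
    Key="reports/2024-q1.csv",          # placeholder object key
    Body=b"example,data\n",
    ServerSideEncryption="aws:kms",     # SSE-KMS; use "AES256" for SSE-S3
    SSEKMSKeyId="alias/example-key",    # placeholder KMS key alias
)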

68
Q

What are the 6 S3 storage classes?

A
  • S3 Standard for general-purpose storage of frequently accessed data.
  • S3 Standard-Infrequent Access (S3 Standard-IA) for long-lived, but less frequently accessed data.
  • S3 One Zone-Infrequent Access (S3 One Zone-IA) for long-lived, less frequently accessed data that can be stored in a single Availability Zone.
  • S3 Glacier Instant Retrieval for archive data that is rarely accessed but requires a restore in milliseconds.
  • S3 Glacier Flexible Retrieval for the most flexible retrieval options that balance cost with access times ranging from minutes to hours. Your retrieval options permit you to access all the archives you need, when you need them, for one low storage price. This storage class comes with multiple retrieval options:
    ** Expedited retrievals (restore in 1–5 minutes).
    ** Standard retrievals (restore in 3–5 hours).
    ** Bulk retrievals (restore in 5–12 hours). Bulk retrievals are available at no additional charge.
  • S3 Glacier Deep Archive for long-term cold storage archive and digital preservation. Your objects can be restored in 12 hours or less.

S3 has a storage class associated with it. All storage classes offer high durability (99.999999999 percent durability)

exam – very important for the exam! Know the 6 classes, their use cases, and the retrieval timing; moving toward the colder classes, storage gets cheaper and retrieval time increases

69
Q

__________________ is the only storage class that delivers automatic storage cost savings when data access patterns change

A

Amazon S3 Intelligent-Tiering

When you assign an object to S3 Intelligent-Tiering, it is placed in the Frequent Access tier which has the same storage cost as S3 Standard. Objects not accessed for 30 days are then moved to the Infrequent Access tier where the storage cost is the same as S3 Standard-IA. After 90 days of no access, an object is moved to the Archive Instant Access tier, which has the same cost as S3 Glacier Instant Retrieval.
S3 Intelligent-Tiering is the ideal storage class for data with unknown, changing, or unpredictable access patterns, independent of object size or retention period. You can use S3 Intelligent-Tiering as the default storage class for virtually any workload, especially data lakes, data analytics, new applications, and user-generated content.

NOTE: know this, and understand that lifecycle policies are different from this: Intelligent-Tiering moves objects in both directions automatically based on how frequently or infrequently they are accessed, while lifecycle rules move objects one way based on age

70
Q

What are the Amazon S3 Glacier storage class benefits?

A

1. Cost-effective storage – Lowest cost for specific data access patterns
2. Flexible data retrieval – Three storage classes with variable access options
3. Secure and compliant – Encryption at rest, AWS CloudTrail integration, and retrieval policies
4. Scalable and durable – Meets needs from gigabytes to exabytes with 11 9s of durability

71
Q

What S3 option is used for WORM?

A

Use S3 Object Lock for data retention or protection.

72
Q

What does Amazon S3 Versioning provide?

A

Buckets that use versioning can help you recover objects from accidental deletion or overwrite:
* If you delete an object, instead of removing it permanently, Amazon S3 inserts a delete marker, which becomes the current object version.
* If you overwrite an object, it results in a new object version in the bucket. When S3 Versioning is turned on, you can restore the previous version of the object to correct the mistake.

73
Q

What are Lifecycle policies used for?

A

Use S3 Lifecycle policies to transition objects to another storage class. S3 Lifecycle rules take action based on object age.

With S3 Lifecycle policies, you can delete or move objects based on age. You should automate the lifecycle of your data that is stored in Amazon S3. Using S3 Lifecycle policies, you can have data cycled at regular intervals between different Amazon S3 storage types.
In this way, you reduce your overall cost because you are paying less for data as it becomes less important with time. In addition to being able to set lifecycle rules per object, you can also set lifecycle rules per bucket.
Amazon S3 supports a waterfall model for transitioning between storage classes. Lifecycle configuration automatically changes data storage tiers.
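A hedged sketch of a lifecycle rule (the bucket name, prefix, and day counts are illustrative) that transitions objects to S3 Standard-IA after 30 days, to S3 Glacier Flexible Retrieval after 90 days, and expires them after 365 days:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="doc-example-bucket",  # placeholder bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }]
    },
)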

74
Q

What is S3 multipart and when is it automatically used?

A

With a multipart upload, you can consistently upload large objects in manageable parts. This process involves three steps:
* Initiating the upload
* Uploading the object parts
* Completing the multipart upload
When the multipart upload request is completed, Amazon S3 will recreate the full object from the individual pieces.

exam – anything over 100 MB should be uploaded with multipart upload

You can improve the upload process for larger objects with the following features:
* Improved throughput – You can upload parts in parallel to improve throughput.
* Quick recovery from any network issues – Smaller part sizes minimize the impact of restarting a failed upload due to a network error.
* Pausing and resuming object uploads – You can upload object parts over time. After you initiate a multipart upload, there is no expiration; you must explicitly complete or cancel the multipart upload.
* Beginning an upload before you know the final object size – You can upload an object as you are creating it.
* Uploading large objects – Using the multipart upload API, you can upload large objects, up to 5 TB.
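A hedged sketch using boto3's managed transfer: setting the multipart threshold near the 100 MB guideline noted above so that larger files are split and uploaded in parallel parts automatically (the file and bucket names are placeholders).

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Files larger than ~100 MB are split into 25 MB parts uploaded in parallel.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=25 * 1024 * 1024,
    max_concurrency=8,
)

s3.upload_file(
    Filename="backup.tar.gz",        # placeholder local file
    Bucket="doc-example-bucket",     # placeholder bucket
    Key="backups/backup.tar.gz",
    Config=config,
)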

75
Q

When are Amazon S3 Event Notifications typically used?

A

Event driven architectures

With Amazon S3 Event Notifications, you can receive notifications when certain object events happen in your bucket. Event-driven models like this one mean that you no longer need to build or maintain server-based polling infrastructure to check for object changes. You also don’t pay for idle time of that infrastructure when there are no changes to process.
Amazon S3 can send event notification messages to the following destinations:
* Amazon Simple Notification Service (Amazon SNS) topics
* Amazon Simple Queue Service (Amazon SQS) queues
* AWS Lambda functions
You specify the Amazon Resource Name (ARN) value of these destinations in the notification configuration.

In the example, you have a JPEG image uploaded to the images bucket that your website uses. Your website needs to be able to show smaller thumbnail preview images of each uploaded file. When the image object is added to the S3 bucket, an event notification is sent to invoke a series of AWS Lambda functions. The output of your Lambda functions is a smaller version of the original JPEG image and puts the object in your thumbnails bucket. S3 Event Notifications manage the activity in the bucket for you and automate the creation of your thumbnail.

76
Q

What are some factors to consider with costs of S3 storage?

A
  • Storage – Per-gigabyte cost to hold your objects. You pay for storing objects in your S3 buckets. The rate that you’re charged depends on your objects’ size, how long you stored the objects during the month, and the storage class. You incur per-request ingest charges when using PUT, COPY, or lifecycle rules to move data into any S3 storage class.
  • Requests and retrievals – The number of API calls: PUT and GET requests. You pay for requests that are made against your S3 buckets and objects. S3 request costs are based on the request type, and are charged on the quantity of requests. When you use the Amazon S3 console to browse your storage, you incur charges for GET, LIST, and other requests that are made to facilitate browsing.
  • Data transfer – Usually no transfer fee for data-in from the internet and, depending on the requester location and medium of data transfer, different charges for data-out.
  • Management and analytics – You pay for the storage management features and analytics that are activated on your account’s buckets. These features are not discussed in detail in this course.
    S3 Replication and S3 Versioning can have a big impact on your AWS bill. These services both create multiple copies of your objects, and you pay for each PUT request in addition to the storage tier charge. S3 Cross-Region Replication also requires data transfer between AWS Regions.
77
Q

For high throughput changes to files of varying sizes, a file system will be superior to an object store system. ___________ and _________ are ideal for this use case.

A

Amazon Elastic File System (Amazon EFS) and Amazon FSx

78
Q

___________ provides a scalable, elastic file system for Linux-based workloads for use with AWS Cloud services and on-premises resources.

A

Amazon EFS

exam – only for Linux; it's serverless as well; fast; scales automatically; pay for only what you use

79
Q

Name 2 benefits of using EFS

A
  1. Amazon EFS uses burst throughput mode to scale throughput based on your storage use.
  2. Amazon EFS automatically grows and shrinks file storage without provisioning.
80
Q

__________ for Windows File Server provides fully managed Microsoft Windows file servers that are backed by a native Windows file system. Built on Windows Server, it delivers a wide range of administrative features such as data deduplication, end-user file restore, and Microsoft Active Directory.

A

Amazon FSx

exam – Windows: FSx; fully managed and built on Windows Server, but it can be accessed from Windows, Linux, and macOS

81
Q

What file system provides high performance and is generally used with HPC?

A

FSx Lustre

exam – HPC & machine learning, big data analytics

82
Q

What 2 AWS data migration tools are used with hybrid architectures?

A

AWS Storage Gateway: Sync files with SMB, NFS, and iSCSI protocols from on-premises to AWS. AWS Storage Gateway connects an on-premises software appliance with cloud-based storage.

AWS DataSync: Sync files from on-premises file storage to Amazon EFS, Amazon FSx, and Amazon S3.

83
Q

What storage migration tools provide offline migration support for large volumes of data?

A

AWS Snow Family: Move terabytes to petabytes of data to AWS by using appliances that are designed for secure, physical transport.

AWS Snow Family is a group of edge computing, data migration, or edge storage devices that are designed for secure, physical transport.

84
Q

SFTP is typically used with what data migration service?

A

AWS Transfer Family permits the transfer of files into and out of Amazon S3 or Amazon Elastic File System (EFS)

85
Q

What are the 4 Storage Gateway types and uses?

A
  1. Amazon S3 File Gateway presents a file interface that you can use to store files as objects in Amazon S3. You use the industry-standard NFS and SMB file protocols. Access your files through NFS and SMB from your data center or Amazon EC2, or access those files as objects directly in Amazon S3. exam – associate with s3, archiving, data lakes, etc.
  2. Amazon FSx File Gateway provides fast, low-latency, on-premises access to fully managed, highly reliable, and scalable file shares in Amazon FSx for Windows File Server. It uses the industry-standard SMB protocol. You can store and access file data in Amazon FSx with Microsoft Windows features, including full New Technology File System (NTFS) support, shadow copies, and ACLs.
  3. Tape Gateway presents an iSCSI-based virtual tape library (VTL) of virtual tape drives and a virtual media changer to your on-premises backup application.
  4. Volume Gateway presents block storage volumes of your applications by using the iSCSI protocol. You can asynchronously back up data that is written to these volumes as point-in-time snapshots of your volumes. Then, you can store it in the cloud as Amazon EBS snapshots. exam – associate with EBS & your “boot” device
86
Q

What are the 4 storage gateway modes?

A

Amazon S3 File Gateway, Amazon FSx File Gateway (use for your home directories), Tape Gateway, or Volume Gateway (iSCSI).

87
Q

What is a typical use for AWS DataSync?

A

Reduce on-premises storage infrastructure by shifting SMB-based data stores and content repositories from file servers and NAS arrays to Amazon S3 and Amazon EFS for analytics.

88
Q

What is AWS Snowcone?

A

AWS Snowcone
Snowcone is a small, rugged, edge computing and data storage product.

89
Q

What is AWS Snowball Edge?

A

AWS Snowball Edge
Snowball Edge is an edge computing and data transfer device that the AWS Snowball service provides.

Snowball Edge is a petabyte-scale data transport option that doesn’t require you to write code or purchase hardware to transfer data.

exam – has compute with it

90
Q

What database service provides key value NoSQL?

A

DynamoDB

91
Q

What are 2 relational DB services and which one provides PostgreSQL?

A

RDS & Aurora (provides PostgreSQL)

92
Q

What database service is used for Memcached and Redis?

A

ElastiCache

93
Q

What database service is used for data warehouses?

A

Redshift

94
Q

What database services are used for speed and agility, providing key-value pairs or document storage with dynamic schemas?

A

DynamoDB & ElastiCache

95
Q

What databases run on physical servers while providing fixed schemas?

A

RDS & Aurora

Relational databases such as Oracle, IBM DB2, SQL Server, MySQL, and PostgreSQL

96
Q

What relational database service would you use for a serverless option that is fully managed?

A

Aurora

97
Q

What relational database service would you use when you want to avoid managing the underlying infrastructure, as you would have to if you installed the database yourself on an Amazon Elastic Compute Cloud (Amazon EC2) instance?

A

RDS

Amazon RDS is a web service that helps you to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity, while managing time-consuming database administration tasks. By using Amazon RDS, you can focus on your applications and business. Amazon RDS provides you with six familiar database engines to choose from, including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and Microsoft SQL Server. Therefore, most of the code, applications, and tools that you already use with your existing databases can be used with Amazon RDS.
Amazon RDS automatically patches the database software and backs up your database. It stores the backups for a user-defined retention period and provides point-in-time recovery. You benefit from the flexibility of scaling the compute resources or storage capacity associated with your relational DB instance with a single API call.

EXAM: serverless option is Aurora; RDS is managed

98
Q

What sits between your application and your relational database to efficiently manage connections to the database and improve scalability of the application?

A

Amazon RDS Proxy

also know what RDS proxy is (connection pooling, improved scale, improved resiliency, etc.)

99
Q

What do you set up when you want to improve the resiliency of RDS?

A

Amazon RDS Multi-AZ deployments provide enhanced availability and durability for database (DB) instances, which makes them a natural fit for production database workloads. When you provision a Multi-AZ DB instance, Amazon RDS synchronously replicates the data to a standby instance in a different Availability Zone.
You can modify your environment from Single-AZ to Multi-AZ at any time. Each Availability Zone runs on its own physically distinct, independent infrastructure and is engineered to be highly reliable.

By default, RDS is not highly available; a Multi-AZ DB cluster provides readable standby replicas, while a Single-AZ deployment does not.

100
Q

What is a strategy to improve RDS performance?

A

Read replicas – for offloading reads; improves performance; takes load off primary

With Amazon RDS, you can create read replicas of your database. Amazon automatically keeps them in sync with the primary DB instance. Read replicas are available for Amazon Aurora, MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server.

101
Q

Is RDS data encrypted at rest?

A

No, not by default.

Amazon RDS provides encryption of data at rest by using the AWS Key Management Service (AWS KMS). AWS KMS is a managed service (always associate AWS KMS with encryption).

102
Q

What relational database provides serverless capabilities?

A

Aurora

103
Q

What relational database is compatible with MySQL and PostgreSQL relational databases, is up to five times faster than standard MySQL databases, is up to three times faster than standard PostgreSQL databases, and helps to reduce your database costs by reducing unnecessary I/O operations, while ensuring that your database resources remain reliable and available?

A

Aurora

104
Q

What db service should you presume on the exam any time you see MySQL & PostgreSQL?

A

Aurora

105
Q

What DB type should you consider for MySQL & PostgreSQL if your workloads require high availability?

A

Aurora – It replicates six copies of your data across three Availability Zones and continuously backs up your data to Amazon Simple Storage Service (Amazon S3).

106
Q

What is an on-demand, auto scaling configuration for Amazon Aurora?

A

Aurora Serverless v2

107
Q

What DB service is a fully managed NoSQL database service so that software developers can focus on building applications instead of managing infrastructure?

A

DynamoDB

Key value store

108
Q

What DB service is commonly used in gaming applications?

A

DynamoDB because it uses Key-value stores.

Key-value databases are good for use cases where the requested data can be associated with a single primary key. Consider this example of a video game’s user profile database. Each item has a collection of key-value pairs, including keys such as TopScore, UserID, and Level. All key-value pairs for an item are associated with the primary key, GamerTag. You can rapidly retrieve any of these key-value pairs by locating their GamerTag.

109
Q

Game makers can support simple player profile pages with what DB service that is optimized for small payloads? In addition, when a new game launches, it can scale rapidly to provide enough storage and throughput to support spikes in traffic.

A

DynamoDB

110
Q

What DB service is a good choice when your application has seasonal peaks and you need to display accurate inventory while supporting concurrent read and write operations and maintaining the accuracy of stored data?

A

DynamoDB

111
Q

What database service is used for small data with lots of transactions, typical of IoT?

A

DynamoDB

112
Q

What are DynamoDB’s two options for managing capacity?

A

On-demand
Provisioned

113
Q

When should you use DynamoDB’s on-demand mode?

A

On-demand capacity mode is best when you:
* Have unknown workloads
* Have unpredictable traffic
* Prefer to pay for only what you use

On-demand capacity mode is a pay-per-request model.

EXAM: on-demand mode doesn't use auto scaling; capacity adjusts automatically per request

114
Q

When should you use DynamoDB’s provisioned mode?

A

Provisioned capacity mode is best when you:
* Have predictable application traffic
* Have traffic that is consistent or changes gradually
* Can forecast capacity requirements to control costs

With provisioned capacity mode, you set a maximum number of RCUs and WCUs. When traffic exceeds those limits, DynamoDB throttles those requests to control your costs. You can adjust your provisioned capacity by using auto scaling.

EXAM: takes advantage of autoscaling
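A hedged sketch (the table and attribute names are made up) showing how the two capacity modes appear at table creation: on-demand uses BillingMode PAY_PER_REQUEST, while provisioned mode specifies read and write capacity units.

import boto3

dynamodb = boto3.client("dynamodb")

# On-demand: pay per request, no capacity planning.
dynamodb.create_table(
    TableName="GameScoresOnDemand",
    AttributeDefinitions=[{"AttributeName": "GamerTag", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "GamerTag", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)

# Provisioned: set RCUs/WCUs up front (optionally managed by auto scaling).
dynamodb.create_table(
    TableName="GameScoresProvisioned",
    AttributeDefinitions=[{"AttributeName": "GamerTag", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "GamerTag", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)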

115
Q

What feature of DynamoDB provides multi-region writes with the click of a button?

A

DynamoDB global tables

Global tables automate replication across Regions

exam – performance and DR

116
Q

A ________ is a collection of one or more DynamoDB tables, which are all owned by a single AWS account, identified as replica tables. A ________ is a single DynamoDB table that functions as part of a ________.

A

global table

replica table (or replica, for short)

global table

NOTE: Each replica stores the same set of data items. Any given global table can only have one replica table per Region, and every replica has the same table name and the same primary key schema.

117
Q

To determine whether your application should use database caching, what 3 points should you consider?

A
  • Speed and expense – Some database queries are inherently slower and more expensive than others. For example, queries that perform joins on multiple tables are significantly slower and more expensive than simple, single-table queries. If requesting data requires a slow and expensive query, it’s a candidate for caching.
  • Data and access pattern – Determining what to cache also involves understanding the data itself and its access patterns. For example, it doesn’t make sense to cache data that is rapidly changing or is seldom accessed. For caching to provide meaningful benefit, the data should be relatively static and frequently accessed, such as a personal profile on a social media site.
  • Cache validity – Data can become out-of-date while it is stored in cache. Writes that occur on that data in the database might not be reflected in that cached data. To determine whether your data is a candidate for caching, you must determine your application’s tolerance for occasionally inaccurate cached data.

exam - temp storage and performance; on any exam question, if the answer involves caching it’s almost always the right answer; faster & reduces cost

118
Q

What are 2 common caching strategies?

A

lazy loading and write-through

Lazy loading can have stale data, but doesn’t fail with empty nodes. Write-through maintains fresh data, but can fail with empty nodes and can populate the cache with superfluous data. By adding a time to live (TTL) value to each write to the cache, you can maintain fresh data without cluttering the cache with extra data.

In lazy loading, updates are made to the database without updating the cache. In the case of a cache miss, the information that is retrieved from the database can be subsequently written to the cache. Lazy loading loads data that the application needs into the cache, but it can result in high cache-miss-to-cache-hit ratios in some use cases.

An alternative strategy is to write through to the cache every time the database is accessed. This approach results in fewer cache misses. The result is improved performance, but it requires additional storage for data that the applications might not need.
The best strategy depends on your use case. It is critical to understand the impact of stale data on your use case. If the impact is high, then consider maintaining freshness with write-throughs. If the impact is low, then lazy loading might be sufficient. It also helps to understand the frequency of change of the underlying data because it affects the performance and cost tradeoffs of the caching strategies.

NOTE: file/object caching works the same way but is a CloudFront-specific implementation
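A minimal Python sketch of lazy loading with a TTL, assuming an ElastiCache for Redis endpoint and the redis-py client; the endpoint, key names, and the stand-in query_database function are hypothetical.

import json
import redis

cache = redis.Redis(host="my-cache.example.com", port=6379)  # hypothetical ElastiCache endpoint

def query_database(user_id):
    # Stand-in for a real RDS/DynamoDB query.
    return {"user_id": user_id, "level": 1}

def get_profile(user_id, ttl_seconds=300):
    cached = cache.get(f"profile:{user_id}")
    if cached is not None:
        return json.loads(cached)          # cache hit
    profile = query_database(user_id)      # cache miss: fetch from the database
    # Fill the cache with a TTL so stale entries expire on their own.
    cache.setex(f"profile:{user_id}", ttl_seconds, json.dumps(profile))
    return profile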

119
Q

_____________ is a web service that facilitates setting up, managing, and scaling a distributed in-memory data store or cache environment in the cloud. It provides a high-performance, scalable, and cost-effective caching solution.

A

Amazon ElastiCache

120
Q

ElastiCache supports which 2 open-source in-memory engines (in-memory as compared to disk)?

A
  • Redis
  • Memcached
121
Q

Which ElastiCache engine offers a simple caching model with multi-threading? The service is a popular choice for use cases such as web, mobile apps, gaming, ad tech, and ecommerce. It also supports Auto Discovery.

A

ElastiCache for Memcached

With ElastiCache for Memcached, you can build a scalable caching tier for data-intensive apps. The service works as an in-memory data store and cache to support the most demanding applications requiring sub-millisecond response times. ElastiCache for Memcached is fully managed, scalable, and secure, which makes it an ideal candidate for use cases where frequently accessed data must be in memory.

122
Q

What ElastiCache engine offers a feature rich caching model that provides sub-millisecond latency at internet scale? It combines speed, simplicity, and versatility of the open-source version with manageability, security, and scalability.

A

ElastiCache for Redis

It can power the most demanding real-time applications in gaming, ad tech, ecommerce, healthcare, financial services, and Internet of Things (IoT).

123
Q

What is a fully managed, highly available cache for DynamoDB that can take transactions from milliseconds to microseconds?

A

DynamoDB Accelerator (DAX)

DAX is a caching service compatible with DynamoDB that provides fast in-memory performance for demanding applications.
You create a DAX cluster in your Amazon VPC to store cached data closer to your application. You install a DAX client on the EC2 instance that is running your application in that VPC. At runtime, the DAX client directs all of your application’s DynamoDB requests to the DAX cluster. If DAX can process a request directly, it does so. Otherwise, it passes the request through to DynamoDB.

exam – in region aimed at EC2 performance improvements; TAKES FROM MILLISECONDS TO MICROSECONDS

124
Q

What DB service can do schema conversions and take you from one db type to a different one?

A

AWS Database Migration Service

With AWS DMS, you can also use a Snowball Edge device as a migration target. You would use this method if your environment has poor internet connectivity or if the source database is too large to move over the internet. You would also use it if your organization has privacy or security requirements.

125
Q

What DB service can help you solve the big challenge of migrating a DB while in use?

A

AWS Database Migration Service

126
Q

What DB service provides support for Heterogeneous database migrations, Database consolidation, & Continuous data replication?

A

AWS Database Migration Service

127
Q

What utility makes heterogeneous database migrations predictable? It automatically converts the source database schema and a majority of the database code objects including views, stored procedures, and functions.

A

AWS Schema Conversion Tool (AWS SCT)

keywords: discover, convert, migrate, etc.; go from on-prem to managed

128
Q

What service collects near real-time metrics and logs? It is the primary tool for metrics and monitoring.

A

CloudWatch

129
Q

What stores data about a metric as a series of data points?

A

CloudWatch metrics

Note: Each data point has an associated timestamp, and you can publish your own custom metrics. Think data points over time; CloudWatch covers both metrics and logs.
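A small boto3 sketch of publishing one custom metric data point; the namespace and metric name are made up for illustration.

import boto3
from datetime import datetime, timezone

cloudwatch = boto3.client("cloudwatch")

# Publish one custom data point (value + timestamp) into a custom namespace.
cloudwatch.put_metric_data(
    Namespace="MyApp",
    MetricData=[{
        "MetricName": "OrdersProcessed",
        "Timestamp": datetime.now(timezone.utc),
        "Value": 42,
        "Unit": "Count",
    }],
)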

130
Q

What provides event history of your account activity, including actions taken through the console, AWS SDK, command line interface (CLI), and AWS services?

A

CloudTrail

This event history simplifies security analysis, resource change tracking, and troubleshooting. CloudTrail facilitates governance, compliance, and operational and risk auditing. With CloudTrail, you can log, continuously monitor, and retain account activity that is related to actions across your AWS infrastructure

exam – use this when the question involves tracking API calls

131
Q

What captures information about the IP traffic that goes to and from network interfaces in your virtual private cloud (VPC)?

A

VPC Flow Logs

exam – packet capture

132
Q

What helps you understand events in your accounts? It can provide insight into who did what and when by tracking user activity and API usage.

A

CloudTrail

With CloudTrail, you can get a history of AWS API calls in your account. Store your CloudTrail API usage logs in an S3 bucket.

exam – event triggering and useful for security

133
Q

What is used to capture IP traffic information to and from VPC network interfaces?

A

VPC Flow Logs

exam – this is for packet/traffic capture: any question dealing with traffic moving or not connecting (network diagnostics); if the problem is with permissions, the answer would be CloudTrail

134
Q

__________ watches a single CloudWatch metric. It performs one or more actions based on the value of the metric relative to a threshold over a number of time periods.

A

CloudWatch alarm

An alarm has three possible states:
* OK – The metric is within the defined threshold.
* ALARM – The metric is outside the defined threshold.
* INSUFFICIENT_DATA – The alarm has started, the metric is not available, or not enough data is available for the metric to determine the alarm state.

exam – trick question: memory utilization is not available as part of the default EC2 metrics; you will need to use the CloudWatch agent to publish it as a custom metric
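As a sketch, this boto3 call creates an alarm on a standard EC2 metric (CPUUtilization); the instance ID and SNS topic ARN are placeholders. Memory utilization would first have to be published as a custom metric by the CloudWatch agent.

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average CPU stays above 80% for two consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="web-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder instance
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],       # placeholder topic
)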

135
Q

What is the preferred way to manage your events that are captured in CloudWatch?

A

EventBridge

Amazon EventBridge removes the friction of writing point-to-point integrations. You can access changes in data that occur in both AWS and software as a service (SaaS) applications through a highly scalable, central stream of events.
EventBridge is the preferred way to manage your events that are captured in CloudWatch. CloudWatch Events and EventBridge are the same underlying service and API, but EventBridge provides more features. Changes that you make in either CloudWatch or EventBridge will appear in each console.

NOTE: can react and do something such as send an alarm to a specific target; gives you more services
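A rough boto3 sketch of an EventBridge rule that reacts to EC2 state-change events and sends them to a target; the rule name and SNS topic ARN are assumptions.

import boto3
import json

events = boto3.client("events")

# Match EC2 instance state-change events on the default event bus.
events.put_rule(
    Name="ec2-state-change",
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["EC2 Instance State-change Notification"],
    }),
)

# Route matched events to a target, here a (placeholder) SNS topic.
events.put_targets(
    Rule="ec2-state-change",
    Targets=[{"Id": "notify-ops", "Arn": "arn:aws:sns:us-east-1:111122223333:ops-alerts"}],
)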

136
Q

What provides the following:

  • Automatically distributes traffic across multiple targets
  • Provides high availability
  • Incorporates security features
  • Performs health checks
A

Elastic Load Balancing

exam – key area; works with services in multiple AZs to provide HA

137
Q

What are the 3 key load balancer types?

A

App, network, & gateway

  • Application Load Balancer – This load balancer functions at the application layer, the seventh layer of the Open Systems Interconnection (OSI) model. Application Load Balancer supports content-based routing, applications that run in containers, and open standard protocols (WebSocket and HTTP/2). This type of balancer is ideal for advanced load balancing of HTTP and HTTPS traffic.
  • Network Load Balancer – This load balancer is designed to handle tens of millions of requests per second while maintaining high throughput at ultra-low latency. Network Load Balancer operates at the transport layer (Layer 4), and routes connections to targets based on IP protocol data. Targets include EC2 instances, containers, and IP addresses. It is ideal for balancing TCP and User Datagram Protocol (UDP) traffic.
  • Gateway Load Balancer – You can use this load balancer to deploy, scale, and manage your third-party virtual appliances. It provides one gateway for distributing traffic across multiple virtual appliances, and scales them up or down, based on demand. This distribution reduces potential points of failure in your network and increases availability. Gateway Load Balancer passes all Layer 3 traffic transparently through third-party virtual appliances. It is invisible to the source and destination

NOTE: could use multiple ones together.

138
Q

This load balancer functions at the application layer, the seventh layer of the Open Systems Interconnection (OSI) model. It supports content-based routing, applications that run in containers, and open standard protocols (WebSocket and HTTP/2). This type of balancer is ideal for advanced load balancing of HTTP and HTTPS traffic.

A

Application Load Balancer

NOTE: considered “smart” but at a speed cost

139
Q

This load balancer is the only one that supports Lambda functions as targets.

A

ALB

140
Q

This load balancer helps stop SQLi or XSS, supports certificate management/TLS offloading, Amazon Cognito authentication, and AWS WAF.

A

ALB

141
Q

This load balancer is designed to handle tens of millions of requests per second while maintaining high throughput at ultra-low latency. Network Load Balancer operates at the transport layer (Layer 4), and routes connections to targets based on IP protocol data. Targets include EC2 instances, containers, and IP addresses. It is ideal for balancing TCP and User Datagram Protocol (UDP) traffic.

A

Network Load Balancer

NOTE: it does support AWS Certificate Manager (ACM) TLS offloading as well; you can put NLBs in front of ALBs

**Elastic Load Balancing now supports forwarding traffic directly from Network Load Balancer to Application Load Balancer. With this feature, you can use AWS PrivateLink and expose static IP addresses for applications that are built on Application Load Balancer.

142
Q

You can use this load balancer to deploy, scale, and manage your third-party virtual appliances. It provides one gateway for distributing traffic across multiple virtual appliances, and scales them up or down, based on demand. This distribution reduces potential points of failure in your network and increases availability. It passes all Layer 3 traffic transparently through third-party virtual appliances. It is invisible to the source and destination.

A

Gateway Load Balancer

EXAM: speed; used with virtual appliances (keywords) such as firewall, IDS, etc.

143
Q

_______________ monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost.

A

AWS Auto Scaling

Provides application scaling for multiple resources across services, in short intervals

EC2 and Auto Scaling groups (ASGs) support HA, but you don’t have to use a load balancer with an ASG.

144
Q

_________ launches or terminates your AWS resources based on specified conditions and registers new instances with load balancers, when specified.

A

Amazon EC2 Auto Scaling

145
Q

__________ provides application scaling for multiple resources such as Amazon EC2, Amazon DynamoDB, Amazon Aurora, and many more across multiple services in short intervals.

A

AWS Auto Scaling

146
Q

_____________________ helps you to have the correct number of Amazon EC2 instances available to handle application load. You create collections of EC2 instances, called ______________.

A

Amazon EC2 Auto Scaling

Auto Scaling groups

147
Q

What are the 3 EC2 Auto scaling components?

A

Launch templates: What resources do you need?

Amazon EC2 Auto Scaling group: Where and how many do you need?

Auto scaling policy: When and for how long do you need them?

NOTE: You are strongly encouraged to create Auto Scaling groups from launch templates to get the latest features from Amazon EC2. A launch configuration is an older instance configuration template that an Auto Scaling group uses to launch EC2 instances; when you create a launch configuration, you must specify information about the EC2 instances to launch, including the AMI, instance type, key pair, security groups, and block device mapping. The group then launches or terminates instances to meet capacity demands.
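A minimal boto3 sketch that creates an Auto Scaling group from an existing launch template; the template name, subnet IDs, and sizes are hypothetical.

import boto3

autoscaling = boto3.client("autoscaling")

# "What, where, and how many": launch template + subnets + min/max/desired capacity.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},  # placeholder template
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0aaa0aaa,subnet-0bbb0bbb",  # placeholder subnets in two AZs
)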

148
Q

What are 3 Ways to scale with Amazon EC2 Auto Scaling?

A

Scheduled scaling: scale your application before known load changes. For example, every week the traffic to your web application starts to increase on Wednesday, remains high on Thursday, and starts to decrease on Friday. You can plan your scaling activities based on the known traffic patterns of your web application.

Dynamic scaling: define how to scale the capacity of your Amazon EC2 Auto Scaling group in response to changing demand. For example, suppose that you have a web application that currently runs on two instances, and you want the CPU utilization of the group to stay at about 50 percent when the load on the application changes. As a result, you have extra capacity to handle traffic spikes without maintaining an excessive number of idle resources.

Predictive scaling: increase the number of EC2 instances in your group in advance of daily and weekly patterns in traffic flows.

NOTE: it is recommended that you scale out early and fast, and scale in slowly over time.
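As a sketch of dynamic scaling, this boto3 call attaches a target tracking policy that keeps average CPU near 50 percent (matching the example above); the group and policy names are assumptions.

import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking: the group adds or removes instances to hold average CPU around 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="keep-cpu-near-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)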

149
Q

What are the 4 benefits of using IaC through CloudFormation?

A

IaC has the following benefits:
* Speed and safety – Your infrastructure is built programmatically, which makes it faster than manual deployment and makes errors less likely.
* Reusability – You can organize your infrastructure into reusable modules.
* Documentation and version control – Your templates document your deployed resources, and version control provides a history of your infrastructure over time. You can also roll back to a previous working version of your infrastructure in the event of an error.
* Validation – You perform code review on your templates, which decreases the chances of errors.

150
Q

How are CloudFormation templates specified?

A
  • Written as JSON or YAML
  • Describes the resources to be created or modified
  • Treated as source code: code reviewed and version controlled

JSON
{
  "Resources": {
    "HelloBucket": {
      "Type": "AWS::S3::Bucket"
    }
  }
}

YAML
Resources:
  HelloBucket:
    Type: AWS::S3::Bucket

exam – Make sure you know how to look at stack events and some parameter info; know that there are 2 template languages and the differences between them
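A hedged boto3 sketch of launching the bucket template above as a stack and then listing its events; the local file name and stack name are assumptions.

import boto3

cfn = boto3.client("cloudformation")

# Create a stack from the YAML template shown above, saved locally as hello-bucket.yaml.
with open("hello-bucket.yaml") as f:
    cfn.create_stack(StackName="hello-bucket", TemplateBody=f.read())

# Inspect stack events to follow (or troubleshoot) the deployment.
for event in cfn.describe_stack_events(StackName="hello-bucket")["StackEvents"]:
    print(event["LogicalResourceId"], event["ResourceStatus"])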

151
Q

What are Stacks in CloudFormation?

A
  • A collection of AWS resources that are managed as a single unit
  • Can deploy and delete resources as a unit
  • Can update resources and settings on running stacks
  • Supports nested stacks and cross-stack references

All resources in a stack are defined by the stack’s CloudFormation template. You can manage a collection of resources by creating, updating, or deleting stacks

152
Q

___________ integrates with developer tools and provides a one-stop experience to manage the application lifecycle. This service provisions and manages application infrastructure to support your application.

A

Elastic Beanstalk

exam – generates a CloudFormation template under the hood

The goal of Elastic Beanstalk is to help developers deploy and maintain scalable web applications and services in the cloud without worrying about underlying infrastructure. Elastic Beanstalk configures each EC2 instance in your environment with the components necessary to run applications for the selected application type.

153
Q

What is a suite of services that provides a central place to view and manage your AWS resources, so you can have complete visibility and control over your operations?

A

AWS Systems Manager

exam – suite of services

With Systems Manager, you can:
* Create logical groups of resources such as applications, different layers of an application stack, or development and production environments.
* Select a resource group and view its recent API activity, resource configuration changes, related notifications, operational alerts, software inventory, and patch compliance status.
* Take action on each resource group depending on your operational needs.
* Centralize operational data from multiple AWS services and automate tasks across your AWS resources.

154
Q

______________ provides artificial intelligence (AI) powered code suggestions in your IDE.

A

Amazon CodeWhisperer

The key is that it integrates with IDEs.

AI coding companion:
* Generates code suggestions based on comments and existing code
* Offers real-time support for code authoring directly within your integrated development environment (IDE)

Also, AI security scanner:
* Helps identify hard-to-find vulnerabilities

155
Q

Which container orchestration tool is a managed service that you can use to run the Kubernetes container orchestration software on AWS? You might choose this option if you require additional control over your configurations.

A

Amazon EKS

exam – EC2 runs under both; ECS is easier (least effort); EKS has more features

156
Q

Which container orchestration tool is a managed service that offers a more managed model for deploying your containers? It features additional integrations with other AWS services.

A

Amazon ECS

exam – EC2 runs under both; ECS is easier (least effort); EKS has more features

157
Q

____________ is a managed Docker container registry. You push your container images to Amazon ECR and can then pull those images to launch containers. With it, you can compress, encrypt, and control access to your container images.

A

Amazon Elastic Container Registry (Amazon ECR)

158
Q

________ is a highly scalable, high-performance container management service that supports Docker containers. It manages the scaling, maintenance, and connectivity for your containerized applications. As a result, teams can focus on building the applications, not the environment.

A

Amazon Elastic Container Service (Amazon ECS)

159
Q

Which load balancer supports path-based routing to targets such as the following?

/api/users
/api/topics
/api/messages

A

Application Load Balancer

In this example, an Application Load Balancer sends web traffic to each service based on the API path in the request. You register the user service, topic service, and message service with different target groups. When Amazon ECS starts a task for your service, it registers the container and port combination with the service’s target group. The Application Load Balancer routes traffic to and from that container.
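A boto3 sketch of one path-based listener rule; the listener and target group ARNs are placeholders, and you would add similar rules for /api/topics and /api/messages.

import boto3

elbv2 = boto3.client("elbv2")

# Forward /api/users* requests to the user-service target group.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/my-alb/abc/def",  # placeholder
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/users*"]}],
    Actions=[{"Type": "forward",
              "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/users/123"}],  # placeholder
)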

160
Q

_________________ is a certified conformant, managed Kubernetes service helping you by providing highly available and secure clusters while automating key tasks such as patching, node provisioning, and updates.

A

Amazon Elastic Kubernetes Service (Amazon EKS)

161
Q

What is a technology for Amazon ECS and Amazon EKS that you can use to run containers without having to manage servers or clusters? Being serverless, you no longer need to provision, configure, and scale clusters of VMs to run containers. Thus, it removes the need to choose server types, decide when to scale your clusters, or optimize cluster packing.

A

AWS Fargate

exam – fargate manages the raw compute resources fully; you just run and manage the apps

Fargate eliminates the need for you to interact with or think about servers or clusters. With Fargate, you can focus on designing and building your applications instead of managing the infrastructure that runs them.

162
Q

When deciding on the container service to use, what would you choose if you want a solution with less manual configuration and easier integration with other AWS services?

A

ECS

163
Q

When deciding on the container service to use, what would you choose if you want the flexibility to start, run, and scale Kubernetes applications in the AWS Cloud or on premises without installing and operating your own Kubernetes clusters? This option is good for organizations that work with open-source tools.

A

Amazon EKS

Amazon EKS requires more configuration, but offers more control over your environment

164
Q

Ultimately if you want the least effort and easiest option for managing the hosting environment for your containers, what should you choose?

A

ECS with Fargate

165
Q

What would you choose if you wanted to Access AWS services without an internet gateway, NAT gateway, or public IP address?

A

VPC Endpoint

166
Q

Without _____________, a VPC requires an internet gateway and a NAT gateway, or a public IP address, to access serverless services outside the VPC.

A

VPC endpoints

A VPC endpoint provides a reliable path between your VPC and supported AWS services. You do not need an internet gateway, a NAT device, a VPN connection, or an AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service.
Endpoints are virtual devices. They are horizontally scaled, redundant, and highly available VPC components. They permit communication between instances in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic.

167
Q

A ________________ is a target that you specify for a route in your route table. It provides a target for traffic that is destined for a supported AWS service. The following AWS services are supported: Amazon S3 and Amazon DynamoDB.

A

gateway (VPC) endpoint or gateway endpoint

EXAM: Must have routes and only supports 2 services; know that gateway endpoints support only S3 & DynamoDB; if the question has no “on-premises” angle to it, then gateway is the answer
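A small boto3 sketch of creating a gateway endpoint for S3; the VPC ID, route table ID, and Region are placeholders, and the comment notes how an interface endpoint differs.

import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint: added as a target in the route tables you list; S3/DynamoDB only.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.s3",     # Region is an assumption
    RouteTableIds=["rtb-0123456789abcdef0"],      # placeholder route table
)

# An interface endpoint instead uses VpcEndpointType="Interface" with SubnetIds and
# SecurityGroupIds, which creates ENIs in your subnets (no route table entries needed).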

168
Q

What 2 services are only supported by gateway endpoints?

A

Amazon S3 and Amazon DynamoDB

EXAM: for the S3 decision (interface vs. gateway), check whether access is needed from on-premises; if so, it’s interface. If not and you want the cheapest option, it’s gateway.

169
Q

__________________ is an elastic network interface with a private IP address from the IP address range of your subnet. The network interface serves as an entry point for traffic that is destined to a supported service.

A

An interface (VPC) endpoint or interface endpoint

EXAM: Interface endpoints have ENIs and support more services; useful for access from outside the VPC. If the exam question involves reaching a private resource from a public subnet in your VPC, then it’s interface, even if the service is S3 or DynamoDB.

170
Q

With what endpoint can you privately connect your VPC to services as if they were in your VPC. When this endpoint is created, traffic is directed to the new endpoint without changes to any route tables in your VPC.

A

Interface endpoint

171
Q

A ________ connection is a one-to-one relationship between two VPCs. You can have only one between any two VPCs. You can create multiple of these types of connections for each VPC that you own.

A

VPC peering

VPC peering limitations and rules include:
* There is a limit on the number of active and pending VPC peering connections that you can have per VPC.
* You can have only one VPC peering connection between the same two VPCs.
* The maximum transmission unit (MTU) across a VPC peering connection is 1,500 bytes.

exam: can be cross-Region, but there are no transitive peering relationships, meaning A-to-B and B-to-C does not allow A to talk to C, and vice versa

172
Q

What are the Benefits of VPC peering?

A

* Bypasses the internet gateway or virtual private gateway
* Provides highly available connections (no single point of failure)
* Avoids bandwidth bottlenecks
* Uses private IP addresses to direct traffic between VPCs

173
Q

T or F: You can create a full mesh network design by using VPC peering.

A

T

each VPC must have a one-to-one connection with each VPC with which it is approved to communicate.

174
Q

An ___________ connection offers two VPN tunnels. These tunnels go between a virtual private gateway (or a transit gateway) on the AWS side, and a customer gateway on the on-premises side

A

AWS Site-to-Site VPN

exam: always 2 tunnels to ensure failover; but this has limited bandwidth due to VPN overhead, which is why Direct Connect might be a better option

175
Q

Which side of a site-to-site VPN must initiate the tunnels?

A

The customer gateway device must bring up the tunnels for your AWS Site-to-Site VPN connection by generating traffic and initiating the Internet Key Exchange (IKE) negotiation process.

176
Q

_______________ sets up a fiber link from your data center to your AWS resources.

A

AWS Direct Connect

Direct Connect links your internal network to a Direct Connect location over a standard Ethernet fiber-optic cable. One end of the cable is connected to your router, the other to a Direct Connect router. This connection is called the cross-connect. With this connection, you can create virtual interfaces directly to public AWS services (for example, to Amazon S3) or to Amazon VPC, bypassing internet service providers (ISPs) in your network path.

exam: know that it can be up to 100 Gbps; it’s a single point of failure, so not highly available (key point!); secure physical connection; for private connectivity

177
Q

What is a common architecture practice with AWS Site-to-Site VPN and Direct Connect?

A

It is a common architecture practice to have both; this ensures redundancy and business continuity.

178
Q

How are AWS Site-to-Site VPN and Direct Connect priced?

A

Site-to-Site VPN
* Connection fee (per hour)
* Data transfer out (DTO)
** Measured per gigabyte (GB)
** First 100 GB are at no charge

Direct Connect
* Capacity (Mbps)
* Port hours
** Time that a port is provisioned for your use in the data center
* Data transfer out (DTO)
** Measured per gigabyte (GB)

DTO is the egress (data transfer out) charge.

179
Q

When would you choose AWS Site-to-Site VPN over Direct Connect?

A

Choose AWS VPN solutions when you:
* Need a way to quickly establish a network connection between your on-premises networks and your VPC
* Need to stay within a small budget
* Require encryption in transit
* Want a connection that is faster to configure than Direct Connect

NOTE: Limited to 1.25 Gbps connection maximum

180
Q

When would you choose Direct Connect over AWS Site-to-Site VPN?

A

Consider Direct Connect when you:
* Need faster connectivity options than what AWS Site-to-Site VPN can provide
* Are already in a colocation facility that supports Direct Connect
* Need predictable network performance

NOTE: Sub-1, 1, 10, or 100 Gbps connection options
Requires special agreements and physical cabling to the data center
Pay for port hours whether the connection is active or not
Not encrypted by default, but a private, dedicated connection

181
Q

What is used when you need to connect up to 5,000 VPCs and on-premises networks?

A

Transit Gateway

182
Q

A ________ is a network transit hub that you can use to interconnect your VPCs and on-premises networks; it scales elastically based on traffic while supporting multicast.

A

transit gateway

A transit gateway acts as a cloud router to simplify your network architecture.

183
Q

With ____________, you can monitor your VPCs and edge connections from a central console. Integrated with popular software-defined wide area network (SD-WAN) devices, this service helps you identify issues in your global network.

A

Transit Gateway Network Manager

184
Q

T or F: Traffic between a VPC and transit gateway travels across the Internet unencrypted. Therefore, you must establish some type of encryption.

A

F

Traffic between a VPC and transit gateway remains on the AWS global private network and is not exposed to the public internet. Transit Gateway inter-Region peering encrypts all traffic. With no single point of failure or bandwidth bottleneck, it protects you against DDoS attacks and other common exploits.

185
Q

Transit Gateway is made up of what two important components?

A

attachments and route tables

186
Q

A transit gateway attachment is comprised of what 2 items?

A

a source and a destination of packets.

You can attach one or more of the following resources if they are in the same Region as the transit gateway:
* VPC
* VPN connection
* Direct Connect gateway
* Transit Gateway Connect
* Transit Gateway peering connection

You can use VPN connections and Direct Connect gateways to connect your on-premises data centers to transit gateways. With a transit gateway, you can connect with VPCs in the AWS Cloud, which creates a hybrid network.

187
Q

By default, transit gateway attachments are associated with what?

A

default transit gateway route table

If there is only one route table on the transit gateway, then everything attached to it is meshed; the transit gateway operates at Layer 3.

188
Q

T or F: Each transit gateway attachment is associated with exactly one route table. Each route table can be associated with zero to many attachments.

A

T

and this provides one central routing point

189
Q

T or F: Network attachments must be in the same Region as the transit gateway.

A

T

190
Q

You can use _____________ to share your transit gateway with other accounts.

A

You can use AWS Resource Access Manager to share your transit gateway with other accounts. After you share a transit gateway with another AWS account, the account owner can attach their VPCs to your transit gateway. A user from either account can delete the attachment at any time.

191
Q

What is the relationship of a VPC to a Transit Gateway connection called?

A

Attachment

192
Q

What 4 types of resources can you attach to a transit gateway?

A

A transit gateway attachment is a source and a destination of packets. You can attach the following resources to your transit gateway:
* One or more VPCs
* One or more VPN connections
* One or more Direct Connect gateways
* One or more transit gateway peering connections

193
Q

What can be used to connect VPCs, VPNs, and Direct Connect connections together in a fully connected setup?

A

Transit Gateway is the central hub that helps you control communication between attached resources. It is controlled through route tables.

194
Q

What helps to decouple and shift resources to new features and services?

A

serverless

exam: decoupling from an architect concern

195
Q

What are advantages of serverless?

A
  • No infrastructure to provision or manage
  • No servers to provision, operate, or patch
  • Scales automatically by unit of consumption, rather than by server unit
  • Pay-for-value billing model (pay for the unit, rather than by server unit)
  • Built-in availability and fault tolerance
  • No need to architect for availability because it is built into the service
196
Q

What is Lambda used for? Fargate?

A

serverless code versus serverless containers

197
Q

compare SQS & SNS

A

SQS – serverless queues, always 1-to-1; SNS – use when you need more than one recipient; pub/sub, 1-to-many

exam: SQS is persistent; SNS isn’t; SQS is 1-1, serverless, & uses a polling model

198
Q

What’s the key with Kinesis?

A

real time need it now; streaming

199
Q

What does Athena provide?

A

SQL queries; serverless

200
Q

What service can be used to host your API environment?

A

API gateway

With Amazon API Gateway, you can create, publish, maintain, monitor, and secure APIs

exam: can be in front of SQS, Lambda, etc.

201
Q

What are some features / capabilities of API Gateway?

A

Creates a unified API frontend for multiple microservices
Provides distributed denial of service (DDoS) protection and throttling for your backend
Authenticates and authorizes requests to a backend
Throttles, meters, and monetizes API usage by third-party developers

202
Q

What service is fully managed, requires no administrative overhead and little configuration, and stores messages until they are processed and deleted?

A

SQS

Note: it supports both standard and FIFO queues.

Producers push messages and consumers poll (pull) them; a dead-letter queue (DLQ) is an error handler of sorts.

203
Q

How is Loose coupling implemented with Amazon SQS?

A

In this example, a producer application creates customer orders and sends them to an Amazon SQS queue. A consumer application processes orders from the producer application tier. The consumer application polls the queue and receives messages. It then records the messages in an Amazon Relational Database Service (Amazon RDS) database and deletes the processed messages from the SQS queue. Amazon SQS sends messages that cannot be processed to a dead-letter queue where they can be processed later.

exam: know that producer/consumer is associated with SQS
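A minimal boto3 sketch of that producer/consumer flow; the queue name and message body are made up, and the record-to-database step is reduced to a print.

import boto3
import json

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="orders")["QueueUrl"]

# Producer: send an order message to the queue.
sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps({"order_id": 123, "sku": "ABC-1"}))

# Consumer: long-poll the queue, process each message, then delete it.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=10)
for msg in resp.get("Messages", []):
    print("processing", json.loads(msg["Body"]))   # stand-in for writing to Amazon RDS
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])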

204
Q

What are the 2 SQS types of message queues?

A

Standard & FIFO

Standard queues support at-least-once message delivery and provide best-effort ordering. Messages are generally delivered in the same order in which they are sent. However, because of the highly distributed architecture, more than one copy of a message might be delivered out of order. Standard queues can handle a nearly unlimited number of API calls per second. You can use standard message queues if your application can process messages that arrive more than once and out of order.

FIFO queues are designed to enhance messaging between applications when the order of operations and events is critical or where duplicates can’t be tolerated. FIFO queues also provide exactly-once processing, but have a limited number of API calls per second

205
Q

What SQS queue type supports at-least-once message delivery and provide best-effort ordering?

A

Standard

Standard queues support at-least-once message delivery and provide best-effort ordering. Messages are generally delivered in the same order in which they are sent. However, because of the highly distributed architecture, more than one copy of a message might be delivered out of order. Standard queues can handle a nearly unlimited number of API calls per second. You can use standard message queues if your application can process messages that arrive more than once and out of order.

206
Q

What SQS queue type is designed to enhance messaging between applications when the order of operations and events is critical or where duplicates can’t be tolerated?

A

FIFO queues are designed to enhance messaging between applications when the order of operations and events is critical or where duplicates can’t be tolerated. FIFO queues also provide exactly-once processing, but have a limited number of API calls per second.

207
Q

What are some use cases for message queues?

A

-Service-to-service communication
-Asynchronous work items
-State change notifications

Not good for:
-Selecting specific messages
-Large messages (256K & up)

exam: anywhere there are 2 services, think decoupling; know these use cases

208
Q

What service helps you set up, operate, and send notifications from the cloud? The service follows the publish-subscribe (pub-sub) messaging paradigm, with notifications delivered to clients using a push mechanism.

A

Amazon SNS

one-to-many; push algorithm

exam: publisher / subscriber (pub-sub) are key terms here versus SQS

209
Q

Describe the one-to-many SNS push algorithm.

A

Instead of including a specific destination address in each message, a publisher sends a message to the topic. Amazon SNS matches the topic to a list of subscribers for that topic, and delivers the message to each subscriber
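A short boto3 sketch of that flow: create a topic, add a subscriber, and publish once to reach every subscriber; the topic name and email address are hypothetical.

import boto3

sns = boto3.client("sns")

# One topic, many subscribers; publishers never address subscribers directly.
topic_arn = sns.create_topic(Name="order-events")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops@example.com")  # placeholder address

# A single publish is delivered to every confirmed subscriber of the topic.
sns.publish(TopicArn=topic_arn, Subject="New order", Message="Order 123 received")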

210
Q

What are some common use cases for Amazon SNS?

A

You can use Amazon SNS notifications in many ways, for example, the following:
* You can receive immediate notification when an event occurs, such as a specific change to your Auto Scaling group.
* You can push targeted news headlines to subscribers by email or SMS. Upon receiving the email or SMS text, interested readers can choose to learn more by visiting a website or launching an application.
* You can send notifications to an app, indicating that an update is available. The notification message can include a link to download and install the update.

exam: know use cases; use this with cloud watch notifications

211
Q

What are the characteristics of SNS?

A

-Single published message
-No recall options (When a message is delivered successfully, there is no way to recall it. )
-HTTP or HTTPS retry
-Standard or FIFO topics

exam: no durability … once it expires it’s gone

212
Q

What is the fan-out scenario with messaging?

A

This example uses the observer type and illustrates a fan-out scenario using Amazon SNS and Amazon SQS. In a fan-out scenario, a message is sent to an SNS topic and is then replicated and pushed to multiple SQS queues, HTTP endpoints, or email addresses. This allows for parallel asynchronous processing.
In this example, Amazon SNS fans out orders to two different SQS queues. Two Amazon EC2 instances each observe a queue. One of the instances handles the processing or fulfillment of the order, while the other is attached to a data warehouse for analysis of all orders received.

exam: know the fan-out term and that it uses both SNS & SQS; one message with many destinations
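A hedged sketch of the fan-out wiring with boto3: two SQS queues subscribed to one SNS topic, so a single publish lands in both queues. The ARNs are placeholders, and in a real setup each queue’s access policy must also allow the topic to send to it.

import boto3
import json

sns = boto3.client("sns")

topic_arn = "arn:aws:sns:us-east-1:111122223333:orders"                    # placeholder
fulfillment_queue_arn = "arn:aws:sqs:us-east-1:111122223333:fulfillment"   # placeholder
analytics_queue_arn = "arn:aws:sqs:us-east-1:111122223333:analytics"       # placeholder

# Fan-out: both queues receive a copy of every message published to the topic.
sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=fulfillment_queue_arn)
sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=analytics_queue_arn)

sns.publish(TopicArn=topic_arn, Message=json.dumps({"order_id": 123}))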

213
Q

What service supports Collecting, processing, and analyzing data streams in real time? It has the capacity to process streaming data at any scale.

A

Kinesis

214
Q

What service supports ingesting real-time data, such as video, audio, application logs, website clickstreams, and Internet of Things (IoT) telemetry data? The ingested data can be used for machine learning, analytics, and other applications.

A

Kinesis

215
Q

What is the difference between Kinesis Data Streams and Kinesis Data Firehose?

A

exam: this is the key to know;

Kinesis Data Streams: recognize “data streams” (analytics, even though it collects and stores); associate with consuming large amounts of real-time data; example: IoT processing

vs.

Kinesis Data Firehose: loading data and ETL; no intermediary; focuses on back-end processing. Kinesis Data Firehose can send records to Amazon S3, Amazon Redshift, Amazon OpenSearch Service, and any HTTP endpoint that you own.

216
Q

To get started using Kinesis Data Streams, create a stream and specify the number of __________. Each is a uniquely identified sequence of data records in a stream.

A

shards

exam: a stream breaks data up into “manageable chunks” as it moves through the stream; the number of shards determines the throughput (bandwidth) available

Producers write data into the stream. A producer might be an EC2 instance, a mobile client, an on-premises server, or an IoT device. You can send data such as infrastructure logs, application logs, market data feeds, and web clickstream data.

Consumers read the streaming data that the producers generate. A consumer might be an application running on an EC2 instance or AWS Lambda. An application on an EC2 instance will need to scale as the amount of streaming data increases.
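A small boto3 sketch of a producer writing a clickstream record; the stream name, shard count, and partition key are assumptions.

import boto3
import json

kinesis = boto3.client("kinesis")

# Create a stream with 2 shards (shards set the stream's throughput).
# In practice, wait for the stream to become ACTIVE before writing to it.
kinesis.create_stream(StreamName="clickstream", ShardCount=2)

# Producer: records with the same partition key land on the same shard, preserving order there.
kinesis.put_record(
    StreamName="clickstream",
    Data=json.dumps({"user": "u-42", "page": "/home"}).encode("utf-8"),
    PartitionKey="u-42",
)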

217
Q

With _________, you can focus on defining the component interactions rather than writing all of the software to make the interactions work.

A

Step Functions

known as a state engine (vending machine example)

allows you to visually build the end-to-end coordination of services
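A hedged boto3 sketch of defining a tiny two-step state machine in Amazon States Language; the Lambda function ARNs and the IAM role are placeholders.

import boto3
import json

sfn = boto3.client("stepfunctions")

# Two tasks chained together; Step Functions coordinates the calls between them.
definition = {
    "StartAt": "ProcessOrder",
    "States": {
        "ProcessOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:process-order",  # placeholder
            "Next": "NotifyCustomer",
        },
        "NotifyCustomer": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:notify-customer",  # placeholder
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="order-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/stepfunctions-order-role",  # placeholder role
)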

218
Q

AWS WAF

A

exam: higher level protection at layer 7; associate AWS Shield with DDoS

219
Q

__________ provides DNS, domain name registration, and health checks. It was designed to give developers and businesses a reliable and cost-effective way to route end users to internet applications.

A

Route 53

exam: DNS; used for load balancing; HA across AZs; can load balance across Regions with weighted routing

It translates names like example.com into the numeric IP addresses that computers use to connect to each other.
With Route 53, you can purchase and manage domain names, such as example.com, and automatically configure DNS settings for your domains.
Route 53 effectively connects user requests to infrastructure running in AWS, such as Amazon Elastic Compute Cloud (Amazon EC2) instances, Elastic Load Balancing (ELB) load balancers, or Amazon Simple Storage Service (Amazon S3) buckets. You can also use it to route users to infrastructure outside of AWS.
You can configure an Amazon CloudWatch alarm to check on the state of your endpoints. Combine your DNS with health check metrics to monitor and route traffic to healthy endpoints.

220
Q

Compare Route 53 public and private DNS

A

Public hosted zone
* Route to internet-facing resources
* Resolve from the internet
* Use global routing policies

Private hosted zone
* Route to VPC resources
* Resolve from inside the VPC
* Integrate with on-premises private zones using forwarding rules and endpoints

exam: know the difference

221
Q

___________________ contain records that specify how you want to route traffic in your Amazon Virtual Private Cloud (Amazon VPC).

A

Private hosted zones; used for internal subdomains

222
Q

What are the 7 routing policies that determine how Route 53 responds?

A

exam: allows you to load balance with failover (uses health checks against a server, app performance, or other resource; can use a CloudWatch alarm) and weighted routing; know each of these policies. A weighted-routing sketch follows the list below.

  • Simple routing policy – Use for a single resource that performs a given function for your domain. An example is a web server that serves content for the example.com website.
  • Failover routing policy – Use when you want to configure active-passive failover.
  • Geolocation routing policy – Use when you want to route traffic based on the location of your users.
  • Geoproximity routing policy – Use when you want to route traffic based on the location of your resources and, optionally, shift traffic from resources in one location to resources in another.
  • Latency routing policy – Use when you have resources in multiple AWS Regions and you want to route traffic to the Region that provides the lowest latency with less round-trip time.
  • Multivalue answer routing policy – Use when you want Route 53 to respond to DNS queries with up to eight healthy records selected at random.
  • Weighted routing policy – Use to route traffic to multiple resources in proportions that you specify.
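As a sketch of weighted routing with boto3, these two UPSERTs split traffic 70/30 between two endpoints for the same name; the hosted zone ID, record name, and IP addresses are placeholders.

import boto3

route53 = boto3.client("route53")

# Two records for the same name, distinguished by SetIdentifier and weighted 70/30.
route53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE12345",  # placeholder hosted zone
    ChangeBatch={"Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "app.example.com", "Type": "A", "SetIdentifier": "primary",
            "Weight": 70, "TTL": 60, "ResourceRecords": [{"Value": "203.0.113.10"}]}},
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "app.example.com", "Type": "A", "SetIdentifier": "secondary",
            "Weight": 30, "TTL": 60, "ResourceRecords": [{"Value": "203.0.113.20"}]}},
    ]},
)
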
223
Q

__________________ is a global CDN service that accelerates delivery of your websites, APIs, video content, or other web assets. It integrates with other AWS products to give developers and businesses a straightforward way to accelerate content to users. There are no minimum usage commitments

A

CloudFront

improves security because you can add additional services such as WAF and Shield

224
Q

What can you use to help avoid egress costs?

A

CloudFront caching

225
Q

What are 2 methods that AWS provides to improve the performance of your content delivery?

A
  • TCP optimization – CloudFront uses TCP optimization to observe how fast a network is already delivering your traffic and the latency of your current round trips. It then uses that data as input to automatically improve performance.
  • TLS 1.3 support – CloudFront supports TLS 1.3, which provides better performance with a simpler handshake process that requires fewer round trips. It also adds improved security features.
226
Q

What should a customer expect from Shield?

A

Everyone gets Shield Standard; you get extra services with Shield Advanced – the Shield Response Team (SRT), AWS WAF, Firewall Manager, etc.

227
Q

_________________ is a firewall that helps protect your web applications or APIs against common web exploits and bots. It gives you control over how traffic reaches your applications. Create security rules that control bot traffic and block common attack patterns, such as SQL injection (SQLi) or cross-site scripting (XSS).

A

AWS WAF

228
Q

______ simplifies the administration and maintenance tasks of your AWS WAF and Amazon VPC security groups. Set up your rules, Shield protections, and Amazon VPC security groups once. The service automatically applies the rules and protections across your accounts and resources, even as you add new resources.

A

AWS Firewall Manager

  • Centrally set up baseline security.
  • Consistently enforce the protections.
  • Seamlessly manage multiple accounts.
229
Q

What can be used to host AWS services on premises in your office or data center?

A

AWS Outposts family

You need a solution to securely store and process customer data that must remain on premises or in countries outside of an AWS Region, you need to run data-intensive workloads and process data locally, or you want closer control over data analysis, backup, and restore.

230
Q

You can create resources on _____ to support low-latency workloads that must run in close proximity to on-premises data and applications.

A

your Outpost

exam: know these services

231
Q

What is a common reason DR plans fail?

A

Lack of testing

232
Q

What is the number one requirement for high availability?

A

more than one availability zone

233
Q

______ is often confused with high availability, but this concept refers to the built-in redundancy of an application’s components to prevent service interruption. However, it comes at a higher cost.

A

Fault tolerance

234
Q

______________________ is the acceptable amount of data loss measured in time.

A

Recovery Point Objective (RPO)

235
Q

_______ is the time it takes after a disruption to restore a business process to its service level, as defined by the operational level agreement (OLA).

A

Recovery Time Objective (RTO)

236
Q

________ is a family of data transport solutions that move terabytes (TB) to petabytes of data into and out of AWS using storage devices. It is designed to be secure for physical transport. These devices help you retrieve a large quantity of data stored in Amazon S3 much quicker than using high-speed internet.

A

AWS Snow Family

237
Q

_________ is a fully managed backup service that helps you centralize and automate the backup of data across AWS services. It also helps customers support their regulatory compliance obligations and meet business continuity goals.

A

AWS Backup

central place to build your backup policy

AWS Backup works with AWS Organizations. It centrally deploys data protection policies to configure, manage, and govern your backup activity. It works across your AWS accounts and resources, including Amazon EC2 instances and Amazon EBS volumes. You can back up databases such as DynamoDB tables, Amazon DocumentDB and Amazon Neptune graph databases, and Amazon RDS databases, including Amazon Aurora database clusters. You can also back up Amazon EFS, Amazon S3, AWS Storage Gateway volumes, and all versions of Amazon FSx, including FSx for Lustre and FSx for Windows File Server.

238
Q

What are 3 benefits of AWS Backup?

A

Simplicity
Compliance
Control costs

Simplicity: Policy-based and tag-based
backup solution; Backup access policies; provides a central place to manage and monitor your backups

Compliance: Automated backup scheduling; Monitoring and logs of centralized backup activity; Encrypted backups; enforce backup policies, encrypt your backups, protect your backups from manual deletion, and prevent changes to your backup lifecycle settings

Control costs: Automated management of backup retention; No added cost for orchestration; reduces the risk of downtime; reduce operating costs by reducing time spent on manual configuration and by automating backups.

239
Q

What feature of AWS Backups helps with compliance?

A

Vault

When you create a backup vault, you assign an AWS Key Management Service (AWS KMS) encryption key to encrypt backups that do not have their own encryption methods.

Also, it’s a best practice to use Tags for backup – You specify tags that will be assigned to backups created by this plan.

240
Q

What are the four DR recovery scenarios?

A

From cheapest to most expensive (sorted from highest to lowest RTO and RPO):
1. Backup and restore – RPO/RTO: hours
2. Pilot light – RPO/RTO: tens of minutes
3. Warm standby – RPO/RTO: minutes
4. Multi-site active/active – RPO/RTO: real time

241
Q

____________ is a network interface for Amazon EC2 instances that enables customers to run applications requiring high levels of inter-node communications at scale on AWS. Its custom-built operating system (OS) bypass hardware interface enhances the performance of inter-instance communications, which is critical to scaling these applications.

A

Elastic Fabric Adapter (EFA)

242
Q

What service provides secure, frictionless customer identity and access management that scales?

A

Amazon Cognito processes more than 100 billion authentications per month. The service helps you implement customer identity and access management (CIAM) into your web and mobile applications. You can quickly add user authentication and access control to your applications in minutes.

  • Frictionless, customizable customer IAM
  • Advanced security features for sign-up and sign-in
  • High-performing, scalable user directory
  • Standards-based, federated sign-in capabilities

As an alternative to using IAM roles and policies or Lambda authorizers (formerly known as custom authorizers), you can use an Amazon Cognito user pool to control who can access your API in Amazon API Gateway.

243
Q
A