test questions Flashcards
Cold Attach
Warm Attach
Hot Attach
You can attach a network interface to an instance when it’s running (hot attach), when it’s stopped (warm attach), or when the instance is being launched (cold attach). You can detach secondary network interfaces when the instance is running or stopped; however, you can’t detach the primary network interface. You can move a network interface from one instance to another if the instances are in the same Availability Zone and VPC but in different subnets.

When launching an instance using the CLI, API, or an SDK, you can specify the primary network interface and additional network interfaces. Launching an Amazon Linux or Windows Server instance with multiple network interfaces automatically configures interfaces, private IPv4 addresses, and route tables on the operating system of the instance. A warm or hot attach of an additional network interface may require you to manually bring up the second interface, configure the private IPv4 address, and modify the route table accordingly. Instances running Amazon Linux or Windows Server automatically recognize the warm or hot attach and configure themselves.

Attaching another network interface to an instance (for example, a NIC teaming configuration) cannot be used as a method to increase or double the network bandwidth to or from the dual-homed instance. If you attach two or more network interfaces from the same subnet to an instance, you may encounter networking issues such as asymmetric routing. If possible, use a secondary private IPv4 address on the primary network interface instead. For more information, see Assigning a secondary private IPv4 address. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html
Changing the Tenancy of an Instance
Dedicated - hardware that’s dedicated to a single customer.
Host - Dedicated Hosts give you additional visibility and control over how instances are placed on a physical server, and you can reliably use the same physical server over time.
You can change tenancy from dedicated to host, and vice versa, after the instance is stopped. The change takes effect at the next launch (a warm approach).
Warm standby
Pilot Light
Multi-site/Hot Standby
backup and restore
Pilot Light: only the critical core is provisioned in the backup site, e.g. a replica DB instance. Critical data is kept at the ready so the full environment can be quickly brought up around it when needed.
———
Warm Standby:
A “smaller scale version” of the production resources runs on standby at all times; once failover occurs, it scales up. This method keeps a duplicate of your business’ core elements running, which makes for little downtime and an almost seamless transition.
Little Downtime
———
Multi-Site Solution:
NO DOWNTIME; Also known as a Hot Standby, this method “fully replicates” your company’s
data/applications between two or more active locations and splits your traffic/usage between them. If a disaster strikes, everything is simply rerouted to the unaffected area, which means you’ll suffer almost zero downtime. However, by running two separate environments simultaneously, you will obviously incur much higher costs.
IOPS AND LEVELS: EBS gp2, io1, st1, sc1, and per-instance maximums
- gp2: up to 16,000 IOPS per volume, baseline of 3 IOPS/GiB
- io1: up to 64,000 IOPS per volume, up to a 50:1 IOPS-to-GiB ratio, up to 1,000 MiB/s per volume
- st1: 250 MB/s baseline, up to 500 MB/s burst per volume
- sc1: up to 250 MB/s per volume
- Per-instance max: 80,000 IOPS and 2,375 MB/s
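The gp2 and io1 numbers above can be sanity-checked with a small calculation. This is a sketch assuming gp2’s 3 IOPS/GiB baseline (floor of 100, ceiling of 16,000) and io1’s 50:1 IOPS-to-GiB cap:

```python
def gp2_baseline_iops(size_gib: int) -> int:
    """gp2 baseline: 3 IOPS per GiB, minimum 100, capped at 16,000."""
    return min(max(3 * size_gib, 100), 16_000)

def io1_max_iops(size_gib: int) -> int:
    """io1: up to 50 IOPS per provisioned GiB, capped at 64,000."""
    return min(50 * size_gib, 64_000)

print(gp2_baseline_iops(1000))  # 3000 — a 1 TiB gp2 volume
print(io1_max_iops(100))        # 5000 — most you can provision on 100 GiB io1
```

Handy for exam questions that give you a volume size and ask what IOPS you can expect or provision.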
EBS VS INSTANCE STORE IOPS
EBS
Require up to 64,000 IOPS and 1,000 MiB/s per volume
Require up to 80,000 IOPS and 2,375 MB/s per instance
When to use Instance Store
Great value, they’re included in the cost of an instance.
More than 80,000 IOPS and 2,375 MB/s
If you need temporary storage, or can handle volatility.
instance vs ebs general
Instance Store
Direct (local) attached storage
Super fast
Ephemeral storage or temporary storage
Elastic Block Store (EBS)
Network attached storage
Volumes delivered over the network
Persistent storage lives on past the lifetime of the instance
Creating a Canary
CloudWatch
The purpose of a canary deployment is to reduce the risk of deploying a new version that impacts the workload. The method will incrementally deploy the new version, making it visible to new users in a slow fashion.
CloudWatch Synthetics (announced at AWS re:Invent 2019) allows you to monitor your sites, API endpoints, web workflows, and more. … As you create your canaries, you can set CloudWatch alarms so that you are notified when thresholds based on performance, behavior, or site integrity are crossed.
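The incremental rollout a canary deployment performs can be sketched as a simple traffic-weight schedule (illustrative only; the function name and step logic are mine, not an AWS API):

```python
def canary_weights(steps: int):
    """Yield (old_version_pct, new_version_pct) pairs that shift
    traffic to the new version in equal increments."""
    for i in range(1, steps + 1):
        new = round(100 * i / steps)
        yield (100 - new, new)

# Shift traffic in four steps; alarms would roll back on any bad step.
print(list(canary_weights(4)))  # [(75, 25), (50, 50), (25, 75), (0, 100)]
```

Each step only proceeds if monitoring (e.g. a CloudWatch alarm) stays healthy, which is what limits the blast radius of a bad release.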
aws import export
AWS Import/Export is a service you can use to transfer large amounts of data from physical storage devices into AWS. You mail your portable storage devices to AWS and AWS Import/Export transfers data directly off of your storage devices using Amazon’s high-speed internal network.
Application Load Balancer
A listener checks for connection requests from clients, using the protocol and port that you configure.
Each rule consists of a priority, one or more actions, and one or more conditions. When the conditions for a rule are met, then its actions are performed. You must define a default rule for each listener, and you can optionally define additional rules.
Operates at the seventh layer (application layer) of the Open Systems Interconnection (OSI) model.
Support for path-based routing. You can configure rules for your listener that forward requests based on the URL in the request. This enables you to structure your application as smaller services, and route requests to the correct service based on the content of the URL.
Support for host-based routing. You can configure rules for your listener that forward requests based on the host field in the HTTP header. This enables you to route requests to multiple domains using a single load balancer.
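The priority/condition/action evaluation described above can be sketched as follows (a toy model; the dict field names are hypothetical, not the ELB API):

```python
def route(rules, default_action, path, host):
    """Evaluate listener rules in priority order; the first rule whose
    conditions all match wins, otherwise the default rule applies."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        conds = rule["conditions"]
        if conds.get("path") and not path.startswith(conds["path"]):
            continue  # path-based condition failed
        if conds.get("host") and host != conds["host"]:
            continue  # host-based condition failed
        return rule["action"]
    return default_action

rules = [
    {"priority": 10, "conditions": {"path": "/api"}, "action": "api-targets"},
    {"priority": 20, "conditions": {"host": "img.example.com"}, "action": "img-targets"},
]
print(route(rules, "web-targets", "/api/users", "example.com"))  # api-targets
print(route(rules, "web-targets", "/", "img.example.com"))       # img-targets
print(route(rules, "web-targets", "/", "example.com"))           # web-targets
```

This is why one ALB can front several microservices and several domains at once: routing is decided per request, not per load balancer.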
benefits of application load balancer
Benefits of migrating from a Classic Load Balancer
Using an Application Load Balancer instead of a Classic Load Balancer has the following benefits:
Support for path-based routing. You can configure rules for your listener that forward requests based on the URL in the request. This enables you to structure your application as smaller services, and route requests to the correct service based on the content of the URL.
Support for host-based routing. You can configure rules for your listener that forward requests based on the host field in the HTTP header. This enables you to route requests to multiple domains using a single load balancer.
Support for routing based on fields in the request, such as standard and custom HTTP headers and methods, query parameters, and source IP addresses.
Support for routing requests to multiple applications on a single EC2 instance. You can register each instance or IP address with the same target group using multiple ports.
Support for redirecting requests from one URL to another.
Support for returning a custom HTTP response.
Support for registering targets by IP address, including targets outside the VPC for the load balancer.
Support for registering Lambda functions as targets.
Support for the load balancer to authenticate users of your applications through their corporate or social identities before routing requests.
Support for containerized applications. Amazon Elastic Container Service (Amazon ECS) can select an unused port when scheduling a task and register the task with a target group using this port. This enables you to make efficient use of your clusters.
Support for monitoring the health of each service independently, as health checks are defined at the target group level and many CloudWatch metrics are reported at the target group level. Attaching a target group to an Auto Scaling group enables you to scale each service dynamically based on demand.
Access logs contain additional information and are stored in compressed format.
Improved load balancer performance.
TLS and SSL with load balancer
Only for layer 7 load balancers, i.e. the Classic Load Balancer or, more recently, the Application Load Balancer (ALB).
DAX vs elasticache
Elasticache is a cache engine based on Memcached or Redis, and it’s usable with RDS engines and DynamoDB.
DAX is AWS technology and it’s usable only with DynamoDB.
Amazon ElastiCache is categorized as Data Replication, Database as a Service (DBaaS), and Key Value Databases
Cache frequently accessed data in-memory.
Amazon DynamoDB Accelerator (DAX) is categorized as Web Server Accelerator
Delivers up to 10x performance improvement from milliseconds to microseconds or even at millions of requests per second.
dax
Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache that can reduce Amazon DynamoDB response times from milliseconds to microseconds, even at millions of requests per second. While DynamoDB offers consistent single-digit millisecond latency, DynamoDB with DAX takes performance to the next level with response times in microseconds for millions of requests per second for read-heavy workloads. With DAX, your applications remain fast and responsive, even when a popular event or news story drives unprecedented request volumes your way. No tuning required.
ElastiCache
Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases. There are two types of ElastiCache available: Memcached and Redis. Here is a good overview and comparison between them: https://aws.amazon.com/elasticache/redis-vs-memcached/
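The pattern both DAX and ElastiCache implement is an in-memory cache in front of a slower store. A toy read-through sketch (illustrative Python, not an AWS API):

```python
class ReadThroughCache:
    """Minimal read-through cache: serve hits from memory, fall back
    to the slower backing store on a miss and remember the result."""
    def __init__(self, backing_store):
        self.backing_store = backing_store  # e.g. a DynamoDB or RDS query
        self.cache = {}
        self.hits = self.misses = 0

    def get(self, key):
        if key in self.cache:
            self.hits += 1
            return self.cache[key]       # microsecond-class path
        self.misses += 1
        value = self.backing_store(key)  # millisecond-class path
        self.cache[key] = value
        return value

db = lambda k: f"row-for-{k}"
c = ReadThroughCache(db)
c.get("user#1"); c.get("user#1")
print(c.hits, c.misses)  # 1 1
```

The difference between the services is mainly scope: DAX does this transparently for DynamoDB only, while ElastiCache (Memcached/Redis) is a general-purpose cache you wire up yourself.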
vCPU limit On-Demand Instances
There is a limit on the number of running On-Demand Instances per AWS account per Region. On-Demand Instance limits are managed in terms of the number of virtual central processing units (vCPUs) that your running On-Demand Instances are using, regardless of the instance type
- before you had limits for each EC2 instance type. That’s a nightmare to manage if you run different types of instances for different types of load. At scale, all you care about is computing power
- each instance type comes with a certain number of vCPU (see here: https://ec2instances.info/)
- now, instead of so many limits for the so many types of EC2 instances, you get just one limit to manage your entire EC2 fleet, and that’s the vCPU limit, which is computed thanks to mapping the instance type you’re currently using to the number of vCPU. This allows you to run mixed workloads of on-demand with different instance types without shooting yourself in the foot and hitting some random instance limit
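The vCPU-based limit reduces to a single sum over the fleet. A sketch (the vCPU counts below are illustrative; check ec2instances.info for real values):

```python
# Hypothetical vCPU counts per instance type.
VCPUS = {"m5.large": 2, "m5.xlarge": 4, "c5.2xlarge": 8}

def fleet_vcpus(fleet):
    """Total vCPUs consumed by a mixed on-demand fleet,
    compared against one regional vCPU limit."""
    return sum(VCPUS[itype] * count for itype, count in fleet.items())

fleet = {"m5.large": 10, "c5.2xlarge": 5}
print(fleet_vcpus(fleet))  # 60
```

One number to watch instead of a per-instance-type limit for every family you run.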
Amazon Redshift clusters
Amazon Redshift is a data warehouse product
An Amazon Redshift cluster consists of nodes. Each cluster has a leader node and one or more compute nodes. The leader node receives queries from client applications, parses the queries, and develops query execution plans. The leader node then coordinates the parallel execution of these plans with the compute nodes and aggregates the intermediate results from these nodes. It then finally returns the results back to the client applications.
Compute nodes execute the query execution plans and transmit data among themselves to serve these queries. The intermediate results are sent to the leader node for aggregation before being sent back to the client applications. For more information about leader nodes and compute nodes, see Data warehouse system architecture in the Amazon Redshift Database Developer Guide.
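The leader/compute split above can be sketched as a toy scatter-gather aggregation (illustrative Python, not Redshift’s actual engine):

```python
def leader_query(compute_nodes, query):
    """Leader node fans the plan out to compute nodes, then
    aggregates their intermediate results for the client."""
    partials = [node(query) for node in compute_nodes]  # parallel in reality
    return sum(partials)                                # e.g. a COUNT aggregate

# Each "compute node" counts matching rows in its own slice of the data.
slices = [[1, 5, 9], [2, 6], [7, 8, 3]]
nodes = [lambda q, s=s: sum(1 for row in s if row > q) for s in slices]
print(leader_query(nodes, 4))  # 5  (rows > 4: 5, 9, 6, 7, 8)
```

The key idea for the exam: clients only ever talk to the leader node; compute nodes work on their own data slices in parallel.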
security group limits
5 per network interface (default quota)
You can have 60 inbound and 60 outbound rules per security group (making a total of 120 rules). This quota is enforced separately for IPv4 rules and IPv6 rules; for example, a security group can have 60 inbound rules for IPv4 traffic and 60 inbound rules for IPv6 traffic.
CloudWatch default metrics
CPU
DISK
NETWORK
CRON JOBS
Scheduled tasks
Amazon ECS supports the ability to schedule tasks on either a cron-like schedule or in response to CloudWatch Events. This is supported for Amazon ECS tasks using both the Fargate and EC2 launch types.
SES VS SNS
SES is for BULK and transactional EMAIL.
SNS is for notifications and automation between decoupled services.
SNS can deliver to phones (SMS), SQS, mobile push, HTTP endpoints, etc.
Amazon SES belongs to “Transactional Email” category of the tech stack, while Amazon SNS can be primarily classified under “Mobile Push Messaging”.
What is Amazon SES? Bulk and transactional email-sending service. Amazon SES eliminates the complexity and expense of building an in-house email solution or licensing, installing, and operating a third-party email service. The service integrates with other AWS services, making it easy to send emails from applications being hosted on services such as Amazon EC2.
What is Amazon SNS? Fully managed push messaging service. Amazon Simple Notification Service makes it simple and cost-effective to push to mobile devices such as iPhone, iPad, Android, Kindle Fire, and internet connected smart devices, as well as pushing to other distributed services. Besides pushing cloud notifications directly to mobile devices, SNS can also deliver notifications by SMS text message or email, to Simple Queue Service (SQS) queues, or to any HTTP endpoint.
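The fan-out behavior that makes SNS useful for decoupling can be sketched as a toy topic (illustrative only, not the SNS API):

```python
class Topic:
    """Toy SNS-style topic: one publish fans out to every
    subscribed endpoint (SQS queue, HTTP endpoint, SMS, ...)."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, endpoint):
        self.subscribers.append(endpoint)

    def publish(self, message):
        for endpoint in self.subscribers:
            endpoint(message)  # each subscriber gets its own copy

received = []
topic = Topic()
topic.subscribe(lambda m: received.append(("sqs", m)))
topic.subscribe(lambda m: received.append(("http", m)))
topic.publish("order#42")
print(received)  # [('sqs', 'order#42'), ('http', 'order#42')]
```

Contrast with SES, which is point-to-point email delivery rather than publish/subscribe fan-out.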
RDS compatibility with failover
MariaDB, MySQL, Oracle, and PostgreSQL
Amazon RDS uses several different technologies to provide failover support. Multi-AZ deployments for MariaDB, MySQL, Oracle, and PostgreSQL DB instances use Amazon’s failover technology. SQL Server DB instances use SQL Server Database Mirroring (DBM) or Always On Availability Groups (AGs).
What can an EBS volume do when snapshotting the volume is in progress
The volume can be used normally while the snapshot is in progress.
You can create a point-in-time snapshot of an EBS volume and use it as a baseline for new volumes or for data backup. If you make periodic snapshots of a volume, the snapshots are incremental; the new snapshot saves only the blocks that have changed since your last snapshot. Snapshots occur asynchronously; the point-in-time snapshot is created immediately, but the status of the snapshot is pending until the snapshot is complete (when all of the modified blocks have been transferred to Amazon S3), which can take several hours for large initial snapshots or subsequent snapshots where many blocks have changed. While it is completing, an in-progress snapshot is not affected by ongoing reads and writes to the volume. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-snapshot.html
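The incremental behavior (“only the blocks that have changed since your last snapshot”) can be sketched as a block diff (a toy model, not how EBS is implemented internally):

```python
def incremental_snapshot(volume, last_snapshot):
    """Save only the blocks that changed since the previous snapshot."""
    prev = last_snapshot or {}
    return {blk: data for blk, data in volume.items() if prev.get(blk) != data}

vol = {0: "aaa", 1: "bbb", 2: "ccc"}
snap1 = incremental_snapshot(vol, None)   # first snapshot: all 3 blocks
vol[1] = "BBB"                            # one block modified afterwards
snap2 = incremental_snapshot(vol, snap1)  # second snapshot: only block 1
print(len(snap1), len(snap2))  # 3 1
```

This is why frequent snapshots stay cheap: each one stores deltas, while restores still see a complete point-in-time volume.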
ENI attachments time
Instances running Amazon Linux or Windows Server automatically recognize the warm or hot attach and configure themselves.
when to use instance store over EBS
Beyond 80,000 IOPS and 2,375 MB/s per instance
TEMPORARY
STATELESS
NEEDS HIGH IOPS