DynamoDB
A fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale.
- Stored on SSD storage
- Spread across 3 geographically distinct data centers
- Eventually consistent reads (default)
- Strongly consistent reads
Eventually Consistent Reads
Consistency across all copies of data is usually reached within a second. Repeating a read after a short time should return the updated data (best read performance).
Strongly Consistent Reads
A strongly consistent read returns a result that reflects all writes that received a successful response prior to the read.
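A minimal boto3 sketch of the two read modes, assuming a hypothetical table named users with a string partition key id; eventually consistent is the default, and ConsistentRead=True requests a strongly consistent read.

```python
import boto3

# Hypothetical table "users" with partition key "id" (string).
dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Eventually consistent read (the default): cheapest and fastest,
# but may not reflect a write completed less than ~1 second ago.
eventual = dynamodb.get_item(
    TableName="users",
    Key={"id": {"S": "user-123"}},
)

# Strongly consistent read: reflects all writes that received a
# successful response prior to this read (consumes twice the read capacity).
strong = dynamodb.get_item(
    TableName="users",
    Key={"id": {"S": "user-123"}},
    ConsistentRead=True,
)

print(eventual.get("Item"), strong.get("Item"))
```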
DynamoDB Pricing
- Write throughput: $0.0065 per hour for every 10 units
- Read throughput: $0.0065 per hour for every 50 units
- Storage: $0.25 per GB per month
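A rough back-of-the-envelope monthly cost calculation using the provisioned-throughput prices quoted above; the capacity figures (100 write units, 200 read units, 50 GB) are made-up example numbers, not anything from these notes.

```python
# Example numbers only: 100 write units, 200 read units, 50 GB stored.
write_units, read_units, storage_gb = 100, 200, 50
hours_per_month = 24 * 30

# $0.0065/hour per 10 write units and per 50 read units; $0.25 per GB-month.
write_cost = (write_units / 10) * 0.0065 * hours_per_month
read_cost = (read_units / 50) * 0.0065 * hours_per_month
storage_cost = storage_gb * 0.25

print(f"write: ${write_cost:.2f}, read: ${read_cost:.2f}, "
      f"storage: ${storage_cost:.2f}, "
      f"total: ${write_cost + read_cost + storage_cost:.2f}")
```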
Good for reads!!!
Redshift
A fast, powerful, fully managed, petabyte-scale data warehouse service in the cloud. Used for data warehousing (OLAP).
Redshift Configuration
single node (160 GB)
multi-node
- Leader node (manages client connections and receives queries)
- Compute nodes (store data and perform queries and computations); up to 128 compute nodes
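A hedged boto3 sketch of creating a multi-node cluster like the one described above; the identifier, credentials, and node type are placeholders, and a real call would also need networking and IAM settings.

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Multi-node cluster: Redshift provisions the leader node automatically,
# plus the requested number of compute nodes (up to 128).
redshift.create_cluster(
    ClusterIdentifier="my-demo-warehouse",  # placeholder name
    ClusterType="multi-node",
    NodeType="dc2.large",                   # example node type
    NumberOfNodes=3,                        # compute nodes only
    DBName="analytics",
    MasterUsername="awsuser",
    MasterUserPassword="ChangeMe123!",      # placeholder; use Secrets Manager in practice
)
```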
Redshift - Columnar Data Storage
Only the columns involved in a query are read and processed, which greatly reduces the amount of data loaded from disk compared to row-based storage.
Redshift - Massively Parallel Processing (MPP)
Redshift automatically distributes data and query load across all nodes and makes it easy to add nodes to your data warehouse, enabling you to maintain fast query performance as your data warehouse grows.
Redshift Pricing
Billed for compute node hours: the total number of hours you run across all your compute nodes for the billing period, at one unit per node per hour.
For example, a 3-node data warehouse cluster running persistently for an entire month would incur 2,160 instance hours (3 nodes x 24 hours x 30 days).
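A quick sanity check of the node-hour math in Python; the 3-node, 30-day figures come straight from the example above.

```python
# 3 compute nodes running 24x7 for a 30-day billing period.
nodes, hours_per_day, days = 3, 24, 30
node_hours = nodes * hours_per_day * days
print(node_hours)  # 2160 instance (node) hours
```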
ElastiCache
A web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud.
Used to improve performance for read-heavy or compute-intensive workloads.
Two ElastiCache engines:
- Memcached
- Redis
Memcached
An in-memory object caching system.
Redis
An open-source, in-memory key-value store that supports data structures such as sorted sets and lists.
ElastiCache Exam Tips
ElastiCache supports Master/Slave replication and Multi-AZ, which can be used to achieve cross-AZ redundancy.
Typically you will be given a scenario where a particular database is under a lot of stress/load; you may be asked which service you should use to alleviate this.
ElastiCache is a good choice if your database is particularly read-heavy and not prone to frequent changes.
Redshift is a good answer if the reason your database is under stress is that management keeps running OLAP (online analytical processing) queries on it, etc.
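A minimal cache-aside sketch for the read-heavy scenario above, using the redis-py client against a hypothetical ElastiCache Redis endpoint; load_user_from_database is a stand-in for whatever slow database query you are trying to offload.

```python
import json
import redis

# Hypothetical ElastiCache Redis endpoint.
cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com",
                    port=6379, decode_responses=True)

def load_user_from_database(user_id):
    # Placeholder for the expensive RDS query you want to offload.
    return {"id": user_id, "name": "example"}

def get_user(user_id):
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:                     # cache hit: skip the database entirely
        return json.loads(cached)
    user = load_user_from_database(user_id)    # cache miss: hit the database once
    cache.setex(key, 300, json.dumps(user))    # keep it warm for 5 minutes
    return user
```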
Aurora
Amazon Aurora is a MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. It provides up to 5 times the performance of MySQL at one tenth the price of a commercial database.
Aurora Scaling
Starts with 10 GB and scales in 10 GB increments up to 64 TB (storage auto-scaling).
Compute resources can scale up to 32 vCPUs and 244 GB of memory.
Maintains 2 copies of your data in each AZ, with a minimum of 3 AZs: 6 copies of your data in total.
Aurora is designed to transparently handle the loss of up to two copies of data without affecting database write availability and up to three copies without affecting read availability.
Storage is self-healing.
Only available in certain regions.
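A hedged boto3 sketch of creating an Aurora MySQL cluster. No storage size appears anywhere in the call because Aurora grows storage automatically in 10 GB increments, which is the point of the list above; all names and credentials are placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# No storage size is requested: Aurora starts at 10 GB and auto-scales
# in 10 GB increments up to 64 TB.
rds.create_db_cluster(
    DBClusterIdentifier="my-aurora-cluster",  # placeholder
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="ChangeMe123!",        # placeholder; use Secrets Manager in practice
    DatabaseName="appdb",
)
```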
Aurora Replicas
2 types of replicas are available:
Aurora Replicas (currently up to 15)
MySQL Read Replicas (currently up to 5)
Failover priority is set by tier: Tier 0 is promoted first, then Tier 1, and so on.
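A sketch of adding an Aurora Replica to the cluster from the earlier sketch and setting its failover priority via PromotionTier (tier 0 is promoted first); identifiers and the instance class are placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Aurora instances are added to an existing cluster; this one becomes
# a replica because the cluster already has a writer.
rds.create_db_instance(
    DBInstanceIdentifier="my-aurora-replica-1",  # placeholder
    DBClusterIdentifier="my-aurora-cluster",     # cluster from the earlier sketch
    Engine="aurora-mysql",
    DBInstanceClass="db.r5.large",               # example instance class
    PromotionTier=1,  # tier 0 is promoted first during failover, then tier 1, ...
)
```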
AWS Database Types Summary
RDS (OLTP, online transaction processing): SQL Server, MySQL, PostgreSQL, Oracle, Aurora, MariaDB
DynamoDB: NoSQL
Redshift: OLAP (data warehousing)
ElastiCache: Memcached, Redis
RDS Multi-AZ
RDS gives you a DNS endpoint when you create the instance; once you turn on Multi-AZ, failover to the standby happens automatically via that same endpoint.
Failover: you can reboot the instance (with failover) to test it.
Read Replica
Up to 5 read replicas per source database instance.
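A sketch of the two operations mentioned above: creating a read replica from a source RDS instance, and rebooting with ForceFailover=True to test a Multi-AZ failover; instance identifiers are placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Add a read replica (up to 5 per source) to offload read traffic.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-replica-1",      # placeholder
    SourceDBInstanceIdentifier="mydb-primary",  # placeholder
)

# Test a Multi-AZ failover: the DNS endpoint stays the same,
# but the standby in another AZ becomes the new primary.
rds.reboot_db_instance(
    DBInstanceIdentifier="mydb-primary",
    ForceFailover=True,  # only valid when Multi-AZ is enabled
)
```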
Aurora Scaling
2 copies of your data are contained in each availability zone, with a minimum of 3 availability zones: 6 copies in total.
Aurora is designed to transparently handle the loss of up to 2 copies of data without affecting write availability and up to 3 copies without affecting read availability.
2 types of replicas: Aurora Replicas (up to 15) and MySQL Read Replicas of an Aurora database (up to 5).
DynamoDB vs RDS
DynamoDB offers "push button" scaling, meaning that you can scale your database on the fly, without any downtime.
RDS is not so easy; you usually have to use a bigger instance size or add a read replica.
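A sketch of DynamoDB's "push button" scaling: a single update_table call changes provisioned throughput on a live table with no downtime. The table name and capacity numbers are made-up examples.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Scale a live table on the fly - no downtime, no instance resizing.
dynamodb.update_table(
    TableName="users",  # placeholder table
    ProvisionedThroughput={
        "ReadCapacityUnits": 500,   # example values
        "WriteCapacityUnits": 200,
    },
)
```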
DynamoDB
Stored on SSD storage
Spread across 3 geographically distinct data centers
Eventually consistent reads (default; consistency usually reached within a second)
Strongly consistent reads (reflect all writes that completed before the read)
Redshift configuration
Single node (160 GB)
Multi-node
- Leader node (manages client connections and receives queries)
- Compute nodes (store data and perform queries and computations); up to 128 compute nodes
ElastiCache
A web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory caches instead of relying entirely on slower disk-based databases. ElastiCache supports two open-source in-memory caching engines:
- Memcached
- Redis
Study the RDS FAQ.
VPC Overview
know it well
virtual data center in the cloud
Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways.
You can easily customize the network configuration for your Amazon VPC. For example, you can create a public-facing subnet for your web servers that has access to the internet, and place your backend systems, such as databases or application servers, in a private subnet with no internet access. You can leverage multiple layers of security, including security groups and network access control lists, to help control access to Amazon EC2 instances in each subnet.
Additionally, you can create a hardware Virtual Private Network (VPN) connection between your corporate data center and your VPC and leverage the AWS cloud as an extension of your corporate data center.
What can you do in a VPC? (see the boto3 sketch after this list)
Launch instances into a subnet of your choosing
Assign custom IP address ranges in each subnet
Configure route tables between subnets
Create an internet gateway and attach it to your VPC (only one internet gateway per VPC)
Much better security control over your AWS resources (use subnets to segment resources and network ACLs to block specific IP addresses)
Instance security groups (can span AZs and multiple subnets)
Subnet network access control lists (ACLs): can block IP addresses
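A hedged boto3 sketch that walks through the list above: a custom VPC, one subnet, an internet gateway attached to the VPC, and a route table with a default route out to the internet. All CIDR ranges and the AZ are example values.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Custom VPC with an example CIDR range.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# One subnet in one AZ (a subnet cannot span AZs).
subnet_id = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="us-east-1a"
)["Subnet"]["SubnetId"]

# Internet gateway: only one can be attached to a VPC.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# A route table with a default route to the internet makes the subnet "public".
rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)
```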
Default VPC vs Custom VPC
The default VPC is user friendly, allowing you to immediately deploy instances.
All subnets in the default VPC have a route out to the internet (you don't get a private subnet in the default VPC).
Each EC2 instance has both a public and a private IP address (in a custom VPC you won't get both unless you set them up).
VPC Peering
Allows you to connect one VPC with another via a direct network route using private IP addresses
Instances behave as if they were on the same private network
You can peer VPCs with other AWS accounts as well as with other VPCs in the same account
Peering is in a star configuration, i.e. 1 central VPC peered with 4 others. NO TRANSITIVE PEERING.
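A sketch of requesting and accepting a peering connection between two hypothetical VPC IDs; remember that peering is not transitive, so every pair of VPCs that needs to communicate must be peered explicitly.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request a peering connection from the central VPC to a spoke VPC.
pcx_id = ec2.create_vpc_peering_connection(
    VpcId="vpc-11111111",      # hypothetical requester (central) VPC
    PeerVpcId="vpc-22222222",  # hypothetical accepter (spoke) VPC
)["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# The owner of the accepter VPC must accept the request.
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Peering is NOT transitive: spoke-to-spoke traffic still needs its own
# peering connection (plus routes on both sides pointing at that connection).
```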
EXAM tips for VPC
Think of a VPC as a logical datacenter in AWS
Consists of IGWs (or virtual private gateways), route tables, network access control lists (NACLs), subnets, and security groups
1 subnet = 1 AZ (a subnet cannot span AZs)
Security Groups are Stateful, Network access control lists are stateless
NO TRANSITIVE PEERING