MIGs & Storage in GCP Flashcards
What is an Unmanaged Instance group?
Unlike managed instance groups, unmanaged instance groups are collections of distinct VMs that do not share a common instance template. You create a group and then add individual VMs to it. In the Google Cloud console, go to the Instance groups page and click Create instance group.
In what scenario should we use unmanaged instance groups?
Use unmanaged instance groups if you need to apply load balancing to groups of heterogeneous instances, or if you need to manage the instances yourself. You can add up to 2,000 VMs to a group. If you want to add more than 2,000 VMs to the group, contact support.
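As a sketch, the same workflow can be done from the gcloud CLI; the group, zone, and instance names below are placeholders:

```shell
# Create an empty unmanaged instance group in a zone (names are examples).
gcloud compute instance-groups unmanaged create my-unmanaged-group \
    --zone=us-central1-a

# Add existing, heterogeneous VMs to the group.
gcloud compute instance-groups unmanaged add-instances my-unmanaged-group \
    --zone=us-central1-a \
    --instances=vm-small,vm-large
```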
How are VPCs and Subnets managed when creating MIGs?
Network and subnet
When you create a managed instance group, you must reference an existing instance template. The instance template defines the VPC network and subnet that member instances use. For auto mode VPC networks, you can omit the subnet; this instructs Google Cloud to select the automatically created subnet in the region specified in the template. If you omit a VPC network, Google Cloud attempts to use the VPC network named default.
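For example, a template can be pinned to a custom VPC network and subnet at creation time; the network, subnet, and template names below are placeholders:

```shell
# Create an instance template tied to a custom VPC network and subnet.
# Any MIG created from this template places its VMs in this subnet.
gcloud compute instance-templates create my-template \
    --machine-type=e2-medium \
    --network=my-vpc \
    --subnet=my-subnet \
    --region=us-central1
```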
Can I use MIGs for Containers?
Containers
You can simplify application deployment by deploying containers to instances in managed instance groups.
When you specify a container image in an instance template and then use that template to create a managed instance group, each VM is created with a container-optimized OS that includes Docker, and your container starts automatically on each VM in the group.
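A minimal sketch of a container-based template (the image path is a placeholder):

```shell
# Create a template that runs a container; Compute Engine uses a
# Container-Optimized OS image and starts the container on boot.
gcloud compute instance-templates create-with-container my-container-template \
    --container-image=gcr.io/my-project/my-app:latest
```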
4 Questions: Can we use MIGs with preemptible instances? How long do they last? What happens if they shut down? Will I be able to use them again?
Yes. Groups of preemptible instances
For workloads where minimal cost is more important than speed of execution, you can reduce the cost of your workload by using preemptible VM instances in your instance group.
Preemptible instances last up to 24 hours and are preempted gracefully: your application has 30 seconds to exit correctly. Preemptible instances can be preempted at any time, but autohealing brings the instances back when preemptible capacity becomes available again.
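As a sketch, preemptibility is set on the instance template, so every VM a MIG creates from it is preemptible (template name and machine type are examples):

```shell
# Template whose VMs are preemptible; a MIG built from it will
# recreate preempted instances when capacity becomes available again.
gcloud compute instance-templates create preemptible-template \
    --machine-type=e2-standard-2 \
    --preemptible
```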
What are MIGs? What types are there?
An instance group is a collection of virtual machine (VM) instances that you can manage as a single entity.
Compute Engine offers two kinds of VM instance groups, managed and unmanaged:
- Managed instance groups (MIGs) let you operate apps on multiple identical VMs. You can make your workloads scalable and highly available by taking advantage of automated MIG services, including: autoscaling, autohealing, regional (multiple zone) deployment, and automatic updating.
- Unmanaged instance groups let you load balance across a fleet of VMs that you manage yourself.
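For illustration, a zonal MIG can be created from an existing template in one command (all names are placeholders):

```shell
# Create a zonal MIG of 3 identical VMs from an existing template.
# Use --region instead of --zone for a regional (multi-zone) MIG.
gcloud compute instance-groups managed create my-mig \
    --template=my-template \
    --size=3 \
    --zone=us-central1-a
```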
MIG Use Cases
Use a managed instance group (MIG) for scenarios like these:
Stateless serving workloads, such as a website frontend
Stateless batch, high-performance, or high-throughput compute workloads, such as image processing from a queue
Stateful applications, such as databases, legacy applications, and long-running batch computations with checkpointing
Benefits of MIGs
Benefits
MIGs offer the following advantages:
- High availability.
  - Automatic repair of failed VMs. If a VM in the group stops, crashes, gets preempted (Spot VMs), or is deleted by an action not initiated by the MIG, the MIG automatically recreates that VM based on its original configuration (same VM name, same template) so that the VM can resume its work.
  - Application-based autohealing. You can also set up an application-based health check, which periodically verifies that your application responds as expected on each of the MIG’s instances. If an application is not responding on a VM, the MIG automatically recreates that VM for you. Checking that an application responds is more precise than simply verifying that a VM is up and running.
  - Regional (multiple zone) coverage. Regional MIGs let you spread app load across multiple zones. This replication protects against zonal failures; if a zone fails, your app can continue serving traffic from instances running in the remaining available zones in the same region.
- Load balancing. MIGs work with load balancing services to distribute traffic across all of the instances in the group.
- Scalability. When your apps require additional compute resources, autoscaled MIGs can automatically grow the number of instances in the group to meet demand. If demand drops, autoscaled MIGs can automatically shrink to reduce your costs.
- Automated updates. The MIG automatic updater lets you safely deploy new versions of software to instances in your MIG and supports a flexible range of rollout scenarios, such as rolling updates and canary updates. You can control the speed and scope of deployment as well as the level of disruption to your service.
- Support for stateful workloads. You can use MIGs for building highly available deployments and automating operation of applications with stateful data or configuration, such as databases, DNS servers, legacy monolith applications, or long-running batch computations with checkpointing. Stateful MIGs preserve each instance’s unique state (instance name, attached persistent disks, and metadata) on machine restart, recreation, auto-healing, and update events.
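As a sketch of the scalability benefit, autoscaling can be enabled on an existing MIG; the replica counts and CPU target below are example values:

```shell
# Enable autoscaling on an existing MIG: scale between 2 and 10 VMs,
# targeting 60% average CPU utilization across the group.
gcloud compute instance-groups managed set-autoscaling my-mig \
    --zone=us-central1-a \
    --min-num-replicas=2 \
    --max-num-replicas=10 \
    --target-cpu-utilization=0.6
```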
What is MIG Autohealing?
Automatic repair and autohealing
Managed instance groups maintain high availability of your applications by proactively keeping your instances available. A MIG automatically repairs failed instances by recreating them.
You might also want to repair instances when an application freezes, crashes, or runs out of memory. Application-based autohealing improves application availability by relying on a health checking signal that detects application-specific issues such as freezing, crashing, or overloading. If a health check determines that an application has failed on a VM, the group automatically recreates that VM instance.
For more information, see About repairing VMs in a MIG.
Health checking
The health checks used to monitor MIGs are similar to the health checks used for load balancing, with some differences in behavior. Load balancing health checks help direct traffic away from non-responsive instances and toward healthy instances; these health checks do not cause Compute Engine to recreate instances. On the other hand, managed instance group health checks proactively signal to delete and recreate instances that become UNHEALTHY.
For the majority of scenarios, use separate health checks for load balancing and for autohealing. Health checking for load balancing can and should be more aggressive because these health checks determine whether an instance receives user traffic. Because customers might rely on your services, you want to catch non-responsive instances quickly so you can redirect traffic if necessary. In contrast, health checking for autohealing causes MIGs to proactively replace failing instances, so this health check should be more conservative than a load balancing health check.
For more information, see Set up an application health check and autohealing.
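A minimal sketch of wiring a conservative health check to a MIG for autohealing (names, path, and thresholds are example values):

```shell
# Conservative HTTP health check intended for autohealing.
gcloud compute health-checks create http autohealing-check \
    --port=80 \
    --request-path=/healthz \
    --check-interval=30s \
    --timeout=10s \
    --unhealthy-threshold=3 \
    --healthy-threshold=2

# Attach it to the MIG; --initial-delay gives the application time
# to boot before autohealing may recreate a VM.
gcloud compute instance-groups managed update my-mig \
    --zone=us-central1-a \
    --health-check=autohealing-check \
    --initial-delay=300
```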
How does Load Balancing work to serve traffic between VMs?
Google Cloud load balancing uses instance groups, both managed and unmanaged, to serve traffic. Depending on the type of load balancer you are using, you can add instance groups to a target pool or backend service.
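For example, an instance group is attached to a load balancer by adding it as a backend of a backend service (the service and group names below are placeholders):

```shell
# Add an instance group as a backend of an existing global backend service.
gcloud compute backend-services add-backend my-backend-service \
    --instance-group=my-mig \
    --instance-group-zone=us-central1-a \
    --global
```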
For workloads that need primary storage with high performance, what should I use?
Use Local SSDs, Persistent Disks, or Hyperdisks, depending on your requirements.
Requirement –> Recommendation
Fast scratch disk or cache –> Use local SSD disks (ephemeral).
Sequential IOPS –> Use Persistent Disks with the pd-standard disk type.
IOPS-intensive workload –> Use Persistent Disks with the pd-extreme or pd-ssd disk type.
Balance between performance and cost –> Use Persistent Disks with the pd-balanced disk type.
Scalable performance and capacity dynamically –> Use Hyperdisk.
NOTE: Choose a suitable Hyperdisk type:
Hyperdisk Throughput is recommended for scale-out analytics, data drives for cost-sensitive apps, and for cold storage.
Hyperdisk Extreme is recommended for workloads that need high I/O, such as high-performance databases.
Depending on your redundancy requirements, choose between zonal and regional disks.
Requirement –> Recommendation
Redundancy within a single zone in a region –> Use zonal Persistent Disks or Hyperdisks.
Redundancy across multiple zones within a region –> Use regional Persistent Disks.
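As a sketch, both choices map directly onto disk-creation commands (disk names, sizes, and zones are example values):

```shell
# Zonal disk: redundant within a single zone only.
gcloud compute disks create fast-data-disk \
    --type=pd-ssd \
    --size=100GB \
    --zone=us-central1-a

# Regional disk: synchronously replicated across two zones in a region.
gcloud compute disks create ha-data-disk \
    --type=pd-ssd \
    --size=100GB \
    --region=us-central1 \
    --replica-zones=us-central1-a,us-central1-b
```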