Module 7 Flashcards
Learning Unit 4 (55 cards)
1
Q
Network Abstraction
A
- Modern networking abstracts the concept of connection away from physical hardware.
- Decision-making can be separated from hardware (e.g., in VMs or software).
- Examples:
o Devices controlled remotely.
o Devices exist as VMs or logical constructs.
- Supports traffic prioritization, filtering, redirection, and error correction.
- Enables flexible, scalable, and software-defined network architectures.
2
Q
Physical Network Architecture
A
- A network’s architecture includes devices, configurations, services, and how devices are connected.
- In enterprise networks, managed and Layer 3 switches play critical roles.
- Security of switches becomes more complex and important.
3
Q
Managed vs Unmanaged Switches
A
- Unmanaged switches: plug-and-play, no config, no IP, cheap, limited capability.
- Managed switches: configurable via CLI or GUI, often with IP addresses, more expensive, used in enterprise environments.
4
Q
Layer 2 / 3 / 4 Switches
A
- Layer 2 switch: standard switch; processes MAC addresses.
- Layer 3 switch:
o Interprets layer 3 (network layer) data.
o Supports routing protocols.
o Faster and cheaper than routers for large LANs.
o Hardware resembles routers but optimized for LAN performance.
- Layer 4 switch (Content/Application switch):
o Interprets data from layers 4–7.
o Enables advanced filtering, statistics, and security.
o Expensive, backbone use, varies by vendor.
5
Q
Switch Redundancy & Loops
A
- Redundancy improves fault tolerance (e.g., switch fails → alternate path).
- Redundant paths can cause broadcast storms (loops of repeated frames).
- Storm control: vendor feature that drops traffic exceeding thresholds; e.g., Cisco storm-control, Huawei storm control.
6
Q
Spanning Tree Protocol (STP)
A
- Prevents switching loops and broadcast storms.
- Works at Layer 2. Defined in IEEE 802.1D.
- Selects optimal, loop-free paths and blocks redundant ones.
- Adapts to topology changes (e.g., switch failure).
7
Q
STP Path Selection Process
A
- Root Bridge Election:
o One per network.
o Chosen by lowest Bridge ID (priority + MAC address).
- Path Calculation:
o Finds least-cost path from each switch to root bridge.
o Each switch gets one Root Port (RP) for forwarding to root.
- Loop Prevention:
o Enables only one Designated Port (DP) per link between bridges.
o Blocks all other redundant links.
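The election step above can be sketched in Python. The bridge ID is compared as (priority, MAC), lowest wins; the switch names, priorities, and MAC addresses below are invented for illustration:

```python
# Sketch: STP root bridge election over hypothetical switch data.
# The bridge ID is compared as (priority, MAC); the lowest wins.

def elect_root(bridges):
    """Return the bridge whose (priority, MAC) pair is lowest."""
    return min(bridges, key=lambda b: (b["priority"], b["mac"]))

switches = [
    {"name": "SW1", "priority": 32768, "mac": "00:1a:2b:3c:4d:5e"},
    {"name": "SW2", "priority": 32768, "mac": "00:1a:2b:3c:4d:01"},  # lower MAC
    {"name": "SW3", "priority": 4096,  "mac": "00:1a:2b:3c:4d:ff"},  # lowest priority
]

root = elect_root(switches)
print(root["name"])  # SW3: priority is compared before the MAC address
```

Note that SW2's lower MAC would only matter if priorities tied, which is why admins lower the priority on the switch they want as root.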
8
Q
STP Components (RP, DP)
A
- Root Port (RP): On a switch, port that forwards frames to the root bridge.
- Designated Port (DP): Port on a link selected to forward traffic; other ports blocked.
- STP adapts if paths fail by recalculating tree.
9
Q
STP Security Features
A
- BPDU Guard:
o Blocks BPDUs on host ports (e.g., workstations).
o Prevents rogue devices from influencing STP.
- BPDU Filter:
o Disables STP on specific ports.
o Used at ISP demarcation to prevent topology mixing.
- Root Guard:
o Prevents connected switches from becoming root bridge.
o Used by ISPs to secure STP boundary.
10
Q
STP Variants
A
- Original STP is slow (~2 minutes to detect link failure).
- Modern alternatives:
o RSTP (Rapid STP): IEEE 802.1w.
o MSTP (Multiple STP): Originally IEEE 802.1s.
o These improve speed and allow VLAN-specific paths.
11
Q
Three-Tier Network Architecture
A
- A hierarchical switch architecture designed for redundancy, performance, and scalability in enterprise networks.
- Access Layer (Edge Layer):
o Connects directly to hosts (e.g., servers, workstations, printers).
o Typically Layer 2 switching (Ethernet/MAC-based).
- Distribution Layer (Aggregation Layer):
o Redundant mesh of multilayer switches or routers.
o Handles routing, traffic filtering, and WAN/internet connections.
o Ideally operates at Layer 3.
- Core Layer (Backbone):
o Fast, minimal-switching Layer 3 infrastructure.
o Avoids filtering/routing for high-speed traffic forwarding.
- Benefits:
o Efficient handling of east-west traffic (within network segment) and north-south traffic (leaving/entering network).
o Mesh topology provides redundancy for fault tolerance.
12
Q
East-West vs. North-South Traffic
A
- East-West Traffic:
o Flows within a local segment or between servers in the same data center.
o Example: Web server querying a database server nearby.
o Often stays within access or distribution layer.
- North-South Traffic:
o Flows between a local network and external networks or data centers.
o Example: Client on Internet accessing internal web server.
o Passes through all three layers (access → distribution → core).
- Modern networks need better east-west traffic optimization due to technologies like virtualization and SDN.
13
Q
Spine-and-Leaf Architecture
A
- Modern alternative to three-tier design optimized for east-west traffic.
- Spine Switches (Backbone):
o High-speed Layer 3 devices in a mesh topology.
o Each spine connects to all leaf switches (not to other spines).
- Leaf Switches:
o Layer 2 or Layer 3 switches connected to servers and devices.
o Often placed in the same rack as supported devices.
- Spine-leaf uses mesh topology between spine and leaf switches for minimal hops.
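A minimal Python sketch (with invented switch names) shows why the mesh keeps paths short: because every spine connects to every leaf, any two leaves are exactly two hops apart:

```python
from collections import deque

# Sketch: in a spine-and-leaf fabric, every spine links to every leaf,
# so any leaf reaches any other leaf in two hops (leaf -> spine -> leaf).
spines = ["spine1", "spine2"]
leaves = ["leaf1", "leaf2", "leaf3"]
links = {s: list(leaves) for s in spines}  # spines connect to all leaves
for lf in leaves:
    links[lf] = list(spines)               # leaves connect only to spines

def hops(src, dst):
    """Shortest hop count between two switches (breadth-first search)."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nbr in links[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))

print(hops("leaf1", "leaf3"))  # 2
```

Adding a fourth leaf or a third spine changes the dictionaries but never the hop count, which is the scalability argument in the next card.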
14
Q
Spine-and-Leaf Architecture – Benefits
A
- Redundancy: Full mesh means no single point of failure.
- Lower Latency: Fewer hops between any two devices.
- Better Performance: Replaces STP with path management protocols like TRILL and SPB.
- Scalability: More available paths allow more host devices.
- Security: All traffic (incl. east-west) can be monitored.
- Cost-Efficient: Hardware often cheaper than 3-tier equivalents.
- Built to support technologies like SDN, cloud, and virtualization.
15
Q
Rack-Based Leaf Switch Architectures
A
- ToR (Top of Rack) Switching:
o One switch per rack, typically positioned at the top.
o Connects all rack devices to the wider network.
o Simplifies cable management with shorter cable runs.
- EoR (End of Row) Switching:
o Fewer switches placed at the end or middle of a row of racks (MoR = Middle of Row).
o Requires longer cabling across racks but reduces switch count and rack space.
- Benefits of both:
o Shorter cable runs = support for high-speed connections (10 GbE, 40 GbE, 100 GbE).
o Increased design flexibility and reduced cabling overall.
16
Q
Software-Defined Networking (SDN) Overview
A
- SDN is a centralized approach to networking that shifts decision-making away from network hardware to software control.
- A central device called the SDN controller oversees configuration and management of all network devices (physical & virtual).
- Instead of configuring each device individually, the controller configures devices collectively and can adapt automatically to changing conditions.
- SDN offers a “bird’s eye view” of the network, enabling more intelligent and efficient traffic management.
- This central control allows coordinated networking rules, akin to assigning dishes by group for a balanced potluck (vs. everyone bringing random items).
17
Q
SDN Architecture – Planes
A
- Infrastructure plane (Data Plane):
o Physical/virtual devices (routers, switches, firewalls, etc.) that forward data.
o Performs decapsulation, MAC-to-port mapping, NAT, and re-encapsulation.
o The “brawn” of the network – handles message movement.
- Control plane:
o The “brain” – makes decisions (e.g., MAC table building, STP path optimization).
o Abstracted to the SDN controller, which sends rules to devices.
o If a message doesn’t match a rule, the device forwards it to the controller.
o Uses APIs and protocols like OpenFlow for communication.
- Application plane:
o Interfaces with network applications (DNS, VoIP, analytics, etc.) via APIs.
o Allows SDN to dynamically respond to app-specific needs (e.g., breach detection).
- Management plane:
o Not a traditional communication plane, but allows admins to monitor, manage, and analyze the network.
o Uses protocols like SSH, Telnet, SNMP, HTTP.
o Often considered part of the control plane.
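The data/control split can be sketched in Python: the device matches traffic against rules the controller pushed, and unmatched traffic is punted to the controller, which installs a new rule. The rule format and action names below are illustrative, not OpenFlow syntax:

```python
# Sketch of the data plane / control plane split (illustrative names).
flow_table = {}   # destination -> action, populated by the "controller"

def controller_decide(dst):
    """Stand-in for the SDN controller: pick an action, install a rule."""
    action = "forward:port2" if dst.startswith("10.") else "drop"
    flow_table[dst] = action   # rule pushed over the southbound interface
    return action

def handle_packet(dst):
    """Data plane: fast path on a rule match, otherwise ask the controller."""
    if dst in flow_table:
        return flow_table[dst]
    return controller_decide(dst)

print(handle_packet("10.0.0.5"))  # first packet: decided by the controller
print(handle_packet("10.0.0.5"))  # later packets: matched in the flow table
```

Only the first packet of a flow pays the round trip to the controller; subsequent packets hit the installed rule at hardware speed.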
18
Q
SDN Interfaces and Communication
A
- Southbound Interface (SBI):
o Communication between SDN controller and network devices.
o Uses protocols like OpenFlow.
- Northbound Interface (NBI):
o Communication between SDN controller and applications.
o Uses APIs to allow app-level interaction with the network.
- The SDN controller acts as a middleman, translating app requirements into network configurations.
19
Q
Disaggregation in SDN
A
- Disaggregation = Separation of network functions into distinct layers.
- Like an assembly line: devices specialize in specific tasks instead of performing all roles.
- In SDN, disaggregation improves efficiency and quality of network operations.
- Layers:
o Infrastructure plane handles raw data movement.
o Control plane makes decisions.
o Application & management planes handle external interfaces and monitoring.
- Benefits: Easier updates, centralized control, and specialized handling of tasks.
20
Q
SDN Controller Characteristics & Vendor Solutions
A
- Controller software varies in:
o Level of virtualization support
o Number of supported switches
o WAN functionality
o Scalability
o Security features
o Centralized monitoring abilities
- Major vendors: Cisco, HP, IBM, Juniper, VMware
- Open source options:
o ODL (OpenDaylight)
o ONOS (Open Network Operating System)
o OpenKilda - Some vendors (e.g., Cisco) partially centralize control functions, keeping some decision-making at hardware level.
21
Q
SDN Benefits and Efficiency Gains
A
- Centralized configuration = faster response to network changes.
- Reduces need for expensive, intelligent hardware—uses low-cost “white box switches” instead.
- Supports multi-vendor environments and centralized monitoring.
- Enables complex rule sets (tables within tables, conditional logic).
- Simplifies management of both physical and virtual devices.
- Boosts performance, scalability, and cost-effectiveness despite added complexity.
22
Q
SDN & Virtualization
A
- SDN can manage fully virtualized network resources, including those outside of local infrastructure (e.g., cloud-based services).
- Admins use SDN software to manage both on-prem and remote/cloud networks.
- SDN forms the foundation for advanced IT domains like virtualization and cloud computing.
- This abstraction moves beyond physical limitations and supports scalable, flexible network infrastructure.
23
Q
Storage Area Network (SAN) Overview
A
- Specialized high-speed network dedicated to storage devices
- Separate from the LAN but can be connected to it
- Abstracts storage from compute resources, centralizing data access
- Provides centralized storage management across multiple servers
- Storage devices typically configured in RAID arrays for redundancy
- Offers high fault tolerance, fast data access, and massive scalability
- Suitable for enterprise environments and data centers
- Ideal for virtual machine (VM) hosting, backup systems, and high-demand applications
24
Q
SAN Architecture & Benefits
A
- Storage abstraction: Separates storage from servers to reduce redundancy and simplify management.
- Centralized control: Eases access control and scaling.
- RAID arrays: Provide redundancy and fault tolerance.
- Multipathing: Uses multiple connections between storage and network to ensure failover and load balancing.
- High speed: Designed to match or exceed direct-attached storage (DAS) speeds via fast network connections.
- Scalability: Easily expanded with more devices.
- Location flexibility: Can be located remotely, including in an ISP data center for improved security and management outsourcing.
25
Q
SAN Use Cases
A
* High-volume, always-on data access
* Large enterprise environments
* Hosting VMs at scale
* Centralized, fault-tolerant storage
* Backup solutions and disaster recovery systems
26
Q
SAN Connection Types
A
1. Fibre Channel (FC)
o Dedicated network for storage traffic, separate from Ethernet
o Uses fiber-optic cables (sometimes copper) and HBAs (Host Bus Adapters)
o Requires special FC switches
o Very fast: up to 128 GFC per lane, 512 GFC (quad-lane), 1 TFC in development
o Expensive and requires specialized IT training
2. Fibre Channel over Ethernet (FCoE)
o Encapsulates FC frames inside Ethernet
o Works over standard Ethernet hardware
o Uses CNAs (Converged Network Adapters)
o Retains many FC speed advantages with lower cost
o Integrates SAN and LAN infrastructure for cost-efficiency
3. iSCSI (Internet SCSI)
o Runs SCSI commands over TCP/IP
o Uses standard Ethernet with iSCSI initiator software
o Low cost, minimal training, no special hardware
o Can use jumbo frames to optimize performance
o Slower than FC: ~10 Gbps (currently), with 40 Gbps on the horizon
o Primary benefit: easy deployment on existing Ethernet networks
4. InfiniBand (IB)
o Very fast, but complex and expensive
o Requires special hardware
o Niche use cases, not widely adopted
o Suited for high-performance computing environments
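The jumbo-frame point under iSCSI can be illustrated with a quick calculation; the 10 MiB transfer size is arbitrary, and the MTU values are common defaults:

```python
import math

# Sketch: jumbo frames reduce per-frame overhead for iSCSI by carrying
# more payload in each Ethernet frame.
def frames_needed(payload_bytes, mtu):
    """Frames required to carry a payload at a given MTU (payload only)."""
    return math.ceil(payload_bytes / mtu)

payload = 10 * 1024 * 1024              # a 10 MiB transfer
print(frames_needed(payload, 1500))     # standard Ethernet MTU
print(frames_needed(payload, 9000))     # jumbo frames: roughly 6x fewer frames
```

Fewer frames means fewer headers to build and fewer interrupts to service, which is where the iSCSI performance optimization comes from.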
27
Q
Virtualization Overview
A
* Virtualization = logical version of a system, not physical
* Host = physical computer
* Guest = VM running on host
* Guest OS is unaware of host
* Hypervisor = software that creates/manages VMs
* All VMs share host resources (CPU, disk, memory, NICs)
* VMs can run different OSes and emulate different hardware
* VM appears and acts like a physical machine to users
28
Q
Types of Hypervisors
A
* Type 1 (bare-metal)
o Installs before any OS
o Minimal OS (often Linux-based)
o Communicates with hardware via firmware
o Secure and efficient
o Examples: Citrix Hypervisor, VMware ESXi, Microsoft Hyper-V
* Type 2 (hosted)
o Installs within existing OS
o Dependent on host OS for resources
o Less secure and slower than Type 1
o Examples: VirtualBox, VMware Player
* Hyper-V
o Appears like Type 2 (runs in Windows)
o Actually creates a virtual layer underneath OS → Type 1
o Windows OS retains privileged access to hardware
* KVM (Linux)
o Converts Linux OS into Type 1
o Original OS still usable
29
Q
VM Characteristics
A
* VM specs (RAM, CPU, disk, NIC) defined at creation
* Customizable hardware/software features
* Not limited by host specs (within total hardware limits)
* VMs function differently from host or other VMs
* Logical layer operates independently of physical hardware
30
Q
Virtual Networking Basics
A
* Each VM has at least one vNIC
o Virtual NIC = functions like a physical NIC
o Operates at Data Link Layer
o Assigned MAC address on creation
o Up to 8 vNICs per VM in VirtualBox
* vSwitch / virtual switch
o Connects vNICs and physical NICs
o Operates at Data Link Layer
o Allows communication between VMs & physical LAN
o Multiple vSwitches can exist on a single host
* Type 1: Host + guests all use vSwitch
* Type 2: Host uses physical NIC, guests use vSwitch
31
Q
Networking Modes (vNIC Configurations)
A
Bridged Mode
* vNIC uses host’s NIC to connect to physical LAN
* VM gets IP/gateway/subnet from LAN DHCP
* VM appears as normal node on network
* Use for: VMs needing static IPs (e.g. web/mail servers)
NAT Mode
* vNIC uses host as NAT router
* IP info assigned by host’s hypervisor (acts as DHCP)
* VM is invisible to physical LAN
* Other nodes communicate via host IP
* Use for: isolated VMs, testing, apps not needing inbound access
Host-Only Mode
* VMs can talk to each other and host
* No communication beyond host
* vNIC never touches host's physical NIC
* IPs assigned by host’s virtualization DHCP
* Use for: internal-only communication or sandboxing
32
Q
Virtualization – Advantages
A
* Efficient use of resources
o Consolidates multiple services on one powerful machine
o Reduces underutilized hardware
o Caution: creates a single point of failure if not duplicated
* Cost and energy savings
o Fewer physical machines needed
o Lower electricity and cooling costs
o Thin clients use centralized server for processing
* Fault and threat isolation
o Guest system errors do not affect others
o Safer testing of beta software
o Limited hardware access for guests reduces security risks
* Simple backups, recovery, and replication
o Save guest machine images for quick recovery or migration
o Create VM clones easily
o Some software supports cross-platform VM imports
33
Q
Virtualization – Disadvantages
A
* Compromised performance
o VM resource contention may slow critical applications
o Hypervisor requires overhead
o Not ideal for real-time systems (e.g., factory control, hospital systems)
* Increased complexity
o Admins must master virtualization platforms
o Complex VM switching and addressing
o Easy VM creation leads to clutter and unmanaged instances
* Increased licensing costs
o Each VM requires a separate software license
o Costly when return on investment is low
o Alternatives: multiple user accounts or reusing VM images
* Single point of failure
o Host failure crashes all hosted guest machines
o Mitigation: clustering, automatic failover
34
Q
NFV (Network Functions Virtualization)
A
* Replaces physical network devices with virtual equivalents
o E.g., virtual firewalls, routers, load balancers, storage servers
* Advantages
o Fast and sometimes automatic VM migration on failure
o Better hardware, energy, and space efficiency
o Easy scalability of network services
* Considerations
o Requires licenses for each virtual device and the hypervisor
o Small latency added by hypervisor interactions (usually negligible)
o Virtual firewalls may not be reliable for full network protection
o Better suited for virtual segments or as physical backups
35
Q
Cloud Computing Features (NIST)
A
* On-demand self-service: Services/apps/storage available at any time on request
* Broad network access: Accessible via any device (smartphones, desktops, etc.) with Internet
* Resource pooling: Shared hardware/services (e.g., websites hosted on same servers)
o Multitenancy = Multiple customers share hardware
* Measured service: Usage tracked (e.g., bandwidth, storage, processing, connections)
* Rapid elasticity: Dynamic scaling up/down without disrupting service
o Scalability = Can grow to meet increased workload
o Elasticity = Can grow quickly and automatically
36
Q
Cloud Computing Use Case Example
A
* Company with globally distributed developers
* CSP hosts test platforms on cloud-accessible servers
* Developers load/test software remotely
* Storage scales dynamically (adds/releases space as needed)
* CSP handles hardware security & backups
* Frees internal IT staff from hardware management
37
Q
Cloud Concepts & Benefits
A
* Cloud = abstraction of IT from data center
* Virtualization used for multiple platforms (e.g., Xen by Citrix)
* Services range: website hosting → virtual servers for dev & collaboration
* Flexibility in choosing/configuring resources
* Common platforms: Dropbox, Google Drive, OneDrive, Gmail
* Cloud removes need to manage underlying infrastructure
38
Q
Cloud Service Models – Overview
A
* Vary by level of customer/vendor responsibility
* More vendor-managed = higher abstraction & ease of use
* Pyramid model:
o SaaS = most user-accessible
o IaaS = most admin control
o PaaS = developer-focused
o XaaS = catch-all term for customizable service needs
39
Q
On-Premises (Traditional IT)
A
* Full ownership & control of hardware/software
* Local infrastructure, storage, apps
* Example: Microsoft Office installed on laptop (no Internet needed)
* Pizza analogy: make your own pizza from scratch at home
40
Q
IaaS (Infrastructure as a Service)
A
* Vendor provides virtual hardware (servers, DNS, etc.)
* Customer manages OS, apps, data, backups
* Example: AWS EC2 = user creates VMs, installs OS, runs services
* Pizza analogy: take-and-bake → assembled by restaurant, baked at home
* Least abstracted, most admin control
41
Q
PaaS (Platform as a Service)
A
* Vendor provides platform (OS + libraries + hardware)
* Used for app dev/testing without managing full environment
* Supports containers, serverless compute (e.g., AWS Lambda, GCP)
* Customer manages applications & data (incl. backups)
* Pizza analogy: pizza delivered to door, customer eats & cleans up
* Example: Alexa (runs on AWS Lambda), GCP
42
Q
SaaS (Software as a Service)
A
* Fully managed apps via web interface
* Compatible with many devices & OSs
* Vendor manages infrastructure, platform, app
* Examples: Gmail, Office 365, Google Docs, Salesforce
* Pizza analogy: dine-in at restaurant → full service provided
43
Q
XaaS (Anything as a Service)
A
* Catch-all model (monitoring, storage, apps, virtual desktops)
* DaaS (Desktop as a Service) = virtual desktop in browser
* Includes OS + apps based on config/licenses
* Similar to VDI but CSP hosts back end
44
Q
Cloud Service Model Comparison
A
* SaaS
o Max vendor responsibility
o Easiest for end users
o Minimal setup
* PaaS
o Balanced vendor/customer responsibility
o Developer-focused
* IaaS
o Customer manages most infrastructure
o Requires technical expertise
* XaaS
o Custom service configurations
o Flexible for enterprise needs
45
Q
Cloud Service Pyramid Summary
A
* Responsibility Pyramid (top-down):
o SaaS: Least customer involvement
o PaaS: Moderate involvement
o IaaS: Most technical control
* User Access Pyramid (bottom-up):
o SaaS: Widest user base, minimal setup
o PaaS: Devs/testers
o IaaS: Network architects/admins
46
Q
Cloud Deployment Models
A
* Public Cloud
o Service over public transmission (e.g., Internet)
o Most examples operate here
* Private Cloud
o On org’s own servers/data center OR virtually for one org
o Internal hosting = reuse of existing infrastructure
o Virtual hosting = scalability + accessibility
* Community Cloud
o Shared by multiple orgs with common interests
o Not public
o Can be hosted internally or by 3rd party
o Example: shared medical database
* Hybrid Cloud
o Combines public + private cloud resources
o Common in transitional or mixed solutions
o Example: private cloud for data, public for email
* Multicloud
o Uses multiple cloud providers for different services
o “Best in class” provider selection
o Example: DB from CSP A, web servers from CSP B, CRM from CSP C
o Needs tools (e.g., Aviatrix) to link services productively
47
Q
Orchestration & Automation
A
* CLI over GUI
o CLI (Command Line Interface) = better traceability + consistency
o Example AWS CLI:
o aws ec2 run-instances --image-id ami-0ff8a91507f77f867 --count 1 --instance-type t2.micro --key-name MyKeyPair --security-groups MySG
* Infrastructure as Code (IaC)
o Uses config files/scripts to manage cloud
o Benefits:
Logs: who/when/what changed
Revert to previous state
Standardized deployment
* Automation
o Scripted response to single events
o Example: auto-scale based on usage
* Orchestration
o Combines automated tasks into complex workflows
o Goal: reduce human error + minimize manual work
o Used in well-designed, scalable cloud setups
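A toy Python version of such an automation rule shows the "scripted response to a single event" idea; the thresholds and names are invented, not taken from any cloud provider's API:

```python
# Sketch: an autoscaling decision as a scripted response to a load metric.
# Thresholds and names are illustrative only.
def autoscale(current_instances, avg_cpu, low=20, high=80):
    """Add an instance under heavy load; remove one when mostly idle."""
    if avg_cpu > high:
        return current_instances + 1
    if avg_cpu < low and current_instances > 1:
        return current_instances - 1
    return current_instances

print(autoscale(2, 91))  # heavy load: scale out to 3
print(autoscale(2, 10))  # idle: scale in to 1
print(autoscale(2, 50))  # steady state: stay at 2
```

Orchestration is then the layer that chains rules like this with provisioning, health checks, and notifications into one workflow.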
48
Q
Cloud Connectivity & Security Risks
A
* Security & Availability Risks
o ISP outage
o Bandwidth throttling by ISP
o Cloud provider outage
o Failure of cloud provider's backup/security
o Misconfigured resources exposing client data
o Unauthorized access (internal/external)
o Breach of confidentiality agreements
o Failure to meet security regulations (e.g., HIPAA, finance)
o Data/IP ownership disputes
o Data deletion if payments lapse
o BYOC (Bring Your Own Cloud) risk from personal devices
o Fines, lawsuits, loss of trust after breaches
* Mitigations
o Encryption
o Careful selection of connection method
49
Q
Cloud Connectivity Methods
A
* Internet
o Cheapest
o High/unpredictable latency
o Major security concerns
* VPN
o Uses standard site-to-site or remote access VPN tech
* Remote Access Connections
o Uses tunneling/terminal emulation (e.g., SSH, RDP)
* Leased Line
o Dedicated bandwidth between customer and provider
o May involve multiple ISPs
o Often hybrid pay-per-use model
o Works with private/direct connections
* Private/Direct Connection
o Low latency, high predictability
o Expensive
o Supported by PoPs (Points of Presence) at colocation facilities
o Example services:
Amazon Direct Connect
Azure ExpressRoute
50
Q
Monitoring, Availability & Performance
A
* Cloud Monitoring
o Example: AWS CloudWatch
Monitor resource metrics (CPU, memory)
Trigger alarms on threshold breaches
Track spending
Aid in performance optimization
* Availability Principles
o Similar to on-premises network strategies
o Redundancy, monitoring, failover systems still apply
o Use tools and policies to ensure uptime and stability in cloud
51
Q
Network Availability
A
* Availability: % of time a network/system is accessible by authorized users
* High Availability (HA): Systems designed to operate nearly all the time
* Common metrics:
o 99.9% uptime = "three 9s"
o 99.99% uptime = "four 9s"
o 99.999% uptime = "five 9s" (example: only ~5 min 15 sec downtime/year)
* Uptime measured by system tools:
o Linux/UNIX: uptime command
o Windows 10: Task Manager
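The downtime figures can be verified with a short calculation (ignoring leap years):

```python
# Allowed downtime per year for a given availability percentage.
def downtime_per_year(availability_pct):
    seconds_per_year = 365 * 24 * 60 * 60   # 31,536,000 s (non-leap year)
    return seconds_per_year * (1 - availability_pct / 100)

for nines in (99.9, 99.99, 99.999):
    secs = downtime_per_year(nines)
    print(f"{nines}% -> {secs / 60:.1f} min/year")
# five 9s allows about 315 seconds, i.e. roughly 5 min 15 sec per year
```

Each extra 9 cuts the permitted downtime by a factor of ten, which is why "five 9s" is so expensive to guarantee.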
52
Q
Fault Tolerance
A
* Definition: System’s ability to keep working despite hardware/software faults
* Key: Multiple data paths for rerouting if a component fails
* Fault vs Failure:
o Fault: malfunction of one system component
o Failure: system deviates from expected performance (faults can cause failures)
* Optimal fault tolerance depends on criticality of service/data
* Examples: Backup power (generators) for power outages; redundant NICs for hardware faults
53
Q
Redundancy Concepts
A
* MTBF (Mean Time Between Failures): Average expected time before a device fails
* MTTR (Mean Time To Repair): Average time to fix/replace a failed device
* Redundancy: Multiple identical components/services ready to replace any failed one
* Purpose: Eliminate single points of failure for critical elements (e.g., Internet connection, file server disk)
* Costs: Redundancy adds expense but reduces risk of downtime
* Types of spare parts:
o Hot spare: Installed, takes over immediately upon failure
o Cold spare: Stored, requires manual replacement causing downtime
* Devices with failover/hot-swappable parts increase fault tolerance (NICs, power supplies, fans, processors)
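MTBF and MTTR combine into expected availability via the standard relation availability = MTBF / (MTBF + MTTR); this formula is not stated on the card, and the figures below are invented:

```python
# Standard relation between the two redundancy metrics and availability:
# a device is "up" for MTBF hours, then "down" for MTTR hours, repeating.
def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A device averaging 10,000 hours between failures and 4 hours to repair
# is available about 99.96% of the time.
print(f"{availability(10_000, 4):.4%}")
```

The formula also shows why hot spares matter: they shrink the effective MTTR toward zero, pushing availability toward 100% even if MTBF is unchanged.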
54
Q
Redundant Links & Link Aggregation
A
* Link Aggregation (also called NIC teaming, bonding, EtherChannel, port aggregation)
* Combines multiple physical NICs into one logical interface (LAG)
* Benefits:
o Increased total bandwidth (not per-session speed)
o Automatic failover if one NIC/link fails
o Load balancing to optimize performance and fault tolerance
* Requirements:
o All interfaces same speed, duplex, VLAN, MTU
o Configured with protocols like LACP (Link Aggregation Control Protocol, IEEE 802.1AX; originally IEEE 802.3ad)
* LACP modes:
o Static: manual setup, no error compensation
o Passive: listens for LACP requests, doesn’t initiate
o Active: initiates and negotiates LACP, most common and fault tolerant
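A sketch of why aggregate bandwidth grows but per-session speed does not: member links are chosen per flow, typically by hashing address pairs, so one conversation always rides a single link. The hash function here is illustrative, not any vendor's algorithm:

```python
# Sketch: per-flow link selection in a LAG. Different flows spread across
# member links, but every frame of one flow uses the same link, so a
# single session is capped at one member link's speed.
def pick_link(src_mac, dst_mac, num_links):
    """Hash a (source, destination) pair onto one LAG member link."""
    return hash((src_mac, dst_mac)) % num_links

# Every frame of the same conversation lands on the same link:
first = pick_link("00:aa", "00:bb", 4)
second = pick_link("00:aa", "00:bb", 4)
print(first == second)  # True: distribution is per-flow, not per-frame
```

Keeping a flow on one link also preserves frame ordering, which per-frame spraying would break.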
55
Q
Load Balancing & Clustering
A
* Load Balancer: Device distributing network traffic intelligently among multiple servers
* Cluster: Group of redundant servers/resources appearing as a single device to clients
* Example:
o Two web servers behind one virtual IP (VIP)
o Load balancer distributes requests evenly
o If one server fails, the other takes over
* Autoscaling (in cloud/virtualized environments):
o Automatically adds/removes servers based on traffic
* CARP (Common Address Redundancy Protocol):
o Allows multiple devices to share IP addresses as a redundancy group
o One device (group master) manages incoming requests, distributes to group members
* Server Clustering with VMs:
o VMs connect via virtual switches (vSwitch) in hypervisor
o Distributed switching centralizes control of VMs across hosts
o Minimizes config errors, simplifies network management
o Examples: VMware VDS, Cisco Nexus 1000v
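A toy Python load balancer illustrates the failover behavior in the example above (the server names are invented):

```python
from itertools import cycle

# Sketch: a load balancer behind one virtual IP, rotating requests across
# healthy back-end servers and skipping any that have failed.
servers = {"web1": True, "web2": True}   # server name -> healthy?
rotation = cycle(servers)

def route_request():
    """Return the next healthy server, skipping failed ones."""
    for _ in range(len(servers)):
        candidate = next(rotation)
        if servers[candidate]:
            return candidate
    raise RuntimeError("no healthy servers behind the VIP")

print(route_request())   # rotation starts at web1
servers["web1"] = False  # simulate a server failure
print(route_request())   # web2 keeps answering
print(route_request())   # web2 again: failover is transparent to clients
```

Clients only ever see the virtual IP, so the failure and recovery of individual cluster members never changes the address they connect to.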