Transport Flashcards
(10 cards)
Why do flows with shorter RTTs get more bandwidth?
The shorter-RTT flow gets more bandwidth because its faster ACKs let it increase cwnd more quickly and react to congestion sooner, while the longer-RTT flow lags behind in both growth and recovery.
Shorter RTT = faster feedback = quicker rate adjustments = more bandwidth over time.
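A minimal sketch (assuming idealized additive increase of exactly one MSS per RTT and no losses; the function name is illustrative) of why faster feedback compounds into a bigger window:

```python
def cwnd_after(seconds, rtt_s, initial=1):
    """cwnd in MSS after `seconds` of pure additive increase:
    one feedback round (and +1 MSS) per RTT."""
    rtts_elapsed = int(seconds / rtt_s)  # feedback rounds completed
    return initial + rtts_elapsed

short = cwnd_after(10, rtt_s=0.05)  # 50 ms RTT  -> 201 MSS
long_ = cwnd_after(10, rtt_s=0.20)  # 200 ms RTT ->  51 MSS
print(short, long_)
```

Same wall-clock time, but the shorter-RTT flow completes four times as many feedback rounds, so its window (and share of the link) grows four times as fast.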
Why do flows with longer RTTs get less bandwidth?
Longer-RTT flows get less bandwidth because their slow feedback delays rate adjustments—they grow cwnd sluggishly and react late to congestion, leaving unused bandwidth for shorter-RTT flows to grab.
Slower ACKs → delayed cwnd growth & congestion response → less bandwidth.
Why do flows with high loss get less bandwidth?
High-loss flows get less bandwidth because frequent packet drops force TCP to aggressively reduce cwnd (multiplicative decrease), throttling their sending rate. Meanwhile, low-loss flows maintain higher cwnd and dominate the link.
More drops → more cwnd cuts → slower rate.
Key: Loss resets cwnd growth, starving high-loss flows.
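The loss penalty is captured by the well-known steady-state TCP throughput approximation (Mathis et al.), throughput ≈ MSS / (RTT · √p). A quick sketch (numbers are illustrative):

```python
import math

def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Mathis approximation: throughput ~ MSS / (RTT * sqrt(p))."""
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss_rate))

low_loss = tcp_throughput_bps(1460, 0.1, 0.0001)  # p = 0.01%
high_loss = tcp_throughput_bps(1460, 0.1, 0.01)   # p = 1%
print(low_loss / high_loss)  # 10.0
```

Because throughput falls with the square root of the loss rate, a 100x increase in loss costs "only" a 10x drop in rate, but the high-loss flow still loses decisively to its low-loss competitor.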
Why do new flows get more bandwidth?
New flows get more bandwidth because they start with slow-start, doubling cwnd aggressively until loss occurs, while old flows operate in congestion-avoidance (linear growth).
Slow-start growth >> congestion-avoidance growth → new flows grab bandwidth faster.
Key: Fresh flows exploit unused capacity before stabilizing.
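The gap between the two growth modes is stark. A sketch (assuming cwnd doubles per RTT in slow-start and grows by one MSS per RTT in congestion-avoidance; function names are illustrative):

```python
def slow_start(rtts, initial=1):
    """Exponential phase: cwnd doubles every RTT."""
    return initial * 2 ** rtts

def congestion_avoidance(rtts, initial=1):
    """Linear phase: cwnd grows by 1 MSS every RTT."""
    return initial + rtts

# Starting from cwnd = 1 MSS, after 10 RTTs:
print(slow_start(10))            # 1024 MSS
print(congestion_avoidance(10))  # 11 MSS
```

Ten RTTs in, the new flow's window is two orders of magnitude larger than what an old flow gains over the same interval, which is exactly how fresh flows snap up unused capacity.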
What does TCP fairness mean in a shared bottleneck link?
It means multiple TCP flows should get roughly equal bandwidth shares when competing for the same bottleneck link.
How does RTT difference affect TCP fairness?
Shorter-RTT flows get more bandwidth because they adapt faster (faster ACKs, quicker congestion response).
Why does a shorter-RTT flow outperform a longer-RTT one?
- Faster ACKs → quicker window increases (additive increase).
- Sooner loss detection → faster recovery (multiplicative decrease).
- Higher throughput due to more frequent rate adjustments.
What happens to fairness when a flow is new but experiencing high losses?
A new flow with high losses gets less bandwidth than other new flows (due to frequent cwnd resets from losses) but may still temporarily steal bandwidth from older flows during slow-start. Fairness breaks down because:
- High loss → Repeated cwnd cuts → Struggles to grow.
- Slow-start → Briefly spikes aggressively, hurting competing flows.
- Result: Unstable throughput—neither fair to older flows nor competitive with other new flows.
Key: TCP’s fairness assumes similar RTT/loss; high-loss flows suffer long-term.
What’s AIMD?
AIMD (Additive Increase Multiplicative Decrease) is a core congestion control mechanism in TCP that regulates data flow to avoid network congestion. It works by gradually increasing the transmission rate (additive increase) when the network is stable, adding a fixed amount to the window size per RTT (Round-Trip Time). However, upon detecting packet loss (a sign of congestion), TCP aggressively reduces the window size by half (multiplicative decrease) to alleviate congestion. This balanced approach ensures efficient bandwidth utilization while maintaining fairness and stability in shared networks.
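The rule fits in a couple of lines. A sketch (assuming the textbook parameters: additive step of 1 MSS per RTT, multiplicative factor 1/2; the function name is illustrative):

```python
def aimd_step(cwnd, loss, alpha=1.0, beta=0.5):
    """One RTT of AIMD: +alpha MSS if no loss, else cwnd * beta."""
    return cwnd * beta if loss else cwnd + alpha

cwnd = 10.0
for loss in [False, False, False, True, False]:
    cwnd = aimd_step(cwnd, loss)
print(cwnd)  # 10 -> 11 -> 12 -> 13 -> 6.5 -> 7.5
```

The asymmetry is the point: gains are slow and linear, losses cut the rate in half at once, producing the familiar sawtooth that backs off quickly under congestion and probes gently otherwise.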
What scenarios make Stop-and-Wait more appropriate?
Stop-and-wait protocols are most appropriate in the following situations due to their simplicity and reliability for specific use cases:
- Low-bandwidth or short-delay links: Stop-and-wait sends one packet at a time and waits for an acknowledgment (ACK) before transmitting the next. In networks with low bandwidth or short propagation delays (e.g., LANs), the cost of waiting is minimal.
- Error-prone links: Since each packet is individually acknowledged, errors are easily detected and corrected via retransmission. This suits unreliable links (e.g., early wireless or satellite communications) where errors are frequent but data rates are low.
- Resource-constrained devices: Stop-and-wait requires minimal buffer space and processing power, making it ideal for embedded systems, IoT devices, or legacy systems with limited computational resources.
- Half-duplex systems: Where only one party can transmit at a time, stop-and-wait naturally aligns with the communication constraints.
- Teaching: It is often used in academic settings to explain fundamental concepts like flow control, error detection, and acknowledgments before introducing more complex protocols (e.g., sliding window).
When stop-and-wait is a poor fit:
- High-BDP networks (e.g., satellite links, high-speed WANs): waiting for ACKs leads to low throughput because the channel sits idle during the round-trip time (RTT).
- Bulk data transfers: sliding window protocols (e.g., TCP) are far more efficient for high-speed transmission.
Stop-and-wait is best suited for low-speed, low-latency, or error-prone environments where simplicity and reliability outweigh the need for high throughput. For modern high-speed networks, more advanced protocols (like sliding window) are preferred.
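The throughput argument can be made concrete with the standard stop-and-wait utilization formula, U = T_frame / (T_frame + RTT). A sketch (frame size, link rate, and RTTs are illustrative):

```python
def stop_and_wait_utilization(frame_bits, bandwidth_bps, rtt_s):
    """Fraction of time the sender is actually transmitting."""
    t_frame = frame_bits / bandwidth_bps   # time to clock out one frame
    return t_frame / (t_frame + rtt_s)     # rest of the cycle is idle wait

# LAN: 12,000-bit frame, 10 Mbps, 0.1 ms RTT -> link well utilized
lan = stop_and_wait_utilization(12_000, 10e6, 0.0001)
# Satellite: same frame and rate, 500 ms RTT -> link almost entirely idle
sat = stop_and_wait_utilization(12_000, 10e6, 0.5)
print(round(lan, 3), round(sat, 6))
```

On the LAN the sender is busy over 90% of the time; over the satellite link it is busy well under 1% of the time, which is exactly why high-BDP paths demand sliding-window protocols.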