Lesson 6: Congestion Control and Streaming

Goal of congestion control

fill the pipes without overflowing them, i.e.:
1. use network resources efficiently
2. allocate resources fairly
also to avoid congestion collapse


What is congestion

Different sources compete for resources in the network. Sources are unaware of each other and of the resources they share, which can result in lost packets and long delays (at a bottleneck link), and ultimately congestion collapse


What is congestion collapse

increase in traffic leads to a decrease of useful work done, eventually the network reaches saturation


2 causes of congestion collapse

Spurious retransmission and undelivered packets


What is spurious retransmission

When senders don't receive acknowledgment that their packets were delivered, they retransmit copies of the same packet, and these copies become outstanding in the network as well


Solution to spurious retransmission

have better timers and use TCP's congestion control


Undelivered packets

consume resources and are dropped somewhere in the network


Solution to undelivered packets

apply congestion control to all traffic


Two approaches to Congestion control

End to end and network assisted


End to end approach to congestion control

No feedback from the network to tell senders when they should slow down their rates
Congestion inferred by packet loss and increased delay
Approach taken by TCP


What is the approach taken by TCP for congestion control

End to end


Network assisted approach to congestion control

Routers provide feedback to end systems about the rates at which they should be sending:
Single bit (congestion indication)
Explicit rates


T/F packet loss is a sign of congestion in all networks

False (wireless networks packet loss could happen because of corruption of a packet)


How TCP Congestion control works

Senders continue to increase their rates until they see packet drops in the network.
Packet loss occurs when senders are sending faster than the rate at which a router on the path can drain its buffer.
TCP uses an increase algorithm and a decrease algorithm to adjust the rate.


What are two approaches to adjusting rate

window-based algorithm and rate-based algorithm


Window based algorithm

A sender can only have a certain number of packets outstanding (in flight).
E.g., if the sender window is 4 packets (4 packets outstanding), the sender can't send more packets until it receives an acknowledgment ("ACK") from the receiver.
If the sender wants to increase the rate at which it is sending, it increases the window size.
A sender might increase the window every time it receives an ACK from the receiver.
Success: increase the window by one packet per round trip (additive increase)
Failure (loss): window size reduced by half (multiplicative decrease)
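
The additive-increase/multiplicative-decrease behavior described above can be sketched as a small class (illustrative only, not a real TCP stack; the names `AIMDWindow`, `on_round_trip_success`, and `on_loss` are hypothetical):

```python
# Sketch of TCP-style AIMD window adjustment.
class AIMDWindow:
    def __init__(self, initial=1, increase=1, decrease_factor=0.5):
        self.cwnd = float(initial)              # congestion window, in packets
        self.increase = increase                # additive increase per round trip
        self.decrease_factor = decrease_factor  # multiplicative decrease on loss

    def on_round_trip_success(self):
        # Success: grow the window by one packet per round trip
        self.cwnd += self.increase

    def on_loss(self):
        # Failure: halve the window, but never go below one packet
        self.cwnd = max(1.0, self.cwnd * self.decrease_factor)

w = AIMDWindow(initial=4)
w.on_round_trip_success()   # window: 4 -> 5
w.on_loss()                 # window: 5 -> 2.5
```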


Rate based algorithm

Sender monitors its loss rate
Uses a timer to modulate the transmission rate



Fairness

everyone gets a fair share of network resources



Efficiency

network resources are used well


If RTT is 100 ms, packet size is 1 kB (8000 bits), and the window size is 10 packets, what is the sending rate?

800 kbps. Rate = window / RTT = 10 packets / 0.1 s = 100 packets/sec; 100 packets/sec * 8000 bits/packet = 800,000 bps
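
The arithmetic on this card works out as:

```python
# Sending rate = (window size / RTT) * packet size, using the card's numbers.
rtt_s = 0.1           # 100 ms round-trip time
packet_bits = 8000    # 1 kB packet = 1000 bytes * 8 bits
window_pkts = 10

pkts_per_sec = window_pkts / rtt_s       # 100 packets per second
rate_bps = pkts_per_sec * packet_bits    # 800,000 bps
print(rate_bps / 1000, "kbps")           # -> 800.0 kbps
```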


Phase plot and optimal operating line

represents the fairness and efficiency of two flows sharing a link. Points left of the efficiency line mean the link is underutilized; points right of it mean the link is overloaded.
The black line is the efficiency line and the green line is the fairness line



What does AIMD stand for

additive increase, multiplicative decrease


What does AIMD converge to

fairness and efficiency
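
The convergence claim can be checked with a toy simulation (illustrative only, not from the lecture): two AIMD flows share one link of fixed capacity, both halve on overload and both add one packet per round trip otherwise, and their rates end up nearly equal.

```python
# Toy AIMD convergence simulation: two flows, one shared link.
capacity = 100.0
x1, x2 = 80.0, 10.0   # deliberately unfair starting rates

for _ in range(1000):
    if x1 + x2 > capacity:
        # Overload: multiplicative decrease for both flows
        x1, x2 = x1 / 2, x2 / 2
    else:
        # Spare capacity: additive increase for both flows
        x1, x2 = x1 + 1, x2 + 1

print(abs(x1 - x2))   # tiny: the flows converge to nearly equal shares
```

Each halving cuts the gap between the two rates in half, while additive increase leaves the gap unchanged, so repeated cycles drive the allocation toward the fair line.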


Data centers consist of ____ which are connected by ____

data centers consist of server racks which are connected by switches


'high fan in'

high bandwidth, low latency, with many clients issuing requests in parallel, each for a small amount of data


TCP Incast problem occurs because of

the combination of: 1. high-bandwidth, low-latency application requirements, 2. lots of parallel requests made to the servers, 3. small buffer sizes in the switches


TCP Incast problem is

drastic reduction in application throughput that results when servers using TCP all simultaneously request data, leading to a gross underutilization of network capacity in many-to-one communication networks like a data center


Filling up buffers at the switch level results in (2)

bursty retransmissions and TCP timeouts


Bursty retransmissions are caused by

TCP timeouts


TCP timeouts in a data center

can be hundreds of milliseconds, but the round-trip time in a data-center network is extremely short (less than 1 millisecond, often hundreds of microseconds). So senders are bottlenecked by TCP timeouts, which they have to wait out before they retransmit; while senders wait, the link sits idle
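
A back-of-envelope calculation shows why this mismatch hurts (the specific numbers are illustrative, within the ranges the card gives):

```python
# One retransmission timeout costs the link many round trips of idle time.
rto_s = 0.200        # retransmission timeout: hundreds of milliseconds
rtt_s = 0.000100     # data-center round trip: ~100 microseconds

idle_rtts = rto_s / rtt_s
print(idle_rtts)     # about 2000 round trips of idle link time per timeout
```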


Barrier Synchronization

when a client or an application has many parallel threads, and no forward progress can be made until all the responses for those threads are satisfied. E.g., we get responses for threads 1-3, but TCP times out on the fourth, so the link is idle for a long time until the fourth connection's timeout fires.


solutions to Barrier Synchronization problem to help manage network load

Use fine-grained TCP retransmission timers (on the order of microseconds, not milliseconds). This reduces TCP timeouts, which increases system throughput
Have the client acknowledge every other packet, not every packet. This reduces the overall network load.


Challenges with media and streaming

Large volume of data
Data volume varies over time (because of how the data is compressed, it may not be sent at a constant rate)
Users have low tolerance for delay variation
Once playout of a video starts, users don’t want the video to stop
Users have low tolerance for delay, period.
Some loss is acceptable!


Image compression

spatial redundancy (exploits aspects humans don't notice)


Compression across images

Temporal redundancy => uses a combination of static image compression on reference/anchor (I) frames and derived (P) frames. A P frame is almost the same as the I frame plus a few motion vectors


Playout buffer

Client stores data as it arrives and plays it out in a continuous fashion so that the user experiences a smooth playout
Client waits a few seconds to allow data to build up in the buffer (to ride out periods when data arrives slower)
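
The buffering behavior can be sketched in a few lines (a minimal illustration; the chunk granularity, the `PREBUFFER` threshold, and the function names are hypothetical):

```python
# Minimal playout-buffer sketch: wait until enough data is buffered,
# then play chunks out continuously.
from collections import deque

buffer = deque()
PREBUFFER = 3      # wait until 3 chunks are buffered before starting playback
started = False

def on_chunk_arrived(chunk):
    # Client stores data as it arrives
    buffer.append(chunk)

def try_play():
    global started
    if not started and len(buffer) >= PREBUFFER:
        started = True          # enough data has built up: begin playout
    if started and buffer:
        return buffer.popleft() # play the next chunk
    return None                 # still pre-buffering: playback waits

for c in ["c1", "c2"]:
    on_chunk_arrived(c)
print(try_play())   # None: still pre-buffering
on_chunk_arrived("c3")
print(try_play())   # "c1": playback starts
```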


Playout delay

Packets are generated at a rate, but received at a different rate because of delay in the network
Playout delay allows for smooth playout
Start up delays are acceptable
Small amounts of loss do not affect playout, and re-transmitting a lost packet might take too long (better to lose it than to try to get it again)


What pathologies can streaming audio/video tolerate

Loss and delay, but NOT variation in delay


T/F: TCP is a good fit for streaming data

False. TCP provides:
Reliable delivery: retransmits lost packets, which streaming doesn't need
Slowing down upon loss: congestion control cuts the sending rate just when smooth playout needs it
Protocol overhead: streaming doesn't need a TCP header or an ACK for every other packet


UDP for streaming

Does not retransmit packets
Smaller header
No sending rate adaptation
Higher layers must solve these problems
Sending rate must be TCP friendly/fair


When applications compete for bandwidth, which is prioritized?

Voice over IP ( VOIP) over FTP


How is VOIP prioritized

Mark audio packets as they arrive at the router so they receive a higher priority


3 alternatives to marking packets with higher priority

Fixed bandwidth per application, scheduling, and admission control


Problem with fixed bandwidth per application

inefficient if one of the flows uses less than its fixed allocation (the spare bandwidth is wasted)
not used much



Scheduling

Queues holding higher-priority (green) packets are served more often than lower-priority (red) queues


Admission Control

Application declares its needs in advance, and the network may block the application's traffic if it can't serve those needs
Con: Blocking
not used much


Commonly used for streaming audio/video

scheduling and marking packets