Lesson 6: Congestion Control and Streaming

CS6250 Test 2 flashcards in this deck (47):
1

Goal of congestion control

Fill the pipes without overflowing them, i.e.:
1. use network resources efficiently
2. allocate resources fairly
Also: avoid congestion collapse

2

What is congestion

Different sources compete for resources inside the network. The sources are unaware of each other and of the network resources they share, which can result in lost packets and long delays (bottleneck links and congestion collapse)

3

What is congestion collapse

An increase in traffic leads to a decrease in useful work done; eventually the network reaches saturation

4

2 causes of congestion collapse

Spurious retransmission and undelivered packets

5

What is spurious retransmission

When senders don't receive acknowledgments that their packets were delivered, they retransmit copies of the same packets, and these copies become outstanding in the network as well

6

Solution to spurious retransmission

Have better timers and use TCP's congestion control

7

Undelivered packets

Packets that consume resources along the path and are then dropped somewhere in the network

8

Solution to undelivered packets

apply congestion control to all traffic

9

Two approaches to Congestion control

End to end and network assisted

10

End to end approach to congestion control

No feedback from the network to senders about when they should slow down their rates
Congestion is inferred from packet loss and increased delay
This is the approach taken by TCP

11

What is the approach taken by TCP for congestion control

End to end

12

Network assisted approach to congestion control

Routers provide feedback to end systems about the rates at which they should be sending
Single bit (congestion indication)
Explicit rates

13

T/F packet loss is a sign of congestion in all networks

False (in wireless networks, packet loss can be caused by packet corruption rather than congestion)

14

How TCP Congestion control works

Senders continue to increase their rates until they see packet drops in the network.
Packet loss occurs when senders send faster than the rate at which a router on the path can drain its buffer. TCP has both an increase algorithm and a decrease algorithm.

15

What are two approaches to adjusting rate

Window-based algorithms and rate-based algorithms

16

Window based algorithm

A sender can only have a certain number of packets outstanding (in flight).
Example: if the sender's window is 4 packets (4 packets outstanding), the sender can't send more packets until it receives an acknowledgment ("ACK") from the receiver.
If the sender wants to increase its sending rate, it increases the window size.
A sender might increase the window every time it receives an ACK.
Success: increase the window by one packet per round trip (additive increase)
Failure: window size is cut in half (multiplicative decrease)
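
A rough sketch of this window adjustment (the class and method names are invented for illustration, not from the lesson):

class AIMDWindow:
    # Toy additive-increase/multiplicative-decrease congestion window.

    def __init__(self, initial_window=1.0):
        self.cwnd = initial_window  # window size, in packets

    def on_ack(self):
        # Additive increase: +1/cwnd per ACK, so roughly +1 packet
        # of window per round trip when a full window is ACKed.
        self.cwnd += 1.0 / self.cwnd

    def on_loss(self):
        # Multiplicative decrease: halve the window on packet loss.
        self.cwnd = max(1.0, self.cwnd / 2.0)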

17

Rate based algorithm

The sender monitors the loss rate and uses a timer to modulate its transmission rate

18

Fairness

everyone gets fair share of network resources

19

Efficiency

network resources are used well

20

If the RTT is 100 ms, each packet is 1 KB (1 byte = 8 bits), and the window size is 10 packets, what is the sending rate?

800 kbps. 10 packets per 100 ms RTT = 100 packets/sec; 100 packets/sec * 8000 bits/packet = 800,000 bps = 800 kbps
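
A quick sanity check of this arithmetic (a throwaway snippet; the variable names are just for illustration):

window_packets = 10
rtt_seconds = 0.1          # 100 ms
packet_bits = 1000 * 8     # 1 KB per packet, 8 bits per byte

# One full window of packets is sent per round trip.
rate_bps = window_packets * packet_bits / rtt_seconds
print(rate_bps)  # 800000.0 bps = 800 kbps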

21

Phase plot and optimal operating line

Represents fairness and efficiency on one plot: points left of the efficiency line mean the link is underutilized, points right of it mean it is overutilized.
In the lecture's plot, the black line is the efficiency line and the green line is the fairness line.

22

AIMD

additive increase, multiplicative decrease

23

What does AIMD converge to

fairness and efficiency
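
A minimal simulation suggests why: assume two flows share one bottleneck, both add one unit per round trip while the link has room, and both halve their rates when the combined rate exceeds capacity (all numbers here are invented for illustration):

def aimd_two_flows(capacity=100.0, x1=80.0, x2=10.0, rounds=200):
    # Simulate two AIMD flows sharing one bottleneck link.
    for _ in range(rounds):
        if x1 + x2 > capacity:
            # Overutilization: both flows see loss and halve their rates.
            x1, x2 = x1 / 2.0, x2 / 2.0
        else:
            # Underutilization: both flows add one unit per round trip.
            x1, x2 = x1 + 1.0, x2 + 1.0
    return x1, x2

print(aimd_two_flows())  # both rates end up oscillating near capacity / 2

Each halving shrinks the gap between the two rates, while additive increase keeps the gap constant, so the flows drift toward the fair share.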

24

Data centers consist of ____ which are connected by ____

data centers consist of server racks which are connected by switches

25

'high fan in'

High bandwidth and low latency, with many clients issuing requests in parallel, each transferring a small amount of data

26

TCP Incast problem occurs because of

The combination of: 1. high-bandwidth, low-latency application requirements, 2. lots of parallel requests made by the servers, and 3. small buffer sizes in the switches

27

TCP Incast problem is

drastic reduction in application throughput that results when servers using TCP all simultaneously request data, leading to a gross underutilization of network capacity in many-to-one communication networks like a data center

28

Filling up buffers at the switch level results in (2)

Bursty retransmissions and TCP timeouts

29

Bursty retransmissions are caused by

TCP timeouts

30

TCP timeouts in a data center

TCP timeouts can last hundreds of milliseconds, but the round-trip time in a data center network is extremely short (< 1 millisecond, often hundreds of microseconds). So senders are bottlenecked by TCP timeouts, which they have to wait out before they retransmit (while senders wait, the link sits idle)

31

Barrier Synchronization

When a client or application has many parallel threads, and no forward progress can be made until the responses for all of those threads are satisfied. E.g., we get responses for threads 1-3, but TCP times out on the fourth, so the link sits idle until the fourth connection times out and retransmits.

32

solutions to Barrier Synchronization problem to help manage network load

Use fine-grained TCP retransmission timers (on the order of microseconds, not milliseconds). This reduces TCP timeouts, which increases system throughput.
Have the client acknowledge every other packet rather than every packet. This reduces the overall network load.

33

Challenges with media and streaming

Large volume of data
Data volume varies over time (because of how the data is encoded, it may not be sent at a constant rate)
Users have low tolerance for delay variation
Once playout of a video starts, users don't want the video to stop
Users have low tolerance for delay, period
Some loss is acceptable!

34

Image compression

Exploits spatial redundancy within an image (compresses away aspects humans don't notice)

35

Compression across images

Exploits temporal redundancy: uses a combination of static image compression on reference/anchor (I) frames and derived (P) frames. A P frame is almost the same as its I frame, plus a few motion vectors.

36

Playout buffer

The client stores data as it arrives and plays it out in a continuous fashion so the user sees smooth playout.
The client waits a few seconds before starting playout to allow data to build up in the buffer (to cover periods when data arrives more slowly).

37

Playout delay

Packets are generated at one rate but received at a different rate because of variable delay in the network.
Playout delay allows for smooth playout.
Startup delays are acceptable.
A small amount of loss does not affect playout, but retransmitting a lost packet might take too long (better to lose it than to try to get it again).
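
A small sketch of the playout-buffer idea: the client delays playout by a fixed amount so that jittery arrivals still leave a packet ready at every playout tick (the arrival model and delay values are made up for illustration):

import random

random.seed(1)

# Packets generated every 20 ms; the network adds 0-60 ms of jitter.
send_times = [i * 0.020 for i in range(50)]
arrival_times = [t + random.uniform(0.0, 0.060) for t in send_times]

def underflows(playout_delay):
    # Count packets that miss their scheduled playout time.
    misses = 0
    for sent, arrived in zip(send_times, arrival_times):
        if arrived > sent + playout_delay:  # not yet in the buffer
            misses += 1
    return misses

print(underflows(0.010))  # small delay: frequent buffer underflow
print(underflows(0.080))  # delay exceeds the max jitter: smooth playout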

38

What pathologies can streaming audio/video tolerate

Loss and delay, but NOT variation in delay

39

T/F: TCP is a good fit for streaming data

False. TCP:
Provides reliable delivery: retransmits lost packets, which streaming doesn't need
Slows down upon loss, when streaming wants a steady rate
Adds protocol overhead: streaming doesn't need an ACK for every other packet

40

UDP for streaming

Does not retransmit lost packets
Smaller header
No sending-rate adaptation
Higher layers must solve these problems
The sending rate must be TCP-friendly/fair

41

When applications compete for bandwidth, which is prioritized?

Voice over IP (VoIP) is prioritized over FTP

42

How is VOIP prioritized

Mark audio packets as they arrive at the router so they receive a higher priority

43

3 alternatives to marking packets with higher priority

Fixed bandwidth per application, scheduling, and admission control

44

Problem with fixed bandwidth per application

Inefficient if one of the flows uses less than its fixed allocation (the leftover bandwidth is wasted);
not used much

45

Scheduling

Packets in a higher-priority queue (the green queue in the lecture example) are served more often than packets in a lower-priority (red) queue
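
One concrete way to serve one queue more often than another is weighted round-robin; here is a sketch (the queue names and weights are invented, not from the lesson):

from collections import deque

def weighted_round_robin(queues, weights, budget=12):
    # Serve up to weights[name] packets from each queue per cycle,
    # until the transmission budget for this interval is spent.
    served = []
    while budget > 0 and any(queues.values()):
        for name, q in queues.items():
            for _ in range(weights[name]):
                if q and budget > 0:
                    served.append(q.popleft())
                    budget -= 1
    return served

queues = {
    "green": deque(f"g{i}" for i in range(6)),  # higher-priority traffic
    "red": deque(f"r{i}" for i in range(6)),
}
weights = {"green": 3, "red": 1}  # green is served 3x as often
print(weighted_round_robin(queues, weights))
# ['g0', 'g1', 'g2', 'r0', 'g3', 'g4', 'g5', 'r1', ...]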

46

Admission Control

The application declares its needs in advance, and the network may block the application's traffic if it can't serve those needs
Con: blocking
Not used much

47

Commonly used for streaming audio/video

scheduling and marking packets