Congestion Control and Streaming Flashcards

1
Q

How does a sender's lack of knowledge of a shared downstream bottleneck manifest itself?

A
  1. lost packets
  2. long delays
  3. congestion collapse
2
Q

Congestion Collapse (short definition)

A

Throughput is less than the capacity of the bottleneck link.

Packets consume network resources only to get dropped later at a downstream link.

3
Q

Congestion Collapse causes

A
  1. spurious retransmission
  2. undelivered packets

4
Q

Solution to spurious retransmission

A
  1. better timers
  2. TCP congestion control

5
Q

How does TCP interpret packet loss? What does it do as a result?

A

TCP interprets packet loss as a sign of congestion, and it slows down its sending rate as a result.

6
Q

What do senders do if no packets are dropped?

A

Increase sending rate

7
Q

TCP increase algorithm behavior

A

The sender probes the network to determine whether it can sustain a higher sending rate.

8
Q

TCP decrease algorithm behavior

A

Senders react to congestion to achieve optimal loss rates, delays, and sending rates.

9
Q

RTT = 100 milliseconds

packet size = 1 KB (kilobyte)

window size = 10 packets

What is transmission rate in kbps?

A

800 kbps
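
A minimal Python sketch of the calculation behind this answer (it assumes 1 KB is counted as 1,000 bytes, which is what the 800 kbps figure implies):

# A window-limited sender can send at most one window of data per round trip,
# so throughput = (window size * packet size) / RTT.
rtt_s = 0.100             # 100 ms round-trip time
packet_bits = 1000 * 8    # 1 KB per packet, counted here as 1,000 bytes
window_pkts = 10

rate_bps = window_pkts * packet_bits / rtt_s
print(rate_bps / 1000, "kbps")   # -> 800.0 kbps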

10
Q

Rate Based Approach to Rate Adjustment

A
  1. Sender monitors the loss rate
  2. Sender uses a timer to modulate the sending rate
  (This is the less common approach; adjusting the window size is more common.)
11
Q

Fairness vs Efficiency

A

Fairness is every flow getting its ‘fair share’ of network resources; efficiency is the network's resources being used as fully as possible.

12
Q

Where does high ‘fan-in’ occur?

A

Between the leaves and the root of a data center network topology

13
Q

data center attributes

A
  1. high ‘fan-in’
  2. high-bandwidth, low-latency workloads
  3. many parallel requests
14
Q

TCP incast problem

A

Throughput collapse resulting from many parallel requests in a data center: switch buffers overflow, causing underutilization of the network.

This is a many-to-one communication problem.

It causes bursty retransmissions due to TCP timeouts.

15
Q

bursty retransmission cause

A

Caused by TCP timeouts in the TCP incast scenario

16
Q

incast

A

drastic reduction in application throughput caused when servers all simultaneously request data

17
Q

barrier synchronization

A

A client/app may have many parallel requests (threads), and no forward progress can be made until the responses for all of those requests have arrived.

18
Q

solution to idle time in barrier synchronization

A

  1. fine-grained retransmission timers that operate at microsecond granularity
  2. have the client acknowledge every other packet (a secondary option, not the main solution)

19
Q

basic goal of TCP congestion control

A

prevent congestion collapse

20
Q

challenges of streaming

A
  1. Large volume of data
  2. Data volume varies over time
  3. Low tolerance for delay variation (video)
  4. Low tolerance for delay, period (games, VoIP)
21
Q

analog to digital audio sampling explained

A

Samples of the audio signal are taken at fixed intervals, with each sample quantized to a fixed number of bits.

22
Q

video compression techniques

A
  1. Spatial redundancy
  2. Temporal redundancy

23
Q

spatial redundancy

A

A video compression method that exploits detail within a single image that humans tend not to notice

24
Q

temporal redundancy

A

Compression across successive images using a reference (anchor) frame and derived frames

25
reference anchor
"I" frame. Used as reference frame in video compression. Divided into grid.
26
derived frame
"P" frame
27
motion vectors
difference between the I frame blocks and the P frame blocks in video compression
28
how does TCP know when to stop increasing rate?
when sender notices packet drops
29
causes of packet drops OTHER than congestion
in wireless networks, interference may corrupt a packet and cause it to be dropped
30
how does TCP increase its sending rate?
by increasing the window size
31
every time additive increase is applied, what is increasing (that isn't the window size)
efficiency
32
every time multiplicative decrease is applied, what is increasing?
fairness. This is because you get closer to the x1 = x2 fairness line in the phase plot
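
A minimal Python sketch of the additive-increase/multiplicative-decrease (AIMD) idea behind cards 31 and 32; the constants and the loss pattern are made up for illustration, and real TCP tracks the window in bytes and reacts to ACKs and timeouts rather than a per-RTT flag:

# Additive increase probes for spare capacity (efficiency); multiplicative
# decrease on loss backs off sharply, which over time moves competing flows
# toward an equal share of the bottleneck (fairness).
def aimd_update(cwnd, loss_detected, alpha=1.0, beta=0.5):
    """Return the new congestion window (in packets) after one RTT."""
    if loss_detected:
        return max(1.0, cwnd * beta)   # multiplicative decrease
    return cwnd + alpha                # additive increase

cwnd = 1.0
for rtt, loss in enumerate([False, False, False, True, False, False, True]):
    cwnd = aimd_update(cwnd, loss)
    print(f"RTT {rtt}: loss={loss}, cwnd={cwnd:.1f} packets")
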
33
throughput collapse cause (and what example was used in class?)
caused by switch buffer overflow. (The example used in class is the barrier synchronization problem.)
34
Challenges of streaming
- large volume of data
- data volume varies over time
- low tolerance for delay variation (video)
- low tolerance for delay, period (games, VoIP)
35
8,000 samples/sec, 8 bits/sample ... what is the resulting bit rate?
64 kbps
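
A quick Python check of the arithmetic behind this card:

samples_per_sec = 8_000
bits_per_sample = 8
bit_rate_bps = samples_per_sec * bits_per_sample
print(bit_rate_bps / 1000, "kbps")   # -> 64.0 kbps
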
36
playout delay
acceptable delay at the beginning of a stream while waiting for the initial packets to fill a playout buffer
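
A rough Python sketch of why this buffering helps; the 150 ms playout delay, 20 ms packet interval, and per-packet network delays below are made-up illustrative values, not from the card. A packet only causes a glitch if it arrives after its playback deadline, and the playout delay pushes every deadline later:

# Packets are generated every 20 ms but arrive with variable network delay.
# Playback starts playout_delay after the first packet arrives, so most
# packets are already buffered by the time they are needed.
playout_delay = 0.150   # seconds of buffering before playback starts
gen_interval = 0.020    # one audio packet generated every 20 ms

network_delays = [0.02, 0.09, 0.05, 0.20, 0.03, 0.07]   # made-up per-packet delays (s)
arrivals = [i * gen_interval + d for i, d in enumerate(network_delays)]

playback_start = arrivals[0] + playout_delay
for i, arrival in enumerate(arrivals):
    deadline = playback_start + i * gen_interval   # when packet i must be played
    status = "on time" if arrival <= deadline else "late (glitch)"
    print(f"packet {i}: arrived at {arrival:.3f}s, needed by {deadline:.3f}s -> {status}")
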
37
why is TCP bad for streaming?
- reliable delivery
- slows down upon loss
- protocol overhead (headers, ACKs)
38
why is UDP good for streaming?
- no retransmission
- no sending-rate adaptation
- smaller headers
39
what is delegated to higher layers if UDP is implemented?
- when to transmit
- how to encapsulate
- whether to retransmit
- whether to adapt the sending rate
40
what property must UDP traffic have when sharing a link with TCP traffic?
UDP must be 'TCP friendly', i.e., it should not take more than its fair share of the link from competing TCP flows
41
QoS (quality of service) approaches
- explicit reservations
- mark certain packet streams as high priority
42
weighted fair queueing
in the network there are multiple queues, and the queues with higher priority (weight) are serviced more frequently
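
A minimal Python sketch of the idea, using a simple weighted round-robin approximation of weighted fair queueing; the queue names, weights, and packet labels are made up for illustration:

from collections import deque

# Each queue is visited in turn and may send up to `weight` packets per round,
# so higher-weight (higher-priority) queues are serviced more often.
queues = {
    "voip":  {"weight": 3, "pkts": deque(f"v{i}" for i in range(6))},
    "video": {"weight": 2, "pkts": deque(f"d{i}" for i in range(6))},
    "bulk":  {"weight": 1, "pkts": deque(f"b{i}" for i in range(6))},
}

def schedule_round(queues):
    """Dequeue up to `weight` packets from each queue; return the send order."""
    sent = []
    for q in queues.values():
        for _ in range(q["weight"]):
            if q["pkts"]:
                sent.append(q["pkts"].popleft())
    return sent

while any(q["pkts"] for q in queues.values()):
    print(schedule_round(queues))   # voip gets 3 slots per round, video 2, bulk 1
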
43
alternatives to weighted fair queueing
- fixed bandwidth per app (bad because this is inefficient from a network utilization perspective)
- admission control, where the app declares its needs in advance and the network blocks contending traffic to accommodate it (analogous to a busy signal on a telephone call)