Transport Layer III Flashcards
(13 cards)
TCP sequence numbers and ACKs
sequence number:
TCP sequence numbers are numbers that keep track of the order of bytes being sent over a connection.
Why are they needed?
To reassemble data correctly at the receiver.
To detect missing or duplicate data.
To ensure reliable, ordered delivery.
ACKs in TCP:
In TCP, ACKs (acknowledgements) are used to confirm that data has been received successfully.
TCP fast retransmit
if sender receives 3 additional ACKs for the same data ("triple duplicate ACKs"), resend the unACKed segment with the smallest seq #
▪ it is likely that the unACKed segment was lost, so don't wait for the timeout
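The rule above can be sketched in a few lines of Python; the `Sender` class and its counters are illustrative bookkeeping, not real TCP state:

```python
# Sketch of the fast-retransmit rule: after three duplicate ACKs for the
# same sequence number, retransmit the oldest unACKed segment instead of
# waiting for the timeout. (Names here are illustrative.)

class Sender:
    def __init__(self):
        self.last_ack = None      # highest cumulative ACK seen so far
        self.dup_count = 0        # duplicates of that ACK
        self.retransmissions = []

    def on_ack(self, ack_num):
        if ack_num == self.last_ack:
            self.dup_count += 1
            if self.dup_count == 3:          # "triple duplicate ACK"
                # resend the unACKed segment with the smallest seq #,
                # which begins at the duplicated ACK number
                self.retransmissions.append(ack_num)
        else:                                # new ACK: reset the counter
            self.last_ack = ack_num
            self.dup_count = 0

sender = Sender()
for ack in [100, 200, 200, 200, 200]:  # three duplicates of ACK 200
    sender.on_ack(ack)
print(sender.retransmissions)  # → [200]
```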
TCP round trip time, timeout
▪ too short: premature timeout, unnecessary retransmissions
▪ too long: slow reaction to segment loss
▪ should be longer than RTT, but RTT varies!
Q: how to estimate RTT?
▪ SampleRTT: measured time from segment transmission until ACK receipt
* ignore retransmissions
TCP round trip time, timeout
EstimatedRTT = (1 − α) × EstimatedRTT + α × SampleRTT
▪ EstimatedRTT: exponentially weighted moving average of RTT samples
▪ SampleRTT: measured RTT from the most recent segment
▪ α (alpha): a weighting factor (typically 0.125)
DevRTT = (1 − β) × DevRTT + β × |SampleRTT − EstimatedRTT|, with β typically 0.25, estimates how much SampleRTT deviates from EstimatedRTT (a "safety margin")
TimeoutInterval = EstimatedRTT + 4 × DevRTT
This helps avoid premature timeouts as well as slow recovery after loss.
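The estimator equations above can be exercised directly; this is a sketch in which the starting values and the millisecond samples are made up:

```python
# RFC 6298-style EWMA estimators for TCP's retransmission timeout.
ALPHA = 0.125   # weight on the newest SampleRTT in EstimatedRTT
BETA = 0.25     # weight on the newest deviation in DevRTT

def update_rtt(estimated_rtt, dev_rtt, sample_rtt):
    """Return (EstimatedRTT, DevRTT, TimeoutInterval) after one sample."""
    dev_rtt = (1 - BETA) * dev_rtt + BETA * abs(sample_rtt - estimated_rtt)
    estimated_rtt = (1 - ALPHA) * estimated_rtt + ALPHA * sample_rtt
    timeout = estimated_rtt + 4 * dev_rtt
    return estimated_rtt, dev_rtt, timeout

est, dev = 100.0, 10.0          # milliseconds, arbitrary starting values
for sample in [110, 105, 150]:  # measured SampleRTTs (retransmissions ignored)
    est, dev, timeout = update_rtt(est, dev, sample)
print(est, timeout)
```

Note how the late 150 ms sample widens DevRTT, pushing TimeoutInterval well above EstimatedRTT itself: the safety margin grows when the RTT becomes noisy.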
TCP flow control
Q: what happens if the network layer delivers data faster than the application layer removes data from socket buffers?
flow control: receiver controls sender, so sender won't overflow receiver's buffer by transmitting too much, too fast
▪TCP receiver “advertises” free buffer space in rwnd field in TCP header
* RcvBuffer size set via socket options (typical default is 4096 bytes)
* many operating systems autoadjust RcvBuffer
▪ sender limits amount of unACKed ("in-flight") data to received rwnd
▪ guarantees receive buffer will not overflow
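The sender-side rule can be sketched as two tiny functions; the names and the 4096-byte buffer are illustrative:

```python
# Sketch of receiver-advertised flow control: the sender never keeps more
# than rwnd bytes in flight, so the receive buffer cannot overflow.

RCV_BUFFER = 4096   # typical default receive-buffer size (bytes)

def advertised_rwnd(buffered_bytes):
    """rwnd = free space left in the receive buffer."""
    return RCV_BUFFER - buffered_bytes

def can_send(in_flight_bytes, segment_len, rwnd):
    """Sender rule: unACKed data plus this segment must fit within rwnd."""
    return in_flight_bytes + segment_len <= rwnd

rwnd = advertised_rwnd(buffered_bytes=3000)  # receiver has 1096 bytes free
print(can_send(in_flight_bytes=500, segment_len=500, rwnd=rwnd))   # fits
print(can_send(in_flight_bytes=500, segment_len=1000, rwnd=rwnd))  # would overflow
```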
TCP connection management
before exchanging data, sender/receiver “handshake”:
▪ agree to establish connection (each knowing the other willing to establish connection)
▪ agree on connection parameters (e.g., starting seq #s)
Closing a TCP connection
▪client, server each close their side of connection
* send TCP segment with FIN bit = 1
▪respond to received FIN with ACK
* on receiving FIN, ACK can be combined with own FIN
▪simultaneous FIN exchanges can be handled
▪RST for abnormal closing
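Both phases are visible from the socket API: `connect()` drives the three-way handshake, and `close()` starts the FIN exchange, which the peer sees as end-of-stream (an empty `recv`). A minimal loopback sketch:

```python
# Minimal loopback demo: connect() triggers SYN / SYN-ACK / ACK,
# close() sends FIN, and the peer reads b"" at end-of-stream.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # OS picks a free port
server.listen(1)
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))    # three-way handshake happens here
conn, _ = server.accept()

client.sendall(b"hello")
data = b""
while len(data) < 5:                   # recv may return fewer bytes
    data += conn.recv(5 - len(data))
print(data)                            # → b'hello'

client.close()                         # client sends FIN
eof = conn.recv(1)                     # FIN seen as end-of-stream
print(eof)                             # → b''
conn.close()                           # server's FIN completes the close
server.close()
```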
Causes/costs of congestion
Simplest scenario:
- one router, infinite buffers
- input and output link capacity: R
- two flows
- no retransmissions needed
Even with infinite buffers (no drops, no retransmissions), there is a cost: queueing delays grow as the arrival rate approaches the router's capacity R.
With finite buffers, if the router's buffer fills up, newly arriving packets are dropped.
The sender retransmits those lost packets, which adds traffic.
So the transport-layer offered load λ′in (original data plus retransmissions) becomes larger than the application load λin.
But the application only receives data at rate λout, which is limited by congestion and retransmissions.
cost of congestion:
▪ more work (retransmissions) needed to achieve a given receiver throughput (at most R/2 per flow in this scenario)
▪ unneeded retransmissions: link carries multiple copies of a packet, decreasing the maximum achievable throughput
▪ when a packet is dropped, any upstream transmission capacity and buffering used for that packet was wasted!
Approaches towards congestion control
What it means (the end-to-end approach):
The network doesn't tell the sender when it's congested.
The sender has to guess by watching for signs like:
Packet loss (no ACK received, or timeout happens)
Increased delays (slower ACKs)
🔹 TCP’s Approach:
If TCP notices packet loss or delay, it assumes congestion.
It reacts by slowing down how fast it sends data.
Then it gradually increases the rate again, like “testing the waters.”
TCP congestion control: AIMD/details
approach: senders increase sending rate until packet loss (congestion) occurs, then decrease sending rate on loss event
Additive increase: increase sending rate by 1 maximum segment size (MSS) every RTT until loss is detected
Multiplicative decrease: cut sending rate in half at each loss event
cwnd is a TCP sender-side variable that controls how much data (in bytes) the sender can send without receiving an ACK (acknowledgement).
🔧 How it works:
It grows when the network seems fine (no loss/delay).
It shrinks when there’s congestion (packet loss or timeout).
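The resulting "sawtooth" behaviour can be sketched with a toy update function; cwnd is counted in segments here purely for illustration:

```python
# Toy AIMD sketch: cwnd grows by one MSS per RTT (additive increase)
# and is halved on each loss event (multiplicative decrease).

MSS = 1          # count cwnd in segments for simplicity

def aimd_step(cwnd, loss):
    if loss:
        return max(cwnd / 2, MSS)   # multiplicative decrease (floor at 1 MSS)
    return cwnd + MSS               # additive increase, once per RTT

cwnd = 4.0
trace = []
for loss in [False, False, True, False]:
    cwnd = aimd_step(cwnd, loss)
    trace.append(cwnd)
print(trace)   # → [5.0, 6.0, 3.0, 4.0]
```

The trace shows the sawtooth: steady +1 growth, a halving at the loss event, then growth resumes from the lower rate.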
TCP and the congested “bottleneck link”
TCP starts slowly and increases its sending rate (amount of data it sends) over time.
It does this by increasing a value called the congestion window (cwnd).
It keeps increasing until there is packet loss — which usually means:
A router or network link (called the bottleneck link) is overloaded.
Its buffer overflows and drops a packet.
When TCP detects packet loss (no ACK received or a timeout):
It assumes the network is congested.
It reduces the sending rate (shrinks cwnd).
Then it starts increasing the rate again — more slowly (carefully).
Delay-based TCP congestion control
Keeping the sender-to-receiver pipe "just full enough, but no fuller": keep the bottleneck link busy transmitting, but avoid high delays/buffering
If current throughput is close to the best possible throughput (cwnd / RTTmin):
✅ Network is healthy, no congestion.
👉 Increase cwnd slowly to send more.
If current throughput is much lower than expected:
🚨 Network is likely congested (packets waiting in queues).
👉 Decrease cwnd to back off and relieve congestion.
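This comparison can be sketched as a single decision function in the spirit of delay-based schemes such as TCP Vegas; the 0.9 threshold and the ±1/halving reactions are arbitrary illustration values, not a real algorithm's constants:

```python
# Sketch of the delay-based rule: compare measured throughput against
# the uncongested ceiling cwnd / RTTmin and adjust cwnd accordingly.

def delay_based_update(cwnd, measured_tput, rtt_min, threshold=0.9):
    expected_tput = cwnd / rtt_min          # best case: no queueing delay
    if measured_tput >= threshold * expected_tput:
        return cwnd + 1                     # pipe not full: increase slowly
    return cwnd / 2                         # queues building: back off

# cwnd = 10 segments, RTTmin = 1 time unit → expected throughput 10
print(delay_based_update(10, measured_tput=9.5, rtt_min=1))  # healthy → grow
print(delay_based_update(10, measured_tput=5.0, rtt_min=1))  # congested → back off
```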
Explicit congestion notification (ECN)
TCP deployments often implement network-assisted congestion control:
▪ two bits in IP header (ToS field) marked by network router to indicate congestion
* policy to determine marking chosen by network operator