Chapter 3: Transport Layer Flashcards

(87 cards)

1
Q

What is the role of the transport layer in the TCP/IP model?

A

Acts as a bridge between the network layer and the application layer, transferring data between application processes running on different network nodes.

2
Q

What are the key responsibilities of the transport layer?

A
  • Builds efficiency and reliability on top of the network layer.
  • Key services: port addressing, segmentation, flow control, error control, and congestion control.
3
Q

What is port addressing and why is it important?

A

The transport layer uses port numbers to identify different applications (processes) on a host.

Each application is assigned a unique port number, allowing multiple applications to communicate simultaneously on the same host.
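
For illustration only (not from the source), a minimal Python sketch of port addressing: two UDP sockets bound to different ports on the same host, so a sender can reach each application independently. The port numbers 9001/9002 are arbitrary examples.

```python
import socket

# Application A and application B on the same host, distinguished by port.
app_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
app_a.bind(("127.0.0.1", 9001))   # A listens on port 9001
app_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
app_b.bind(("127.0.0.1", 9002))   # B listens on port 9002

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello A", ("127.0.0.1", 9001))   # addressed to A's port
sender.sendto(b"hello B", ("127.0.0.1", 9002))   # addressed to B's port

print(app_a.recvfrom(1024))   # (b'hello A', (sender IP, sender port))
print(app_b.recvfrom(1024))   # (b'hello B', (sender IP, sender port))
```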

4
Q

What is segmentation in the transport layer?

A

Breaking down large data streams into smaller, manageable packets for efficient transmission across networks

5
Q

What is flow control in the transport layer?

A

Regulates the data flow between sender and receiver, preventing overwhelming the receiver’s processing capacity

6
Q

What is error control in the transport layer?

A

Detects and corrects errors that occur during transmission using techniques like checksums and acknowledgements

7
Q

What is congestion control in the transport layer?

A

Dynamically adjusts the data transfer rate based on network conditions to avoid congestion and performance degradation

8
Q

What are the main characteristics of TCP?

A
  1. Connection-oriented
  2. Reliable
  3. Congestion control
  4. Flow control
9
Q

What are the main characteristics of UDP?

A

Connectionless
Unreliable
Prioritizes speed

10
Q

How does the transport layer interact with the application layer?

A
  1. Receives data from applications in the form of application messages
  2. Segments the data into smaller, manageable packets.
  3. Adds header information, including source and destination port numbers.
  4. Hands over the packets to the network layer for routing
11
Q

At a high level, what type of communication does the transport layer handle?

A

Process to process communication

12
Q

What actions does the transport layer perform on the sender side?

A
  1. The application layer creates a message and drops it into a socket.
  2. The transport layer determines the segment header field values.
  3. The transport layer creates the segment and passes it to the network layer.
13
Q

What actions does the transport layer perform on the receiver side?

A
  1. Receives segment from network layer.
  2. Extracts application-layer message.
  3. Checks header values to ensure segment is not corrupted.
  4. Demultiplexes message up to application via socket
14
Q

What is multiplexing?

A

Combining multiple data streams into a single stream for efficient transmission

15
Q

What is demultiplexing?

A

Separating a combined data stream into individual data streams for specific applications.

Allows multiple applications on a host to share the same network connection by using port numbers to identify different application endpoints.

16
Q

What are the benefits of multiplexing?

A
  1. Optimizes network bandwidth utilization.
  2. Enables efficient utilization of network resources.
  3. Reduces overall transmission time.
  4. Enables communication for multiple applications on a single connection
17
Q

How does multiplexing work?

A
  1. Transport layer assigns unique identifiers (port numbers) to each data stream
  2. These identifiers are embedded within the data packets.
  3. Packets from different streams are interleaved and sent as a single data stream
18
Q

What is demultiplexing?

A

Demultiplexing takes a combined data stream and separates it into individual data streams for specific applications

19
Q

What are the benefits of demultiplexing?

A
  1. Efficient Resource Utilization:
  2. Improved Performance
  3. Scalability
  4. Security
20
Q

How does demultiplexing work?

A
  1. Each data packet in the combined stream carries a unique identifier, typically a port number.
  2. The transport layer uses these port numbers to identify the destination application for each packet.
  3. Based on port number, packet is forwarded to the appropriate application on the receiving device
21
Q

What happens during multiplexing at the sender?

A

The transport layer multiplexes data from multiple processes into segments, adding a transport-layer header (including source and destination port numbers) to each segment.

This header information is later used during demultiplexing at the receiver.

22
Q

What is the purpose of demultiplexing at the receiver?

A

Use header info to deliver received segments to correct socket

23
Q

Demultiplexing at the receiver: steps

A
  1. The host receives IP datagrams, each with source and destination IP addresses.
  2. Each datagram carries one transport-layer segment with source and destination port numbers.
  3. The host uses the IP addresses and port numbers to direct the segment to the appropriate socket.
24
Q

How does demultiplexing work in UDP?

A
  • Demultiplexing is based on the destination port number only.
  • A UDP socket is identified by a 2-tuple: destination IP address and destination port number.
  • The transport layer uses the destination port number to deliver the data to the corresponding application.
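
As a hedged illustration (not from the source), a toy Python model of connectionless demultiplexing: the lookup key is only the destination port, so datagrams from different sources aimed at the same port land in the same socket. All names here are made up for the sketch.

```python
udp_sockets = {}   # destination port -> receiving "socket" (a simple list here)

def udp_bind(port):
    udp_sockets[port] = []
    return udp_sockets[port]

def udp_demux(src_ip, src_port, dst_port, payload):
    sock = udp_sockets.get(dst_port)          # key on destination port only
    if sock is None:
        return "no listener (port unreachable)"
    sock.append((src_ip, src_port, payload))  # source info kept for replies
    return "delivered"

dns_sock = udp_bind(53)
udp_demux("10.0.0.1", 5555, 53, b"query A")
udp_demux("10.0.0.2", 6666, 53, b"query B")   # different source, same socket
print(len(dns_sock))   # 2: both datagrams reached the one socket bound to 53
```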
25
What is connectionless demultiplexing?
UDP sends data packets without establishing a connection, prioritizing speed over reliability
26
How does demultiplexing work in TCP?
Demultiplexing is based on the full 4-tuple: source IP address, source port, destination IP address, and destination port
27
What is connection-oriented demultiplexing?
TCP establishes a virtual connection before data transfer, ensuring reliable and ordered delivery
28
How is a TCP socket identified?
By a 4-tuple: source IP address, source port number, destination IP address, and destination port number
29
What values does the demux use to direct a segment to the appropriate socket?
All four values of the 4-tuple are used to direct the segment to the appropriate socket
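
For contrast, a toy Python model (an illustrative sketch, not real kernel code) of connection-oriented demultiplexing keyed on the full 4-tuple, so two clients talking to the same server port get separate sockets.

```python
tcp_sockets = {}   # (src_ip, src_port, dst_ip, dst_port) -> per-connection buffer

def tcp_register(src_ip, src_port, dst_ip, dst_port):
    tcp_sockets[(src_ip, src_port, dst_ip, dst_port)] = []

def tcp_demux(src_ip, src_port, dst_ip, dst_port, payload):
    key = (src_ip, src_port, dst_ip, dst_port)   # all four values matter
    if key in tcp_sockets:
        tcp_sockets[key].append(payload)

# Two clients hitting the same web server port (80) get separate sockets,
# even when they happen to pick the same source port.
tcp_register("10.0.0.1", 40001, "192.0.2.10", 80)
tcp_register("10.0.0.2", 40001, "192.0.2.10", 80)
tcp_demux("10.0.0.1", 40001, "192.0.2.10", 80, b"GET /a")
tcp_demux("10.0.0.2", 40001, "192.0.2.10", 80, b"GET /b")
print(len(tcp_sockets))   # 2 independent connections
```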
30
What are port numbers?
Logical identifiers assigned to specific applications or services on a network device, differentiating between multiple programs running on the same device
31
What are well-known ports?
Standardized ports (0-1023) assigned by IANA to essential services like HTTP (80), FTP (21), and SSH (22)
32
What are registered ports?
Ports (1024-49151) registered with IANA by organizations for commonly used applications and services
33
What are dynamic/private ports?
Ports (49152-65535) used by applications dynamically for temporary connections, often assigned by the operating system
34
Port number in a URL
A port can be explicitly specified in a URL; for example, http://example.com:8080/page.html uses port 8080 for HTTP [1].
35
Importance of Port Numbers
Security: filtering traffic based on authorized ports Efficiency: Standardization: provide consistent access to essential services
36
When is UDP used?
* When speed is crucial and occasional data loss is acceptable
* For transaction-oriented protocols like DNS
* For stateless applications with a large number of clients, such as streaming multimedia (IPTV)
* For real-time applications like online games
* When multicasting is required [8]
* When error checking or correction is not required
37
Why UDP for DNS?
* Speed: critical for fast lookups, ensuring smooth browsing [8]
* Loss tolerance: a lost query can simply be retried
* Small data size: DNS responses are typically compact, minimizing the impact of potential loss [9]
38
Why UDP for SNMP?
* Used for efficient data collection in network management
* Real-time monitoring: timely information is crucial for network troubleshooting [9]
* Loss tolerance: occasional lost polls are acceptable
* Large data volume: UDP minimizes overhead when dealing with a high volume of network monitoring data [9]
39
Trade-offs and Considerations with UDP
* Unreliability: no delivery or ordering guarantees
* Security concerns: lack of encryption and authentication
* Checksum: included for error detection within the packet, but doesn't guarantee delivery or order [10]
40
UDP Header Format (components and bits)
1. Source Port (16 bits): identifies the sending application
2. Destination Port (16 bits): identifies the receiving application
3. Length (16 bits): total length of the UDP datagram (header + data) in bytes
4. Checksum (16 bits): detects errors during transmission by calculating a value based on the packet's data and header
5. Data: contains the payload to be transmitted; its length is the Length field minus the 8-byte UDP header
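
As a sketch (assuming Python's standard struct module and network byte order), parsing the 8-byte UDP header described above; the example datagram is fabricated.

```python
import struct

def parse_udp_header(datagram: bytes):
    # Four 16-bit fields in network byte order: src port, dst port, length, checksum.
    src_port, dst_port, length, checksum = struct.unpack("!HHHH", datagram[:8])
    payload = datagram[8:length]   # Length field covers header + data
    return src_port, dst_port, length, checksum, payload

# Fabricated example: a 10-byte datagram from port 53 to port 5555 carrying b"hi".
dgram = struct.pack("!HHHH", 53, 5555, 10, 0) + b"hi"
print(parse_udp_header(dgram))   # (53, 5555, 10, 0, b'hi')
```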
41
TCP Features
* Connection-Oriented
* Reliability
* Flow Control
* Congestion Control
* Full-Duplex Communication
* Ordered Data Delivery
* Connection Termination: uses a four-way handshake to close connections gracefully
42
Ordered Data Delivery
TCP ensures data is delivered to the receiver in the same order it was sent [18]. It uses sequence numbers to track the order of data packets [18]. Out-of-order packets are reordered before delivery to the application [18].
43
Connection Termination
TCP connections are terminated using a four-way handshake process [19]. Both the sender and receiver exchange control packets (FIN and ACK) to close the connection gracefully
44
TCP Segment
* A unit of data transmitted over a TCP connection
* Breaks down large data streams into smaller, manageable units
* Consists of a TCP header and a data payload
45
TCP Header
Contains critical information for reliable data transfer [22]. Typically 20 bytes (160 bits) long, but can be longer due to options [23].
46
TCP Header Components and Bits:
1. Source Port (16 bits)
2. Destination Port (16 bits)
3. Sequence Number (32 bits)
4. Acknowledgment Number (32 bits): specifies the next sequence number expected by the sender of this segment [24]
5. Data Offset (4 bits): indicates the length of the TCP header in 32-bit words, pointing to the start of the data [24]
6. Control Flags (9 bits)
7. Window Size (16 bits)
8. Checksum (16 bits)
9. Urgent Pointer (16 bits)
10. Options (variable)
47
Control Flags (9 bits):
1. URG: Urgent Pointer field significant [25]
2. ACK: Acknowledgment field significant [25]
3. PSH: Push function [25]
4. RST: Reset the connection [25]
5. SYN: Synchronize sequence numbers (used for connection establishment) [25, 26]
6. FIN: No more data from sender (used for connection termination) [20, 21, 25]
7. Other flags for congestion notification (CWR, ECE)
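
A hedged sketch (standard struct module, network byte order) parsing the fixed 20-byte TCP header from the two cards above and decoding a few of the control flag bits.

```python
import struct

def parse_tcp_header(segment: bytes):
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urg) = struct.unpack("!HHIIHHHH", segment[:20])
    header_len = (offset_flags >> 12) * 4   # Data Offset is in 32-bit words
    flags = offset_flags & 0x01FF           # low 9 bits hold the control flags
    return {
        "src_port": src_port, "dst_port": dst_port, "seq": seq, "ack": ack,
        "header_len": header_len, "window": window,
        "SYN": bool(flags & 0x02), "ACK": bool(flags & 0x10),
        "FIN": bool(flags & 0x01), "RST": bool(flags & 0x04),
    }

# Fabricated SYN segment: 20-byte header (data offset = 5 words), SYN flag set.
syn = struct.pack("!HHIIHHHH", 40001, 80, 1000, 0, (5 << 12) | 0x02, 65535, 0, 0)
print(parse_tcp_header(syn))   # header_len 20, SYN True, ACK/FIN/RST False
```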
48
Window Size
A 16-bit field in the TCP header [25].
* Specifies the size of the receive window
* Used for flow control to manage the amount of data the sender can send without acknowledgment
* Indicates the available buffer space on the receiver's side
49
Checksum
A 16-bit field in the TCP header [25]. Provides error detection for the TCP header and data [25, 28]. Calculated over the TCP header, TCP data, and a pseudo-header
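
For concreteness, a small sketch of the 16-bit one's-complement Internet checksum that TCP and UDP both use (the pseudo-header construction is omitted; verifying a received block yields 0 when it is intact).

```python
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                           # pad to an even length
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # add each 16-bit word
        total = (total & 0xFFFF) + (total >> 16)  # end-around carry fold
    return ~total & 0xFFFF                        # one's complement of the sum

block = b"\x45\x00\x00\x1c\x00\x00"               # arbitrary example bytes
csum = internet_checksum(block)
# Receiver check: summing the data plus the transmitted checksum gives 0.
print(internet_checksum(block + csum.to_bytes(2, "big")) == 0)   # True
```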
50
Urgent Pointer
A 16-bit field in the TCP header [30]. Indicates the end of urgent data, if present [30]. Only significant when the URG control flag is set
51
Options
A variable-length field in the TCP header [30]. Provides additional control information or parameters for the TCP connection [30]. Examples include Maximum Segment Size (MSS), Timestamps, and Window Scale
52
TCP Data
Contains the payload or application data to be transmitted [30]. The size can vary depending on the Maximum Segment Size (MSS) and other factors
53
Round Trip Time (RTT)
The time it takes for a TCP segment to travel from the sender to the receiver and for the acknowledgment (ACK) to return [27]. A measure of propagation delay and processing time [27].
54
RTT Estimation
TCP implementations estimate the RTT by measuring the time between sending a segment and receiving its ACK [27].
55
Smoothed RTT (SRTT)
An exponentially weighted moving average of recent RTT samples [32]. Used to account for variations in RTT
56
RTT Variance (RTTVAR)
Measures the degree of variation or volatility in RTT samples [32].
57
TCP Timeout
The duration TCP waits for an acknowledgment before retransmitting a segment [32]. Dynamically adjusted based on RTT measurements and estimation
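
The cards above correspond to the standard estimator update rules (as in RFC 6298, with typical weights α = 1/8 and β = 1/4); shown here for reference, with SRTT and RTTVAR playing the roles of EstimatedRTT and DevRTT:

$$\mathrm{SRTT} \leftarrow (1-\alpha)\,\mathrm{SRTT} + \alpha\,\mathrm{SampleRTT},\qquad \alpha = \tfrac{1}{8}$$

$$\mathrm{RTTVAR} \leftarrow (1-\beta)\,\mathrm{RTTVAR} + \beta\,\bigl|\mathrm{SampleRTT} - \mathrm{SRTT}\bigr|,\qquad \beta = \tfrac{1}{4}$$

$$\mathrm{TimeoutInterval} = \mathrm{SRTT} + 4\cdot\mathrm{RTTVAR}$$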
58
Retransmission Timer
Each TCP segment has an associated timer, initially set based on estimated RTT and RTTVAR [33]. If an ACK is not received within the timeout period, the segment is retransmitted [33].
59
Adaptive Timeout Mechanisms
TCP uses algorithms like Karn's algorithm and Jacobson's algorithm to dynamically adjust the timeout value based on RTT measurements and network conditions
60
Exponential Backoff
In case of multiple consecutive timeouts, TCP often increases the timeout value exponentially [33]. This reduces the frequency of retransmissions and helps mitigate congestion
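
A minimal sketch of the idea in Python (send_segment and wait_for_ack are hypothetical placeholders, not real APIs): the timeout doubles after each consecutive loss.

```python
def retransmit_with_backoff(send_segment, wait_for_ack,
                            initial_timeout=1.0, max_attempts=5):
    timeout = initial_timeout
    for _ in range(max_attempts):
        send_segment()
        if wait_for_ack(timeout):   # ACK arrived within the current timeout
            return True
        timeout *= 2                # consecutive timeout: back off exponentially
    return False                    # give up after max_attempts tries

# Example with stubs: the first two "transmissions" time out, the third succeeds.
attempts = iter([False, False, True])
print(retransmit_with_backoff(lambda: None, lambda t: next(attempts)))   # True
```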
61
Connection Establishment - TCP Sender
* The TCP sender initiates connection establishment by sending a SYN (synchronize) segment to the receiver.
* It waits for the receiver's acknowledgment (SYN-ACK), confirming receipt of the SYN segment.
* Upon receiving the SYN-ACK, the sender replies with an ACK, completing the three-way handshake and establishing the TCP connection.
62
Segment Transmission - TCP Sender
* Once the connection is established, the sender can start transmitting data segments to the receiver.
* The sender encapsulates application data into TCP segments and sends them over the network to the receiver's IP address and port number.
* Each segment includes sequence numbers to allow the receiver to reconstruct the data in the correct order.
63
Acknowledgement Reception - TCP Sender
* After sending each segment, the sender waits for an acknowledgment (ACK) from the receiver.
* The sender maintains a timer for each transmitted segment to detect lost or delayed acknowledgments.
* If an acknowledgment is not received within a certain timeout period, the sender may retransmit the segment.
64
Connection Establishment - TCP Receiver
* The TCP receiver listens for incoming connection requests on a specific port.
* When a connection request (SYN segment) is received, the receiver responds with a SYN-ACK (synchronize-acknowledgment) segment to acknowledge the sender's SYN segment and establish the connection.
* Once the three-way handshake is completed, the receiver is ready to receive data from the sender.
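
In practice the three-way handshake is performed by the operating system; below is a minimal sketch with Python's standard socket API (port 9090 is an arbitrary example) in which connect()/accept() trigger the SYN / SYN-ACK / ACK exchange under the hood.

```python
import socket, threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 9090))      # receiver listens on a specific port
srv.listen(1)                      # ready to queue incoming SYNs

def client():
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", 9090))   # sender side: SYN -> SYN-ACK -> ACK
    cli.close()

t = threading.Thread(target=client)
t.start()
conn, addr = srv.accept()          # returns once the handshake completes
conn.close(); srv.close(); t.join()
```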
65
TCP Receiver Segment Reception
* After the connection is established, the receiver listens for incoming TCP segments sent by the sender.
* It receives data segments and acknowledges their receipt by sending acknowledgment (ACK) segments back to the sender.
* The receiver uses sequence numbers in the TCP header to reorder out-of-order segments and reconstruct the original data stream.
66
TCP Receiver vs. TCP Sender Flow Control
* Receiver controls sender: the receiver controls the sender to prevent overwhelming its buffers by signaling how much data it can accept.
* Window size: the receiver advertises its available buffer space to the sender using the window size field (rwnd) in the TCP header.
* Dynamic adjustment: the receiver dynamically adjusts the window size based on its available buffer space.
* Sender behavior: the sender regulates its transmission rate based on the receiver's advertised window size, ensuring it doesn't send more data than the receiver can handle.
67
TCP Receiver Error Detection and Handling
* The receiver verifies the integrity of incoming TCP segments by calculating the checksum and comparing it with the checksum value provided in the segment.
* If errors are detected (e.g., corrupted segments, invalid checksums), the receiver may discard the segments or trigger their retransmission (through lack of acknowledgement).
68
Brief Comparison of TCP and UDP Protocols
* TCP: reliable transport, connection-oriented (requires connection setup), provides flow control and congestion control, guarantees in-order delivery, more overhead. Suitable for applications requiring reliable data exchange like web browsing (HTTP), email (SMTP), and file transfer (FTP).
* UDP: unreliable data transfer, connectionless (no connection setup), no flow control or congestion control, unordered delivery, less overhead. Prioritizes speed and is suitable for applications where speed is crucial and occasional data loss is acceptable, like online gaming and real-time audio/video conferencing.
69
What happens if network layer delivers data faster than application layer removes data from socket buffers?
* Buffer overflow and data loss: incoming data packets may be dropped because there's no space left in the buffer.
* Increased latency: data sits in buffers waiting for the application to consume it, leading to delays.
* Congestion: can occur if routers' buffers also become overwhelmed due to the sustained high rate of data.
* Resource starvation: if buffers remain full, other processes may not have access to the resources they need.
* (Solution) Flow control: mechanisms like TCP flow control prevent this by allowing the receiver to signal the sender to slow down.
70
Buffer Overflow (in the context of TCP Flow Control)
* Occurs when the receiver's buffers become full because the sender is transmitting data faster than the application can consume it.
* Can lead to data loss as new incoming packets cannot be stored.
71
Resource Starvation (in the context of TCP Flow Control)
A situation where, due to prolonged buffer overflow, other processes or applications on the receiving system may not have access to the resources they need to function properly.
72
Flow Control using Sliding Window Protocol
* A mechanism used by TCP to regulate the rate of data transmission between sender and receiver.
* The receiver advertises its available buffer space to the sender through the "window size" (rwnd) in TCP segments.
* The sender maintains a "send window" representing the range of unacknowledged data it can send, limited by the receiver's advertised window and the sender's congestion window (cwnd).
* The sender sends data within the send window and advances the window as acknowledgments are received.
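
A highly simplified, illustrative Python sketch of the sender-side rule (names are made up for the sketch): never keep more unacknowledged bytes in flight than the receiver's advertised window rwnd.

```python
class SlidingWindowSender:
    def __init__(self, rwnd):
        self.rwnd = rwnd        # receiver's advertised window, in bytes
        self.base = 0           # oldest unacknowledged byte
        self.next_seq = 0       # next byte to be sent

    def can_send(self, nbytes):
        # bytes already in flight plus the new data must fit inside rwnd
        return (self.next_seq - self.base) + nbytes <= self.rwnd

    def send(self, nbytes):
        if not self.can_send(nbytes):
            return False        # window full: wait for ACKs
        self.next_seq += nbytes
        return True

    def on_ack(self, ack_seq, new_rwnd):
        self.base = max(self.base, ack_seq)   # slide the window forward
        self.rwnd = new_rwnd                  # receiver may grow or shrink it

s = SlidingWindowSender(rwnd=4000)
print(s.send(3000), s.send(3000))   # True, False (second send would exceed rwnd)
s.on_ack(3000, new_rwnd=4000)
print(s.send(3000))                 # True again once the window has slid
```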
73
Sender Behavior (in Flow Control using Sliding Window)
* The sender regulates its rate of data transmission based on the receiver's advertised window size (rwnd).
* The sender ensures that the amount of unacknowledged data it has sent does not exceed the current window size, thus preventing overwhelming the receiver.
* The "send window" at the sender determines which packets can be sent.
74
Manifestations of Congestion
* Long delays (queueing in router buffers)
* Packet loss (buffer overflow at routers)
75
Congestion Control vs. Flow Control
* Congestion control: addresses the issue of too many senders sending too much data too fast for the network to handle.
* Flow control: addresses the issue of one sender transmitting too fast for one receiver.
76
Congestion Control Causes/Costs
* Throughput can never exceed capacity.
* Delay increases as capacity is approached.
* Loss/retransmission decreases effective throughput.
* Unneeded duplicates further decrease effective throughput.
* Upstream transmission capacity and buffering are wasted for packets lost downstream.
77
End-to-End Congestion Control (as taken by TCP)
* TCP operates with no explicit feedback from the network layer.
* The sender infers congestion from observed loss and delay: timeouts, duplicate ACKs, and increases in measured RTT.
* When congestion is detected, the sender decreases its sending rate.
78
Network-Assisted Congestion Control
* Routers provide direct feedback to sending/receiving hosts whose flows pass through a congested router.
* Routers may indicate the congestion level or explicitly set the sending rate.
79
Explicit Congestion Notification (ECN)
* A network-assisted congestion control mechanism.
* Routers set a flag in the IP header of packets passing through congested areas, indicating congestion.
* Endpoints respond by reducing their transmission rates, helping to alleviate congestion before packet loss occurs.
80
Asynchronous Transfer Mode (ATM) Congestion Control
* Congestion is primarily managed through network-assisted mechanisms.
* Relies on feedback and signaling from network elements like switches and routers.
* ATM provides a framework for implementing network-assisted congestion control techniques.
81
AIMD Congestion Control
* Stands for Additive Increase, Multiplicative Decrease, a core algorithm used by TCP for congestion control.
* Aims to optimize data transmission rates while avoiding congestion.
* Key aspects: additive increase, multiplicative decrease, congestion detection, and dynamic adaptation.
82
Additive Increase (in AIMD)
* When no congestion is detected, the sender gradually increases the congestion window (CWND).
* The increase is typically a fixed increment per round-trip time (roughly one MSS worth of acknowledged data per RTT).
* This allows the sender to probe the network's capacity.
83
Multiplicative Decrease (in AIMD)
* When congestion is detected (usually by packet loss or timeouts), the sender aggressively reduces the CWND by a multiplicative factor (e.g., halving it).
* This aims to quickly back off from sending too much data.
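
Putting the two rules together, a toy per-RTT Python simulation of the resulting sawtooth (an illustration of the idea only, not TCP's actual state machine; the window is counted in MSS units and the loss pattern is invented).

```python
def aimd_step(cwnd, loss_detected):
    if loss_detected:
        return max(cwnd / 2, 1.0)   # multiplicative decrease: halve, floor at 1 MSS
    return cwnd + 1.0               # additive increase: +1 MSS per RTT

cwnd, trace = 1.0, []
for rtt in range(12):
    loss = rtt in (5, 9)            # pretend losses occur on these RTTs
    cwnd = aimd_step(cwnd, loss)
    trace.append(cwnd)
print(trace)   # sawtooth: linear growth, halved at each loss event
```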
84
Congestion Detection (in AIMD)
TCP relies on several methods:
* Packet loss: interpreted as a sign of overload.
* Timeouts: lack of acknowledgment suggests potential congestion.
* Fast retransmit: receipt of duplicate ACKs implies earlier segment loss due to congestion.
85
Dynamic Adaptation (in AIMD)
The rate of increase and decrease in CWND can be adjusted based on:
* Network conditions: faster increases when favorable, slower when congested.
* Round-trip time (RTT): longer RTTs might lead to slower increases or quicker decreases.
* Fairness: rate changes can be adjusted to ensure fair resource sharing.
86
Benefits of AIMD
* Simple and efficient: relatively easy to implement and computationally inexpensive.
* Adaptive: adjusts to network conditions dynamically.
* Fairness: promotes fair sharing of resources among multiple flows.
87
Limitations of AIMD
* Slow convergence: can take time to reach the optimal transmission rate, especially after congestion events.
* Sensitivity to loss events: large rate reductions due to single losses can be inefficient.