
Figure: An example of the fragmentation of a protocol data unit in a given layer into smaller fragments.

IP fragmentation is an Internet Protocol (IP) process that breaks packets into smaller pieces (fragments), so that the resulting pieces can pass through a link with a smaller maximum transmission unit (MTU) than the original packet size. The fragments are reassembled by the receiving host.

The details of the fragmentation mechanism, as well as the overall architectural approach to fragmentation, are different between IPv4 and IPv6.

Process

RFC 791 describes the procedure for IP fragmentation and for the transmission and reassembly of IP packets.[1] RFC 815 describes a simplified reassembly algorithm.[2] Fragmentation and reassembly use the Identification field, together with the foreign and local internet addresses and the protocol ID, and the Fragment Offset field, together with the Don't Fragment and More Fragments flags, all carried in the IP header.[1]: 24 [2]: 9 

If a receiving host receives a fragmented IP packet, it has to reassemble the packet and pass it to the higher protocol layer. Reassembly is intended to happen in the receiving host, but in practice it may be done by an intermediate router; for example, a router performing network address translation (NAT) may need to reassemble fragments in order to translate data streams.[3]
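As a concrete illustration of this process, the sketch below (Python, illustrative only, not the RFC 791/815 algorithm) reassembles the fragments of one datagram. It assumes the fragments have already been grouped by their (source, destination, protocol, Identification) key and are presented as plain dictionaries rather than parsed headers:

def reassemble(fragments):
    """fragments: dicts with 'offset' (in 8-byte units), 'mf' (More Fragments flag), 'data' (bytes)."""
    pieces = {}
    total_length = None
    for frag in fragments:
        start = frag["offset"] * 8          # the Fragment Offset field counts 8-byte blocks
        pieces[start] = frag["data"]
        if not frag["mf"]:                  # MF = 0 marks the final fragment
            total_length = start + len(frag["data"])
    if total_length is None:
        return None                         # final fragment not yet received
    payload = bytearray(total_length)
    covered = 0
    for start, data in sorted(pieces.items()):
        payload[start:start + len(data)] = data
        covered += len(data)
    # If any byte range is still missing (a "hole"), reassembly is not finished.
    return bytes(payload) if covered == total_length else None

A real implementation also enforces a reassembly timeout and handles overlapping fragments, which this sketch omits.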

IPv4 and IPv6 differences

Figure: The fragmentation algorithm in IPv4.

Figure: An example of IPv4 multiple fragmentation. The fragmentation takes place on two levels: in the first the maximum transmission unit is 4000 bytes, and in the second it is 2500 bytes.

Under IPv4, a router that receives a network packet larger than the next hop's MTU has two options: drop the packet and send an Internet Control Message Protocol (ICMP) message indicating Fragmentation Needed (Type 3, Code 4) if the Don't Fragment (DF) flag bit is set in the packet's header, or fragment the packet and send it over the link with the smaller MTU. Under IPv6, routers do not fragment packets at all: although originators may produce fragmented packets, IPv6 routers do not have the option to fragment further. Instead, network equipment is required to deliver any IPv6 packets or packet fragments of 1280 bytes or smaller, and IPv6 hosts are required to determine the optimal MTU through Path MTU Discovery before sending packets.

Though the header formats are different for IPv4 and IPv6, analogous fields are used for fragmentation, so the same algorithm can be reused for IPv4 and IPv6 fragmentation and reassembly.
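For the sending side, the following Python sketch splits a payload to fit a given MTU in the IPv4 style described above, with offsets counted in 8-byte units and the More Fragments flag set on every fragment except the last. The fixed 20-byte header length, the function name, and the dictionary output are simplifying assumptions, not wire-format code:

IP_HEADER_LEN = 20  # assumed header size without options, for illustration only

def fragment(payload: bytes, mtu: int, dont_fragment: bool = False):
    if IP_HEADER_LEN + len(payload) <= mtu:
        return [{"offset": 0, "mf": False, "data": payload}]
    if dont_fragment:
        # A router would drop the packet and send ICMP Fragmentation Needed (Type 3, Code 4).
        raise ValueError("packet too big and DF set")
    max_data = (mtu - IP_HEADER_LEN) // 8 * 8   # all but the last fragment carry a multiple of 8 bytes
    fragments = []
    for start in range(0, len(payload), max_data):
        chunk = payload[start:start + max_data]
        fragments.append({
            "offset": start // 8,                        # stored in 8-byte units
            "mf": start + len(chunk) < len(payload),     # More Fragments flag
            "data": chunk,
        })
    return fragments

Fragments produced this way can themselves be split again at a router with a smaller MTU, as in the two-level (4000-byte, then 2500-byte) example pictured above; a second pass must keep each piece's offset relative to the original packet, which this simplified sketch does not do on its own.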

In IPv4, hosts must make a best-effort attempt to reassemble fragmented IP packets with a total reassembled size of up to 576 bytes. They may also attempt to reassemble fragmented IP packets larger than 576 bytes, but they are also permitted to silently discard such larger packets. Applications are recommended to refrain from sending packets larger than 576 bytes unless they have prior knowledge that the remote host is capable of accepting or reassembling them.[1]: 12 

In IPv6, hosts must make a best-effort attempt to reassemble fragmented packets with a total reassembled size of up to 1500 bytes, larger than IPv6's minimum MTU of 1280 bytes.[4] Fragmented packets with a total reassembled size larger than 1500 bytes may optionally be silently discarded. Applications relying upon IPv6 fragmentation to overcome a path MTU limitation must explicitly fragment the packet at the point of origin; however, they should not attempt to send fragmented packets with a total size larger than 1500 bytes unless they know in advance that the remote host is capable of reassembly.

Impact on network forwarding

When a network has multiple parallel paths, technologies such as link aggregation (LAG) and Cisco Express Forwarding (CEF) split traffic across the paths according to a hash algorithm. One goal of the algorithm is to ensure that all packets of the same flow are sent out the same path, to minimize unnecessary packet reordering. Because only the first fragment of a fragmented packet carries the transport-layer header, a hash that includes port numbers can send fragments of the same flow down different paths, defeating this goal.
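As a hypothetical illustration of such per-flow hashing, the sketch below hashes the usual five-tuple with CRC-32 (a stand-in for whatever hash a real device uses) and maps the result onto one of the parallel links, so every packet of a flow takes the same path:

import zlib

def pick_path(src_ip, dst_ip, src_port, dst_port, protocol, num_paths):
    """Hash the flow's five-tuple and choose one of num_paths parallel links."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{protocol}".encode()
    return zlib.crc32(key) % num_paths      # same flow -> same hash -> same path

# Every packet of this TCP flow is forwarded on the same link:
print(pick_path("192.0.2.1", "198.51.100.7", 51000, 443, "tcp", 4))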

IP fragmentation can cause excessive retransmissions when fragments encounter packet loss, because reliable protocols such as TCP must retransmit all of the fragments in order to recover from the loss of a single one.[5] Thus, senders typically use two approaches to decide the size of IP packets to send over the network. The first is for the sending host to send an IP packet of size equal to the MTU of the first hop of the source-destination pair. The second is to run the Path MTU Discovery algorithm[6] to determine the path MTU between two IP hosts, so that IP fragmentation can be avoided.

As of 2020, IP fragmentation is considered fragile and often undesired due to its security impact.[7]

See also

  • IPv4 § Fragmentation and reassembly
  • IPv6 packet § Fragmentation
  • IP fragmentation attack
  • Protocol data unit and Service data unit

References

  1. Internet Protocol, Information Sciences Institute, September 1981, RFC 791.
  2. David D. Clark (July 1982), IP Datagram Reassembly Algorithms, RFC 815.
  3. Architectural Implications of NAT, November 2000, RFC 2993.
  4. S. Deering; R. Hinden (December 1998), Internet Protocol, Version 6 (IPv6) Specification, RFC 2460.
  5. Christopher A. Kent; Jeffrey C. Mogul, "Fragmentation Considered Harmful" (PDF).
  6. Path MTU Discovery, November 1990, RFC 1191.
  7. IP Fragmentation Considered Fragile, September 2020, doi:10.17487/RFC8900, RFC 8900.


What you will do: You will watch a video, read, and explore a simulation of unreliable IP transmissions.

What you will learn: You will learn about how the Internet sends data reliably by using protocols.

On your own: You can code your own Transmission Control Protocol.

6.2 Characteristics of the Internet influence the systems built on it.
6.2.1 Explain characteristics of the Internet and the systems built on it. [P5]
6.2.1A The Internet and the systems built on it are hierarchical and redundant.
6.2.1D Routing on the Internet is fault tolerant and redundant.

6.2.2 Explain how the characteristics of the Internet influence the systems built on it. [P4]

6.2.2B The redundancy of routing (i.e., more than one way to route data) between two points on the Internet increases the reliability of the Internet and helps it scale to more devices and more people.
6.2.2D Interfaces and protocols enable widespread use of the Internet.
6.2.2E Open standards fuel the growth of the Internet.
6.2.2F The Internet is a packet-switched system through which digital data is sent by breaking the data into blocks of bits called packets, which contain both the data being transmitted and control information for routing the data.
6.2.2G Standards for packets and routing include transmission control protocol/Internet protocol (TCP/IP).

When you send a message over the Internet, your computer divides it into small chunks called packets that it sends individually, each on its own path. A packet can include any kind of data: text, numbers, lists, etc. Computers, servers, and routers are fairly reliable, but every once in a while a packet will be lost, and devices on the Internet need to tolerate these faults.

The Transmission Control Protocol (TCP) guarantees reliable transmission by breaking messages into packets, keeping track of which packets have been received successfully, resending any that have been lost, and specifying the order for reassembling the data on the other end. Sending data as many small, independently routed packets like this is what makes the Internet a packet-switched network.
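The following sketch (in Python rather than the Snap! blocks used in this lab) illustrates the first half of that idea: a message is cut into fixed-size chunks, and each chunk is labeled with a sequence number and the total packet count. The 8-character packet size and the make_packets name are arbitrary choices for the example:

def make_packets(message, size=8):
    """Split a message into chunks and label each with a sequence number and the total count."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [{"seq": n, "total": len(chunks), "data": chunk}
            for n, chunk in enumerate(chunks)]

print(make_packets("Hello from the Internet!"))
# [{'seq': 0, 'total': 3, 'data': 'Hello fr'}, {'seq': 1, ...}, {'seq': 2, ...}]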

Two protocols share this work:

  • The computers (including servers) at the two endpoints of a communication run the Transmission Control Protocol (TCP) that divides up the packets and guarantees reliable transmission.
  • The routers at every connection-point on the Internet run the Internet Protocol (IP) that transmits packets from one IP address to another (not caring that sometimes a packet will be lost and not knowing anything about the purpose or meaning of a packet).

  1. Load this project. It provides a simulation of unreliable data transmission by Internet Protocol.
    • Click the green flag to initialize the incoming transmission variables before each experiment.
    • Click either character to enter a message for it to send to the other one.
  2. Send a short message from one character to the other.
  3. Look at the message that the other character receives.
    Compare the result with what you sent. What problems do you see?

TCP works by including additional information along with each packet so that the receiving computer can keep track of how many packets it has received, re-request any missing packets, and reorder the packets to reconstruct the original message. In this simulation, a packet either arrives correctly (even if it's out of order) or it doesn't arrive at all. But on the Internet, it's possible for a packet to arrive with erroneous data, so the real TCP has to check for errors and request re-transmission of packets with errors too.
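Continuing the Python sketch above (again an illustration, not the real TCP and not the lab's Snap! blocks), the receiver can use the sequence numbers to put packets back in order and to work out which ones it still needs to re-request:

def receive(packets):
    """Reorder whatever packets arrived and report any missing sequence numbers."""
    arrived = {p["seq"]: p["data"] for p in packets}
    total = packets[0]["total"] if packets else 0
    missing = [n for n in range(total) if n not in arrived]
    if missing:
        return None, missing                # ask the sender to resend these packets
    return "".join(arrived[n] for n in range(total)), []

# Two of three packets arrive, out of order; packet 2 was lost on the way.
incoming = [
    {"seq": 1, "total": 3, "data": "om the I"},
    {"seq": 0, "total": 3, "data": "Hello fr"},
]
print(receive(incoming))                    # (None, [2]) -> re-request packet 2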

  1. Read Blown to Bits pages 306-309.

  1. Build a simple TCP. Resolve the unreliability so that messages are received reliably despite the limitations of IP packets. You'll need to change the definitions of:
    • the grey block run by the sender
    • the grey block run by the receiver

    Do not change the definition of the block that simulates the unreliable network. You could "solve" the problem by rewriting this block to simulate a perfect network instead of an imperfect one, but that misses the point.

    To solve this problem, you'll need a way to keep track of the order of the data and a way to re-request missing packets:
    • First, solve the problem of packets arriving out of order. You can include extra header information in addition to the packet data in order to help the receiver reconstruct the message. This will require cooperation by both sender and receiver (that is, changes to both grey blocks).
    • Then, solve the problem of packets not arriving at all. That is, make the transmission reliable even though IP is unreliable. This, too, will require changing both sender and receiver; a rough sketch of the overall idea follows this list.
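As a rough guide to how the pieces fit together, here is a Python sketch of the overall idea, not a solution in the project's Snap! blocks; unreliable_send is a made-up stand-in for the block that simulates the unreliable network. The sender numbers its packets, the receiver keeps whatever arrives, and only the missing sequence numbers are resent until the whole message is there:

import random

def unreliable_send(packets, loss_rate=0.3):
    """Stand-in for the unreliable network: drop some packets and deliver the rest out of order."""
    survivors = [p for p in packets if random.random() > loss_rate]
    random.shuffle(survivors)
    return survivors

def reliable_transfer(message, size=4):
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    received = {}
    wanted = set(range(len(chunks)))
    while wanted:
        outgoing = [{"seq": n, "data": chunks[n]} for n in sorted(wanted)]
        for p in unreliable_send(outgoing):
            received[p["seq"]] = p["data"]
        wanted = set(range(len(chunks))) - set(received)   # re-request whatever is still missing
    return "".join(received[n] for n in range(len(chunks)))

print(reliable_transfer("The Internet delivers packets unreliably."))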