In the early days, when the Internet was first introduced, there were very few protocols to handle the transmission of data across the network. Data transfer between devices was therefore a costly job: if there was any kind of interruption in transmission (the network suddenly became busy, a fault occurred, or the data contained errors), the whole data had to be resent from the beginning, which consumed time, patience, and accuracy. In short, there was no margin for error.
Several lines of research were conducted on reliable data communication, and two scientists at the Defense Advanced Research Projects Agency (DARPA), Vint Cerf and Bob Kahn, came up with TCP/IP (Transmission Control Protocol/Internet Protocol). This protocol defines how to transfer data from one node to another across the network and makes sure that the data is correctly delivered to the receiving end.
If a system were to send the whole data in one go and the network turned out to be faulty, the entire data would need to be retransmitted from the beginning. To overcome this issue, TCP/IP divides the data into small packets. Each packet carries a header with information such as the destination, a sequence number for error recovery, and acknowledgment fields, along with the data being transmitted. When these packets reach the receiving node, they are rearranged back into the original data (Fig 1).
Fig 1: DataStream into packets
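To make the idea concrete, here is a minimal Python sketch of splitting a byte stream into packets that each carry a destination and a sequence number, and reassembling them in order on the other side. This is not real TCP; the packet format, payload size, and destination address are made up purely for illustration.

```python
# Illustrative only: split a data stream into small packets with headers,
# then reassemble them into the original stream on the "receiving" side.
import random

PAYLOAD_SIZE = 8  # bytes per packet; real TCP segments are much larger

def to_packets(data: bytes, destination: str):
    """Split `data` into packets, each tagged with a sequence number."""
    return [
        {"destination": destination, "seq": i, "payload": data[i:i + PAYLOAD_SIZE]}
        for i in range(0, len(data), PAYLOAD_SIZE)
    ]

def reassemble(packets):
    """Rebuild the original stream by ordering packets on their sequence number."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    return b"".join(p["payload"] for p in ordered)

message = b"TCP/IP splits a stream into small packets"
packets = to_packets(message, destination="192.0.2.10")
random.shuffle(packets)                 # packets may arrive out of order
assert reassemble(packets) == message   # the receiver still recovers the stream
```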
TCP/IP can use a different route to complete the transfer if the previously used path is no longer available or has become congested (Fig 2).
Continuing further, TCP/IP has four layers that handle the transmission process (we will discuss what these layers are in a moment). On the transmitting end, the data flows in the order Application Layer > Transport Layer > Internet Layer > Datalink Layer, whereas on the receiving end it follows the reverse order to reconstruct the original, meaningful data.
Fig 3: Data Transmission
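The layered flow can be pictured as each layer adding its own header on the way down and the receiving side stripping those headers in reverse order on the way up. The sketch below is purely illustrative; the header strings are placeholders, not real TCP, IP, or Ethernet headers.

```python
# Illustrative only: send-side encapsulation (Application > Transport >
# Internet > Datalink) and the reverse de-encapsulation on the receiving side.

def send(app_data: bytes) -> bytes:
    segment = b"TCP|" + app_data          # Transport layer adds its header
    packet = b"IP|" + segment             # Internet layer adds its header
    frame = b"ETH|" + packet              # Datalink layer adds its frame header
    return frame                          # handed to the physical medium

def receive(frame: bytes) -> bytes:
    packet = frame.removeprefix(b"ETH|")      # Datalink layer strips the frame header
    segment = packet.removeprefix(b"IP|")     # Internet layer strips its header
    app_data = segment.removeprefix(b"TCP|")  # Transport layer strips its header
    return app_data                           # delivered to the application

assert receive(send(b"hello")) == b"hello"
```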
1. Datalink layer
The Datalink layer is responsible for sending and receiving data over some physical means of connection, such as cables or wireless links.
This layer pushes the data to the underlying hardware in small packets, in the form of signals. The same layer at the receiving end collects and arranges them into the complete set of data and hands it to the network layer. This layer also performs jobs such as the following (a small framing and error-detection sketch follows the list):
Framing (encapsulating data packets on the transmitting end)
Addressing (giving each data packet a unique address when sending)
Synchronization (when data frames are sent over the network, both machines must be in sync for the transfer to take place)
Flow Control (stations on the same link may have different speeds or capacities; the Datalink layer provides flow control so that both machines exchange data at a rate each can handle)
Error Control (detecting errors in a faulty transmission and recovering the original data)
Multi-Access (when many interconnected nodes transfer data at once there is a chance of collisions; to prevent this, the layer uses CSMA/CD to manage access to the shared medium)
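As a rough illustration of the framing and error-control jobs above, the following Python sketch wraps a payload with made-up addressing and a CRC-32 trailer and verifies it on receipt. Real link layers do this in hardware with their own frame formats and CRC polynomials; `zlib.crc32` simply stands in here.

```python
# Illustrative only: toy framing with addressing and a CRC-32 error check.
import zlib

def frame(payload: bytes, src: str, dst: str) -> bytes:
    """Encapsulate a payload with (made-up) addressing and a CRC trailer."""
    body = f"{src}->{dst}|".encode() + payload
    crc = zlib.crc32(body).to_bytes(4, "big")
    return body + crc

def check(received: bytes) -> bool:
    """Return True if the frame's CRC matches its contents."""
    body, crc = received[:-4], received[-4:]
    return zlib.crc32(body).to_bytes(4, "big") == crc

f = frame(b"payload bits", src="AA:BB:CC:01", dst="AA:BB:CC:02")
assert check(f)                             # a clean frame passes the check
corrupted = bytes([f[0] ^ 0xFF]) + f[1:]    # flip bits in the first byte
assert not check(corrupted)                 # the error is detected
```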
2. Internet layer
The Internet layer (also called the Network layer) controls the movement of packets across the connected network.
It handles the routing mechanism of the packets by selecting the best available routes to the destination.
On the receiving end, it helps handle out-of-order or erroneous packet delivery using acknowledgment and sequence numbers.
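The route-selection idea can be sketched as a longest-prefix-match lookup over a routing table. The addresses, prefixes, and next-hop names below are invented for illustration; real routers also weigh metrics, link state, and congestion when choosing a path.

```python
# Illustrative only: pick the most specific matching prefix for a destination.
from ipaddress import ip_address, ip_network

ROUTING_TABLE = {
    ip_network("0.0.0.0/0"): "gateway-A",       # default route
    ip_network("198.51.100.0/24"): "link-B",
    ip_network("198.51.100.128/25"): "link-C",  # more specific route
}

def next_hop(destination: str) -> str:
    addr = ip_address(destination)
    matches = [net for net in ROUTING_TABLE if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return ROUTING_TABLE[best]

print(next_hop("198.51.100.200"))  # -> link-C
print(next_hop("203.0.113.5"))     # -> gateway-A (default)
```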
3. Transport layer
This layer provides the connection between the source and the destination; on the source end it segments the data stream into packets, and on the receiving end it acknowledges the source upon the successful arrival of each packet.
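In practice, programs reach the transport layer through the operating system's socket API. The short Python sketch below opens a TCP connection and sends a request; the host and port are placeholders, and TCP itself takes care of segmentation, sequencing, and acknowledgments behind the scenes.

```python
# Illustrative only: using TCP via the socket API; reliability is handled by the stack.
import socket

HOST, PORT = "example.com", 80   # assumed reachable HTTP server, for illustration

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    # sendall() keeps writing until every byte is accepted; TCP then delivers
    # the bytes reliably and in order to the peer.
    sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    reply = sock.recv(4096)      # first chunk of the response stream
    print(reply.decode(errors="replace").splitlines()[0])
```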
4. Application layer
This layer hosts the applications that users interact with, such as file sharing, video streaming, and messaging.
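At this level a program simply speaks an application protocol such as HTTP and lets the layers below handle packets, routing, and retransmission. A minimal Python example, assuming `http://example.com/` is a reachable endpoint:

```python
# Illustrative only: an application-layer request; the TCP/IP stack does the rest.
from urllib.request import urlopen

with urlopen("http://example.com/", timeout=5) as response:
    print(response.status)       # e.g. 200
    print(response.read(80))     # first bytes of the page
```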
To sum up, TCP/IP provides a reliable way to transfer data across a network. It has become the standard Internet communications protocol that allows devices to communicate over long distances, and it ensures that data is not damaged, lost, duplicated, or delivered out of order to the receiving process. This assurance of transport reliability keeps applications trustworthy and reusable and keeps users engaged.