TCP protocol

The TCP protocol (Transmission Control Protocol) is the most widely used transport protocol in the world and one of the core components of the Internet. For this reason, IP stacks such as emNet are often called TCP/IP stacks, even though they are not limited to the TCP or IP protocols.

Terms that are often used with TCP

While this article is not meant to discuss the basics of TCP, some of its features deserve a couple of words to describe what they are and what benefits they offer the user and their application.

Three-Way-Handshake

Opening a TCP connection consists of a three-way handshake (which can be four packets in rare cases), allowing both client and server to exchange some parameters and ensure that both agree to establish the connection. The three-way handshake looks like this:

Step 1 (C->S)
  Client: Calls connect() to send a SYN to the server to establish a connection.
  Server: Waits for new incoming connection requests.

Step 2.1 (C<-S)
  Client: Waits for an ACK for its SYN.
  Server: Checks its backlog for space to accept the new connection. If another connection can be accepted, the server sends back an ACK for the SYN and allocates a connection context as well as the socket buffers in its system. A SYN consumes one sequence number (it counts like one byte of payload), so the sequence number is increased.

Step 2.2 (C<-S)
  Client: Waits for an ACK for its SYN.
  Server: Sends a SYN packet itself to fully establish the connection, which is known as "half-open" at this point. The ACK that acknowledges the client's SYN is typically combined with the server's own SYN into a single SYN/ACK packet, but it is also valid to send them as two independent packets, resulting in a total of four packets for the handshake. As the server's SYN and ACK are typically combined, the exchange is commonly known as the three-way handshake.

Step 3 (C->S)
  Client: Receives the server's ACK, which means the connection is now fully established and sending data to the server is allowed. At the same time the client ACKs the server's SYN. Typically the ACK for the server's SYN is sent in a separate packet before any data; it is not common for stacks to combine this ACK with the first chunk of payload sent by the client.
  Server: Can already receive data from the client into its Rx socket buffer. Up to this point the server application does not have to allocate application-specific resources and is not yet aware of the socket handling the connection. Completing the three-way handshake is taken care of by the stack without interaction of the user application.

Step 4 (C<->S)
  Client: Sends or receives application data.
  Server: Calls accept() to fetch one waiting connection from the backlog. The connection might already have Rx data in its socket buffer to read using recv() (if the client has already sent something in the meantime).
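From the client application's point of view, the entire handshake happens inside a single API call. The following minimal client sketch is written against the generic BSD socket API (which emNet also offers in a similar form); the IP address 192.168.1.1 and port 80 are placeholder values, not taken from the article.

#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
  struct sockaddr_in Addr;
  int Sock;

  Sock = socket(AF_INET, SOCK_STREAM, 0);           /* TCP stream socket. */
  if (Sock < 0) {
    return 1;
  }
  memset(&Addr, 0, sizeof(Addr));
  Addr.sin_family      = AF_INET;
  Addr.sin_port        = htons(80);                 /* Placeholder port. */
  Addr.sin_addr.s_addr = inet_addr("192.168.1.1");  /* Placeholder server IP. */
  /*
   * connect() performs the complete three-way handshake:
   * it sends the SYN (step 1), receives the server's SYN/ACK
   * (steps 2.1/2.2) and replies with the final ACK (step 3)
   * before it returns.
   */
  if (connect(Sock, (struct sockaddr*)&Addr, sizeof(Addr)) < 0) {
    close(Sock);
    return 1;
  }
  close(Sock);                                      /* Done; close the connection. */
  return 0;
}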

Backlog

TCP is a connection-based protocol, unlike UDP, which is connectionless or message-oriented. For a TCP server this means connections have to be actively accepted, typically by using the BSD API call accept(). The timeout between a connection being stored into the backlog after a successful three-way handshake and the application taking over responsibility for it by calling accept() is determined by the client.

For a browser client, this timeout can range from several seconds up to minutes, provided the connection itself could be established (that is, before the client's connect timeout ran out). Timeouts for data processing, such as a recv() timeout, are independent of the connect timeout and can be much higher.
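As a sketch of how the backlog is configured, assuming the generic BSD socket API: the backlog size is passed to listen(). The backlog value of 4 and port 80 below are arbitrary example values, and error handling is omitted for brevity.

#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define BACKLOG_SIZE 4  /* Example value: up to 4 completed handshakes may queue up. */

int main(void) {
  struct sockaddr_in Addr;
  int ListenSock;
  int Sock;

  ListenSock = socket(AF_INET, SOCK_STREAM, 0);
  memset(&Addr, 0, sizeof(Addr));
  Addr.sin_family      = AF_INET;
  Addr.sin_port        = htons(80);           /* Example port. */
  Addr.sin_addr.s_addr = htonl(INADDR_ANY);
  bind(ListenSock, (struct sockaddr*)&Addr, sizeof(Addr));
  /*
   * The stack completes up to BACKLOG_SIZE three-way handshakes
   * on its own and queues the connections until accept() is called.
   */
  listen(ListenSock, BACKLOG_SIZE);
  for (;;) {
    Sock = accept(ListenSock, NULL, NULL);    /* Fetch one queued connection. */
    if (Sock >= 0) {
      /* ... recv()/send() on Sock ... */
      close(Sock);
    }
  }
}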

Use case

Using the backlog to accept connections in time for later processing is a common method to avoid discarding connections (which a browser would not be happy with and would report visually) and instead delay processing them until a resource such as a worker thread is able to handle the new connection.

Typical server application configurations therefore allow you to set two parameters, as shown in the sketch after this list:

  1. Number of backlog connections (1 is the minimum, as 0 means not to accept a single connection at all). This determines how many connections you expect in parallel.
  2. Number of worker threads. More worker threads mean you can handle more connections in parallel; their number should be less than or equal to the number of backlog connections, as having more worker threads than possible connections makes no sense.
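A minimal sketch of this two-parameter setup, assuming POSIX threads and the BSD socket API; NUM_BACKLOG_CONNS, NUM_WORKERS and _Worker() are illustrative names, not part of any particular server product.

#include <string.h>
#include <unistd.h>
#include <pthread.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define NUM_BACKLOG_CONNS 4  /* Parameter 1: how many connections may wait in the backlog. */
#define NUM_WORKERS       2  /* Parameter 2: worker threads, <= NUM_BACKLOG_CONNS. */

static int _ListenSock;

/* Each worker fetches one queued connection at a time and serves it. */
static void* _Worker(void* p) {
  int Sock;

  (void)p;
  for (;;) {
    Sock = accept(_ListenSock, NULL, NULL);  /* Blocks until a connection is queued. */
    if (Sock >= 0) {
      /* ... serve the connection ... */
      close(Sock);
    }
  }
  return NULL;
}

int main(void) {
  struct sockaddr_in Addr;
  pthread_t Thread[NUM_WORKERS];
  int i;

  _ListenSock = socket(AF_INET, SOCK_STREAM, 0);
  memset(&Addr, 0, sizeof(Addr));
  Addr.sin_family      = AF_INET;
  Addr.sin_port        = htons(80);          /* Example port. */
  Addr.sin_addr.s_addr = htonl(INADDR_ANY);
  bind(_ListenSock, (struct sockaddr*)&Addr, sizeof(Addr));
  listen(_ListenSock, NUM_BACKLOG_CONNS);
  for (i = 0; i < NUM_WORKERS; i++) {
    pthread_create(&Thread[i], NULL, _Worker, NULL);
  }
  for (i = 0; i < NUM_WORKERS; i++) {
    pthread_join(Thread[i], NULL);           /* Workers run forever; keep main alive. */
  }
  return 0;
}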

Bottlenecks

Please note that more worker threads do not necessarily improve the time to handle a connection in all cases. A typical example is a web server where each worker thread has to serve a page that is read from a real filesystem such as an MMC/SD card. While the worker threads can handle the protocol-based processing of TCP/IP and HTTP in parallel, access to the filesystem is the limiting factor that cannot work in parallel. Serving a larger file to a browser, for example, requires multiple filesystem accesses to read chunks of the file from the medium, and these reads might be interrupted by reads of other files for other HTTP server child threads.
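As a hypothetical illustration of such a bottleneck, the sketch below serializes medium access with a single mutex; plain stdio fread() stands in for the actual filesystem driver, and _FSLock and _ReadChunk() are invented names.

#include <stdio.h>
#include <pthread.h>

/* One shared lock for the storage medium: only one worker reads at a time. */
static pthread_mutex_t _FSLock = PTHREAD_MUTEX_INITIALIZER;

/* Read one chunk of a file while holding the medium lock. */
static size_t _ReadChunk(FILE* pFile, void* pBuf, size_t NumBytes) {
  size_t NumBytesRead;

  pthread_mutex_lock(&_FSLock);    /* All worker threads serialize here... */
  NumBytesRead = fread(pBuf, 1, NumBytes, pFile);
  pthread_mutex_unlock(&_FSLock);  /* ...so extra workers gain little for file-bound requests. */
  return NumBytesRead;
}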

As all child threads of a server application typically run at the same thread priority, often scheduled round-robin to make sure all threads get some execution time now and then and do not completely block the others, adding threads might result in additional CPU overhead without drastically improving the time to serve a resource. This of course depends on the actual server application and its dependencies on other (hardware) bottlenecks as well as on CPU and network performance.