Question

I'm trying to reverse engineer an application, and I need help understanding how TCP window size works. My MTU is 1460. My application transfers a file using TCP from point A to B. I know the following:

  • The file is split into segments of size 8K
  • Each segment is compressed
  • Then each segment is sent to point B over TCP. After compression, a segment can be around 148 bytes for a text file and around 6000 bytes for a PDF (a rough sketch of this flow is below the list).
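
For context, here is a minimal sketch of the sender-side flow described above. The chunk size matches the question; the compression library (`zlib`), the function name, and the host/port are assumptions, since the real application's internals are unknown.

```python
# Hypothetical sender-side sketch of the flow described above; the real
# application's compression scheme and framing are assumptions.
import socket
import zlib

CHUNK_SIZE = 8 * 1024  # 8K segments, as described in the question

def send_file(path: str, host: str, port: int) -> None:
    with socket.create_connection((host, port)) as sock, open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            compressed = zlib.compress(chunk)  # e.g. ~148 B for text, ~6000 B for PDF data
            sock.sendall(compressed)           # TCP treats this as part of one byte stream
```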

For a text file, am I supposed to see the 148-byte segments concatenated into one large TCP stream, which is then split according to the window size?

Any help is appreciated.


Solution

The receiver application should see the data exactly the way the sender application sent it. TCP is byte-stream oriented: it collects all the bytes in order and delivers them to the application, so your 148-byte writes are simply concatenated into one stream with no boundaries preserved. The MTU is largely internal to TCP/IP and takes no notice of application-layer message boundaries. If TCP has enough data in its send buffer (each TCP socket has its own send buffer, by the way), it packages the next segment up to the MSS and sends it; the MSS is the MTU minus the TCP and IP headers. The window size does not determine segment boundaries either; it only limits how much unacknowledged data may be in flight at any time.
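
To see the byte-stream behaviour for yourself, a small loopback test like the one below can help. It is only a demonstration under assumed sizes and addresses, not the application's actual protocol: the sender makes five separate 148-byte writes, yet the receiver's `recv()` calls may return them coalesced (or split), because TCP does not preserve write boundaries.

```python
# Demonstration that a TCP receiver sees one byte stream, not the sender's
# individual write() boundaries. Sizes and addresses are illustrative only.
import socket
import threading

def sender(port: int) -> None:
    with socket.create_connection(("127.0.0.1", port)) as s:
        for i in range(5):
            s.sendall(bytes([i]) * 148)  # five separate 148-byte application writes

def main() -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]

    t = threading.Thread(target=sender, args=(port,))
    t.start()

    conn, _ = srv.accept()
    total = 0
    while total < 5 * 148:
        data = conn.recv(65536)      # may return several writes coalesced,
        if not data:                 # or a partial one -- boundaries are gone
            break
        print(f"recv() returned {len(data)} bytes")
        total += len(data)

    t.join()
    conn.close()
    srv.close()

if __name__ == "__main__":
    main()
```

On the wire, those writes may also be packed into a single TCP segment (up to the MSS), which is why a packet capture of the text-file transfer will typically show several compressed 148-byte chunks travelling together.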
