Question

Latency (delay) is defined here as the time that a packet spends travelling between sender and receiver.

As far as I can understand, the above definition applies to IP packets. Can we say latency includes the retransmission time for missing frames in the data link layer? Or does this definition assume there are no missing frames?

Is it possible to define latency at the application level? Say we have an application A that uses TCP to send messages to a remote application. Since TCP is used, missing segments will be retransmitted, so the latency of an A message includes the retransmission time of those missing segments.


Solution

Can we say latency includes the retransmission time for missing frames in the data link layer? Or does this definition assume there are no missing frames?

If you're measuring application latency, you can define latency to include the time it takes for missing TCP segments to be retransmitted.

Is it possible to define latency at the application level? Say we have an application A that uses TCP to send messages to a remote application. Since TCP is used, missing segments will be retransmitted, so the latency of an A message includes the retransmission time of those missing segments.

This measurement is entirely feasible; you will, of course, need to implement it within your application. Also be aware that Nagle's algorithm can skew your latency measurements upwards if your messages are typically smaller than the TCP MSS (1460 bytes on standard ethernet segments), because Nagle holds back small segments while an earlier segment is still unacknowledged. If your messages tend to be smaller than the TCP MSS, disable Nagle (the TCP_NODELAY socket option) to get the lowest average message latency.
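As a minimal sketch of such an in-application measurement, the following Python snippet times the round trip of a small message over a loopback TCP connection, with Nagle disabled via TCP_NODELAY. The echo server, port, and message here are illustrative assumptions, not part of the original answer; since the timing spans the full TCP exchange, any segment retransmission would show up in the measured latency.

```python
import socket
import threading
import time

def echo_server(listener):
    # Accept one connection and echo bytes back until the peer closes.
    conn, _ = listener.accept()
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:
                break
            conn.sendall(data)

# Illustrative loopback echo server on an ephemeral port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
# Disable Nagle so small messages are sent immediately.
client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

msg = b"ping"
start = time.perf_counter()
client.sendall(msg)
reply = client.recv(4096)   # blocks until the echo arrives
rtt = time.perf_counter() - start
print(f"round-trip latency: {rtt * 1e3:.3f} ms")
client.close()
```

Dividing the measured round-trip time by two gives a rough one-way estimate, though one-way latency is only truly measurable with synchronized clocks on both ends.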

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow