Question

I am debugging a packet loss issue in my C++ program, which receives RTP. After the program has run for quite a long time and received a lot of RTP data, it starts to lose packets.

Tracing with Wireshark shows the missing packets, but my application never receives them. It seems the network stack drops them before delivering them to the application. After restarting my application, everything goes back to normal.

To reproduce the error condition, I overload the machine with RTP, and at that point packet loss happens, for good reason. But even after I stop the overload and send at a moderate rate, packet loss still occurs, and I have to restart my application before it receives all the data again.

Is this an issue with Linux receive buffer handling? What Linux stats could I check to see where those missing packets go?

Solution

You are not consuming your UDP input fast enough: the socket's receive buffer fills up and the kernel drops subsequent datagrams before they ever reach your application (`netstat -su` reports these drops as UDP receive buffer errors). Here are the usual steps to mitigate that:

  • Switch to recvmmsg(2), if your kernel supports it, to reduce system-call overhead (a sketch follows this list),
  • Pre-allocate all memory used during input processing,
  • Profile your app, find the hot spots, and optimize them,
  • Consider farming processing out to separate threads, but keep the lock scope as small as possible (see the queue sketch below),
  • Increase your socket receive buffer with setsockopt(2) (see the SO_RCVBUF sketch below).
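
Here is a minimal sketch of draining the socket in batches with recvmmsg(2); the port, buffer sizes, and batch count are illustrative assumptions, not values from your setup:

```cpp
// Batched UDP receive with recvmmsg(2): one syscall drains up to kBatch datagrams.
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5004);              // example RTP port (assumption)
    if (bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    constexpr int kBatch = 32;                // datagrams drained per syscall
    char bufs[kBatch][2048];                  // one buffer per datagram
    iovec iov[kBatch];
    mmsghdr msgs[kBatch];
    for (int i = 0; i < kBatch; ++i) {
        iov[i].iov_base = bufs[i];
        iov[i].iov_len  = sizeof(bufs[i]);
        std::memset(&msgs[i], 0, sizeof(msgs[i]));
        msgs[i].msg_hdr.msg_iov    = &iov[i];
        msgs[i].msg_hdr.msg_iovlen = 1;
    }

    for (;;) {
        // Block until at least one datagram arrives, then take whatever is queued.
        int n = recvmmsg(sock, msgs, kBatch, MSG_WAITFORONE, nullptr);
        if (n < 0) { perror("recvmmsg"); break; }
        for (int i = 0; i < n; ++i) {
            // msgs[i].msg_len bytes of RTP are in bufs[i]; hand them off for processing.
            std::printf("got %u bytes\n", msgs[i].msg_len);
        }
    }
    close(sock);
}
```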
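
If processing is what slows the receive loop down, handing packets to a worker thread keeps the socket drained. A minimal sketch, assuming a simple mutex-protected queue; the Packet struct and queue type are illustrative, not from the original code:

```cpp
// The receive loop only holds the lock long enough to push a packet;
// the worker does the heavy lifting with the lock released.
#include <condition_variable>
#include <deque>
#include <mutex>
#include <vector>

struct Packet { std::vector<char> data; };    // illustrative packet type

std::mutex mtx;
std::condition_variable cv;
std::deque<Packet> queue;

// Called from the receive loop: minimal lock scope, just the push.
void receiver_push(Packet pkt) {
    {
        std::lock_guard<std::mutex> lock(mtx);
        queue.push_back(std::move(pkt));
    }
    cv.notify_one();
}

// Runs on a separate std::thread: pop under the lock, process outside it.
void worker() {
    for (;;) {
        Packet pkt;
        {
            std::unique_lock<std::mutex> lock(mtx);
            cv.wait(lock, [] { return !queue.empty(); });
            pkt = std::move(queue.front());
            queue.pop_front();
        }                                     // lock released before processing
        // ... decode / jitter-buffer the RTP payload in pkt.data here ...
    }
}
```

If the queue grows without bound you have only moved the problem, so bound it and count any packets you have to discard.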
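
A larger receive buffer gives the kernel more headroom to absorb bursts while your application catches up. A minimal SO_RCVBUF sketch; the 4 MiB request is an arbitrary example value, and the kernel caps the result at net.core.rmem_max, so raise that sysctl if getsockopt reports less than you asked for:

```cpp
#include <sys/socket.h>
#include <cstdio>

void enlarge_rcvbuf(int sock) {
    int requested = 4 * 1024 * 1024;          // ask for 4 MiB (example value)
    if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
                   &requested, sizeof(requested)) < 0) {
        perror("setsockopt(SO_RCVBUF)");
        return;
    }
    // Verify what the kernel actually granted (it reports the value doubled
    // to account for bookkeeping overhead).
    int effective = 0;
    socklen_t len = sizeof(effective);
    getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &effective, &len);
    std::printf("receive buffer is now %d bytes\n", effective);
}
```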