If I am not reading data from the socket fast enough, TCP will shrink the sliding window and the sender may block while sending (as discussed in "what happens when I don't manage to call `recv` fast enough?").

How do I detect this situation on the receiver side on Windows, preferably directly in C# code and without impacting the performance of reading from the socket? Other monitoring solutions (perfmon, Wireshark) are also acceptable, but far less suitable for my scenario.

What is the exact scenario? Let's say the server app can transmit data at up to 1 Mbps, but my client app can only receive it at 0.5 Mbps. How do I find out in the client application that TCP flow control is kicking in and throttling the transmit speed?

I came across the Socket.Available property (http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.available.aspx) and was wondering whether that might be a recommendable way of querying this information.


Solution 2

The TCP window is handled by the kernel and won't be directly available to you. You could, however, compare Socket.ReceiveBufferSize with the number of bytes currently queued (Socket.Available). If the buffer isn't full, then you are the one waiting for data; if it stays full, your reads are the bottleneck and the window will shrink.
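A minimal sketch of that idea, not a definitive implementation: it assumes an already-connected Socket and simply polls Socket.Available against Socket.ReceiveBufferSize on a timer to estimate how full the kernel receive buffer is. The interval and reporting are placeholders you would adapt.

```csharp
using System;
using System.Net.Sockets;
using System.Threading;

static class ReceiveBufferMonitor
{
    // Returns a timer that periodically reports how full the receive buffer appears.
    public static Timer Start(Socket socket, TimeSpan interval)
    {
        return new Timer(_ =>
        {
            int queued = socket.Available;           // bytes received but not yet read by the app
            int capacity = socket.ReceiveBufferSize; // size of the kernel receive buffer
            double fill = capacity > 0 ? (double)queued / capacity : 0.0;

            // A buffer that is consistently close to full suggests the application
            // is not reading fast enough and TCP flow control is likely throttling
            // the sender; a mostly empty buffer suggests we are waiting on the network.
            Console.WriteLine($"Receive buffer ~{fill:P0} full ({queued}/{capacity} bytes)");
        }, null, TimeSpan.Zero, interval);
    }
}
```

Note that this only gives a heuristic view: Socket.Available reflects the managed view of queued data, not the advertised TCP window itself.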

Other Tips

You would be better off reading as fast as you possibly can, rather than wasting time trying to have the system tell you you're not reading fast enough, which can only slow down your reading even further. If you're reading at maximum speed and the sender is still getting blocked, TCP is working correctly and there is nothing you can do about it, except maybe look into a faster machine.
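To illustrate that advice, here is a minimal sketch of a tight receive loop that just drains the socket as fast as possible and hands the data off for processing; the processChunk delegate is a hypothetical placeholder for whatever your application does with the bytes.

```csharp
using System;
using System.Net.Sockets;

static class FastReader
{
    public static void Drain(Socket socket, Action<byte[], int> processChunk)
    {
        var buffer = new byte[64 * 1024]; // reuse one buffer to avoid per-read allocations

        while (true)
        {
            int read = socket.Receive(buffer); // blocks until data is available
            if (read == 0)
                break; // remote side closed the connection

            // Keep heavy work off this loop (e.g. queue the chunk to another thread)
            // so the socket is drained as quickly as possible.
            processChunk(buffer, read);
        }
    }
}
```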
