Question

I am working on a project that involves client-server communication over TCP using Google Protocol Buffers. On the client side, I am basically using NetworkStream.Read() to do blocking reads from the server into a byte array buffer.
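
For context, the read loop is essentially this (a sketch only; stream and ProcessBytes are illustrative names, not the project's actual code):

    using System.Net.Sockets;

    // Minimal sketch of the blocking read loop described above;
    // 'stream' is the NetworkStream of the connected TcpClient and
    // ProcessBytes is a hypothetical handler.
    var buffer = new byte[4096];
    int bytesRead;
    while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        ProcessBytes(buffer, bytesRead); // bytesRead varies per call
    }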

According to the MSDN documentation:

This method reads data into the buffer parameter and returns the number of bytes successfully read. If no data is available for reading, the Read method returns 0. The Read operation reads as much data as is available, up to the number of bytes specified by the size parameter. If the remote host shuts down the connection, and all available data has been received, the Read method completes immediately and returns zero bytes.

It is the same with the async read methods (NetworkStream.BeginRead and EndRead). My question is: when does Read()/EndRead() return? It seems like it should return only after the buffer has been completely filled, but in my own testing that is not the case; the number of bytes read in one operation varies a lot. I think that makes sense, because if there is a pause on the server side while sending messages, the client should not wait until the read buffer has been filled. Do Read()/EndRead() inherently have some timeout mechanism?

I tried to find out how Mono implements NetworkStream.Read() and traced it down to a call to an extern method, Receive_internal().

The solution

It returns as soon as any data is available on the NetworkStream, or once the buffer is full, whichever comes first. You have already observed this behaviour.

So you will need to process the incoming bytes yourself and determine whether a message is complete. You do this by framing your messages. See the .NET question about asynchronous socket operations and message framing for how to do this.
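
For example, with a fixed 4-byte length prefix per message (the prefix format here is an assumption; the linked question discusses alternatives), a minimal framing sketch looks like this:

    using System.IO;
    using System.Net.Sockets;

    // ReadExact loops because a single Read() may return fewer bytes
    // than requested.
    static void ReadExact(NetworkStream stream, byte[] buffer, int count)
    {
        int offset = 0;
        while (offset < count)
        {
            int n = stream.Read(buffer, offset, count - offset);
            if (n == 0)
                throw new IOException("Connection closed mid-message.");
            offset += n;
        }
    }

    // Each message is preceded by a 4-byte big-endian length prefix.
    static byte[] ReadMessage(NetworkStream stream)
    {
        var header = new byte[4];
        ReadExact(stream, header, 4);
        int length = (header[0] << 24) | (header[1] << 16)
                   | (header[2] << 8) | header[3];
        var body = new byte[length];
        ReadExact(stream, body, length);
        return body; // hand these bytes to the protobuf parser
    }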

As for the timeout question: assuming you are asking whether BeginRead has a timeout, the answer is no. It simply waits for data to arrive on the stream and puts it into the buffer, after which you can process the incoming bytes.

The number of bytes available per read depends on things like your network (e.g. latency, proxy throttling) and on how the peer sends the data.

BeginRead behaviour summary (a code sketch follows the list):

  1. Call BeginRead(); -> waits for bytes to arrive on the stream...
  2. One or more bytes arrive on the stream.
  3. The byte(s) from step 2 are copied into the buffer that was supplied.
  4. Call EndRead(); -> it returns the number of bytes read, and the bytes in the buffer can now be processed.
  5. The most common practice is to repeat all these steps again.
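
In code, that loop typically looks something like this (a sketch; HandleBytes is a placeholder for your own processing):

    using System.Net.Sockets;

    // Steps 1-5 from the list above as a self-rearming read loop.
    void StartRead(NetworkStream stream, byte[] buffer)
    {
        stream.BeginRead(buffer, 0, buffer.Length, ar => // step 1
        {
            int bytesRead = stream.EndRead(ar);          // step 4
            if (bytesRead == 0)
                return;                                  // remote side closed
            HandleBytes(buffer, bytesRead);              // steps 2-3 filled 'buffer'
            StartRead(stream, buffer);                   // step 5: repeat
        }, null);
    }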

Other tips

If Read waited for a full buffer of data, you could easily deadlock: the remote party might be waiting for your response while you are waiting for a full buffer that will never come.

By that logic, it must return without blocking whenever data is available, even if it is just a single byte.

Assume the server sends one message (100 bytes) every 50 ms; how many bytes will one NetworkStream.Read() call return on the client side?

Each call will return between one byte and the number of bytes available without blocking. Nothing else is guaranteed. In practice you will get one or more network packets at once; it makes no sense for the stack to withhold available bytes.
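
For the 100-bytes-every-50-ms example above, that means consecutive Read() calls may split one message or merge several, so the client has to buffer and slice. A sketch, assuming the fixed 100-byte message size from the example (Handle is a hypothetical per-message callback):

    using System.Collections.Generic;
    using System.Linq;
    using System.Net.Sockets;

    // Accumulate whatever each Read() returns, then slice out complete
    // 100-byte messages.
    const int MessageSize = 100;
    var pending = new List<byte>();
    var chunk = new byte[4096];
    int n;
    while ((n = stream.Read(chunk, 0, chunk.Length)) > 0)
    {
        pending.AddRange(chunk.Take(n));
        while (pending.Count >= MessageSize)
        {
            Handle(pending.Take(MessageSize).ToArray());
            pending.RemoveRange(0, MessageSize);
        }
    }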

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow