Question

I have a web app that connects to a backend server over a TCP connection, reads a binary document, and writes it to its response object. In other words, it transfers a file from a backend server using a custom protocol and returns that file to its client over HTTP.

The server sends a status code and a MIME type, which I read successfully; it then writes the contents of the file and closes the socket. That part seems to work fine.

The client (a C# web app) reads the data like this:

     private NetworkStream stream_;

     public void WriteDocument(HttpResponse response)
     {
        while (stream_.DataAvailable)
        {
           const int bufsize = 4 * 1024;
           byte[] buffer = new byte[bufsize];
           int nbytes = stream_.Read(buffer, 0, bufsize);
           if (nbytes > 0)
           {
              if (nbytes < bufsize)
                 Array.Resize<byte>(ref buffer, nbytes);
              response.BinaryWrite(buffer);
           }
        }
        response.End();
     }

This seems to always exit the read loop before all the data has arrived. What am I doing wrong?


Solution

I would write to the response's OutputStream directly, using a general-purpose copy function. With the Stream, you also control when to Flush.

    public void WriteDocument(HttpResponse response) {
        StreamCopy(response.OutputStream, stream_);
        response.End();
    }

    public static void StreamCopy(Stream dest, Stream src) {
        byte[] buffer = new byte[4 * 1024];
        int n;
        // Read returns 0 only at end of stream, so loop until then.
        while ((n = src.Read(buffer, 0, buffer.Length)) > 0) {
            dest.Write(buffer, 0, n);
        }
        dest.Flush();
    }

OTHER TIPS

Here's what I do. Usually you want the content length so you know when to end the read loop. If your protocol does not send the amount of data to expect as a header, it should send some marker to signal the end of transmission.

The DataAvailable property only signals whether there is data to read from the socket right now; it doesn't (and cannot) know whether more data is still to be sent. To check that the socket is still open you can test for stream_.Socket.Connected && stream_.Socket.Readable.

    public static byte[] doFetchBinaryUrl(string url)
    {
        BinaryReader rdr;
        HttpWebResponse res;
        try
        {
            // fetch() is a helper (not shown) that issues the request
            res = fetch(url);
            rdr = new BinaryReader(res.GetResponseStream());
        }
        catch (NullReferenceException)
        {
            return new byte[] { };
        }
        int len = int.Parse(res.GetResponseHeader("Content-Length"));
        byte[] rv = new byte[len];
        // Read all len bytes (the original loop stopped at len - 1
        // and dropped the last byte).
        for (int i = 0; i < len; i++)
        {
            rv[i] = rdr.ReadByte();
        }
        res.Close();
        return rv;
    }

Not sure how things work in .NET, but in most environments I've worked in, Read() returns 0 bytes when the connection is closed. So you'd do something like:

    char buffer[4096];
    int num_read;

    /* Parenthesize the assignment: without the extra parentheses,
       num_read would get the result of the > comparison instead. */
    while ( (num_read = src.Read(buffer, sizeof(buffer))) > 0 )
    {
       dst.Write(buffer, num_read);
    }

The root of your problem is this line:

while (stream_.DataAvailable)

DataAvailable simply means there's data in the stream buffer ready to be read and processed. It makes no guarantee that the 'end' of the stream has been reached. In particular, DataAvailable can be false if there's any pause in transmission, or if your sender is slower than your reader.
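Putting this together, a corrected version of the question's loop would block in Read until data arrives and stop only when Read returns 0, i.e. when the server has closed the socket. A sketch, assuming the same stream_ field and ASP.NET HttpResponse as in the question:

```csharp
public void WriteDocument(HttpResponse response)
{
    byte[] buffer = new byte[4 * 1024];
    int nbytes;
    // Read blocks until at least one byte arrives; it returns 0 only
    // when the sender has closed the connection.
    while ((nbytes = stream_.Read(buffer, 0, buffer.Length)) > 0)
    {
        // Write only the bytes actually read -- no Array.Resize needed.
        response.OutputStream.Write(buffer, 0, nbytes);
    }
    response.End();
}
```

This only works because the question's protocol ends the document by closing the socket; if the connection were kept open for further requests, you would need a length header or end marker as described above.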

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow