Question

I am having a problem using HttpWebRequest against an HTTP daemon on an embedded device. The problem appears to be that there is enough of a delay between the HTTP headers being written to the socket stream and the HTTP payload (a POST) that the socket releases what's in the socket buffer to the server. This results in the HTTP request being split over two packets (fragmentation).

This is perfectly valid, of course; however, the server at the other end doesn't cope with it if the packets are split by more than about 1.8 ms. So I am wondering if there are any realistic ways to control this on the client.

There do not appear to be any properties on HttpWebRequest that give this level of control over the socket used for the send, and one doesn't appear to be able to access the socket itself (i.e. via reflection) because it is only created during the send and released afterwards (as part of the outbound HTTP connection pooling). The AllowWriteStreamBuffering property just buffers the body content within the web request (so it's still available for redirects etc.), and doesn't appear to affect the way the overall request is written to the socket.

So what to do?

(I'm really trying to avoid having to re-write the HTTP client from the socket up)

One option might be to write some kind of proxy that the HttpWebRequest sends to (maybe via the ServicePoint), and in that implementation buffer the entire TCP request. But that seems like a lot of hard work.

It also works fine when I'm running Fiddler (for the same reason), but that's not really an option in our production environment...

[ps: I know it's definitely the interval between the fragmented packets that's the problem, because I knocked up a socket-level test where I explicitly controlled the fragmentation using a NoDelay socket]
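
For reference, a minimal sketch of that kind of socket-level test. The device address, endpoint and body are placeholders; NoDelay disables Nagle so each Send() is flushed as its own packet, and the Sleep between the two sends is the inter-packet gap being tested:

using System.Net.Sockets;
using System.Text;
using System.Threading;

var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
socket.NoDelay = true;              // disable Nagle: each Send() goes out immediately
socket.Connect("192.168.0.10", 80); // placeholder device address

byte[] headers = Encoding.ASCII.GetBytes(
    "POST /endpoint HTTP/1.1\r\nHost: 192.168.0.10\r\nContent-Length: 11\r\n\r\n");
byte[] body = Encoding.ASCII.GetBytes("hello=world");

socket.Send(headers); // packet 1: headers only
Thread.Sleep(5);      // vary this gap to find the server's ~1.8 ms threshold
socket.Send(body);    // packet 2: the POST body
socket.Close();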


Solution

In the end the vendor pushed out a firmware upgrade that included a new version of HTTPD, and the problem went away. They were using BusyBox Linux, and apparently there was some other problem with the HTTPD implementation that they had suffered from.

In terms of my original question, I don't think there is any reliable way of doing it, apart from writing a socket proxy. Some of the workarounds I played with above worked by luck, not design (because they happened to make .NET send the whole request in one go).

OTHER TIPS

What seems to have fixed it is disabling the Nagle algorithm on the ServicePoint associated with that URI and sending the request as HTTP 1.0 (neither on its own seems to fix it):

var servicePoint = ServicePointManager.FindServicePoint(uri.Uri);
servicePoint.UseNagleAlgorithm = false;
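
The HTTP 1.0 half is a one-liner on the request itself (a sketch, assuming request is the HttpWebRequest being sent; HttpVersion lives in System.Net):

request.ProtocolVersion = HttpVersion.Version10; // downgrade from the default HTTP/1.1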

However, this still seems to have fixed it only by making the request go out faster, rather than by forcing the headers and payload to be written as one packet. So it could presumably still fail on a loaded machine, a high-latency link, etc.

Wonder how hard it would be to write a defragmenting proxy...
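
As a rough sketch of the shape it could take (all names, the device address, and the timeout-based end-of-request detection are illustrative assumptions, not production code): listen locally, buffer the whole request from the client, then forward it to the device in a single send with Nagle disabled.

using System;
using System.IO;
using System.Net;
using System.Net.Sockets;

class DefragProxy
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Loopback, 8080); // point the client here
        listener.Start();
        while (true)
        {
            using (var client = listener.AcceptTcpClient())
            using (var device = new TcpClient("192.168.0.10", 80)) // placeholder device
            {
                var clientStream = client.GetStream();
                var request = new byte[64 * 1024];
                int total = 0, n;

                // Naive end-of-request detection: stop when the client pauses.
                // A real proxy would parse the headers and Content-Length instead.
                client.Client.ReceiveTimeout = 200;
                try
                {
                    while (total < request.Length &&
                           (n = clientStream.Read(request, total, request.Length - total)) > 0)
                        total += n;
                }
                catch (IOException) { } // read timed out: treat the request as complete

                // Forward the buffered request as one contiguous send, with Nagle off.
                device.Client.NoDelay = true;
                device.Client.Send(request, 0, total, SocketFlags.None);

                // Relay the response back (assumes the device closes the connection).
                var deviceStream = device.GetStream();
                var response = new byte[64 * 1024];
                while ((n = deviceStream.Read(response, 0, response.Length)) > 0)
                    clientStream.Write(response, 0, n);
            }
        }
    }
}

Note that a single Send() only maps to a single packet while the request fits in one TCP segment; the point is just to eliminate the inter-packet gap that the device chokes on.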

Is your embedded server an HTTP/1.1 server? If so, try setting Expect100Continue = false (on the request's ServicePoint, or globally via ServicePointManager) before you call GetRequestStream(). This ensures that the HTTP stack does not wait for the "HTTP/1.1 100 Continue" response from the server before sending the entity body. So, even though the packets will still be split between the header and body, the inter-packet gap will be shorter.
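
For example, a minimal sketch (assuming uri is the device's address):

// Disable the 100-continue handshake before writing the body.
var request = (HttpWebRequest)WebRequest.Create(uri);
request.Method = "POST";
request.ServicePoint.Expect100Continue = false;
// ...or globally, for all service points:
// ServicePointManager.Expect100Continue = false;
using (var stream = request.GetRequestStream())
{
    // body bytes written here now follow the headers without waiting for a 100 Continue
}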

Just looking at the client-side packet-splitting problem, I posted an answer to my own question, which is linked to this one:

I saw the answer here:

http://us.generation-nt.com/answer/too-packets-httpwebrequest-help-23298102.html

Licensed under: CC-BY-SA with attribution