Question

This is rather a question to satisfy curiosity.

How do standard HTTP/1.1 stacks compute chunk sizes on an HTTP response socket? Is it timeout based, max-size based, dependent on when the application flushes the socket, or an algorithm combining all of these? Is there any open HTTP/1.1 stack implementation guideline available on this?

Thanks in advance.


Solution

There is no "standard" HTTP/1.1 stack, and the specification does not prescribe any chunk-sizing policy. In practice, each chunk typically corresponds to whatever data the application has written when the socket is flushed. Often you have to do it yourself: make sure a `Transfer-Encoding: chunked` header is sent, then send each chunk prefixed with its length in hexadecimal, and finish with a zero-length chunk.
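To illustrate, here is a minimal sketch in Python of the chunk framing described above. The function names (`encode_chunk`, `chunked_body`) are illustrative, not part of any library; the chunk size is simply whatever the application chooses to write per chunk.

```python
def encode_chunk(data: bytes) -> bytes:
    # One chunk: hex-encoded length, CRLF, payload, CRLF.
    return f"{len(data):x}".encode("ascii") + b"\r\n" + data + b"\r\n"

def chunked_body(payloads) -> bytes:
    # Concatenate the chunks and terminate with the zero-length
    # chunk ("0\r\n\r\n") that marks the end of the body.
    return b"".join(encode_chunk(p) for p in payloads) + b"0\r\n\r\n"

# Example: two writes become two chunks of sizes 5 and 6 (hex).
body = chunked_body([b"Hello", b" world"])
# b"5\r\nHello\r\n6\r\n world\r\n0\r\n\r\n"
```

The sender would emit this body after the response headers, which must include `Transfer-Encoding: chunked` and must not include `Content-Length`.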

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow