Question

I'm trying to implement an HTTP server (using Netty) that not only serves "regular" HTML pages but also large files. To that end, I want to use the ChunkedWriteHandler as well as the HttpContentCompressor in my pipeline.

Currently, this pipeline is initialized as follows:

pipeline.addLast("decoder", new HttpRequestDecoder());
pipeline.addLast("aggregator", new HttpObjectAggregator(1048576));
pipeline.addLast("encoder", new HttpResponseEncoder());
pipeline.addLast("chunkedWriter", new ChunkedWriteHandler());
pipeline.addLast("deflater", new HttpContentCompressor());
pipeline.addLast(new NettyHandler());

The NettyHandler follows this scheme:

@Override
public void channelRead(final ChannelHandlerContext context, final Object message) throws Exception {
    try {
        if (message instanceof HttpRequest) {
            final HttpRequest request = (HttpRequest) message;
            final HttpContext httpContext = new HttpContext(request, context);
            final ChannelFuture future = handleHttpMessage(httpContext);
            httpContext.closeOn(future);
        }
    } finally {
        ReferenceCountUtil.release(message);
    }
}


private ChannelFuture handleHttpMessage(final HttpContext context) {
    //writing to the wire via ChannelHandlerContext.write(...)
    return context.getChannelContext().writeAndFlush(LastHttpContent.EMPTY_LAST_CONTENT);
}

If I request small files (my test files are about 500 bytes), everything works fine. But as soon as the requested files get big (my test files are about 350 MB), the browsers (I tested with Chrome and Firefox) report problems with the encoded parts of the received body: Chrome says ERR_CONTENT_DECODING_FAILED, and Firefox says something like "source file could not be read".

Am I doing something fundamentally wrong? Do I have to manipulate the pipeline on-the-fly? Thanks in advance for any help here!


Solution

You will need to wrap the written chunks in DefaultHttpContent instances, as the HttpContentCompressor does not understand raw ByteBufs.

So place a specialized HttpContentCompressor into the ChannelPipeline that knows how to handle ByteBuf instances, something like the HttpChunkContentCompressor in the Vert.x project.
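In case that source moves, the idea is simple enough to sketch here: subclass HttpContentCompressor and wrap any readable ByteBuf in a DefaultHttpContent before handing it to the superclass. This is an approximation of what the Vert.x class does, not a verbatim copy:

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;
import io.netty.handler.codec.http.DefaultHttpContent;
import io.netty.handler.codec.http.HttpContentCompressor;

final class HttpChunkContentCompressor extends HttpContentCompressor {

    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
        if (msg instanceof ByteBuf) {
            ByteBuf buf = (ByteBuf) msg;
            if (buf.isReadable()) {
                // Wrap the raw buffer so that HttpContentCompressor
                // recognizes it as HTTP content and compresses it.
                msg = new DefaultHttpContent(buf);
            }
        }
        super.write(ctx, msg, promise);
    }
}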

Be sure to place it before the ChunkedWriteHandler in the pipeline, i.e. add it first, so that the chunks written out by the ChunkedWriteHandler flow through the compressor on their way to the wire.
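Applied to the pipeline from the question, the setup would then look roughly like this:

pipeline.addLast("decoder", new HttpRequestDecoder());
pipeline.addLast("aggregator", new HttpObjectAggregator(1048576));
pipeline.addLast("encoder", new HttpResponseEncoder());
pipeline.addLast("deflater", new HttpChunkContentCompressor());
pipeline.addLast("chunkedWriter", new ChunkedWriteHandler());
pipeline.addLast(new NettyHandler());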

OTHER TIPS

The answer above is completely correct. However, as the link seems to be dead, here is another approach:

Instead of sending a ChunkedInput of type ByteBuf downstream, wrap it in an adapter that presents it as a ChunkedInput of type HttpContent. This is quite trivial:
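A condensed sketch of such an adapter, written against the Netty 4.0 ChunkedInput interface (Netty 4.1 additionally requires readChunk(ByteBufAllocator), length(), and progress()); the linked implementation below is the authoritative version:

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.http.DefaultHttpContent;
import io.netty.handler.codec.http.HttpContent;
import io.netty.handler.stream.ChunkedInput;

// Adapts a ChunkedInput<ByteBuf> so that handlers further along the
// pipeline (like HttpContentCompressor) see HttpContent messages.
public class ChunkedInputAdapter implements ChunkedInput<HttpContent> {

    private final ChunkedInput<ByteBuf> input;

    public ChunkedInputAdapter(ChunkedInput<ByteBuf> input) {
        this.input = input;
    }

    @Override
    public boolean isEndOfInput() throws Exception {
        return input.isEndOfInput();
    }

    @Override
    public void close() throws Exception {
        input.close();
    }

    @Override
    public HttpContent readChunk(ChannelHandlerContext ctx) throws Exception {
        ByteBuf chunk = input.readChunk(ctx);
        // null signals that no chunk is currently available.
        return chunk == null ? null : new DefaultHttpContent(chunk);
    }
}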

The full implementation: https://github.com/scireum/sirius/blob/develop/web/src/sirius/web/http/ChunkedInputAdapter.java

I also wrote a short blog post explaining the solution in more depth: http://andreas.haufler.info/2014/01/making-http-content-compression-work-in.html

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow