Question

I have been struggling with something that looks very basic; the problem is related to the use of Jetty continuations for long polling.

For the sake of simplicity, I have removed all my application-specific code and left only the simple continuation-related code.

I am pasting the doPost method of my servlet below. The key issue, where I need some expert guidance, is:

  • In the code block below, if I run it as is and fire POST requests carrying a body of approximately 200 bytes, the memory used for 500 long-poll connections is around 20 MB.
  • Whereas if I comment out the block marked "decrease memory footprint" below, the memory footprint comes down to 7 MB.

In both cases I wait for the system to stabilize, call GC multiple times, and then take the memory reading via JConsole. It's not exact, but the difference is so large that a precision of a few hundred bytes here or there does not matter.

My problem explodes when you consider that my server is required to hold 100K connections, if not more. This unexplainable increase in size eventually leads to gigabytes of extra heap being used.

(What is causing this extra heap usage, when what is read from the stream is not even preserved outside the scope of the doPost method? It still adds to the heap... what am I missing?)

   @Override
   protected void doPost(HttpServletRequest req, HttpServletResponse res) throws ServletException, IOException {

    Continuation cc = ContinuationSupport.getContinuation(req);

    //if continuation is resumed, then send an answer back with 
    //hardcoded answer
    if (cc.isResumed()) {
        String myJson = "{\"1\",\"2\"}";
        res.setContentType("application/json");
        res.setContentLength(myJson.length());
        PrintWriter writer = res.getWriter();
        writer.write(myJson);
        writer.close();
    } 
    // if it is the first call to doPost ( not reentrant call )
    else if (cc.isInitial()) {          

        //START :: decrease memory footprint :: comment this block :: START

        // store the json from the request body in a string
        StringBuffer jsonString = new StringBuffer();
        String line = null;                      
        BufferedReader bufferedReader = req.getReader();
        while ((line = bufferedReader.readLine()) != null) {
            jsonString.append(line);
        }  

        //here jsonString was parsed and some values extracted
        //though that code is removed for the sake of this publish
        // as problem exists irrespective...of any processing

        line = null;            
        bufferedReader.close();
        bufferedReader = null;
        jsonString = null;

        // END :: decrease memory footprint :: comment this block :: END

        cc.setTimeout(150000);        

        cc.suspend();
    }
}

Solution

what is causing this extra heap usage...

Take a look at this line:

BufferedReader bufferedReader = req.getReader();

Note that you are not actually creating a new BufferedReader. When you call getReader(), Jetty creates a BufferedReader which wraps an InputStreamReader, which wraps a custom InputStream implementation, which wraps a byte buffer. I am pretty sure that by executing the code which reads the entire message, you create a large byte buffer inside the request object which stores the entire contents of the message body. In addition, the request object maintains a reference to the readers.
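
To make that ownership concrete, here is a purely illustrative sketch (not Jetty's actual source; the class and field names are made up) of how a container-style request object tends to lazily create and cache the reader it hands out. Everything buffered behind that reader stays reachable for as long as the request itself does:

    import java.io.BufferedReader;
    import java.io.InputStream;
    import java.io.InputStreamReader;

    // Illustrative stand-in for a servlet container's request object.
    // The reader (and the buffers behind it) are referenced by the request
    // itself, not by the locals in your doPost() method.
    class SketchRequest {
        private final InputStream body;        // wraps the container's byte buffer
        private BufferedReader cachedReader;   // created once, then cached

        SketchRequest(InputStream body) {
            this.body = body;
        }

        BufferedReader getReader() {
            if (cachedReader == null) {
                // BufferedReader -> InputStreamReader -> body stream -> byte buffer
                cachedReader = new BufferedReader(new InputStreamReader(body));
            }
            return cachedReader;
        }
    }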

At the beginning of the function you called:

Continuation cc = ContinuationSupport.getContinuation(req);

I believe your continuation is holding onto the request, which is storing all the data. So the simple act of reading the data allocates memory which will be preserved until your continuation completes.
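
As a rough lifetime sketch, assuming the standard Jetty continuation flow (the comments are mine, not Jetty documentation):

    Continuation cc = ContinuationSupport.getContinuation(req); // cc references req
    // ... reading the body here buffers it somewhere inside req ...
    cc.suspend();    // req, and everything it buffered, stays reachable from now on

    // later, from some other thread:
    cc.resume();     // doPost() runs again and the isResumed() branch answers;
                     // only once that response completes can the container recycle req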

One thing you might try, just as an experiment, is to change your code to:

BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(req.getInputStream()));

This way Jetty won't allocate its own readers. Again, I don't know how much data is really stored in the readers compared to the rest of the request object, but it might help a little.
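
For completeness, here is a hedged sketch of what the isInitial() branch of the doPost above could look like with that change; the JSON parsing is still omitted, and the explicit nulling of locals mirrors the original code:

    else if (cc.isInitial()) {
        // build our own reader chain instead of asking the request for one,
        // so the request does not have to cache reader objects of its own
        BufferedReader bufferedReader =
                new BufferedReader(new InputStreamReader(req.getInputStream()));
        StringBuilder jsonString = new StringBuilder();
        String line;
        while ((line = bufferedReader.readLine()) != null) {
            jsonString.append(line);
        }
        bufferedReader.close();

        // parse jsonString here and keep only the extracted values,
        // then drop the local references before suspending
        bufferedReader = null;
        jsonString = null;

        cc.setTimeout(150000);
        cc.suspend();
    }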

[update]

Another alternative is to avoid the problem altogether. That's what I did (although I was using Servlet 3.0 rather than Continuations). I had a resource, let's call it /transfer, which would POST some data and then use an AsyncContext to wait for a response. I changed it to two requests with different URLs: /push and /pull. Any time I had some content that needed to be sent from client to server, it went in the /push request, which would then return immediately without creating an AsyncContext. Thus, any storage in the request is freed up right away. Then, to wait for the response, I sent a second GET request with no message body. Sure, that request hangs around for a while, but who cares; it does not have any content.
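
A minimal sketch of that split, using the continuation API from the question rather than the AsyncContext I used; the class names, URLs, and response codes are my assumptions, not code from my actual application, and each servlet would be mapped to its own URL:

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.eclipse.jetty.continuation.Continuation;
    import org.eclipse.jetty.continuation.ContinuationSupport;

    // /push: carries the message body, processes it, and returns immediately.
    public class PushServlet extends HttpServlet {
        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse res)
                throws ServletException, IOException {
            // read and process the body here, hand the result to the application,
            // then let the request complete so its buffers can be recycled
            res.setStatus(HttpServletResponse.SC_NO_CONTENT);
        }
    }

    // /pull (in its own file): no body at all, just parks a continuation.
    public class PullServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse res)
                throws ServletException, IOException {
            Continuation cc = ContinuationSupport.getContinuation(req);
            if (cc.isInitial()) {
                cc.setTimeout(150000);
                cc.suspend();   // cheap to hold open: there is no content to keep
            } else if (cc.isResumed()) {
                res.setContentType("application/json");
                res.getWriter().write("{}");   // placeholder response
            }
        }
    }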

You may have to rethink your problem and determine whether you can perform your task in pieces (multiple requests) or whether you really have to handle everything in a single request.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow