Question

As part of an effort to decrease the memory load on our application, we've collected an hprof report. The report includes the following:

          percent          live          alloc'ed  stack class
 rank   self  accum     bytes objs     bytes  objs trace name
    1  9.42%  9.42%  57414792  219  57414792   219 373093 byte[]
    2  6.45% 15.87%  39328800  300  39328800   300 367689 byte[]
    8  1.74% 30.92%  10618776   81  39328800   300 367958 byte[]

The corresponding traces are:

TRACE 373093:
    java.nio.HeapByteBuffer.&lt;init&gt;(HeapByteBuffer.java:39)
    java.nio.ByteBuffer.allocate(ByteBuffer.java:312)
    com.sun.enterprise.web.connector.grizzly.SocketChannelOutputBuffer.realWriteBytes(SocketChannelOutputBuffer.java:153)
    com.sun.enterprise.web.connector.grizzly.SocketChannelOutputBuffer$NIOOutputStream.write(SocketChannelOutputBuffer.java:240)

TRACE 367689:
    java.nio.HeapByteBuffer.&lt;init&gt;(HeapByteBuffer.java:39)
    java.nio.ByteBuffer.allocate(ByteBuffer.java:312)
    com.sun.enterprise.web.connector.grizzly.SocketChannelOutputBuffer.&lt;init&gt;(SocketChannelOutputBuffer.java:100)
    com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.initialize(DefaultProcessorTask.java:436)

TRACE 367958:
    java.nio.HeapByteBuffer.&lt;init&gt;(HeapByteBuffer.java:39)
    java.nio.ByteBuffer.allocate(ByteBuffer.java:312)
    com.sun.enterprise.web.connector.grizzly.SocketChannelOutputBuffer.&lt;init&gt;(SocketChannelOutputBuffer.java:100)
    com.sun.enterprise.web.connector.grizzly.ssl.SSLOutputBuffer.&lt;init&gt;(SSLOutputBuffer.java:59)

Anyone got any idea why Grizzly is so... uhmm.. hungry?

Thanks!


Solution

Those buffers are used to read from and write to the channel. The read buffer is 8192 bytes by default, and the output buffer defaults to 16x that size. Both sizes are tunable based on your needs, but the defaults have generally held up well over the years.
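As a sanity check on the hprof numbers (a sketch, not Grizzly code): dividing live bytes by live objects gives the per-buffer size, which for trace 367689 lands right at the 16 × 8192 default plus a small array-header overhead, while trace 373093 comes out at twice that:

```java
public class BufferMath {
    public static void main(String[] args) {
        int readBuffer = 8192;              // default read-buffer size
        int outputBuffer = 16 * readBuffer; // 131072 bytes (128 KiB) output buffer

        // Per-object sizes from the hprof report: live bytes / live objects
        long trace367689 = 39328800L / 300; // 131096 ≈ 128 KiB + byte[] header
        long trace373093 = 57414792L / 219; // 262168 ≈ 256 KiB, i.e. 2x the default

        System.out.println(outputBuffer);   // 131072
        System.out.println(trace367689);    // 131096
        System.out.println(trace373093);    // 262168
    }
}
```

So the report is consistent with one ~128 KiB output buffer per processor task; the larger buffers in trace 373093 look like output buffers that grew beyond the default (whether Grizzly doubles them on demand is an assumption here, not something the report proves).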

Licensed under: CC-BY-SA with attribution