Question

Let me start by saying that my understanding of how JNA and Java handle direct native memory allocations is hazy at best, so I'm trying to describe my understanding of what's going on. Any corrections, in addition to answers, would be great...

I'm running an application that mixes Java and C native code using JNA, and I'm running across a reproducible issue where the Java garbage collector fails to free direct native memory allocations, resulting in the C heap running out of memory.

I'm positive that my C code is not the source of the allocation issue: I'm passing a java.nio.ByteBuffer into my C code, modifying the buffer, and then accessing the result in my Java function. I have a single malloc and a single corresponding free during each function call, but after repeatedly calling the code from Java, the malloc eventually fails.

Here's a somewhat trivialized set of code that exhibits the issue -- realistically I'm trying to allocate about 16-32MB on the C heap during the function call.

My Java code does something like:

import java.nio.ByteBuffer;
import com.sun.jna.Native;

public class MyClass {
    public void myfunction() {
        ByteBuffer foo = ByteBuffer.allocateDirect(1000000);
        MyDirectAccessLib.someOp(foo, 1000000);
        System.out.println(foo.get(0));
    }
}

public class MyDirectAccessLib {
    static {
        Native.register("libsomelibrary");
    }
    public static native void someOp(ByteBuffer buf, int size);
}

Then my C code might be something like:

#include <stdio.h>
#include <stdlib.h>
void someOp(unsigned char* buf, int size){
    unsigned char *foo;
    foo = malloc(1000000);
    if(!foo){
        fprintf(stderr, "Failed to malloc 1000000 bytes of memory\n");
        return;
    }
    free(foo);

    buf[0] = 100;
}

The trouble is that after calling this function repeatedly, the Java heap stays fairly stable (it grows slowly), but the C code eventually cannot allocate any more memory. At a high level, I believe this is because Java is allocating memory on the C heap but never cleaning up the ByteBuffers that point at it, since the Java-side ByteBuffer objects are relatively small and don't create enough heap pressure to trigger a collection.

Thus far I've found running the GC manually in my function will provide the required cleanup, but this seems like both a poor idea and a poor solution.

How can I manage this problem better so that the ByteBuffer space is appropriately freed and my C heap space is controlled?

Is my understanding of the problem incorrect (is there something I'm running improperly)?

Edit: adjusted the buffer sizes to be more reflective of my actual application; I'm allocating buffers for images of approximately 3000x2000 pixels...


Solution

I think you've diagnosed it properly: you never run out of Java heap, so the JVM doesn't garbage collect, and the direct buffers aren't freed. The fact that you don't have problems when running GC manually seems to confirm this. You could also turn on verbose collection logging as a secondary confirmation.

So what can you do? Well, the first thing I'd try is keeping the initial JVM heap size small, using the -Xms command-line argument. This can cause problems of its own if your program constantly allocates small amounts of memory on the Java heap, as it will run GC more frequently.

I'd also use the pmap tool (or whatever its equivalent is on Windows) to examine the virtual memory map. It's possible that you're fragmenting the C heap by allocating variable-sized buffers. If that's the case, you'll see an ever larger virtual map, with gaps between "anon" blocks. The solution there is to allocate constant-size blocks that are larger than you need.
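
As a further cross-check (this is an addition to the answer, and assumes Java 7 or later), the JVM exposes its "direct" buffer pool through the java.lang.management API, so you can log how much native memory the outstanding direct buffers are holding between calls. A minimal sketch:

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class DirectMemoryProbe {
    /** Print how many direct buffers exist and how much native memory they hold. */
    public static void dumpDirectPool() {
        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            if ("direct".equals(pool.getName())) {
                System.out.printf("direct buffers: count=%d, used=%d bytes%n",
                        pool.getCount(), pool.getMemoryUsed());
            }
        }
    }
}

If the "used" figure keeps climbing between your native calls while the Java heap stays flat, that confirms the buffers are waiting on a collection that never comes.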

OTHER TIPS

You are actually facing a known bug in the Java VM. The best workaround listed in the bug report is:

  • "The -XX:MaxDirectMemorySize= option can be used to limit the amount of direct memory used. An attempt to allocate direct memory that would cause this limit to be exceeded causes a full GC so as to provoke reference processing and release of unreferenced buffers."

Other possible workarounds include:

  • Insert occasional explicit System.gc() invocations to ensure that direct buffers are reclaimed.
  • Reduce the size of the young generation to force more frequent GCs.
  • Explicitly pool direct buffers at the application level.

If you really want to rely on direct byte buffers, then I would suggest pooling at the application level. Depending on the complexity of your application, you might even simply cache and reuse the same buffer (beware of multiple threads).
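
As a rough illustration of that approach (not code from the answer; the class name and the idea of pre-allocating a fixed number of equally sized buffers are assumptions), a minimal pool might look like this:

import java.nio.ByteBuffer;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class DirectBufferPool {
    private final BlockingQueue<ByteBuffer> pool;

    public DirectBufferPool(int buffers, int bufferSize) {
        pool = new ArrayBlockingQueue<ByteBuffer>(buffers);
        // Allocate every direct buffer up front so native usage stays bounded.
        for (int i = 0; i < buffers; i++) {
            pool.add(ByteBuffer.allocateDirect(bufferSize));
        }
    }

    /** Borrow a buffer; blocks until one is returned if the pool is empty. */
    public ByteBuffer acquire() throws InterruptedException {
        ByteBuffer buf = pool.take();
        buf.clear();
        return buf;
    }

    /** Hand a buffer back once the native call is finished with it. */
    public void release(ByteBuffer buf) {
        pool.add(buf);
    }
}

A caller would acquire() before passing the buffer to the native method and release() it in a finally block; the blocking queue also takes care of the "beware of multiple threads" caveat.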

I suspect your problem is due to the use of direct byte buffers: their backing memory is allocated outside of the Java heap.

If you are calling the method frequently, and allocating small buffers each time, your usage pattern is probably not a good fit for a direct buffer.

In order to isolate the problem, I'd switch to a (Java) heap-allocated buffer (just use the allocate method in place of allocateDirect). If that makes your memory problem go away, you've found the culprit. The next question would be whether a direct byte buffer has any performance advantage. If not (and I would guess that it doesn't), then you won't need to worry about how to clean it up properly.
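
In the question's code, that isolation test is a one-line change, roughly as below (a sketch only; it assumes the JNA binding accepts a heap-backed buffer, which may involve copying the data for the call):

import java.nio.ByteBuffer;

public class IsolationTest {
    public static void main(String[] args) {
        // Heap-backed buffer instead of allocateDirect, purely to see whether
        // the native memory growth disappears.
        ByteBuffer foo = ByteBuffer.allocate(1000000);
        MyDirectAccessLib.someOp(foo, 1000000);
        System.out.println(foo.get(0));
    }
}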

If you run out of heap memory, a GC is triggered automatically. However, if you run out of direct memory, the GC is not triggered (on Sun's JVM, at least) and you just get an OutOfMemoryError, even though a GC would free enough memory. I have found you have to trigger a GC manually in this situation.

A better solution may be to reuse the same ByteBuffer so that you never need to reallocate ByteBuffers.
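
Applied to the question's classes, that could look roughly like the following (a sketch; the fixed capacity is an assumption and the class is not thread-safe):

import java.nio.ByteBuffer;

public class MyClass {
    // Allocate the direct buffer once and reuse it on every call,
    // so no new native memory is requested per invocation.
    private final ByteBuffer foo = ByteBuffer.allocateDirect(1000000);

    public void myfunction() {
        foo.clear();
        MyDirectAccessLib.someOp(foo, foo.capacity());
        System.out.println(foo.get(0));
    }
}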

To free a direct Buffer's memory, you can use JNI.

The JNI function GetDirectBufferAddress(JNIEnv* env, jobject buf) can be used to acquire a pointer to the Buffer's memory, and the standard free(void *ptr) can then be called on that pointer to release the memory.

Rather than writing C code to call that function yourself, you can use JNA's Native.getDirectBufferPointer(Buffer).

The only thing left after that is to give up all references to the Buffer object. Java's garbage collection will then free the Buffer instance as it does any other unreferenced object.

Please note that a direct Buffer doesn't necessarily map 1:1 to an allocated memory region. For example, the JNI API has NewDirectByteBuffer(JNIEnv* env, void* address, jlong capacity). As such, you should only free the memory of Buffers whose backing region you know to correspond one-to-one with a native allocation.

For the same reason, I also don't know whether you can safely free a direct Buffer created by Java's ByteBuffer.allocateDirect(int): whether the JVM uses a pool or performs a 1:1 allocation when handing out new direct Buffers is an implementation detail.

Here follows a slightly modified snippet from my library for handling direct ByteBuffers (it uses JNA's Native and Pointer classes):

/**
 * Allocate native memory and associate direct {@link ByteBuffer} with it.
 * 
 * @param bytes - How many bytes of memory to allocate for the buffer
 * @return The created {@link ByteBuffer}.
 */
public static ByteBuffer allocateByteBuffer(int bytes) {
        long lPtr = Native.malloc(bytes);
        if (lPtr == 0) throw new Error(
            "Failed to allocate direct byte buffer memory");
        return Native.getDirectByteBuffer(lPtr, bytes);
}

/**
 * Free native memory inside {@link Buffer}.
 * <p>
 * Use only buffers whose memory region you know to match one to one
 * with that of the underlying allocated memory region.
 * 
 * @param buffer - Buffer whose native memory is to be freed.
 * The class instance will remain. Don't use it anymore.
 */
public static void freeNativeBufferMemory(Buffer buffer) {
        buffer.clear();
        Pointer javaPointer = Native.getDirectBufferPointer(buffer);
        long lPtr = Pointer.nativeValue(javaPointer);
        Native.free(lPtr);
}
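
Putting that together with the question's binding, usage could look like the snippet below (a sketch only; it assumes the two helper methods above live in a class called BufferUtil, a name chosen here purely for illustration):

import java.nio.ByteBuffer;

public class MyClass {
    public void myfunction() {
        ByteBuffer foo = BufferUtil.allocateByteBuffer(1000000);
        try {
            MyDirectAccessLib.someOp(foo, 1000000);
            System.out.println(foo.get(0));
        } finally {
            // Release the native memory explicitly instead of waiting for GC.
            BufferUtil.freeNativeBufferMemory(foo);
        }
    }
}
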
Licensed under: CC-BY-SA with attribution