Question

So I'm trying to read in a very large file using a mapped FileChannel.

The file exceeds 2GB. A snippet of code is:

long fileSize = 0x8FFFFFFFL;
FileChannel fc = new RandomAccessFile("blah.huge", "rw").getChannel();
fc.map(FileChannel.MapMode.READ_WRITE, 0, fileSize);

This throws an error:

Exception in thread "main" java.lang.IllegalArgumentException: Size exceeds Integer.MAX_VALUE
   at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:789)

FileChannel.map takes a long as the file size. So does this error make sense? Why would they not provide support for bigger files than that?


Solution

The native methods this function delegates to do take long values without reporting an error, and you can call them via reflection. However, you would have to test whether they work on your system, and using memory mapping this way could confuse you more than help you.

The best approach is to create an array of MappedByteBuffers, e.g. 1 GB each, and a wrapper class that hides this ugliness.
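A minimal sketch of such a wrapper might look like the following. The class name `LargeMappedFile` and the 1 GB chunk size are illustrative choices, not part of any standard API; the wrapper maps the file as an array of buffers and translates a long position into a (chunk, int offset) pair:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Hypothetical wrapper: maps a file larger than Integer.MAX_VALUE as an
// array of MappedByteBuffer chunks and hides the index translation.
public class LargeMappedFile {
    private static final long CHUNK_SIZE = 1L << 30; // 1 GB per mapping

    private final MappedByteBuffer[] chunks;

    public LargeMappedFile(String path, long size) throws IOException {
        // Mappings stay valid after the channel is closed, so a
        // try-with-resources block is safe here.
        try (RandomAccessFile raf = new RandomAccessFile(path, "rw");
             FileChannel fc = raf.getChannel()) {
            int n = (int) ((size + CHUNK_SIZE - 1) / CHUNK_SIZE);
            chunks = new MappedByteBuffer[n];
            for (int i = 0; i < n; i++) {
                long offset = i * CHUNK_SIZE;
                long length = Math.min(CHUNK_SIZE, size - offset);
                chunks[i] = fc.map(FileChannel.MapMode.READ_WRITE, offset, length);
            }
        }
    }

    // Translate a long position into chunk index + int offset.
    public byte get(long pos) {
        return chunks[(int) (pos / CHUNK_SIZE)].get((int) (pos % CHUNK_SIZE));
    }

    public void put(long pos, byte b) {
        chunks[(int) (pos / CHUNK_SIZE)].put((int) (pos % CHUNK_SIZE), b);
    }
}
```

Note that reads or writes spanning a chunk boundary would need extra handling; this sketch only covers single-byte access.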

While not technically a bug, it is BAD (Broken As Designed). Part of the reason may be that 32-bit JVMs could not support larger mappings, but I don't see why 64-bit JVMs still have this limit.

OTHER TIPS

This is not a bug. FileChannel#map is documented as requiring a size argument no greater than Integer.MAX_VALUE, which makes sense given that, for example, ByteBuffer#get takes an int for its index parameter.
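Note that while the size of a single mapping is capped at Integer.MAX_VALUE, the position argument is a true long, so you can place a window anywhere in a file larger than 2 GB. A small sketch (the helper name `mapWindow` is illustrative):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class WindowedMap {
    // Map only the region of interest: the position may exceed
    // Integer.MAX_VALUE even though the size may not.
    public static MappedByteBuffer mapWindow(String path, long position, int size)
            throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(path, "r");
             FileChannel fc = raf.getChannel()) {
            // The mapping remains valid after the channel is closed.
            return fc.map(FileChannel.MapMode.READ_ONLY, position, size);
        }
    }
}
```

Indexing within the returned buffer is relative to the window's start, which is why the int-indexed ByteBuffer API still works.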

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow