Question

I am struggling to see the benefits of the memory-mapped file (buffer) in Java. Here is how I see the approach in practice:

We map chunks of the file into main memory and deal with any writes/reads directly in memory, leaving the OS to do the job of persisting the file to disk.
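For reference, here is the minimal pattern I have in mind, using FileChannel.map. The file name is just a placeholder, and I assume the file already exists and is non-empty:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class MapExample {
    public static void main(String[] args) throws IOException {
        try (FileChannel channel = FileChannel.open(Paths.get("data.bin"),
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // Map the whole file; reads and writes then go straight to memory
            MappedByteBuffer buffer = channel.map(
                    FileChannel.MapMode.READ_WRITE, 0, channel.size());
            byte first = buffer.get(0);      // read directly from the mapping
            System.out.println("first byte: " + first);
            buffer.put(0, (byte) 42);        // write directly into the mapping
            buffer.force();                  // optionally flush dirty pages now
        }
    }
}
```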

Now, I wanted to contrast this with regular I/O and a few scenarios:

  1. Appending to a file

In order to map the file into memory I have to read it in as a whole to make arbitrary modifications, so the buffer size is initially the size of the file (let's say I do not know how much data I am about to write into it). Now I cannot append to the file, because the buffer is of fixed size, so the most basic operation seems impossible to me. Also, reading in the whole file just to append a small portion seems rather wasteful. So I guess regular I/O performs better in this scenario. (See the sketch after this list for how I picture these scenarios.)

  2. Persisting changes

In order to persist the changes I still need to flush them. So if I don't do this periodically, I might lose the changes that exist only in main memory. This is the same idea as with regular I/O streams, so there is no gain here either.

  3. Random changes

This is where I can see it working: replacing n bytes with another n bytes. Nevertheless, replacing m characters with exactly m characters seems a rather unusual operation; more often we would want to replace Hello with Hi. But we have a filled, fixed-size buffer, so that is not easy... Besides, it reminds me of RandomAccessFile, just with better performance.
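For concreteness, here is roughly how I picture those three scenarios with a MappedByteBuffer. The file name and sizes are placeholders; for the append case I rely on the observation that mapping a READ_WRITE region past the current end of the file grows the file on the JDKs I have tried (the javadoc does not spell this out), otherwise one would re-map a larger region when the current one fills up:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class MappedScenarios {
    public static void main(String[] args) throws IOException {
        try (FileChannel channel = FileChannel.open(Paths.get("log.bin"),
                StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE)) {

            long existing = channel.size();
            long capacity = existing + 1024;          // reserve some room for appends

            // 1. "Appending": map a region larger than the current file content;
            //    in READ_WRITE mode the file is grown to the mapped size, and a
            //    bigger region can be re-mapped later when this one fills up.
            MappedByteBuffer buffer = channel.map(
                    FileChannel.MapMode.READ_WRITE, 0, capacity);
            buffer.position((int) existing);          // continue after the old content
            buffer.put("appended".getBytes(StandardCharsets.UTF_8));

            // 2. Persisting: force() flushes dirty pages to disk on demand,
            //    much like flushing an OutputStream.
            buffer.force();

            // 3. Random in-place change: overwrite n bytes at an absolute offset
            //    without touching the rest of the file.
            byte[] patch = "Hi".getBytes(StandardCharsets.UTF_8);
            for (int i = 0; i < patch.length; i++) {
                buffer.put(i, patch[i]);
            }
            buffer.force();
        }
    }
}
```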

Probably most of what I wrote is nonsense, but I will be happy to be told so, because at the moment MappedByteBuffer seems hard and cumbersome (or even impossible) to use.


Solution

Memory Mapped File Benefits

  • They are a means of IPC (inter-process communication), which is very fast (see the sketch after this list)
  • You don't have to use the comparatively slow system calls such as open, read and write (they are slow because every call forces the CPU through a context switch between user and kernel mode)
  • You get a very clean interface: writing to main memory. That is easy, and people know how to use it
  • No disk I/O is wasted; all modifications are done in RAM. For one, other processes can utilize the disk better, and for another you increase the durability of your SSD, which survives only a limited number of overwrites before it is defunct
  • Random access is much faster by any measure, since RAM is "Random Access Memory" and was built for exactly that
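To make the IPC bullet concrete, here is a rough sketch (file name and layout are arbitrary): two JVMs map the same file, and on common operating systems both mappings are backed by the same page-cache pages, so the writer's updates become visible to the reader without read/write system calls in the hot path:

```java
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

// Writer process: publishes a counter into the shared mapping.
public class SharedCounterWriter {
    public static void main(String[] args) throws Exception {
        try (FileChannel channel = FileChannel.open(Paths.get("shared.dat"),
                StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            MappedByteBuffer shared = channel.map(FileChannel.MapMode.READ_WRITE, 0, 8);
            for (long i = 0; ; i++) {
                shared.putLong(0, i);   // visible to any other process mapping shared.dat
                Thread.sleep(1000);
            }
        }
    }
}

// Reader process (run separately): polls the same 8 bytes.
// try (FileChannel channel = FileChannel.open(Paths.get("shared.dat"),
//         StandardOpenOption.READ)) {
//     MappedByteBuffer shared = channel.map(FileChannel.MapMode.READ_ONLY, 0, 8);
//     while (true) {
//         System.out.println(shared.getLong(0));
//         Thread.sleep(1000);
//     }
// }
```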

One drawback, however, is that you should save your memory mapping back to disk from time to time. Imagine doing some highly complicated operations in RAM for hours and all of a sudden there is a blackout - all of that information is lost.
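One simple mitigation, sketched here with an arbitrary 30-second interval, is to schedule a periodic force() so a crash costs you at most the last interval of changes:

```java
import java.nio.MappedByteBuffer;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicFlush {
    // Flush dirty pages of the mapping to disk every 30 seconds (interval is arbitrary).
    static ScheduledExecutorService startFlusher(MappedByteBuffer buffer) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> buffer.force(), 30, 30, TimeUnit.SECONDS);
        return scheduler;
    }
}
```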

OTHER TIPS

There are a number of advantages to using a memory-mapped file in Java. Some of them are:

  • very fast I/O operations
  • data can be shared between two processes
  • the operating system does the actual reads and writes

In addition to the above, the big disadvantage of the memory-mapped file feature in java.nio is that if the requested page is not in RAM, the access results in a page fault. So if you append after some initial reads and writes in the middle of the file, that access may trigger page faults which load that portion of the file into memory; subsequent I/O on it will then be fast.
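If that first-touch cost matters, MappedByteBuffer.load() asks the OS to bring the mapped region into physical memory up front (it is only a hint, not a guarantee), and isLoaded() reports whether the pages are likely resident. A small sketch, assuming data.bin exists and is non-empty:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class PreTouch {
    public static void main(String[] args) throws IOException {
        try (FileChannel channel = FileChannel.open(Paths.get("data.bin"),
                StandardOpenOption.READ)) {
            MappedByteBuffer buffer = channel.map(
                    FileChannel.MapMode.READ_ONLY, 0, channel.size());
            buffer.load();                       // hint: fault the pages in now
            System.out.println("resident: " + buffer.isLoaded());
            // Subsequent random accesses are less likely to stall on page faults.
            System.out.println("first byte: " + buffer.get(0));
        }
    }
}
```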

Refer to this link for more details: 10-Things-to-Know-about-Memory-Mapped-File-in-Java

"Now assume each process has a number N of pages it is allowed to hold in the RAM. If your binary consumes (Nb) pages, there are N - Nb for other stuff, including MMF. This demonstrates how increasing the size of Nb will decrease the number of available pages for MMF." OS uses LRU to handle page replacement. how can os give a up bound in per process level for total number pages?

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow