Question

I'm writing something to handle concurrent read/write requests to a database file.

ReentrantReadWriteLock looks like a good match. If all threads access a shared RandomAccessFile object, do I need to worry about the file pointer with concurrent readers? Consider this example:

import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class Database {

    private static final int RECORD_SIZE = 50;
    private static Database instance = null;

    private ReentrantReadWriteLock lock;
    private RandomAccessFile database;

    private Database() {
        lock = new ReentrantReadWriteLock();

        try {
            database = new RandomAccessFile("foo.db", "rwd");
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
    }

    public static synchronized Database getInstance() {
        if(instance == null) {
            instance = new Database();
        }
        return instance;
    }

    public byte[] getRecord(int n) {
        byte[] data = new byte[RECORD_SIZE];
        try {
            // Begin critical section
            lock.readLock().lock();
            database.seek(RECORD_SIZE*n);
            database.readFully(data);
            lock.readLock().unlock();
            // End critical section
        } catch (IOException e) {
            e.printStackTrace();
        }
        return data;
    }

}

In the getRecord() method, is the following interleaving possible with multiple concurrent readers?

Thread 1 -> getRecord(0)
Thread 2 -> getRecord(1)
Thread 1 -> acquires shared lock
Thread 2 -> acquires shared lock
Thread 1 -> seeks to record 0
Thread 2 -> seeks to record 1
Thread 1 -> reads record at file pointer (1)
Thread 2 -> reads record at file pointer (1)

If there are indeed potential concurrency issues using ReentrantReadWriteLock and RandomAccessFile, what would an alternative be?

Solution

Yes, this code isn't synchronized properly, and exactly the interleaving you outline is possible: the read lock admits any number of readers at once, so nothing stops a second reader from moving the shared file pointer between your seek() and readFully(). Beyond that, a read-write lock is useless if the write lock is never acquired; it's as if there were no lock at all.

Use a traditional synchronized block to make the seek and read appear atomic to other threads, or create a pool of RandomAccessFile instances that are borrowed for the exclusive use of a single thread and then returned. (Or simply dedicate a channel to each thread, if you don't have too many threads.)
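As a minimal sketch of the first suggestion (a hypothetical SafeDatabase class; the record size and "rwd" mode are carried over from the question), a monitor on the shared RandomAccessFile makes the seek-plus-read pair atomic:

```java
import java.io.IOException;
import java.io.RandomAccessFile;

public class SafeDatabase {
    private static final int RECORD_SIZE = 50;
    private final RandomAccessFile database;

    public SafeDatabase(String path) throws IOException {
        database = new RandomAccessFile(path, "rwd");
    }

    // The monitor makes seek + readFully atomic: no other thread can move
    // the shared file pointer between the two calls.
    public byte[] getRecord(int n) throws IOException {
        byte[] data = new byte[RECORD_SIZE];
        synchronized (database) {
            database.seek((long) RECORD_SIZE * n);
            database.readFully(data);
        }
        return data;
    }

    public void putRecord(int n, byte[] data) throws IOException {
        synchronized (database) {
            database.seek((long) RECORD_SIZE * n);
            database.write(data, 0, RECORD_SIZE);
        }
    }
}
```

The trade-off is that this serializes readers as well as writers; to get concurrent readers back you need the pool or per-thread variant, since each RandomAccessFile carries its own independent file pointer.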

OTHER TIPS

Here is a sample program that locks and unlocks a file.

try {
    // Get a file channel for the file
    File file = new File("filename");
    FileChannel channel = new RandomAccessFile(file, "rw").getChannel();

    // Use the file channel to create a lock on the file.
    // This method blocks until it can retrieve the lock.
    FileLock lock = channel.lock();

    // Alternatively, try acquiring the lock without blocking. This method
    // returns null or throws an exception if the file is already locked.
    try {
        lock = channel.tryLock();
    } catch (OverlappingFileLockException e) {
        // The file is already locked in this thread or virtual machine
    }

    // Release the lock
    lock.release();

    // Close the file
    channel.close();
} catch (Exception e) {
    e.printStackTrace();
}

You may want to consider using file-system locks instead of managing your own locking.

Call getChannel().lock() on your RandomAccessFile to lock the file via the FileChannel class. This can block access even from processes outside your control, though note that on some platforms file locks are advisory, so they only restrain other programs that also check for locks.

Also, operate on dedicated lock objects rather than synchronizing on the method; a ReentrantReadWriteLock supports a maximum of 65535 recursive write locks and 65535 read locks.

Assign a read and a write lock from a shared ReentrantReadWriteLock:

private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
private final Lock r = rwl.readLock();
private final Lock w = rwl.writeLock();

Then work on them...

Also: you are not catering for exceptions, so a failure after locking would leave the lock held forever. Acquire the lock as you enter the method (like a mutex locker), then do your work in a try block with the unlock in the finally section, e.g.:

public String[] allKeys() {
  r.lock();
  try { return m.keySet().toArray(new String[0]); }
  finally { r.unlock(); }
}
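For completeness, here is the whole shape with both locks in use, adapted from the RWDictionary example in the ReentrantReadWriteLock javadoc (the class and map names are illustrative):

```java
import java.util.TreeMap;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RWDictionary {
    private final TreeMap<String, Object> m = new TreeMap<>();
    private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
    private final Lock r = rwl.readLock();
    private final Lock w = rwl.writeLock();

    // Readers share the read lock, so lookups can run concurrently.
    public Object get(String key) {
        r.lock();
        try { return m.get(key); }
        finally { r.unlock(); }
    }

    public String[] allKeys() {
        r.lock();
        try { return m.keySet().toArray(new String[0]); }
        finally { r.unlock(); }
    }

    // Writers take the exclusive write lock, blocking all readers.
    public Object put(String key, Object value) {
        w.lock();
        try { return m.put(key, value); }
        finally { w.unlock(); }
    }
}
```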

OK, 8.5 years is a long time, but I hope it's not necro...

My problem was that we needed stream reads and writes to be as atomic as possible. An important part was that our code was supposed to run on multiple machines accessing the same file. However, all the examples on the Internet stopped at explaining how to lock a RandomAccessFile and didn't go any deeper. So my starting point was Sam's answer.

Now, from a distance it makes sense to have a certain order:

  • lock the file
  • open the streams
  • do whatever with the streams
  • close the streams
  • release the lock

However, in Java the streams must not be closed before the lock is released, since closing a stream that wraps the file's descriptor also closes the channel holding the lock. Because of that the entire mechanism becomes a little weird (and wrong?).

To make auto-closing work, one must remember that the JVM closes the resources in the reverse order of their declaration in the try statement. This means that the flow looks like this:

  • open the streams
  • lock the file
  • do whatever with the streams
  • release the lock
  • close the streams

Tests showed that this doesn't work. Therefore, auto-close half way and do the rest in good ol' Java 1 fashion:

try (RandomAccessFile raf = new RandomAccessFile(filename, "rwd");
    FileChannel channel = raf.getChannel()) {
  FileLock lock = channel.lock();
  FileInputStream in = new FileInputStream(raf.getFD());
  FileOutputStream out = new FileOutputStream(raf.getFD());

  // do all reading
  ...

  // that moved the pointer in the channel to somewhere in the file,
  // therefore reposition it to the beginning:
  channel.position(0);
  // as the new content might be shorter it's a requirement to do this, too:
  channel.truncate(0);

  // do all writing
  ...

  out.flush();
  lock.release();
  in.close();
  out.close();
}

Note that the methods using this must still be synchronized. Otherwise the parallel executions may throw an OverlappingFileLockException when calling lock().
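A sketch of that combination (a hypothetical LockedFileAccess class; a static monitor stands in for the synchronized methods): the FileLock fences off other processes, while the JVM-level monitor keeps two threads of the same process from colliding in lock():

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

public class LockedFileAccess {
    // FileLock guards against other processes, but within one JVM a second
    // lock() on the same file throws OverlappingFileLockException, so
    // threads of this process must be serialized on a monitor.
    private static final Object jvmLock = new Object();

    public static byte[] readAll(String filename) throws IOException {
        synchronized (jvmLock) {
            try (RandomAccessFile raf = new RandomAccessFile(filename, "rwd");
                 FileChannel channel = raf.getChannel()) {
                FileLock lock = channel.lock();
                try {
                    byte[] data = new byte[(int) raf.length()];
                    raf.readFully(data);
                    return data;
                } finally {
                    lock.release();
                }
            }
        }
    }
}
```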

Please share experiences in case you have any...

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow