Question

I am using the following programming idiom. I keep a synchronized HashMap with an association of names to objects. To look up the object for a name I use the following code:

MyObject getObject(String name) {
   synchronized(map) {
      MyObject obj = map.get(name);
      if (obj == null) {
         obj = new MyObject();
         map.put(name, obj);
      }
      return obj;
   }
}

When I then want to work exclusively on such an object, I use a synchronized block on it:

synchronized(obj) {
    /* do something exclusively on obj (work type I) */
}

This has been working fine so far, until recently. The new requirement is that there are type I and type II exclusive works. Type I keeps the object, and type II should remove the object after it has completed the work. If I do something along the following lines:

synchronized(obj) {
    /* do something exclusively on obj (work type II) */
}
synchronized(map) { /* not good! */
   map.remove(name);
}

I might grant some thread type I work on an object even though the object has already been removed from the map. So basically the synchronized(obj) for type I work should be replaced by some new semaphore which rejoins the object to the map in case type II work was granted before. Respectively, the object should only leave the map when no synchronized blocks are pending on it.

Ideally the objects would not be visible at all; I would go with an API based on the names only. The objects are only used to maintain some state for the names. But the HashMap should be freed of the name after type II work has completed. And during type I or type II work, the HashMap should not be locked.

Any ideas how to do that? Is this a known pattern?

Bye


Solution 2

You could use an AtomicInteger to keep track of the number of tasks in progress on the object. Then, for type II tasks, only remove the object if there are no tasks in progress:

class MyObject {
   private AtomicInteger worksInProgress = new AtomicInteger(0);
   public int incWIP() {
      return worksInProgress.incrementAndGet();
   }
   public int decWIP() {
      return worksInProgress.decrementAndGet();
   }
   public int getWIP() {
      return worksInProgress.get();
   }
   ...
}

MyObject getObject(String name) {
   synchronized(map) {
      MyObject obj = map.get(name);
      if (obj == null) {
         obj = new MyObject();
         map.put(name, obj);
      }
      obj.incWIP(); // assume you're doing work on this starting now
      return obj;
   }
}

Work type I would look like:

MyObject obj = getObject(name);
synchronized(obj) {
   obj.workI();
}
obj.decWIP(); // finished doing work type I

And type II would look like:

MyObject obj = getObject(name);
synchronized(obj) {
   obj.workII();
}
if (obj.decWIP() == 0) { // finished with this work and all others
   synchronized(map) {
      // double-check the value because we checked previously without the map lock
      if (obj.getWIP() == 0) {
         map.remove(name);
      }
   }
}
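
A refinement worth considering (my suggestion, not part of the original answer): release the counter in a finally block, so that an exception thrown during the work cannot leak the in-progress count and keep the object in the map forever. For type II that would look like:

MyObject obj = getObject(name);
try {
   synchronized(obj) {
      obj.workII();
   }
} finally {
   if (obj.decWIP() == 0) { // finished with this work and all others
      synchronized(map) {
         if (obj.getWIP() == 0) { // double-check under the map lock
            map.remove(name);
         }
      }
   }
}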

OTHER TIPS

The requirement seems to be this:

  • There is a Map<String, Object> that is a cache.
  • There are a number of worker threads in a pool that access the cache.
  • Some types of work require the object in the cache to be invalidated when they are done.

First you will need a ConcurrentHashMap<String, Lock> keys. This Map will store a relationship between the String keys and the Lock objects that we will use to lock the keys. This allows us to replace the key -> value mappings without locking the entire data Map.

Next you will need a ConcurrentHashMap<String, Object> data. This Map will store the actual mappings.

The reason to use a ConcurrentHashMap rather than a plain one is that it is thread safe, which means that manual synchronization is not required. The implementation actually divides the Map into segments and only locks the segment required to carry out an operation - this makes it more efficient.
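
As a small illustration (not from the original answer), computeIfAbsent gives the one-lock-per-key guarantee that the logic below relies on: every caller gets the same Lock instance for a given key, no matter which thread created it.

ConcurrentHashMap<String, Lock> keys = new ConcurrentHashMap<>();

Lock first  = keys.computeIfAbsent("name", k -> new ReentrantLock());
Lock second = keys.computeIfAbsent("name", k -> new ReentrantLock());
assert first == second; // both callers see the same lock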

Now, the logic will be:

  1. computeIfAbsent a new ReentrantLock into keys. This will, in a thread safe manner, check whether a lock is already present for a key: if not, a new one is created and added; otherwise the existing one is returned. This means that there will only ever be one lock per key. (Note that putIfAbsent returns null on first insertion, so computeIfAbsent is the right call here.)
  2. Acquire a lock. This means that you gain exclusive access to a mapping.
  3. Do work. In the case of TypeII remove the mapping from data after finishing.
  4. Unlock the lock.

The code would look something like this:

private final ConcurrentHashMap<String, Object> data = new ConcurrentHashMap<>();
private final ConcurrentHashMap<String, Lock> keys = new ConcurrentHashMap<>();
private final ExecutorService executorService = null; //obviously make one of these

@RequiredArgsConstructor
private class TypeI implements Runnable {

    private final String key;
    private final Work work;

    @Override
    public void run() {
        final Lock lock = keys.computeIfAbsent(key, k -> new ReentrantLock());
        lock.lock();
        try {
            final Object value = data.get(key); // may be null if no mapping exists yet
            work.doWork(value);
        } finally {
            lock.unlock();
        }
    }
}

@RequiredArgsConstructor
private class TypeII implements Runnable {

    private final String key;
    private final Work work;

    @Override
    public void run() {
        final Lock lock = keys.computeIfAbsent(key, k -> new ReentrantLock());
        lock.lock();
        try {
            final Object value = data.get(key);
            work.doWork(value);
            data.remove(key);
        } finally {
            lock.unlock();
        }
    }
}

public static interface Work {

    void doWork(Object value);
}

public void doTypeIWork(final String key, final Work work) {
    executorService.submit(new TypeI(key, work));
}

public void doTypeIIWork(final String key, final Work work) {
    executorService.submit(new TypeII(key, work));
}

I have used Lombok annotations to reduce the amount of clutter.

The idea is to minimise, or almost eliminate, the amount of common resource locking while still allowing a Thread to gain, if needed, exclusive access to a particular mapping.

To clean the keys Map you would need to guarantee that no work is currently ongoing and that no threads will try to acquire any locks during the cleaning period. You could do this by attempting to acquire the relevant lock and then removing the mapping from the keys Map - this ensures no other thread is using the lock at the time, as sketched below.
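
A minimal sketch of that cleanup, assuming the keys and data Maps from above (the tryLock-based eviction is one reading of the suggestion, not code from the original answer):

// Attempt to evict a single key; returns true if the entry was removed.
boolean tryEvict(final String key) {
    final Lock lock = keys.get(key);
    if (lock == null) {
        return true; // nothing to clean up
    }
    if (lock.tryLock()) { // succeeds only if no other thread holds the lock
        try {
            keys.remove(key);
            data.remove(key);
            return true;
        } finally {
            lock.unlock();
        }
    }
    return false; // a worker is busy with this key; try again later
}

This is still best-effort: a thread that looked the lock up just before eviction can end up holding a lock that is no longer in the Map, so in practice you would only evict keys known to be quiescent.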

You could run a scheduled task that clears, say, 20 keys from the Map every X minutes. If you implement it as an LRU cache, it should stay fairly clean. Google Guava provides an implementation that you could use, for example:
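
For illustration, a size-bounded Guava cache that evicts entries in roughly least-recently-used order (the size and timing values here are placeholders):

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.TimeUnit;

Cache<String, Object> cache = CacheBuilder.newBuilder()
        .maximumSize(1000)                       // LRU-style size bound
        .expireAfterAccess(10, TimeUnit.MINUTES) // drop idle entries
        .build();

cache.put("name", new Object());
Object value = cache.getIfPresent("name"); // null once evicted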

1) First, use a hash map to store your data objects: objHash.

2) You need an extra lock to ensure the atomic execution of the type 1 and type 2 operations on objHash. Type 2 operations are writes and type 1 operations are reads, so you can use a read-write lock, storing the locks in a second hash table, lockHash.

3) To make the type 1/type 2 operations on a data object atomic, you have to enclose them in a synchronized statement that takes a lock on that data object.

public class ConDeleteHash {
    ConcurrentHashMap<String, Object> objHash = new ConcurrentHashMap<String, Object>();
    ConcurrentHashMap<String, ReentrantReadWriteLock> lockHash = new ConcurrentHashMap<String, ReentrantReadWriteLock>();
    void Type1Op(String name) {
        ReadWriteLock rwl = lockHash.get(name);
        if (rwl == null) return;
        Lock lock = rwl.readLock();
        lock.lock();
        try {
            Object obj = objHash.get(name);
            if (obj == null) return; // entry already removed by a type 2 operation
            synchronized (obj) {
                System.out.println("TYPE1 to :" + obj.toString());
            }
        } finally {
            lock.unlock(); // always release, even on the early return above
        }
    }
    void Type2Op(String name) {
        ReadWriteLock rwl = lockHash.get(name);
        if (rwl == null) return;
        Lock lock = rwl.writeLock();
        lock.lock(); // acquire the write lock before the exclusive work
        try {
            Object obj = objHash.get(name);
            if (obj == null) return; // entry already removed
            synchronized (obj) {
                System.out.println("TYPE2 to :" + obj.toString());
            }
            lockHash.remove(name);
            objHash.remove(name);
        } finally {
            lock.unlock();
        }
    }
    void add(String name, Object obj) {
        // putIfAbsent makes the check-and-insert atomic
        if (lockHash.putIfAbsent(name, new ReentrantReadWriteLock()) != null) return;
        objHash.put(name, obj);
    }
}

Delay removing from map

class MyObject {
    boolean active = true;
    ...
}


synchronized(obj) {
    if(obj.active){
        /* do something exclusively on obj */
        obj.active = false; //or not
    }
}


MyObject getObject(String name) {
   synchronized(map) {
      MyObject obj = map.get(name);
      if (obj == null) {
         obj = new MyObject();
         map.put(name, obj);
      } else {
         synchronized(obj) {
            if (!obj.active) {
               // any remove action here
               obj = new MyObject();
               map.put(name, obj); // no previous obj in map
            }
         }
      }
      return obj;
   }
}

How about this: a slightly modified version of Boris the Spider's answer.

Main class with a ConcurrentHashMap to hold the workers:

public class Concurrent {
    // Hash map to hold workers
    final ConcurrentHashMap<String, Work> jobs = new ConcurrentHashMap<>();

Work Interface

    interface Work {
        void doWork(Object value);
    }

Base class for blocking works, meaning only one piece of work can be performed by an instance at a time:

abstract class BaseWork implements Work {

    String name;

    Lock lock = new ReentrantLock();

    BaseWork(String name) {
        this.name = name;
    }

    @Override
    public void doWork(Object value) {
        lock.lock(); // lock the following block
        try {
            if (jobs.get(name) != null) { // guard against threads that were waiting while a completed Type II work removed this instance
                performTask(value);
                System.out.println("Job Completed");
            } else {
                // A new job has to be triggered; this branch is only reached after Type II work, so create a Type2Work.
                // computeIfAbsent returns the existing worker or the newly created one.
                jobs.computeIfAbsent(name, n -> new Type2Work(n)).doWork(value);
                System.out.println("Removed.. Job terminated");
            }
        } finally {
            lock.unlock(); // unlock this block, so other threads can start working
        }
    }

    abstract void performTask(Object value); // actual job
}

Here, name would be the same as the key in the ConcurrentHashMap. As soon as doWork is called, it locks the block where the actual work gets executed.

Type 1 and Type 2 implementations:

class Type1Work extends BaseWork {

    Type1Work(String name) {
        super(name);
    }

    @Override
    void performTask(Object value) {
        // Do type 1 work
    }
}

class Type2Work extends BaseWork {

    Type2Work(String name) {
        super(name);
    }

    @Override
    void performTask(Object value) {
        // Do type 2 work.
        jobs.remove(name);
    }
}

Non-blocking work - a type III kind of work (doWork can perform work without sharing any information between threads):

class NonLockingWork implements Work {

    @Override
    public void doWork(Object value) {
        // Do thread-safe, non-blocking work (type III)
    }
}

Finally, a block to load the work into the Map:

    String key = "type1-name1";
    // computeIfAbsent returns the existing worker, or creates one
    Work work = jobs.computeIfAbsent(key, k -> new Type1Work(k));
    work.doWork(new Object());
}

There is a problem with the logic of this question: Why are you deleting an object from the map if a user is allowed to ask for type 1 operations after type 2 operations? If a user is allowed to submit a type 1 after a type 2 there will always be cases where a type 1 is requested after an object is deleted. In which case why are you deleting it?

Is the issue that a client can only submit type 1 operations before type 2 operations, but you cannot guarantee that one will be carried out before the other by the executor service? In that case use a priority executor service and submit type 1 with a higher priority than type 2, as that will ensure a type 1 is always started before a type 2 if both are pending (a sketch follows below). Since the type 2 cannot start until the type 1 is finished, this means a delete is always carried out after type 1, provided type 1 is submitted before type 2.
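
The JDK has no ready-made priority executor, but one can be sketched with a ThreadPoolExecutor over a PriorityBlockingQueue (all names here are illustrative; note that tasks must go through execute(), since submit() wraps them in a non-comparable FutureTask):

import java.util.concurrent.*;

class PrioritizedTask implements Runnable, Comparable<PrioritizedTask> {
    final int priority; // lower value runs first
    final Runnable delegate;

    PrioritizedTask(int priority, Runnable delegate) {
        this.priority = priority;
        this.delegate = delegate;
    }

    @Override public void run() { delegate.run(); }

    @Override public int compareTo(PrioritizedTask other) {
        return Integer.compare(priority, other.priority);
    }
}

ExecutorService pool = new ThreadPoolExecutor(
        4, 4, 0L, TimeUnit.MILLISECONDS,
        new PriorityBlockingQueue<Runnable>());

pool.execute(new PrioritizedTask(1, () -> { /* type 1 work */ }));
pool.execute(new PrioritizedTask(2, () -> { /* type 2 work */ }));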

This feels like a case where bad program design elsewhere has resulted in an insoluble dilemma. If you can explain how these odd specifications came about, then we might be able to craft a more durable solution.

Also, in this type of concurrency, don't delete objects from the map; replace them with a singleton object that is a subclass of your object, and have the doWork method check for this singleton. This is more reliable than null checking: nulls can arise for lots of reasons, whereas your singleton is passed to doWork for one particular reason, which makes error tracing easier later in development.
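
A minimal sketch of that tombstone idea (the REMOVED sentinel and its check are illustrative, not from the original answers):

class MyObject {
    // Singleton sentinel marking a removed entry; put this in the map instead of removing the key.
    static final MyObject REMOVED = new MyObject() { };

    void doWork() {
        if (this == REMOVED) {
            throw new IllegalStateException("work requested on a removed entry");
        }
        // ... actual work ...
    }
}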

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow