Question

I'm trying to use double-checked locking to maintain an array of binomial coefficients, but I read recently that double-checked locking doesn't work. Efficiency is extremely important, so using volatile isn't an option unless it's only used inside the conditional statements. I can't see a way to use a static class with a singleton object (this is part of a framework, and I don't know what kinds of numbers people will need the function for, so I can't guess what the maximum chosen value will be or whether the function will be used at all). The only thing I can think of is to make everything non-static and insist that each thread that needs to use this method instantiate a Choose object with its own array. It seems like that shouldn't be necessary.

public static final class Util {
/**
 * Static array of nCr values
 */
public static long[][] nCr_arr;

/**
 * Calculate binomial coefficient (n k)
 * 
 * @param n
 *            n
 * @param k
 *            k
 * @return n choose k
 */
public static long nCr(int n, int k) {
    if (k < 0)
        throw new ArithmeticException("Cannot choose a negative number");
    if (n < 0) {
        if (k % 2 == 0)
            return nCr(-n + k - 1, k);
        else
            return -nCr(-n + k - 1, k);
    }
    if (k > n)
        return 0;
    if (k > n / 2)
        k = n - k;
    if (nCr_arr == null) {
        synchronized (Util.class) {
            if (nCr_arr == null)
                nCr_arr = new long[n + 1][];
        }
    }
    if (nCr_arr.length <= n) {
        synchronized (Util.class) {
            if (nCr_arr.length <= n) {
                long[][] newNCR = new long[n + 1][];
                System.arraycopy(nCr_arr, 0, newNCR, 0, nCr_arr.length);
                nCr_arr = newNCR;
            }
        }
    }
    if (nCr_arr[n] == null) {
        synchronized (Util.class) {
            if (nCr_arr[n] == null)
                nCr_arr[n] = new long[k + 1];
        }
    }
    if (nCr_arr[n].length <= k) {
        synchronized (Util.class) {
            if (nCr_arr[n].length <= k) {
                long[] newNCR = new long[k + 1];
                System.arraycopy(nCr_arr[n], 0, newNCR, 0,
                        nCr_arr[n].length);
                nCr_arr[n] = newNCR;
            }
        }
    }
    if (nCr_arr[n][k] == 0) {
        if (k == 0)
            nCr_arr[n][k] = 1;
        else
            nCr_arr[n][k] = nCr(n, k - 1) * (n - (k - 1)) / k;
    }
    return nCr_arr[n][k];
}
}

Solution 6

I ended up just making it not static. If a thread needs to get nCr values, it creates a new Coefficient object and holds onto it.
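For illustration, a minimal sketch of what that non-static approach might look like, assuming a class named Coefficient that each thread instantiates and keeps to itself (the negative-n handling from the question is omitted for brevity):

public final class Coefficient {
    /**
     * Per-thread cache of nCr values. Each thread creates and keeps its own
     * instance, so no synchronization is needed at all.
     */
    private long[][] cache;

    public long nCr(int n, int k) {
        if (k < 0)
            throw new ArithmeticException("Cannot choose a negative number");
        if (k > n)
            return 0;
        if (k > n / 2)
            k = n - k;
        // Grow the outer array if this n has never been seen.
        if (cache == null || cache.length <= n) {
            long[][] grown = new long[n + 1][];
            if (cache != null)
                System.arraycopy(cache, 0, grown, 0, cache.length);
            cache = grown;
        }
        // Grow the row for this n if it is missing or too short.
        if (cache[n] == null || cache[n].length <= k) {
            long[] grown = new long[k + 1];
            if (cache[n] != null)
                System.arraycopy(cache[n], 0, grown, 0, cache[n].length);
            cache[n] = grown;
        }
        if (cache[n][k] == 0)
            cache[n][k] = (k == 0) ? 1 : nCr(n, k - 1) * (n - (k - 1)) / k;
        return cache[n][k];
    }
}

Each thread pays the memory cost of its own table, but all the locking and visibility problems disappear.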

OTHER TIPS

Well, you could always avoid double-checked locking by changing the code from:

if (nCr_arr == null) {
    synchronized (Util.class) {
        if (nCr_arr == null)
            nCr_arr = new long[n + 1][];
    }
}

to this:

synchronized (Util.class) {
    if (nCr_arr == null)
        nCr_arr = new long[n + 1][];
}

I bet the performance impact would be very small.

Are you sure you need to optimize this? Have you profiled running code and found the single lock is too expensive?

Or rewrite your code using the Java concurrency locks API (http://download.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/locks/ReadWriteLock.html) and obtain the write lock only when it's really needed.
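Sketching roughly what that could look like with ReentrantReadWriteLock (an illustration, not the asker's code; the negative-n case is left out): the shared read lock covers the common lookup, and the exclusive write lock is taken only when the cache actually has to grow.

import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public final class Util {
    private static final ReadWriteLock LOCK = new ReentrantReadWriteLock();
    private static long[][] cache = new long[1][];

    public static long nCr(int n, int k) {
        if (k < 0)
            throw new ArithmeticException("Cannot choose a negative number");
        if (k > n)
            return 0;
        if (k > n / 2)
            k = n - k;

        // Fast path: a shared read lock is enough to return an already-computed value.
        LOCK.readLock().lock();
        try {
            if (cache.length > n && cache[n] != null && cache[n].length > k
                    && cache[n][k] != 0)
                return cache[n][k];
        } finally {
            LOCK.readLock().unlock();
        }

        // Slow path: the read lock cannot be upgraded, so release it, take the
        // write lock, and re-check in case another thread filled the value in.
        LOCK.writeLock().lock();
        try {
            if (cache.length <= n) {
                long[][] grown = new long[n + 1][];
                System.arraycopy(cache, 0, grown, 0, cache.length);
                cache = grown;
            }
            if (cache[n] == null || cache[n].length <= k) {
                long[] grown = new long[k + 1];
                if (cache[n] != null)
                    System.arraycopy(cache[n], 0, grown, 0, cache[n].length);
                cache[n] = grown;
            }
            if (cache[n][k] == 0) {
                long c = 1;
                for (int i = 1; i <= k; i++)
                    c = c * (n - (i - 1)) / i;   // multiplicative formula, exact at each step
                cache[n][k] = c;
            }
            return cache[n][k];
        } finally {
            LOCK.writeLock().unlock();
        }
    }
}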

Given you are using this in a very performance critical part of your code, I recommend ditching the lazy initialization idea, because it requires several additional comparisons to be performed for each access to a coefficient.

Instead, I'd require the user of your library to manually specify how many coefficients she needs at initialization time. Alternatively, I'd precompute more than the user is ever likely to need - you can fit all nCk for n < 1000 into a few megabytes of memory.

PS: Might I suggest you use the recursive formula to compute a coefficient?

c[n][k] = c[n-1][k-1] + c[n-1][k]

It won't matter much, but why use a complicated formula when all you need is Pascal's triangle?
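Combining the two suggestions, a minimal sketch of an eagerly built table using that recurrence might look like this (the class name BinomialTable and the maxN parameter are assumptions for illustration):

/**
 * Pascal's triangle precomputed up to a caller-specified maximum n.
 * The table is built once, so lookups need no locking at all.
 */
public final class BinomialTable {
    private final long[][] c;

    public BinomialTable(int maxN) {
        c = new long[maxN + 1][];
        for (int n = 0; n <= maxN; n++) {
            c[n] = new long[n + 1];
            c[n][0] = c[n][n] = 1;
            for (int k = 1; k < n; k++) {
                // Pascal's triangle: C(n, k) = C(n-1, k-1) + C(n-1, k)
                // (beware: this overflows long around n = 67 for middle values of k)
                c[n][k] = c[n - 1][k - 1] + c[n - 1][k];
            }
        }
    }

    public long nCr(int n, int k) {
        if (k < 0)
            throw new ArithmeticException("Cannot choose a negative number");
        if (k > n)
            return 0;
        return c[n][k];
    }
}

Because the table is immutable after construction, a single instance can be shared across threads through a final field with no synchronization at all.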

It looks like you are building a cache of the results as they are calculated, so you could use a concurrent map to hold the results, keyed by a single long that combines the two int values.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public final class Util {
    /**
     * Cache of nCr values, keyed by (n, k) packed into a single long
     */
    private static final ConcurrentMap<Long,Long> CACHE = 
        new ConcurrentHashMap<Long, Long>();

    /**
     * Calculate binomial coefficient (n k)
     * 
     * @param n
     *            n
     * @param k
     *            k
     * @return n choose k
     */
    public static long nCr(int n, int k) {
        if (k < 0)
            throw new ArithmeticException("Cannot choose a negative number");
        if (n < 0) {
            if (k % 2 == 0)
                return nCr(-n + k - 1, k);
            else
                return -nCr(-n + k - 1, k);
        }

        if (k > n)
            return 0;
        if (k > n / 2)
            k = n - k;

        final long key = ((long) n << 32) + k; // widen n to long first: shifting an int by 32 is a no-op

        Long value = CACHE.get(key);
        if (value != null) {
            return value.longValue();
        } 

        long result;

        if (k == 0)
            result = 1;
        else
            result = nCr(n, k - 1) * (n - (k - 1)) / k;

        CACHE.put(key, result);

        return result;
    }
}

The original code has way too many race conditions. For starters, you can't update a non-volatile nCr_arr and expect the double-checked locking idiom to work, and declaring it volatile largely defeats the purpose of the cache. Proper code wouldn't use synchronization at all, just CAS.

CHM is a very bad choice here as well; it doesn't scale well for this access pattern. Using Long as the key isn't great either, because of how Long.valueOf works: it doesn't always create a new object, so HotSpot can't properly inline it, and the final value field doesn't help either.

If anyone is (still) interested in how to write that code, drop a note. Cheers.
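That commenter never posted the code, but as a rough sketch of the CAS idea (an illustration, not their implementation): treat the whole table as an immutable snapshot held in an AtomicReference and publish a grown copy with compareAndSet, retrying if another thread got there first.

import java.util.concurrent.atomic.AtomicReference;

public final class Util {
    // Every snapshot is immutable once published; growth builds a new snapshot.
    private static final AtomicReference<long[][]> TABLE =
            new AtomicReference<long[][]>(new long[0][]);

    public static long nCr(int n, int k) {
        if (k < 0)
            throw new ArithmeticException("Cannot choose a negative number");
        if (k > n)
            return 0;
        if (k > n / 2)
            k = n - k;

        long[][] t = TABLE.get();
        if (t.length > n && t[n] != null && t[n].length > k && t[n][k] != 0)
            return t[n][k];                       // fast path: no locks, no CAS

        // Compute the missing value locally (multiplicative formula, exact at each step).
        long value = 1;
        for (int i = 1; i <= k; i++)
            value = value * (n - (i - 1)) / i;

        // Publish a grown snapshot; retry if another thread replaced the table meanwhile.
        while (true) {
            long[][] current = TABLE.get();
            if (current.length > n && current[n] != null
                    && current[n].length > k && current[n][k] != 0)
                return current[n][k];             // someone else published it first
            long[][] next = new long[Math.max(current.length, n + 1)][];
            System.arraycopy(current, 0, next, 0, current.length);
            long[] oldRow = next[n];
            long[] row = new long[Math.max(oldRow == null ? 0 : oldRow.length, k + 1)];
            if (oldRow != null)
                System.arraycopy(oldRow, 0, row, 0, oldRow.length);
            row[k] = value;
            next[n] = row;
            if (TABLE.compareAndSet(current, next))
                return value;
        }
    }
}

There is still duplicated work when two threads race for the same missing value, but no thread ever blocks, and the published snapshot is always consistent for readers.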

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow