Question

I am running Ubuntu on a machine with a quad-core CPU. I have written some test Java code that spawns a given number of threads, each of which simply increments a volatile variable for a given number of iterations.

I would expect the running time not to increase significantly while the number of threads is less than or equal to the number of cores, i.e. 4. In fact, these are the times I get, taking the "real" value from the UNIX time command:

1 thread: 1.005s

2 threads: 1.018s

3 threads: 1.528s

4 threads: 1.982s

5 threads: 2.479s

6 threads: 2.934s

7 threads: 3.356s

8 threads: 3.793s

This shows that adding a second thread barely increases the time, as expected, but the time then does increase with 3 and 4 threads.
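(Each figure is the "real" line from a run along the lines of javac Example.java && time java Example 3, using the code shown below.)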

At first I thought this could be because the OS was preventing the JVM from using all the cores, but I ran top, and it clearly showed that with 3 threads, 3 cores were running at ~100%, and with 4 threads, 4 cores were maxed out.

My question is: why does the code running on 3 or 4 CPUs not run at roughly the same speed as on 1 or 2, given that it is running in parallel on all the cores?

Here is my code for reference:

class Example implements Runnable {

    // using this so the compiler does not optimise the computation away
    volatile int temp;

    void delay(int arg) {
        for (int i = 0; i < arg; i++) {
            for (int j = 0; j < 1000000; j++) {
                this.temp += i + j;
            }
        }
    }

    int arg;
    int result;

    Example(int arg) {
        this.arg = arg;
    }

    public void run() {
        delay(arg);
        result = 42;
    }

    public static void main(String[] args) {

        // Get the number of threads (the command line arg)

        int numThreads = 1;
        if (args.length > 0) {
            try {
                numThreads = Integer.parseInt(args[0]);
            } catch (NumberFormatException nfe) {
                System.out.println("First arg must be the number of threads!");
            }
        }

        // Start up the threads

        Thread[] threadList = new Thread[numThreads];
        Example[] exampleList = new Example[numThreads];
        for (int i = 0; i < numThreads; i++) {
            exampleList[i] = new Example(1000);
            threadList[i] = new Thread(exampleList[i]);
            threadList[i].start();
        }

        // wait for the threads to finish

        for (int i = 0; i < numThreads; i++) {
            try {
                threadList[i].join();
                System.out.println("Joined with thread, ret=" + exampleList[i].result);
            } catch (InterruptedException ie) {
                System.out.println("Caught " + ie);
            }
        }
    }
}

Solution 2

The Core i5 in a Lenovo X1 Carbon is not a quad-core processor. It is a two-core processor with hyperthreading. When you are performing only trivial operations that do not cause frequent, long pipeline stalls, the hyperthreading scheduler has little opportunity to weave other operations into a stalled pipeline, so you won't see performance equivalent to four actual cores.
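As a side note, you can see what the JVM is being told from Java itself: Runtime.availableProcessors() reports logical processors (hyperthreads included), not physical cores, so on a two-core machine with hyperthreading it returns 4. A minimal check (CpuCount is just an illustrative name):

class CpuCount {
    public static void main(String[] args) {
        // Reports logical processors as seen by the OS, so a two-core
        // CPU with hyperthreading prints 4 here, not 2.
        System.out.println("Logical processors: "
                + Runtime.getRuntime().availableProcessors());
    }
}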

OTHER TIPS

Using multiple CPUs helps up to the point where you saturate some underlying resource.

In your case, the underlying resource is not the number of CPUs but the number of L1 caches. It appears you have two cores, each with its own L1 data cache, and since every iteration performs a volatile write, the L1 caches are your limiting factor here.

Try accessing the L1 cache less with

public class Example implements Runnable {
    // using this so the compiler does not optimise the computation away
    volatile int temp;

    void delay(int arg) {
        for (int i = 0; i < arg; i++) {
            int temp = 0;
            for (int j = 0; j < 1000000; j++) {
                temp += i + j;
            }
            this.temp += temp;
        }
    }

    int arg;
    int result;

    Example(int arg) {
        this.arg = arg;
    }

    public void run() {
        delay(arg);
        result = 42;
    }

    public static void main(String... ignored) {

        int MAX_THREADS = Integer.getInteger("max.threads", 8);
        long[] times = new long[MAX_THREADS + 1];
        for (int numThreads = MAX_THREADS; numThreads >= 1; numThreads--) {
            long start = System.nanoTime();

            // Start up the threads

            Thread[] threadList = new Thread[numThreads];
            Example[] exampleList = new Example[numThreads];
            for (int i = 0; i < numThreads; i++) {
                exampleList[i] = new Example(1000);
                threadList[i] = new Thread(exampleList[i]);
                threadList[i].start();
            }

            // wait for the threads to finish

            for (int i = 0; i < numThreads; i++) {
                try {
                    threadList[i].join();
                    System.out.println("Joined with thread, ret=" + exampleList[i].result);
                } catch (InterruptedException ie) {
                    System.out.println("Caught " + ie);
                }
            }
            long time = System.nanoTime() - start;
            times[numThreads] = time;
            System.out.printf("%d: %.1f ms%n", numThreads, time / 1e6);
        }
        for (int i = 2; i <= MAX_THREADS; i++)
            System.out.printf("%d: %.3f time %n", i, (double) times[i] / times[1]);
    }
}
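The length of the sweep is controlled by the max.threads system property (read via Integer.getInteger above), so you can run it with, for example, java -Dmax.threads=8 Example.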

On my dual-core, hyperthreaded laptop it produces output in the form threads: factor (time relative to the single-thread run):

2: 1.093 time 
3: 1.180 time 
4: 1.244 time 
5: 1.759 time 
6: 1.915 time 
7: 2.154 time 
8: 2.412 time 

compared with the original test of

2: 1.092 time 
3: 2.198 time 
4: 3.349 time 
5: 3.079 time 
6: 3.556 time 
7: 4.183 time 
8: 4.902 time 

A common resource to over-utilise is the L3 cache. This is shared across CPUs, and while it allows a degree of concurrency, it doesn't scale well above two CPUs. I suggest you check what your Example code is doing and make sure the threads can run independently without using any shared resources, e.g. most chips have a limited number of FPUs.

There are several things that can limit how effectively you can multi-thread an application.

  1. Saturation of a resource such as memory or bus bandwidth.

  2. Locking/contention issues (for example, threads constantly having to wait for each other to finish; see the sketch after this list).

  3. Other processes running on the system.
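As a minimal illustration of point 2 (a hypothetical sketch, not taken from the question's code): when every thread funnels through one lock, adding threads adds waiting rather than throughput.

class SharedCounter {
    private long count;

    // Every increment takes the same monitor, so the threads serialize
    // here and extra cores mostly sit idle waiting for the lock.
    synchronized void increment() {
        count++;
    }
}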

In your case every thread performs a volatile write on each iteration, which means the new value of that integer constantly has to be pushed out through the cache hierarchy instead of staying in a register. This will cause some level of contention and memory/bandwidth usage.

Try switching each thread to work on its own chunk of data, with no volatile variable. That should reduce all forms of contention.
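As a minimal sketch of that idea (ChunkedSum is a hypothetical name, not from the question's code): each thread sums its own slice of a shared array into a plain local variable and publishes a single result at the end.

class ChunkedSum implements Runnable {
    final int[] data;
    final int from, to; // this thread's slice: [from, to)
    long result;        // written once at the end, read after join()

    ChunkedSum(int[] data, int from, int to) {
        this.data = data;
        this.from = from;
        this.to = to;
    }

    public void run() {
        long sum = 0; // plain local: stays in a register, no per-iteration cache traffic
        for (int i = from; i < to; i++) {
            sum += data[i];
        }
        result = sum; // the only write visible outside this thread
    }
}

No volatile is needed here: reading result after Thread.join() is safe because join() establishes a happens-before edge.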

If you are running this on a Core i5 (which is as much as Google tells me about the Lenovo X1 Carbon), then you have a dual-core machine with 2 additional hyperthreaded logical cores. The i5 reports to the OS - and therefore to Java - as a quad-core, so the hyperthreads are used like real cores, but all they really do is speed up thread context switching.

That is why you get the expected minimal difference in execution time with 2 threads (1 per real core), and why the time does not rise linearly with additional threads: the 2 hyperthreads only take some minor load off the real cores.

There are already two good answers here, and both explain what is happening.

Look at your processor: most of the "quad core" parts from Intel are in fact dual cores that present themselves to the OS as quad cores (yes, they tell you that you have 4 cores, but in fact you have just 2). This is the best explanation for your problem, because the times increase the way they would on a dual-core processor.

If you do have a real quad core, then the other answers apply: your code has some contention.

Licensed under: CC-BY-SA with attribution