Writing a correct micro-benchmark is time-consuming and error-prone. I would suggest writing micro-benchmarks only with an existing library that is specifically designed for the task, such as Caliper.
Your micro-benchmark has quite a few flaws that will lead to unpredictable results:
- You do no warmup runs, so your timings include interpretation and JIT compilation overhead.
- You benchmark both approaches inside a single main method, which makes it harder for the JIT compiler to optimize each loop independently.
- The expression "z = x + y;" effectively reduces to "z = 9 + 9;" and never changes during the loop, so the JIT compiler may fold it to the constant "z = 18;" and, if the result is never used, optimize the loop away entirely.
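To illustrate the last two points, here is a minimal hand-rolled sketch (the class and method names are my own, not part of any library): it warms the loop up before timing it, and it returns the accumulated value so the JIT compiler cannot discard the loop body as dead code.

```java
public class DeadCodeDemo {

    // The JIT may fold x + y into the constant 18; returning 'dummy'
    // keeps the loop from being eliminated as dead code.
    static int loop(long reps) {
        int dummy = 0;
        int x = 9, y = 9;
        for (long j = 0; j < reps; j++) {
            dummy = x + y;
        }
        return dummy;
    }

    public static void main(String[] args) {
        // Warmup runs give the JIT a chance to compile the loop
        // before we start measuring.
        for (int i = 0; i < 10; i++) {
            loop(100_000);
        }
        long start = System.nanoTime();
        int result = loop(1_000_000);
        long elapsed = System.nanoTime() - start;
        System.out.println(result + " in " + elapsed + " ns");
    }
}
```

Even this sketch is only a rough approximation of what Caliper does for you automatically (multiple trials, statistical analysis, separate VM invocations), which is exactly why using the library is preferable.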
Anyway, here is the code for a corresponding benchmark written with Caliper:
import com.google.caliper.Benchmark;
import com.google.caliper.api.VmOptions;

@VmOptions("-server")
public class Test {

    @Benchmark
    public int timeSum1(long reps) {
        int dummy = 0;
        int x = 9, y = 9;
        // Use a long counter: reps is a long, so an int counter could overflow.
        for (long j = 0; j < reps; j++) {
            dummy = x + y;
        }
        // Returning the result prevents the JIT from eliminating the loop.
        return dummy;
    }

    @Benchmark
    public int timeSum2(long reps) {
        int dummy = 0;
        int x = 9, y = 9;
        for (long j = 0; j < reps; j++) {
            dummy = sum(x, y);
        }
        return dummy;
    }

    public static int sum(int x, int y) {
        int t;
        t = x + y;
        return t;
    }
}
You can have a look at the results for this benchmark here:
The results are as expected: both approaches take about the same time, because the JIT compiler can inline the call to sum(), after which the two loops are effectively identical. When running with -server, both approaches still take about the same time but are optimized a little better.