Question

Which of the following would perform better on a Java 6 HotSpot VM?

final Map<Foo,Bar> map = new HashMap<Foo,Bar>(someNotSoLargeNumber);    
for (int i = 0; i < someLargeNumber; i++)
{
  doSomethingWithMap(map);
  map.clear();
}

or

final int someNotSoLargeNumber = ...;
for (int i = 0; i < someLargeNumber; i++)
{
  final Map<Foo,Bar> map = new HashMap<Foo,Bar>(someNotSoLargeNumber);      
  doSomethingWithMap(map);
}

I think both are equally clear in intent, so I don't think style or added complexity is an issue here.

Intuitively it looks like the first one would be better, as there's only one 'new'. However, given that no reference to the map is held onto, would HotSpot be able to work out that a map of the same size (an Entry[someNotSoLargeNumber] internally) is created on each iteration, and reuse the same block of memory (i.e. skip most of the allocation work and just zero the memory, which might even be quicker than calling clear() on every iteration)?
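
To spell out the assumption behind that: as far as I can tell from the Java 6 sources, clear() keeps the existing backing Entry[] array and just nulls its slots, whereas new HashMap(n) allocates a fresh bucket array up front. The snippet below (placeholder types, purely for illustration) is what I mean:

import java.util.HashMap;
import java.util.Map;

public class ClearCost
{
  public static void main(String[] args)
  {
    // As far as I can tell from the Java 6 sources, clear() walks the existing
    // Entry[] bucket array and nulls each slot; the array itself is kept, so
    // clearing allocates nothing. Creating a new HashMap(n), by contrast,
    // allocates a fresh bucket array up front.
    final Map<String, String> map = new HashMap<String, String>(64);
    map.put("key", "value");
    map.clear();                                                     // reuses the backing array
    final Map<String, String> fresh = new HashMap<String, String>(64); // allocates a new array
    System.out.println(map.isEmpty() + " " + fresh.isEmpty());
  }
}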

An acceptable answer would be a link to a document describing the different types of optimisations the HotSpot VM can actually do, and how to write code to assist HotSpot (rather than naive attempts at optimising the code by hand).


Solution

Don't spend your time on such micro-optimizations unless your profiler says you should. In particular, Sun claims that modern garbage collectors handle short-lived objects very well, and that allocation with new keeps getting cheaper.
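
If you do want numbers rather than intuition, measure it. Below is a minimal benchmark sketch, assuming a newer JDK than the Java 6 VM discussed here (JMH itself post-dates that era); MapBenchmark, ENTRIES and ITERATIONS are names I've made up, and Integer/String plus a trivial put loop stand in for Foo, Bar and doSomethingWithMap:

import java.util.HashMap;
import java.util.Map;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.infra.Blackhole;

public class MapBenchmark
{
  private static final int ENTRIES = 64;        // stand-in for someNotSoLargeNumber
  private static final int ITERATIONS = 10000;  // stand-in for someLargeNumber

  // Trivial stand-in for doSomethingWithMap; the Blackhole stops the JIT from
  // discarding the work as dead code.
  private static void doSomethingWithMap(Map<Integer, String> map, Blackhole bh)
  {
    for (int i = 0; i < ENTRIES; i++)
    {
      map.put(i, "value" + i);
    }
    bh.consume(map);
  }

  @Benchmark
  public void reuseAndClear(Blackhole bh)
  {
    final Map<Integer, String> map = new HashMap<Integer, String>(ENTRIES);
    for (int i = 0; i < ITERATIONS; i++)
    {
      doSomethingWithMap(map, bh);
      map.clear();
    }
  }

  @Benchmark
  public void allocatePerIteration(Blackhole bh)
  {
    for (int i = 0; i < ITERATIONS; i++)
    {
      final Map<Integer, String> map = new HashMap<Integer, String>(ENTRIES);
      doSomethingWithMap(map, bh);
    }
  }
}

If the two variants come out within noise of each other, which is likely once doSomethingWithMap does real work, the choice is purely a matter of style.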

OTHER TIPS

That's a pretty tight loop over a "fairly large number", so generally I would say move the instantiation outside of the loop. Overall, though, my guess is that you aren't going to notice much of a difference: I am willing to bet that your doSomethingWithMap will take up the majority of the time, giving the GC plenty of opportunity to keep up.
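
If you want to check that guess rather than take my word for it, a rough sketch along these lines reports how much of the loop's wall-clock time is actually spent in GC; GcCostCheck, the placeholder constants and the trivial put are my own stand-ins, not part of the original code:

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.HashMap;
import java.util.Map;

public class GcCostCheck
{
  // Sum of cumulative GC time across all collectors, in milliseconds.
  private static long totalGcMillis()
  {
    long total = 0;
    for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans())
    {
      long t = gc.getCollectionTime(); // -1 if the collector does not report it
      if (t > 0)
      {
        total += t;
      }
    }
    return total;
  }

  public static void main(String[] args)
  {
    final int someNotSoLargeNumber = 64;   // placeholder value
    final int someLargeNumber = 1000000;   // placeholder value

    long gcBefore = totalGcMillis();
    long start = System.nanoTime();
    long sink = 0; // keeps the loop's result observable so the work cannot be optimised away

    for (int i = 0; i < someLargeNumber; i++)
    {
      final Map<Integer, String> map = new HashMap<Integer, String>(someNotSoLargeNumber);
      map.put(i, "value"); // trivial stand-in for doSomethingWithMap
      sink += map.size();
    }

    long elapsedMillis = (System.nanoTime() - start) / 1000000;
    long gcMillis = totalGcMillis() - gcBefore;
    System.out.println("elapsed: " + elapsedMillis + " ms, GC: " + gcMillis + " ms, sink: " + sink);
  }
}

If the GC figure is a tiny fraction of the elapsed time, reusing one map instead of allocating a fresh one per iteration is unlikely to buy you anything measurable.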

Licensed under: CC-BY-SA with attribution