No algorithm can be faster than O(n) (have a look at the Wikipedia page for big-O notation) -- at least, not consistently, across all possible inputs. This does not mean nothing can run faster in practice; it just means that, beyond a certain problem size, anything faster can only win by a (probably small) constant factor, not by a better growth rate.
This is because, whatever order you examine the elements in, given an array that is "almost balanced" between the candidates, the last element you examine can turn out to be the tiebreaker. Give me any algorithm that doesn't look at all the elements, and I can construct an input array that makes it return an incorrect result. Therefore, you have to examine all of them at least once, which already costs O(n).
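To make the tiebreaker argument concrete, here is a minimal sketch (the method name `winner` and the sample arrays are mine, just for illustration): two arrays identical in every position except the last produce different winners, so any algorithm that skips that last position must answer wrongly on at least one of them.

```java
import java.util.HashMap;
import java.util.Map;

public class TieBreaker {
    // Returns the most frequent element of the array (ties broken arbitrarily).
    static int winner(int[] a) {
        Map<Integer, Integer> counts = new HashMap<>();
        for (int x : a) counts.merge(x, 1, Integer::sum); // count each occurrence
        int best = a[0];
        for (Map.Entry<Integer, Integer> e : counts.entrySet())
            if (e.getValue() > counts.get(best)) best = e.getKey();
        return best;
    }

    public static void main(String[] args) {
        // Identical except for the final element -- the tiebreaker:
        int[] a = {1, 2, 1, 2, 1};
        int[] b = {1, 2, 1, 2, 2};
        System.out.println(winner(a)); // prints 1
        System.out.println(winner(b)); // prints 2
    }
}
```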
Hashmaps have insert and lookup complexities of O(1) -- that is, on average, regardless of the size of the data, they take a constant amount of time to do their thing. Note that this constant is several times larger than, say, an array update/lookup (see TwoThe's answer). Therefore, except for constant factors (which will vary depending on hashmap implementation, machine, VM, and so on), you can't get much faster than the code you posted. If you really need that 10% extra performance, build a benchmark on hardware/software/input data as close as possible to your intended deployment, and optimize against that.
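As one example of shaving that constant factor: if you happen to know the keys are small non-negative integers, a plain `int[]` counter avoids boxing and hashing entirely. This is a sketch under that assumption (the method name `winnerByArray` and the `maxKey` parameter are mine); it is still O(n), just with a cheaper per-element cost.

```java
public class ArrayCount {
    // Assumes all elements are in the range [0, maxKey].
    // Same O(n) complexity as the hashmap version, smaller constant:
    // one array index per element instead of a hash + boxed lookup.
    static int winnerByArray(int[] a, int maxKey) {
        int[] counts = new int[maxKey + 1];
        int best = a[0];
        for (int x : a) {
            counts[x]++;
            if (counts[x] > counts[best]) best = x; // track leader as we go
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(winnerByArray(new int[]{1, 2, 1, 2, 1}, 2)); // prints 1
        System.out.println(winnerByArray(new int[]{1, 2, 1, 2, 2}, 2)); // prints 2
    }
}
```

The trade-off is memory proportional to the key range rather than to the number of distinct keys, so it only pays off when that range is small.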