Complexity-wise, they're all the same. Hash table complexity is given as average-case amortized O(1), because hash collisions, once you have a good hashing function, come down to a matter of luck. And the worst-case complexity of any hash table is O(N), no matter what you do.
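To make that worst case concrete, here's a minimal sketch using C++'s `std::unordered_map` with a deliberately degenerate hash (the `BadHash` name is mine): every key lands in the same bucket, so lookups degrade to a linear scan of one collision chain.

```cpp
#include <iostream>
#include <unordered_map>

// Deliberately degenerate hash: every key collides, so all entries
// pile into one bucket and lookup degrades from O(1) to O(N).
struct BadHash {
    std::size_t operator()(int) const { return 0; }
};

int main() {
    std::unordered_map<int, int, BadHash> table;
    for (int i = 0; i < 10000; ++i)
        table[i] = i;
    // Every find() now walks a single chain of ~10000 entries.
    std::cout << "bucket holds " << table.bucket_size(table.bucket(42))
              << " of " << table.size() << " entries\n";
}
```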
That said, useful implementations resize based on load factor, the ratio of total elements to number of buckets ("array size"). Waiting until every bucket has at least one entry would trigger sub-optimal behavior far too often. Even a load factor of 1 (N elements in N buckets) is probably too high; most implementations I've seen default to somewhere around 0.7 (7 elements for 10 buckets), and generally let the user configure the load factor (see C++ and Java, both). This trades memory for speed, and hash tables are often all about speed. In general, only profiling will show the right value for any given program.
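In C++, for instance, the threshold is exposed directly on `std::unordered_map` (the default maximum load factor there is 1.0; Java's `HashMap` takes the equivalent `loadFactor` constructor argument). A quick sketch:

```cpp
#include <iostream>
#include <unordered_map>

int main() {
    std::unordered_map<int, int> table;
    // Lower the threshold from the default 1.0 so the table rehashes
    // earlier, trading memory for shorter collision chains.
    table.max_load_factor(0.7f);
    for (int i = 0; i < 1000; ++i)
        table[i] = i;
    std::cout << "load factor:  " << table.load_factor() << '\n'
              << "bucket count: " << table.bucket_count() << '\n';
}
```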
Also, the size need not double. Typical vector implementations grow by 50% or 70% on each resize, because large-scale testing on real-world applications has shown that to be a better speed/memory trade-off than doubling. It stands to reason that something similar would apply to hash tables, although again this is subject to profiling.
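You can observe your own standard library's growth factor directly; a small sketch (the exact capacity sequence is implementation-defined):

```cpp
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;
    std::size_t last = v.capacity();
    // Print the capacity each time a push_back triggers a reallocation;
    // the ratio of successive capacities is the growth factor.
    for (int i = 0; i < 10000; ++i) {
        v.push_back(i);
        if (v.capacity() != last) {
            std::cout << last << " -> " << v.capacity() << '\n';
            last = v.capacity();
        }
    }
}
```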