The automatic growing of the stack can be thought of as the kernel making automatic calls to mremap
to resize the virtual address region that counts as "stack". Once that's handled, page faults to the stack area and to a vanilla mmap region are handled the same way, i.e., one page at a time.
Thus you should end up with ~2 pages allocated, not ~51. @perreal's empirical answer validates this.
To the last part of the question: the cost of contiguous page faults is one of the factors that led to the development of "huge pages". I don't think Linux has other ways to "batch" page fault handling. madvise
might do something, but I suspect it mostly optimizes the really expensive part of page faults, which is looking up the backing pages on storage. Stack page faults, which map in zero pages, are relatively lightweight by comparison.