Each reducer produces one output file, named part-xxxxx by default (part-00000 for the first reducer, part-00001 for the second, and so on).
With your code, when you have more than 3 nodes you will have more than one reducer, so the output data will be split into more than one file: some word counts will end up in the first file (part-00000), some in the second (part-00001), and so on. You can later merge these parts with the getmerge command:
hadoop fs -getmerge /HADOOP/OUTPUT/PATH /local/path/
This gives you one file in the specified local path containing the merged results of all the partial files. It will hold the same results as the single file you get when you have two nodes and hence 2/2 = 1 reducer (which produces one output file), although the lines may appear in a different order, since each reducer sorts only its own keys.
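Under the hood, getmerge essentially concatenates the part files in order. A minimal local sketch of that behavior, using made-up part files (the directory, file contents, and counts are hypothetical):

```shell
# Two hypothetical reducer outputs, as they would look after a word count job.
mkdir -p /tmp/wc_demo
printf 'apple\t3\ncherry\t1\n' > /tmp/wc_demo/part-00000
printf 'banana\t2\n'           > /tmp/wc_demo/part-00001

# getmerge is essentially this: concatenate the parts in name order
# into a single local file.
cat /tmp/wc_demo/part-00000 /tmp/wc_demo/part-00001 > /tmp/wc_demo/merged.txt
cat /tmp/wc_demo/merged.txt
```

Every (word, count) pair from both parts ends up in merged.txt, just as getmerge would produce.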
By the way, setting the number of reducers to numOfNodes/2
may not be the best option. See this post for more details.
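If you do want to control the reducer count explicitly rather than deriving it from the node count, one common way is to pass it at submission time. A sketch, assuming a hypothetical wordcount.jar whose driver uses ToolRunner/GenericOptionsParser (otherwise the -D option is not picked up):

```shell
# Hypothetical job submission: request 4 reducers explicitly.
# (On older Hadoop versions the property is mapred.reduce.tasks.)
hadoop jar wordcount.jar WordCount \
  -D mapreduce.job.reduces=4 \
  /HADOOP/INPUT/PATH /HADOOP/OUTPUT/PATH
```

The equivalent inside the driver code is job.setNumReduceTasks(4).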