This isn't something you can control - the assignment of map and reduce tasks to nodes is handled by the JobTracker.
There's an O'Reilly Answer that walks through the specifics of task assignment:
http://answers.oreilly.com/topic/459-anatomy-of-a-mapreduce-job-run-with-hadoop/
The default behaviour is to assign at most one task per TaskTracker heartbeat, so you shouldn't typically see all of your reduce tasks satisfied by the same node. However, if your cluster is busy with other jobs and only a single node has free reduce slots, all of your reduce tasks may end up assigned to that node.
As for handling skew: splitting a known high-volume key across three reducers reduces the chance that all of its data is sent to a single node (again, there is no guarantee of this), but you'll still have the problem of combining the three reducer outputs for that skewed key into the final answer.
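To make that concrete, here's a minimal sketch of the key-salting idea outside of Hadoop itself - the key name, salt count, and method names are all hypothetical, and in a real job the salting would live in your Mapper and the merge in a second MapReduce pass (or a small driver-side step):

```java
import java.util.HashMap;
import java.util.Map;

public class SkewSalting {
    // Hypothetical known high-volume key and salt count.
    static final String HOT_KEY = "popular";
    static final int SALT_BUCKETS = 3;

    // Map side: spread records for the hot key across N salted sub-keys
    // (popular#0, popular#1, popular#2) so they can land on different reducers.
    static String saltKey(String key, int recordIndex) {
        if (HOT_KEY.equals(key)) {
            return key + "#" + (recordIndex % SALT_BUCKETS);
        }
        return key; // non-skewed keys pass through untouched
    }

    // Second pass: strip the salt suffix and sum the partial reducer outputs
    // back into a single total per original key.
    static Map<String, Long> mergeSalted(Map<String, Long> partials) {
        Map<String, Long> merged = new HashMap<>();
        for (Map.Entry<String, Long> e : partials.entrySet()) {
            String key = e.getKey();
            int hash = key.indexOf('#');
            String original = (hash >= 0) ? key.substring(0, hash) : key;
            merged.merge(original, e.getValue(), Long::sum);
        }
        return merged;
    }

    public static void main(String[] args) {
        // Simulate the counts produced by the three reducers for the salted key.
        Map<String, Long> partials = new HashMap<>();
        partials.put("popular#0", 100L);
        partials.put("popular#1", 120L);
        partials.put("popular#2", 95L);
        partials.put("rare", 4L);
        System.out.println(mergeSalted(partials).get("popular")); // 315
    }
}
```

The trade-off is that extra combining step: salting buys reducer-side parallelism for the hot key at the cost of a second aggregation over its partial results.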