Looks like you have a mixture of configuration settings here:
These two define the number of map and reduce slots available on each slave node (running a task tracker):
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>15</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>15</value>
</property>
Whereas mapred.map.tasks
is a (pretty much ignored) per-job hint as to how many map tasks to schedule for your job.
The final config property is malformed; I think you mean mapred.reduce.tasks,
which does control the number of reducers that will run for a particular job.
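For reference, the corrected property would look something like this (the value shown is just an example, pick whatever reducer count suits your job):

<property>
  <name>mapred.reduce.tasks</name>
  <value>5</value>
</property>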
So currently it looks like you have 15 map and 15 reduce slots configured per task tracker (these values apply to each task tracker, not to the entire cluster). Amend these values to 5, deploy the configuration change to all 3 of your cluster nodes, and then restart the task trackers on all three nodes for the change to take effect. You should then be able to see the change in the JobTracker web UI, under the number of map and reduce slots.
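For reference, the amended slot configuration (typically placed in conf/mapred-site.xml on each node, assuming a standard Hadoop 1.x layout) would look like:

<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>5</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>5</value>
</property>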