It seems that the solution is to set the maximum heap size of the Hadoop child JVMs explicitly via:

<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx512m</value>
</property>

(Note the "m" suffix: a bare -Xmx512 would mean 512 bytes, which the JVM rejects.)
We also set the following parameter to the same value, just to be sure:

yarn.app.mapreduce.am.command-opts

which sets the heap size of the MR Application Master process.
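For completeness, a sketch of what that property could look like in mapred-site.xml; the 512m value here is illustrative, matching the map heap above:

```xml
<!-- Sketch: MR Application Master heap kept consistent with the map heap.
     The 512m value is an assumption for illustration. -->
<property>
  <name>yarn.app.mapreduce.am.command-opts</name>
  <value>-Xmx512m</value>
</property>
```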
The number of Hadoop containers can be controlled via the pattern described above. Important to note: there must be at least as much free memory as the heap size configured in mapreduce.map.java.opts, otherwise the child JVMs cannot be spawned. We used:

mapreduce.map.memory.mb = yarn.scheduler.maximum-allocation-mb - mapreduce.map.java.opts
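As a sanity check, the arithmetic above can be sketched with made-up numbers; the 8192 MB node capacity, 2048 MB maximum allocation, and 512 MB heap below are assumptions for illustration, not values from the original setup:

```python
# Illustrative YARN memory arithmetic (all values assumed).
node_memory_mb = 8192   # yarn.nodemanager.resource.memory-mb (assumed)
max_alloc_mb = 2048     # yarn.scheduler.maximum-allocation-mb (assumed)
heap_mb = 512           # -Xmx512m from mapreduce.map.java.opts (assumed)

# Container size per the formula above: max allocation minus child heap.
map_container_mb = max_alloc_mb - heap_mb

# Each container must be large enough to hold the child JVM heap.
assert map_container_mb >= heap_mb

# Rough number of map containers a node can run concurrently.
containers_per_node = node_memory_mb // map_container_mb
print(containers_per_node)  # -> 5
```

This makes the constraint in the text concrete: shrinking the heap (or growing the container) changes how many child JVMs fit on a node at once.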
Everything works smoothly now. Hope this helps someone in the future!