To the best of my understanding, here are a few points (they may or may not help; correct me if I'm wrong):
1) map1 -> reduce1 -> directly into mapper2: this kind of optimization is addressed by the Spark cluster-computing framework, which keeps intermediate results in memory and avoids unnecessary reads/writes to HDFS between stages.
2) If you want something like reducer1 -> reducer2, you have to think about how to write the logic inside a single reducer. Whether that works depends on your requirement, i.e., which keys you want to aggregate on: reducer1 receives all the values for a given key, so the next aggregation can be folded into it only if it acts on that same set of keys.
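As a rough illustration of point 2, here is a plain-Python sketch (no Hadoop involved; all function names are made up) of folding what would be two reduce passes over the same key into one reducer. Instead of reducer1 emitting per-key sums and reducer2 turning them into averages, a single reducer computes the average directly, since both aggregations work over the same key's values:

```python
from collections import defaultdict

def shuffle(pairs):
    """Group (key, value) pairs by key, as the framework does between map and reduce."""
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def combined_reducer(key, values):
    # Does the work of two chained reducers on the same key:
    # reducer1 would emit (key, sum); reducer2 would emit (key, sum / count).
    # Because both aggregate over the same key, one pass over the values suffices.
    total = sum(values)
    return key, total / len(values)

mapped = [("a", 2), ("b", 3), ("a", 4), ("b", 5)]
result = dict(combined_reducer(k, vs) for k, vs in shuffle(mapped).items())
# result == {"a": 3.0, "b": 4.0}
```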
3) In Hadoop the protocol is strictly: map, then aggregation. If any further aggregation is needed, it has to come from another job whose mapper is either user-defined or the IdentityMapper, reading the previous job's output.
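Point 3 can be sketched the same way: each job is map followed by aggregation, and a second aggregation has to start from a fresh mapper (often the identity mapper) over the first reducer's output. A plain-Python simulation of two chained jobs (no Hadoop; `run_job` and the lambda names are made up for illustration):

```python
from itertools import groupby

def run_job(records, mapper, reducer):
    """One simulated MapReduce job: map every record, shuffle by key, reduce each group."""
    mapped = [kv for rec in records for kv in mapper(rec)]
    mapped.sort(key=lambda kv: kv[0])          # the shuffle/sort phase
    return [reducer(k, [v for _, v in grp])
            for k, grp in groupby(mapped, key=lambda kv: kv[0])]

# Job 1: word count.
tokenize = lambda line: [(w, 1) for w in line.split()]
count = lambda word, ones: (word, sum(ones))

# Job 2: regroup job 1's output by word length; its mapper re-keys each record.
# (An identity mapper would just be `lambda kv: [kv]` if the keys were already right.)
by_length = lambda kv: [(len(kv[0]), kv[1])]
total = lambda length, counts: (length, sum(counts))

job1 = run_job(["hadoop spark hadoop", "spark pig"], tokenize, count)
job2 = run_job(job1, by_length, total)
# job1 == [("hadoop", 2), ("pig", 1), ("spark", 2)]
# job2 == [(3, 1), (5, 2), (6, 2)]
```

The key point the sketch shows: job 2's aggregation cannot be appended to job 1's reducer, because it groups on a different key (word length), so the data has to pass through another map + shuffle first.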
Hope this helps :)