Intel and AMD processors are by and large binary compatible, though differences in cache sizes and instruction scheduling could result in suboptimal performance of a particular code on AMD if the code was compiled with optimisations for Intel, and vice versa. There are some differences in the instruction sets implemented by the two vendors, but those are usually not very useful in scientific computing anyway.
Since (1) is not a problem, one does not need a workaround. Still, one has to keep in mind that some compilers by default enable instruction sets and optimisations specific to the processor on which the code is being compiled. Therefore one has to be extra careful with the compiler options when the head node uses CPUs from a different vendor, or even from the same vendor but of a different generation, than the compute nodes. This is especially true for Intel's compiler suite; GCC is less aggressive by default. On the other hand, one can usually instruct the compiler which architecture to target and optimise for, e.g. by providing the appropriate -march=... and -mtune=... options to GCC.
As for sharing the file system, it depends on how your data storage is organised. Parallel applications often need to access the same files from all ranks (e.g. configuration files, databases, etc.) and therefore require both the home and work file systems to be shared (unless one uses the home file system as the working one). You might also want to share things like /opt
(or whatever location you store cluster-wide software packages in) in order to simplify cluster administration.
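A common way to do this on a small cluster is plain NFS exported from the head node. A sketch, assuming a hypothetical 10.0.0.0/24 cluster network and the paths mentioned above:

```
# /etc/exports on the head node (subnet and paths are assumptions):
/home  10.0.0.0/24(rw,sync,no_subtree_check)
/work  10.0.0.0/24(rw,sync,no_subtree_check)
/opt   10.0.0.0/24(ro,sync,no_subtree_check)

# Corresponding /etc/fstab entries on each compute node,
# with "head" being the head node's hostname:
head:/home  /home  nfs  defaults  0 0
head:/work  /work  nfs  defaults  0 0
head:/opt   /opt   nfs  ro        0 0
```

Exporting /opt read-only keeps compute nodes from accidentally modifying the shared software tree; only the administrator on the head node writes there.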
It is hard to point you to a definitive source, since there are about as many "best practices" as there are cluster installations around the world. Just start with a working setup and tune it iteratively until you reach convergence. Installing TORQUE is a good start.
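Once TORQUE is running, jobs are submitted as shell scripts with `#PBS` directives. A minimal sketch (job name, resource request, and the `./my_app` binary are all placeholders):

```
#!/bin/sh
#PBS -N test_job
#PBS -l nodes=2:ppn=4
#PBS -l walltime=00:10:00

# TORQUE sets PBS_O_WORKDIR to the directory qsub was invoked from.
cd "$PBS_O_WORKDIR"
mpirun ./my_app
```

Submit it with `qsub job.sh` and check the queue with `qstat`; this is where a shared work file system pays off, since every node sees the same `$PBS_O_WORKDIR`.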