I've managed to successfully set up the above configuration with Spark 1.0.0.
It's a somewhat long story, but most of the problems were configuration-related. An experienced Spark + Hadoop developer would probably have no trouble, except for the one issue I describe below.
Also, the above question was for Spark 0.9.1, which is now out of date, so answering it in detail wouldn't be very useful.
However, one problem is a cross-platform issue that still applies to Spark 1.0.0.
I've created a pull request for it: https://github.com/apache/spark/pull/899
If you're interested, follow the link for details.
UPDATE: The above cross-platform issue was resolved in version 1.3.0.