After creating my own Hadoop cluster in order to better understand how Hadoop works, I managed to fix this myself.
You have to provide Spark with a valid .keytab file generated for an account that has at least read access to the Hadoop cluster. You also have to provide Spark with the hdfs-site.xml of your HDFS cluster.
So in my case I had to create a keytab file which, when you run

klist -k -e -t

on it, yields entries like the following:
host/fully.qualified.domain.name@REALM.COM
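For reference, a keytab with an entry like that can be created with MIT Kerberos ktutil. This is only a sketch of my setup: the principal, encryption type, and keytab path below are placeholders, so substitute your own realm, host, and file location.

```shell
# Interactive ktutil session (MIT Kerberos).
# Principal, enctype, and path are example values - adjust to your realm.
ktutil
addent -password -p host/fully.qualified.domain.name@REALM.COM -k 1 -e aes256-cts-hmac-sha1-96
wkt /etc/security/spark.keytab
quit

# Verify the entries afterwards, as described above:
klist -k -e -t /etc/security/spark.keytab
```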
In my case the host was the literal word host, not a variable. Also, in your hdfs-site.xml you have to provide the path of the keytab file and specify that
host/_HOST@REALM.COM
will be your account.
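As a sketch, the relevant hdfs-site.xml entries look roughly like this. The property names are the standard HDFS Kerberos keys; the keytab path is a placeholder for wherever you keep yours.

```xml
<!-- Sketch of the Kerberos-related hdfs-site.xml entries;
     the keytab path is an example location. -->
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>host/_HOST@REALM.COM</value>
</property>
<property>
  <name>dfs.namenode.keytab.file</name>
  <value>/etc/security/spark.keytab</value>
</property>
```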
Cloudera has a pretty detailed writeup on how to do it.
Edit: after playing a little bit with different configurations, I think the following should be noted. You have to provide Spark with the exact hdfs-site.xml and core-site.xml of your Hadoop cluster. Otherwise it won't work.
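A sketch of how this fits together when submitting a job: point Spark at the cluster's config files via HADOOP_CONF_DIR, and pass the keytab and principal with spark-submit's --keytab and --principal options, which log in from the keytab for you. The paths and principal here are examples from my setup.

```shell
# Make Spark pick up the cluster's exact hdfs-site.xml and core-site.xml
export HADOOP_CONF_DIR=/path/to/cluster-conf   # dir containing hdfs-site.xml and core-site.xml

# Example submission; principal and keytab path are placeholders
spark-submit \
  --principal host/fully.qualified.domain.name@REALM.COM \
  --keytab /etc/security/spark.keytab \
  --master yarn \
  my_job.py
```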