Question

As per the documentation:

"Apache Spark is a fast and general engine for large-scale data processing."

"Shark is an open source distributed SQL query engine for Hadoop data."

And Shark uses Spark as a dependency.

My question is: does Shark simply parse HiveQL and translate it into Spark jobs, or does it do anything more that makes it fast for analytical queries?


Solution

Yes, Shark follows the same idea as Hive, but it translates HiveQL into Spark jobs instead of MapReduce jobs. Please read pages 13-14 of this document for the architectural differences between the two.
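To make the difference concrete, here is a minimal sketch of what issuing a HiveQL query through Shark looked like from the Scala shell. The table name `logs` is hypothetical, and the exact `SharkContext` method names are an assumption based on Shark's published examples; the point is that the query plan runs as Spark stages (in-memory RDDs) rather than as MapReduce jobs:

```scala
// Sketch only: assumes a Shark installation, where the Shark shell
// provides a SharkContext (here `sc`) and a Hive metastore with a
// table named `logs` (hypothetical).

// Run a HiveQL statement; Shark compiles it to Spark stages,
// not MapReduce jobs.
val rows = sc.sql("SELECT page, COUNT(*) FROM logs GROUP BY page")

// Shark could also hand the result back as an RDD for further
// Spark processing, which plain Hive cannot do.
val rdd = sc.sql2rdd("SELECT page FROM logs WHERE status = 404")
```

Because the intermediate data stays in Spark's memory-resident RDDs instead of being written to HDFS between stages, repeated analytical queries over the same (cached) tables avoid most of Hive's per-job startup and I/O cost.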

Licensed under: CC-BY-SA with attribution