Question

My Use Case:

I want to import a large amount of data from EC2 into Hive through Sqoop. The imported data will be processed in Hive by applying some algorithm, generating a result (as a table, still in Hive). The generated result will then be exported back to EC2, again through Sqoop.

I am new to Amazon Web Services and want to implement this use case with the help of AWS EMR. I have already implemented it on a local machine.

I have read some links related to AWS EMR about launching an instance, what EMR is, how it works, and so on.

I have some questions about EMR:

1) EMR uses S3 buckets, which hold the input and output data of the Hadoop processing (in the form of objects). I don't understand how to store data as objects on S3 (my data will be files).

2) As mentioned, I have already implemented the task for my use case in Java. If I create a JAR of my program and create a job flow with that custom JAR, will it work like that, or do I need to do something extra?

3) As I said in my use case, I want to export my result back to EC2 with the help of Sqoop. Does EMR support Sqoop?

4) (Edited to add) I will also import my data from SQL Server daily/weekly, since my data in SQL Server is updated daily/weekly. If I want to import that data to S3 and hand it to Hive, how can I do that? (Hive stores its data on HDFS under the /user/hive/warehouse directory.) How can I link S3 to the /user/hive/warehouse directory in HDFS?

Please reply as soon as possible; I would like to get this working as early as possible.

Many thanks.


Solution

It is possible to install Sqoop on AWS EMR. You are not required to use S3 to store files; you can use the cluster's local (temporary) HDFS instead. Once Sqoop is installed, you can import your data into HDFS with it, run your calculations there, and then export the results back out using Sqoop again.

Here's an article I wrote about how to install Sqoop on AWS EMR: http://blog.kylemulka.com/2012/04/how-to-install-sqoop-on-amazon-elastic-map-reduce-emr/
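For illustration, here is a minimal sketch of that import/process/export round trip once Sqoop is available on the cluster. The connection string, credentials, table names, and paths below are placeholders (not from the original question), and the SQL Server JDBC driver JAR would need to be placed in Sqoop's lib directory first:

    # Import a table from the database on EC2 into the cluster's HDFS
    sqoop import \
      --connect "jdbc:sqlserver://ec2-host:1433;databaseName=mydb" \
      --username myuser --password mypass \
      --table source_table \
      --target-dir /user/hadoop/source_table

    # ... run your Hive processing on the imported data here ...

    # Export the result table's files from HDFS back out to the database on EC2
    sqoop export \
      --connect "jdbc:sqlserver://ec2-host:1433;databaseName=mydb" \
      --username myuser --password mypass \
      --table result_table \
      --export-dir /user/hive/warehouse/result_table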

OTHER TIPS

This is the same as my response on the Hive mailing list:

To answer your questions:

1) S3 terminology uses the word "object", and I am sure they have good reasons as to why, but for us Hive'ers an S3 object is the same as a file stored on S3. The complete path to the file would be what Amazon calls the S3 "key", and the corresponding value would be the contents of the file. For example, s3://my_bucket/tables/log.txt would be the key, and the actual content of the file would be the S3 object. You can use the AWS web console to create a bucket and use tools like s3cmd (http://s3tools.org/s3cmd) to put data onto S3.
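For example, a minimal s3cmd sketch (bucket and file names are placeholders):

    # Create a bucket (one time)
    s3cmd mb s3://my_bucket

    # Upload a local file; it is stored as an S3 "object" under the given key
    s3cmd put log.txt s3://my_bucket/tables/log.txt

    # List what is in the bucket
    s3cmd ls s3://my_bucket/tables/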

However, you don't necessarily need to use S3. S3 is typically only used when you want persistent storage of data. Most people store their input logs/files on S3 for Hive processing and also store the final aggregations and results on S3 for future retrieval. If you are just temporarily loading some data into Hive, processing it, and exporting it out, you don't have to worry about S3. The nodes that form your cluster have ephemeral storage that forms the HDFS; you can just use that. The only side effect is that you will lose all your data in HDFS once you terminate the cluster. If that's OK, don't worry about S3.
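If you do keep your input data on S3 (this also touches question 4), one common pattern is to create an external Hive table whose LOCATION points at the S3 path, so nothing has to be copied under /user/hive/warehouse at all. A sketch, with hypothetical bucket, table, and column names:

    # Define an external table backed directly by files on S3
    hive -e "
      CREATE EXTERNAL TABLE logs (ts STRING, message STRING)
      ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
      LOCATION 's3://my_bucket/tables/logs/';
    "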

EMR instances are basically EC2 instances with some additional setup done on them. Transferring data between EC2 and EMR instances should be simple, I'd think. If your data is present in EBS volumes, you could look into adding an EMR bootstrap action that mounts that same EBS volume onto your EMR instances. It might be easier if you can do it without all the fancy mounting business though.

Also, keep in mind that there may be costs for data transfers across Amazon data centers, so you would want to keep your S3 buckets, EMR cluster, and EC2 instances in the same region if at all possible. Within the same region, there shouldn't be any extra transfer costs.

2) Yes, EMR supports custom JARs; you can specify them at the time you create your cluster. This should require minimal porting changes to your JAR itself, since the Hadoop and Hive versions installed on EMR are the same as (or close enough to) what you installed on your local cluster.
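For example, a sketch using the period-appropriate elastic-mapreduce command-line client; the bucket, JAR, main class, and arguments are placeholders, and the exact client and flags may differ depending on the tooling version you use:

    # Create a job flow that runs one custom JAR step
    elastic-mapreduce --create --alive \
      --name "custom-jar-flow" \
      --num-instances 3 --instance-type m1.large \
      --jar s3://my_bucket/jars/my-job.jar \
      --main-class com.example.MyJob \
      --arg s3://my_bucket/input/ \
      --arg s3://my_bucket/output/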

3) Sqoop with EMR should be OK.

References: http://mail-archives.apache.org/mod_mbox/hive-user/201204.mbox/%3CCAGif4YQv1RVSoLt+Yqn8C1jDN3ukLHZ_J+GMFDoPCbcXO7W2tw@mail.gmail.com%3E

@mark-grover mentioned you can use s3:// interchangeably with hdfs://, which is not entirely accurate. You can in some cases; however, when using the Apache Sqoop built into AWS EMR, the import command complains:

ERROR tool.ImportTool: Imported Failed: Wrong FS: s3://<my bucket path>, expected: hdfs://ip-<private ip>.ap-southeast-2.compute.internal:8020

(I apparently don't have enough rep here to comment, but it's OK to respond; go figure.)
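One workaround in that situation (a sketch, with placeholder paths) is to let Sqoop write to the filesystem it expects, HDFS, and then copy the result over to S3 as a second step:

    # Run the Sqoop import with an HDFS --target-dir (as in the earlier sketch),
    # then copy the imported files from HDFS to S3 afterwards:
    hadoop distcp /user/hadoop/source_table s3n://my_bucket/source_table/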

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow