Question

There are some large datasets (25gb+, downloadable on the Internet) that I want to play around with using Amazon EMR. Instead of downloading the datasets onto my own computer, and then re-uploading them onto Amazon, what's the best way to get the datasets onto Amazon?

Do I fire up an EC2 instance, download the datasets (using wget) from within the instance, upload them to S3, and then access S3 when I run my EMR jobs? (I haven't used Amazon's cloud infrastructure before, so I'm not sure if what I just said makes any sense.)


Solution

I recommend the following...

  1. fire up your EMR cluster

    elastic-mapreduce --create --alive --other-options-here

  2. log on to the master node and download the data from there

    wget http://blah/data

  3. copy into HDFS

    hadoop fs -copyFromLocal data /data

There's no real reason to put the original dataset through S3. If you want to keep the results you can move them into S3 before shutting down your cluster.
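For example, something along these lines copies the job output out of HDFS into S3 before you terminate (the bucket and paths here are placeholders, using the standard distcp tool that ships with Hadoop):

    # copy results from HDFS to S3 before shutting the cluster down
    hadoop distcp hdfs:///data/output s3n://mybucket/results/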

If the dataset is represented by multiple files you can use the cluster to download it in parallel across the machines. Let me know if this is the case and I'll walk you through it.
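Roughly, the idea is to put a list of the part URLs into HDFS and run a map-only Hadoop Streaming job in which each mapper fetches the URLs it is handed. A minimal sketch, assuming a file urls.txt with one URL per line; the streaming jar location and file names are assumptions you'd adjust for your AMI:

    #!/bin/bash
    # fetch.sh -- streaming mapper: read URLs from stdin,
    # download each one and push it into HDFS
    while read url; do
      name=$(basename "$url")
      wget -q -O "/tmp/$name" "$url"
      hadoop fs -put "/tmp/$name" "/data/$name"
      rm -f "/tmp/$name"
      echo "fetched $url"
    done

    # put the URL list in HDFS and run a map-only streaming job
    hadoop fs -put urls.txt /urls.txt
    hadoop jar /home/hadoop/contrib/streaming/hadoop-streaming.jar \
      -D mapred.reduce.tasks=0 \
      -input /urls.txt \
      -output /fetch-log \
      -mapper fetch.sh \
      -file fetch.sh

Each map task then downloads its share of the files in parallel and writes them into /data on HDFS, ready for your EMR job.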

Mat

OTHER TIPS

If you're just getting started and experimenting with EMR, I'm guessing you want the data on S3 so you don't have to start an interactive Hadoop session (and can instead use the EMR wizards via the AWS console).

The best way would be to start a micro instance in the same region as your S3 bucket, download to that machine using wget and then use something like s3cmd (which you'll probably need to install on the instance). On Ubuntu:

wget -O dataset http://example.com/mydataset
sudo apt-get install s3cmd 
s3cmd --configure
s3cmd put dataset s3://mybucket/

The reason you'll want your instance and S3 bucket in the same region is to avoid extra data-transfer charges. Although you'll be charged for inbound bandwidth to the instance for the wget, the transfer to S3 will be free.
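Once the dataset is in the bucket, you can reference it as an s3n:// path in the EMR wizard or from the command line. A rough sketch of a streaming step with the classic elastic-mapreduce CLI (bucket and script names are placeholders):

    elastic-mapreduce --create --stream \
      --input  s3n://mybucket/dataset \
      --output s3n://mybucket/output \
      --mapper s3n://mybucket/code/mapper.py \
      --reducer s3n://mybucket/code/reducer.py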

I'm not sure about it, but it seems to me that Hadoop should be able to download files directly from your sources.

Just enter http://blah/data as your input, and Hadoop should do the rest. It certainly works with S3, so why shouldn't it work with HTTP?
