How about using Amazon's Elastic MapReduce (EMR)? It's Amazon's hosted Hadoop service, which runs on top of EC2. You can copy your data files to Amazon S3 and have your EMR cluster pick up its input from there, and the job can write its results back to Amazon S3 as well.
When you launch a cluster you can choose how many EC2 instances to use and what size each one should be, so you can tailor the CPU power to your job. Once the job is done you can tear the cluster down, so you aren't paying for idle machines.
You can also do all of the above programmatically. For example, in Python I use the boto library, which is a popular interface to the Amazon APIs.
For getting started with writing MapReduce jobs in Python, you can find several tutorials on the web. Here's a good one: http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/
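That tutorial uses Hadoop Streaming, where the mapper and reducer are plain Python scripts that read lines on stdin and write tab-separated key/value pairs on stdout. As a rough sketch of the idea (the function names and sample data below are my own, not taken from the tutorial):

```python
from itertools import groupby
from operator import itemgetter

def mapper(lines):
    # Emit (word, 1) for every word on every input line.
    for line in lines:
        for word in line.strip().split():
            yield word, 1

def reducer(pairs):
    # Sum counts per word. Hadoop streaming sorts the mapper output
    # by key before the reducer sees it, so groupby works here.
    for word, group in groupby(pairs, key=itemgetter(0)):
        yield word, sum(count for _, count in group)

if __name__ == '__main__':
    # On the cluster, mapper and reducer run as separate processes
    # fed through stdin/stdout. Locally you can simulate the whole
    # pipeline: map, sort by key, then reduce.
    sample = ["the quick brown fox", "jumps over the lazy dog"]
    pairs = sorted(mapper(sample))
    for word, count in reducer(pairs):
        print('%s\t%d' % (word, count))
```

On EMR you'd upload the mapper and reducer as two separate scripts to S3 and point a streaming step at them, but testing the logic locally like this first saves a lot of cluster time.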
Hope this helps.