Question

I have a website running on AWS EC2. I need to create a nightly job that generates a sitemap file and submits it to the various search engines. I'm looking for a utility on AWS that provides this functionality. I've considered the following:

1) Generate a request to the web server that triggers it to do this task

  • I don't like this approach because it ties up a server thread and uses CPU cycles on the host

2) Create a cron job on the machine the web server is running on to execute this task

  • Again, I don't like this approach because it takes CPU cycles away from the web server

3) Create another EC2 instance and set up a cron job to run the task

  • This solves the web server resource issues, but why pay for an additional EC2 instance to run a job for <5 minutes? Waste of money!

Are there any other options? Is this a job for Elastic MapReduce?


Solution

Amazon has just released[1] new features for Elastic Beanstalk. You can now create a worker environment containing a cron.yaml file that schedules tasks by calling a URL on a cron schedule: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-tiers.html#worker-periodictasks

[1] http://aws.amazon.com/about-aws/whats-new/2015/02/17/aws-elastic-beanstalk-supports-environment-cloning-periodic-tasks-and-1-click-iam-role-creation/
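
A minimal cron.yaml for such a worker environment might look like the following sketch. The task name, URL path, and schedule are illustrative; the worker daemon POSTs to the given URL on the cron schedule (times are UTC):

```yaml
version: 1
cron:
 - name: "generate-sitemap"        # illustrative task name
   url: "/tasks/generate-sitemap"  # hypothetical endpoint in your worker application
   schedule: "0 2 * * *"           # every night at 02:00 UTC
```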

OTHER TIPS

If I were in your shoes, I'd probably start by trying to run the cron job on the web server each night at low tide and monitor the resource usage to make sure it doesn't interfere with the web server.

If you find that it doesn't play nicely, or you have high standards for the elegance of your architecture (I can admire that), then you'll probably need to run a separate instance.

I agree that it seems like a waste to run an instance 24 hours a day for a job you only need to run once a night.

Here's one approach: the cron job on your primary machine (currently a web server) could fire up a new instance to run the task. It could pass in a user-data script that runs when the instance starts, and the instance could shut itself down when it completes the task (with instance-initiated-shutdown-behavior set to "terminate").
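
As a sketch with the AWS CLI, the nightly cron entry could launch a throwaway instance like this. The AMI ID and the user-data script name are placeholders; the user-data script would be assumed to generate the sitemap and then power the machine off:

```shell
# Launch a short-lived worker instance (AMI ID and script are placeholders).
# With shutdown behavior "terminate", the instance is destroyed when the
# user-data script runs "shutdown -h now" after the job completes.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --instance-initiated-shutdown-behavior terminate \
  --user-data file://run-sitemap-job.sh
```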

Unfortunately, this falls short of your desire for separation of concerns, it gets complicated once you scale to multiple web servers, and it requires your web server to be alive for the job to run at all.

A couple months ago, I came up with a different approach to run an instance on a cron schedule, relying entirely on existing AWS features and with no requirement to have other servers running.

The basic idea is to use Amazon Auto Scaling with a recurring action that scales the group from "0" to "1" at a specific time each night. The instance can terminate itself when the job is done, and Auto Scaling can clean up later to make sure it has terminated.

I've provided more details and a working example in this article:

Running EC2 Instances on a Recurring Schedule with Auto Scaling
http://alestic.com/2011/11/ec2-schedule-instance
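
The moving parts can be sketched with the AWS CLI (the group, launch configuration, and action names below are illustrative): an Auto Scaling group pinned at zero instances, plus a recurring scheduled action that raises the desired capacity to one each night:

```shell
# Group normally sits at zero instances (names and launch config are placeholders).
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name sitemap-workers \
  --launch-configuration-name sitemap-worker-lc \
  --min-size 0 --max-size 1 --desired-capacity 0 \
  --availability-zones us-east-1a

# Each night at 02:00 UTC, raise desired capacity to 1; the instance
# terminates itself when the job finishes.
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name sitemap-workers \
  --scheduled-action-name nightly-sitemap \
  --recurrence "0 2 * * *" \
  --desired-capacity 1
```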

Assuming you are running a *nix instance on EC2, I would suggest running the job in cron using the nice command.

nice changes the priority of the job. You can give it a much lower priority, so if your web server is busy, the cron job will have to wait for the CPU.

The higher the nice number, the lower the priority. Nicenesses range from -20 (most favorable scheduling) to 19 (least favorable).
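
For example, a crontab entry could run the job nightly at the lowest priority. The script path here is a placeholder:

```shell
# Hypothetical crontab entry: run nightly at 02:30 with niceness 19 (lowest priority).
#   30 2 * * * nice -n 19 /usr/local/bin/generate-sitemap.sh
# The same invocation, demonstrated with a stand-in command:
nice -n 19 echo "sitemap generated"
```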

AWS Data Pipeline

You can use AWS Data Pipeline to schedule a task with a given period. The action can be any command if you configure your pipeline with a ShellCommandActivity.

You can even use your existing EC2 instance to run the command: set up Task Runner on the instance and set the workerGroup field on the ShellCommandActivity in your pipeline definition:

{
  "pipelineId": "df-0937003356ZJEXAMPLE",
  "pipelineObjects": [
    {
      "id": "Schedule",
      "name": "Schedule",
      "fields": [
        { "key": "startDateTime", "stringValue": "2012-12-12T00:00:00" },
        { "key": "type", "stringValue": "Schedule" },
        { "key": "period", "stringValue": "1 hour" },
        { "key": "endDateTime", "stringValue": "2012-12-21T18:00:00" }
      ]
    },
    {
      "id": "DoSomething",
      "name": "DoSomething",
      "fields": [
        { "key": "type", "stringValue": "ShellCommandActivity" },
        { "key": "command", "stringValue": "echo hello" },
        { "key": "schedule", "refValue": "Schedule" },
        { "key": "workerGroup", "stringValue": "yourWorkerGroup" }
      ]
    }
  ]
}

Limits: The minimum scheduling interval is 15 minutes.
Pricing: About $1.00 per month.

You should consider CloudWatch Events and Lambda (http://docs.aws.amazon.com/AmazonCloudWatch/latest/events/RunLambdaSchedule.html). You only pay for the actual runs. I assume the workers maintained by Elastic Beanstalk still cost some money even when they are idle.
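
As a sketch with the AWS CLI (the rule name and Lambda function ARN are placeholders; note that CloudWatch Events cron expressions use six fields rather than the usual five):

```shell
# Fire an event every night at 02:00 UTC.
aws events put-rule \
  --name nightly-sitemap \
  --schedule-expression "cron(0 2 * * ? *)"

# Point the rule at a hypothetical sitemap-generating Lambda function.
aws events put-targets \
  --rule nightly-sitemap \
  --targets '[{"Id":"1","Arn":"arn:aws:lambda:us-east-1:123456789012:function:generate-sitemap"}]'
```

You would also need to grant CloudWatch Events permission to invoke the function (aws lambda add-permission).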

Update: I found this nice article on the topic (http://brianstempin.com/2016/02/29/replacing-the-cron-in-aws/).

If this task can be accomplished with one machine, I recommend booting up an instance programmatically using the fog gem, written in Ruby.

After you start an instance, you can run a command via SSH. Once the task completes, you can shut the instance down with fog as well.

Amazon EMR is also a good solution if your task can be written in a MapReduce manner. EMR will take care of starting and stopping instances. The elastic-mapreduce-ruby CLI tool can help you automate it.

You can use AWS OpsWorks to set up cron jobs for your application. For more information, read the AWS OpsWorks user guide. I found a page explaining how to set up cron jobs: http://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-extend-cron.html

Licensed under: CC-BY-SA with attribution