Question

I have a simple SuckerPunch job and I am trying to make it so only one job runs at any given time. I am struggling to work it out; I have tried playing with Celluloid and Ruby concurrency.

What I have

DataChangeJob.new.async.perform

with

class DataChangeJob
  include SuckerPunch::Job
  def perform
    value = Random.rand
    SuckerPunch.logger.info("starting #{value}")
    sleep(5)
    SuckerPunch.logger.info("running data change #{value}")
  end
end

Solution

If you define the job to have only one worker, that should achieve what you want, imho. By default SuckerPunch uses two workers, so explicitly set it to 1 as follows:

class DataChangeJob
  include SuckerPunch::Job
  workers 1

  def perform
    value = Random.rand
    SuckerPunch.logger.info("starting #{value}")
    sleep(5)
    SuckerPunch.logger.info("running data change #{value}")
  end
end
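
As a quick check (just a sketch, reusing the enqueue call from your question), you can fire several jobs and watch the log: with workers 1 the "starting" / "running data change" pairs should appear one after another instead of interleaved.

# Sketch: enqueue a few jobs with the same call style as the question.
# With `workers 1` the log lines should come out in non-overlapping
# pairs, i.e. the jobs run one at a time.
3.times { DataChangeJob.new.async.perform }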

You are making me curious: why do you need this constraint?

[UPDATE] While SuckerPunch does allow 1 worker, Celluloid does not, so you are back to using a mutex.

class DataChangeJob
  include SuckerPunch::Job

  # Class-level mutex, shared by all DataChangeJob instances.
  # (Creating it in initialize would replace the shared mutex on
  # every new job and defeat the locking.)
  @@mutex = Mutex.new

  def perform
    @@mutex.lock
    begin
      value = Random.rand
      SuckerPunch.logger.info("starting #{value}")
      sleep(5)
      SuckerPunch.logger.info("running data change #{value}")
    ensure
      @@mutex.unlock
    end
  end
end

This is just a quick write-up. Assuming all jobs are instances of DataChangeJob, we use a mutex at the class level. Calling lock will block until the mutex is free, and the ensure block makes sure we unlock the mutex no matter what happens.
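
If you prefer, Ruby's Mutex#synchronize wraps the same lock / ensure / unlock pattern for you. A minimal sketch of perform using it, assuming the same class-level @@mutex as above:

def perform
  # synchronize acquires the lock, runs the block, and always releases
  # the lock afterwards, so the explicit begin/ensure goes away.
  @@mutex.synchronize do
    value = Random.rand
    SuckerPunch.logger.info("starting #{value}")
    sleep(5)
    SuckerPunch.logger.info("running data change #{value}")
  end
end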

This will make sure only one DataChangeJob can run at a time, which somewhat defeats the advantage Celluloid gives you :)

Licensed under: CC-BY-SA with attribution