Question

We've just begun using beanstalkd in production, and are impressed with its performance.

I've noticed that it does something peculiar with its memory management. For instance, I create 100k jobs in PHP with 1111.013122,1212.121311 as the data in each job. The memory usage of the beanstalkd process climbs from about 300KB to 18MB. After a few minutes, it drops back to about 1.5MB, with the same number of jobs still queued.
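For context, this is roughly what the test script looks like - a minimal sketch assuming the pheanstalk client; the host, tube name and job count are placeholders:

    <?php
    // Minimal sketch, assuming the pheanstalk client library.
    require 'vendor/autoload.php';

    use Pheanstalk\Pheanstalk;

    $pheanstalk = Pheanstalk::create('127.0.0.1');

    for ($i = 0; $i < 100000; $i++) {
        // Every job carries the same small payload.
        $pheanstalk->useTube('test')->put('1111.013122,1212.121311');
    }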

Beanstalkd is not running in persistent mode either.

I'm on a Mac, by the way, though our servers run Ubuntu 12.04. I've only observed this on the Mac - I haven't tried it on our servers yet.

Is this due to memory compression on the Mac, something beanstalkd does itself, or is beanstalkd writing out to a file? Knowing this would help us plan the memory requirements of our queue servers.

Solution

beanstalkd is an in-memory queue; it uses external files only for error recovery (it can checkpoint jobs to a binlog so they can be recovered after a crash). So if the jobs haven't been deleted, they're still in memory.
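For reference, the binlog is only used when beanstalkd is started with a binlog directory. A sketch of a typical invocation, where the listen address, port, directory path and fsync interval are placeholders:

    # Persist jobs to a binlog directory so they survive a crash or restart.
    # -b sets the binlog directory, -f the fsync interval in milliseconds.
    beanstalkd -l 0.0.0.0 -p 11300 -b /var/lib/beanstalkd -f 1000

Without -b, everything lives purely in the process's memory, which matches the behaviour described in the question.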

How are you measuring memory use? Does the beanstalkd process's virtual size shrink too? Could the contents have been paged out to disk, so they no longer show up as resident (or under "physical memory used")?
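One quick way to compare resident and virtual size - a sketch assuming standard ps and pgrep, which work on both macOS and Linux:

    # Show resident set size (RSS) and virtual size (VSZ), both in KB,
    # for the running beanstalkd process.
    ps -o pid,rss,vsz,comm -p $(pgrep beanstalkd)

If RSS drops while VSZ stays roughly constant, the jobs are still allocated but have been paged out rather than freed.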
