Question

I'm looking to write a custom Mesos executor that will allow for doing things like requesting 1 CPU (1024 shares) for my rails application, and then "plugging in" nginx in front. In this process, I'd actually like to start my rails and nginx containers using the same shared 1024 CPU shares.

I understand that cgroups are hierarchical, and I should be able to do something like

  Base(1024 shares)
  /              \
nginx(no limit)   rails(no limit)

or

  rails(1024 shares)
         |
    nginx(no limit)
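A sketch of what the first hierarchy would look like in raw cgroupfs, assuming cgroup v1 and the usual `cpu` controller mount point (the helper name is arbitrary, and doing this on a real host requires root):

```shell
# setup_shared_cgroup ROOT: create a base cgroup holding 1024 CPU shares,
# with nginx and rails children that compete freely within that budget.
# ROOT would be e.g. /sys/fs/cgroup/cpu on a real cgroup v1 host.
setup_shared_cgroup() {
  root="$1"
  mkdir -p "$root/base/nginx" "$root/base/rails"
  echo 1024 > "$root/base/cpu.shares"   # the shared 1 CPU budget
}

# On a real host (needs root):
#   setup_shared_cgroup /sys/fs/cgroup/cpu
```

Because the children carry no explicit limit of their own, they divide the parent's 1024 shares between them according to demand.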

So that I still only use up 1 CPU, but my containers share resources and are linked. Looking through the cgroups and LXC docs, I couldn't find anything obvious to pass to Docker's -lxc-conf=[] option that would tell nginx, as it starts, to come up under the pre-existing cgroup created for the previously started rails container.

Another thing I need to consider is that while I want rails and nginx to share the 1024 cpu shares, I don't want either to know about the other or have access to each other's data unless I have deliberately shared a /public volume from rails or something.

Any advice here would be appreciated!


Solution

Docker doesn't support this (yet).

Here is a possible workaround. Warning: this is very hackish. I don't really recommend using it in production, but it gives you an idea of what is involved.

We will use a Mesos hook (or a tool like Docker Spotter) to trigger automatic actions when containers are started.

We will also use a separate, manually created cgroup with the appropriate CPU share allocation.

When the tool detects that one of the two containers was just started, it moves all its processes to this special cgroup. Since all child processes are created in the control group of their parent, all future processes will also be in that cgroup.
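The move step itself can be sketched as below, assuming cgroup v1, where writing a PID to a cgroup's `tasks` file reassigns that process. The helper name is made up; on a real host `src` and `dst` would be cgroup directories such as `/sys/fs/cgroup/cpu/docker/<container-id>` and the manually created shared cgroup:

```shell
# move_tasks SRC DST: move every PID listed in SRC's tasks file into DST.
# In cgroup v1, writing a PID into DST/tasks reassigns that process to DST.
move_tasks() {
  src="$1"; dst="$2"
  while read -r pid; do
    # A PID may have exited between the read and the write; ignore failures.
    echo "$pid" >> "$dst/tasks" 2>/dev/null || true
  done < "$src/tasks"
}
```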

Note, however, that there is a potential race condition: if new processes are created in those containers while you move the existing processes from the original cgroup to the "static" one, the new processes might not be moved automatically. You would probably have to rescan the tasks file multiple times to make sure you moved everything.
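One way to mitigate that race is to rescan until the source cgroup reports no tasks, capped at a fixed number of passes. This is a sketch under the same cgroup v1 assumptions (the helper name and the pass count are arbitrary):

```shell
# drain_cgroup SRC DST: repeatedly move PIDs from SRC's tasks file into DST
# until SRC is empty or we give up. On real cgroupfs, a moved PID disappears
# from SRC's tasks file, so the loop converges once the container stops
# spawning new processes.
drain_cgroup() {
  src="$1"; dst="$2"
  for pass in 1 2 3 4 5; do
    [ -s "$src/tasks" ] || return 0
    while read -r pid; do
      echo "$pid" >> "$dst/tasks" 2>/dev/null || true
    done < "$src/tasks"
  done
  return 1   # still saw tasks after the final pass
}
```

Even this is best-effort: a process fork-bombing faster than the scan can always outrun it, which is part of why this belongs in Docker itself rather than in an external tool.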

A better implementation would be in Docker itself; maybe by allowing the creation of "container containers" (containers which don't run any processes, but exist only to group other containers) and then letting you put one container under another. Or, alternatively, a syntax similar to --volumes-from, but for resources. That would let you start a container while instructing Docker to create its cgroups under an existing container's.

Licensed under: CC-BY-SA with attribution