Question

I'm working on developing an online-judge-style system where perhaps 100 or so untrusted executables will run simultaneously, all evaluating the same input data.

I'd like each executable to be limited to an equal share of the CPU, memory, disk space, etc. of a pre-defined resource pool. For example, if the resource pool were set to 3/4 of the machine's CPU, 3 GB of memory, and 300 GB of disk, and 2 executables were running, each would get 3/8 of the CPU, 1.5 GB of memory, and 150 GB of disk. If another were to join in, the resources would be readjusted into three equal slices. This is to prevent a malicious or buggy executable from stealing resources from the others, as well as to give everyone equal resources.

Ideally, I'd also like the executables not to be constrained to a single language (e.g. let users develop in whatever they're comfortable with -- C, C++, Java, Python, etc.).

Using a whole VM or something like OpenVZ seems like overkill. Are there lighter-weight alternatives that essentially use a separate process for each executable while limiting its resources, disabling things like network access, process spawning, etc.? Part of the reason I'm looking for a lightweight solution is that there's quite a bit of input data -- I'd rather not copy it to each executable, but instead let them read from shared memory.


Solution

Perhaps it would be enough to create a different user ID for each process, and then limit them by means of "ulimit". From the man page of bash's ulimit builtin on Debian:

ulimit [-HSTabcdefilmnpqrstuvx [limit]]

Provides control over the resources available to the shell and to processes started by it, on systems that allow such control. The -H and -S options specify that the hard or soft limit is set for the given resource. A hard limit cannot be increased by a non-root user once it is set; a soft limit may be increased up to the value of the hard limit. If neither -H nor -S is specified, both the soft and hard limits are set. The value of limit can be a number in the unit specified for the resource or one of the special values hard, soft, or unlimited, which stand for the current hard limit, the current soft limit, and no limit, respectively. If limit is omitted, the current value of the soft limit of the resource is printed, unless the -H option is given. When more than one resource is specified, the limit name and unit are printed before the value. Other options are interpreted as follows:

          -a     All current limits are reported
          -b     The maximum socket buffer size
          -c     The maximum size of core files created
          -d     The maximum size of a process's data segment
          -e     The maximum scheduling priority ("nice")
          -f     The maximum size of files written by the shell and its children
          -i     The maximum number of pending signals
          -l     The maximum size that may be locked into memory
          -m     The **maximum resident set size** (many systems do not honor this limit)
          -n     The maximum number of open file descriptors (most systems do not allow this value to be set)
          -p     The pipe size in 512-byte blocks (this may not be set)
          -q     The maximum number of bytes in POSIX message queues
          -r     The maximum real-time scheduling priority
          -s     The maximum stack size
          -t     The maximum amount of cpu time in seconds
          -u     The maximum number of processes available to a single user
          -v     The maximum amount of virtual memory available to the shell and, on some systems, to its children
          -x     The maximum number of file locks
          -T     The **maximum number of threads**

I think that by limiting the number of threads to 1 you can get a fair distribution of CPU computation time.

Also limit the maximum resident set size (RSS).
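A minimal sketch of this approach, assuming one locked-down user per sandbox slot and a wrapper that applies the limits in a throwaway subshell before launching the submission. The user names (`judgeN`), limit values, and binary path are illustrative assumptions, not part of the answer:

```shell
#!/bin/bash
# Sketch of the per-user + ulimit approach. User names, limit values,
# and "./submission" are placeholders for illustration only.

# Print the useradd commands for N login-less users; on a real host
# you would run these as root (e.g. pipe the output to "sudo sh").
make_user_cmds() {
    for i in $(seq 1 "$1"); do
        echo "useradd --no-create-home --shell /usr/sbin/nologin judge$i"
    done
}

# Run a command with resource limits applied in a subshell, so the
# parent shell's own limits are left untouched.
run_limited() {
    (
        ulimit -t 10        # CPU time: 10 seconds
        ulimit -v 1048576   # virtual memory: 1 GB (units are KB)
        ulimit -f 102400    # files written: ~100 MB (1024-byte blocks)
        ulimit -c 0         # no core dumps
        "$@"
    )
}

make_user_cmds 100          # emits 100 useradd commands
# run_limited sudo -u judge1 ./submission < input.txt
```

Note that per-user limits such as `-u` (process count) are counted against the user the process runs as, which is one reason the per-submission user IDs matter: a low `-u` on a dedicated `judgeN` user prevents fork bombs without affecting anything else on the machine.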

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow