Question

I start a new bash shell, and execute:

ulimit -m 102400
ulimit -a
"
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 20
file size               (blocks, -f) unlimited
pending signals                 (-i) 16382
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) 102400
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) unlimited
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
"

Then I compile a huge project. The linking step needs a lot of memory, more than 2 GB, and as a result the ld process ends up using more than 2 GB of resident memory.

Is anything wrong here? How should I use ulimit, or is there another program I can use to limit resident memory?

The reason for limiting resident memory is that the computer freezes when a single process uses up almost all of the memory.


Solution

According to the man page for setrlimit:

RLIMIT_RSS

Specifies the limit (in pages) of the process's resident set (the number of virtual pages resident in RAM). This limit only has effect in Linux 2.4.x, x < 30, and there only affects calls to madvise(2) specifying MADV_WILLNEED.

You probably want to set a virtual memory limit instead, via ulimit -v.
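For example, a minimal sketch (the 3 GB value and the make command are illustrative; pick a limit a bit above the linker's expected peak, and substitute whatever drives your build):

ulimit -v 3145728   # cap virtual memory at 3145728 kB (~3 GB) for this shell and its children
make                # run the build; if ld exceeds the limit its allocations fail instead of exhausting RAM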

OTHER TIPS

You can also restrict resident memory using cgroups. See Resident Set Size (RSS) limit has no effect.
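A minimal sketch of that approach, assuming a systemd-based system (the 2G limit and the make command are illustrative), runs the build in a transient scope with a memory cap; the commented alternative does the same thing by writing to the cgroup v1 memory controller directly:

systemd-run --scope -p MemoryMax=2G make    # cap the build's memory at 2 GB (cgroup v2)

# Equivalent with the cgroup v1 memory controller (paths may differ on your system):
sudo mkdir /sys/fs/cgroup/memory/build
echo 2G | sudo tee /sys/fs/cgroup/memory/build/memory.limit_in_bytes
echo $$ | sudo tee /sys/fs/cgroup/memory/build/cgroup.procs    # move the current shell into the group
make

Unlike ulimit -v, the memory cgroup accounts for actual resident memory plus page cache, and when the limit is exceeded it reclaims, swaps, or OOM-kills only the processes in that group rather than letting one process take down the whole machine.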

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow