ulimit not limiting memory usage
When writing programs, there are times when a runaway program slurps half of my RAM (generally due to a practically infinite loop while building large data structures), making the system so slow that I can't even kill the offending program. So I want to use ulimit to kill my program automatically when it uses an abnormal amount of memory:
$ ulimit -a
core file size (blocks, -c) 1000
data seg size (kbytes, -d) 10000
scheduling priority (-e) 0
file size (blocks, -f) 1000
pending signals (-i) 6985
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) 10000
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 6985
virtual memory (kbytes, -v) 100000
file locks (-x) unlimited
$ ./run_program
But why is my program still using more RAM than the given limit (yes, I'm starting the program in the same bash shell)?
Have I misunderstood something about ulimit?
23 Answers
Your example should work like you think (program gets killed after consuming too much RAM). I just did a small test on my shell server:
First I restricted my limits to be REALLY low:
ulimit -m 10
ulimit -v 10
That led to just about everything getting killed. ls, date and other small commands were shot before they even began.
Which Linux distribution do you use? Does your program use only a single process, or does it spawn tons of child processes? In the latter case ulimit might not always be effective, since the limit is per-process and each child gets its own copy of it.
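To illustrate that last point (a sketch, assuming python3 is available to play the memory hog; the sizes are arbitrary): setrlimit limits are per-process, so every child inherits its own copy of the cap, and the total across children can blow right past it:

```shell
# Cap each process at ~400 MB of address space, then have three children
# allocate ~200 MB each: every child stays under its own per-process cap,
# so all of them succeed, even though the total (~600 MB) exceeds the limit.
(
  ulimit -v 400000
  for i in 1 2 3; do
    python3 -c 'x = bytearray(200 * 1024**2); print("child ok")' &
  done
  wait
)
```

If you need an aggregate cap across a whole process tree, that is what cgroups are for; ulimit can't express it.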
ulimit -m no longer works. Use ulimit -v instead.
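A quick way to see -v actually being enforced (a sketch, assuming python3 is available as the allocator; the numbers are arbitrary):

```shell
# Cap the address space at ~1 GB inside a subshell, then try to allocate
# ~2 GB: malloc fails, Python raises MemoryError, and the subshell exits
# nonzero. The parent shell's own limits are untouched.
(
  ulimit -v 1000000                        # in kbytes, so ~1 GB
  python3 -c 'x = bytearray(2 * 1024**3)'  # try to grab ~2 GB
)
echo "exit status: $?"                     # nonzero: the allocation was refused
```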
The reason is that ulimit calls setrlimit, and man setrlimit says:
RLIMIT_RSS Specifies the limit (in bytes) of the process's resident set (the number of virtual pages resident in RAM). This limit has effect only in Linux 2.4.x, x < 30, and there affects only calls to madvise(2) specifying MADV_WILLNEED.
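You can confirm the quote empirically (again a sketch, with python3 assumed as the allocator): set an absurdly low -m and watch a large allocation sail through anyway:

```shell
# RLIMIT_RSS (-m) is silently ignored on modern kernels: even with a
# "1 MB" resident-set limit, a ~200 MB allocation succeeds.
(
  ulimit -m 1024
  python3 -c 'x = bytearray(200 * 1024**2); print("rss limit ignored")'
)
```

Note that the setrlimit call itself succeeds; the kernel simply never enforces the value.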
This only applies to the current bash session (unless you put it into your .bash_profile), and it won't affect already-running processes.
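That scoping can work in your favor (a small sketch): a limit set inside a subshell dies with the subshell, so you can cap one command without restricting the rest of your session:

```shell
# The limit set inside ( ... ) applies only to the subshell and its
# children; the parent shell keeps whatever it had before.
( ulimit -v 1000000; ulimit -v )   # prints 1000000 (if your current limit allows it)
ulimit -v                          # parent's value, unchanged
```

So something like `( ulimit -v 1000000; ./run_program )` caps run_program without touching your interactive shell.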
What I find strange is that the:
max memory size (kbytes, -m) unlimited
is not present in /etc/security/limits.conf, even though it only limits memory consumption per process, not overall for one user account. Instead of adding cgroups, they should have just modified the existing Unix commands to accommodate those new features.