This problem has reached the point where I'm no longer willing to let it slide, so I want to find a solution ASAP. What happens, precisely, is this: occasionally, programs have memory leaks. Usually these are harmless, since they leak slowly and/or never claim much RAM. In a few cases, however, a process fills gigabytes of RAM in a matter of seconds, until every last bit is taken. When this happens the system becomes unusable (nothing can be clicked, the mouse pointer freezes, etc.) and the computer needs a hard restart. This is both a major problem and a security risk, because it can cause loss of data or leave the system in a dangerous state (imagine it happening in the middle of a kernel update).
I was hoping that by now the Linux kernel would have some minimal protection against this sort of thing, enforcing a pocket of the available RAM that user applications simply cannot claim, in order to save the system when memory is filled abusively. Since it apparently doesn't, I need to add one manually. The problem is that I don't know exactly how, and I'm hoping someone can clarify this for me.
What is the best way to limit memory usage for normal processes, so that a memory leak cannot bring the system down by claiming all the RAM? I'm thinking of something that restricts non-root processes to only part of the total memory. For example, I have 9 GB of RAM. If it would prevent these crashes, I'm fine with reserving 1 GB exclusively for root and system processes, while the normal programs I run may only touch the other 8 GB; see the sketch below for what I imagine.
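To make the idea concrete, this is roughly the effect I'm after, as far as I understand ulimit (the flag and units here are my assumption, not something I've verified):

    # Cap the virtual address space of this shell and everything
    # launched from it at 8 GB (ulimit -v takes a value in KB):
    ulimit -v 8388608

With that in place, anything started from the shell that tries to allocate beyond 8 GB should fail with an out-of-memory error instead of dragging the whole system down.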
The right path seems to be the ulimit command and the /etc/security/limits.conf file. But ulimit has a lot of parameters and addresses several kinds of memory (none of which are clearly explained), and I'm not sure which one to set for this scenario. Essentially, I'm after the ulimit settings that make me give up as little RAM as possible in exchange for a guaranteed space that memory leaks cannot touch, keeping the system safe. I'd also prefer percentages over fixed values, so I don't have to reconfigure everything whenever I gain or lose RAM; for example "90%" instead of "8000 MB".
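If limits.conf is indeed the way to go, I assume the entries would look something like the following; "as" (address space, in KB) is my guess at the right item, and I haven't tested any of this:

    # /etc/security/limits.conf: untested sketch of what I have in mind
    # (values are in KB; 8388608 KB = 8 GB)
    *       hard    as      8388608
    # The wildcard reportedly does not apply to root, but to be explicit:
    root    hard    as      unlimited

One thing I'd like clarified is that, as far as I understand, such limits are per-process, so several leaking processes together could presumably still exhaust the RAM.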
One clarification: I believe I've heard people say in the past that if a low-priority process has a limitless memory leak, it shouldn't actually take down the system, because the kernel knows how to handle it (the OOM killer, I assume), so maybe something else is happening here. I've hit this problem numerous times and can confirm that it's false! If a badly written program fills up all the memory within a few seconds (which I get to watch in KSysGuard before the system dies), the system becomes unusable and the user has to unplug the computer and start it up again. Also, I do have a swap partition, and a large one at that (8 GB). Even so, such leaks bring the system down.
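If anyone wants to reproduce the kind of leak I mean without risking a freeze, something like this simulates it (the Perl one-liner is a hypothetical stand-in for a leaking program, not one of the actual offenders; set the limit first, or it really will eat all the RAM):

    # Confine this test shell to 1 GB of address space:
    ulimit -v 1048576

    # Simulated leak: grow a string by 1 MB per iteration, forever.
    # With the limit in place, Perl dies with "Out of memory!" almost
    # immediately; without it, the whole machine goes down.
    perl -e 'while (1) { $leak .= "x" x (1024 * 1024) }'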