Limiting overall memory usage for child processes

I have a long-running script that launches several child processes on a Linux machine with 8GB of memory. After a few hours, it consumes nearly 90% of memory and starts swapping to disk, which makes other services, like SSH, unresponsive.

What's the best way to programmatically limit the overall memory usage for my script and all child processes, without setting a specific memory limit for each process individually? Different child processes can use very different amounts of memory, so it would be very inefficient to set fixed thresholds.

Ideally, I'd like to simply specify "only use up to 75% of memory" and let the system divide that among the children as needed, to ensure I can still SSH into the machine at any given time. I first tried setting up a cron job to automatically renice sshd to the highest priority, but that's had no effect, and I'm routinely unable to SSH in, or the SSH prompt is unusably slow.


Run the processes as a dedicated user (say, serviceuser) and cap that user's memory via cgroups:

/etc/cgconfig.conf:

group limitedram {
    memory {
        # 6 GiB, i.e. 75% of the machine's 8 GiB
        memory.limit_in_bytes = 6442450944;
    }
}

and /etc/cgrules.conf:

serviceuser   memory   limitedram/

That limits the combined memory usage of all processes owned by serviceuser to 6 GiB, which is the "75% of memory" you're after, without per-process thresholds.
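Neither file takes effect until the libcgroup tooling loads them. A minimal sketch of the activation steps, assuming the libcgroup utilities are installed (service names vary by distribution, and the script path is just a placeholder):

# Create the hierarchy defined in /etc/cgconfig.conf
# (wrapped as "service cgconfig start" on RHEL/CentOS-style systems)
cgconfigparser -l /etc/cgconfig.conf

# Start the rules daemon so every process owned by serviceuser is
# moved into limitedram/ automatically, per /etc/cgrules.conf
# (wrapped as "service cgred start" on some distributions)
cgrulesengd

# Verify the limit is in place
cgget -r memory.limit_in_bytes limitedram

# Or launch the script in the group explicitly, without relying on
# the rules daemon (./myscript.sh is a placeholder)
cgexec -g memory:limitedram su -c ./myscript.sh serviceuser

One caveat: memory.limit_in_bytes caps RAM only, so the group can still push the machine into heavy swapping. To cap memory plus swap as well, additionally set memory.memsw.limit_in_bytes in the same cgconfig block; this requires swap accounting to be enabled in the kernel (e.g. the swapaccount=1 boot parameter on some distributions).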