systemd, per-user CPU and/or memory limits
There is a similar question, Cgroups, limit memory per user, but its solution doesn't work on "modern" systems, where the cgroup hierarchy is managed by systemd.
The straightforward solution of templating user-UID.slice won't work, because it is not supported; see https://github.com/systemd/systemd/issues/2556.
Is there any way to achieve the desired effect, that is, to manage CPU and/or memory resources on a per-user basis?
UPD: I'll keep my solution for the sake of history, but systemctl set-property should be called at login time, using pam_exec; see https://github.com/hashbang/shell-etc/pull/183. With this approach there is no time window between the user's login and the setting of the limits.
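For illustration, here is a minimal sketch of that approach, assuming a Debian-style PAM layout; the helper script name, its path and the exact properties are placeholders of mine, not taken from the linked pull request:
# /etc/pam.d/common-session  (the PAM file to edit is distribution-specific)
session optional pam_exec.so /usr/local/bin/set-user-limits

# /usr/local/bin/set-user-limits  (hypothetical helper, must be executable)
#!/bin/bash
# pam_exec exports PAM_USER and PAM_TYPE to the program it invokes.
[ "$PAM_TYPE" = "open_session" ] || exit 0
uid=$(id -u "$PAM_USER") || exit 0
systemctl set-property "user-$uid.slice" CPUAccounting=true MemoryAccounting=true
exit 0
The pam_exec line presumably has to come after pam_systemd.so in the session stack, so that the user's slice already exists when the helper runs.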
My solution: the org.freedesktop.login1.Manager interface of the /org/freedesktop/login1 object emits a UserNew(u uid, o object_path) signal. I've written a simple daemon which listens for this signal and, every time it is emitted, sets CPUAccounting=true for the just-logged-in user's slice.
Starting with systemd v239, you can use user-.slice drop-ins (see https://github.com/systemd/systemd/commit/5396624506e155c4bc10c0ee65b939600860ab67):
# mkdir -p /etc/systemd/system/user-.slice.d
# cat > /etc/systemd/system/user-.slice.d/50-memory.conf << EOF
[Slice]
MemoryMax=1G
EOF
# systemctl daemon-reload
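To check that the drop-in actually lands on a logged-in user's slice (UID 1000 here is only an example), you can query the slice properties; MemoryMax is reported in bytes:
# systemctl show -p MemoryMax user-1000.slice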
Old solution
Here is a very simple script which does the job:
#!/bin/bash
# Listens for login events on the system bus and enables CPU accounting
# for the slice of every user who logs in.
STATE=1 # 1 -- waiting for signal; 2 -- reading UID
dbus-monitor --system "interface=org.freedesktop.login1.Manager,member=UserNew" |
while read -r line
do
    case $STATE in
        1) # wait for the UserNew signal header
           [[ $line =~ member=UserNew ]] && STATE=2 ;;
        2) # the next line carries the UID argument of the signal
           read -r dbus_type ID <<< "$line"
           systemctl set-property "user-$ID.slice" CPUAccounting=true
           STATE=1
           ;;
    esac
done
It can be easily extended to support per-user memory limits.
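For example, a possible sketch of such an extension (the 1G figure is arbitrary, and MemoryMax is the cgroup-v2 property; on cgroup-v1 systems MemoryLimit is the analogous setting) would turn the set-property call in the script into:
systemctl set-property "user-$ID.slice" CPUAccounting=true \
    MemoryAccounting=true MemoryMax=1G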
I tested it on a VM with 2 CPUs and 2 users. The first user ran dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null and the second one ran only a single instance of dd. Without the script running, each instance of dd used around 70% of a CPU.
Then I started the script, relogged both users, and started the dd commands again. This time the two dd processes of the first user took only 50% of a CPU each, while the second user's process took 100% of a CPU. systemd-cgtop also showed that /user.slice/user-UID1.slice and /user.slice/user-UID2.slice each took 100% of CPU time, but the first slice had 6 tasks and the second one only 5.
When I killed the second user's dd task, the first user started consuming 200% of CPU time. So we get fair resource allocation without artificial restrictions like "each user may use only one core".
The issue you mentioned is still open, but this works for me.
sudo systemctl edit --force user-1234.slice
Then type and save this:
[Slice]
CPUQuota=10%
I'm not sure why it works.
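One way to verify that the quota took effect once that user's slice is active (1234 being the example UID from above) is to query the slice; a 10% quota corresponds to 100ms of CPU time per second:
systemctl show -p CPUQuotaPerSecUSec user-1234.slice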