As the author of the original feature request, which I submitted more than a year ago (thanks for adding it to the new feedback system https://feedback.directadmin.com/b/...d-io-limits-per-account-package-with-cgroups/), I'm posting this here to put some skin in the game.
The story: years ago, when CentOS 6 was new and cgroups were introduced for the first time, I did some experiments and cgroups seemed to work OK... but I never developed those scripts into anything serious. Yesterday I decided to see what I had, and I found it outdated now that systemd has added cgroup features, but the documentation about cgroups is still poor... there is only partial information here and there on how to set the whole thing up.
Currently I'm laying the basics... You can apply limits on a user-by-user basis, or if you use systemd 237+ (CentOS 8, Debian, Ubuntu) they can be applied to all users at once (again, there is insufficient documentation on exactly how; see the sketch below).
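For the "all users at once" case, my understanding is that a drop-in under user-.slice.d/ is the mechanism on newer systemd versions that support template drop-ins. A minimal sketch (the file name 90-limits.conf and the values are just placeholders, pick your own defaults):
Code:
mkdir -p /etc/systemd/system/user-.slice.d
cat > /etc/systemd/system/user-.slice.d/90-limits.conf <<'EOF'
[Slice]
CPUQuota=100%
MemoryLimit=1G
TasksMax=100
EOF
systemctl daemon-reload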
It mostly works; I'm missing something on Tasks, but I hope the community will help with that.
@smtalk it would be nice to see this integrated so it can be set at hosting package creation...
Or if someone could create a script for it...
So here is what I did.
I ensured that cgroups are fully enabled (on some distros they are, on some not fully) by adding systemd.unified_cgroup_hierarchy=1 to the kernel command line in /etc/default/grub, then running update-grub && reboot.
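For reference, the grub change looks roughly like this (a sketch; the existing contents of GRUB_CMDLINE_LINUX differ per system, and on EL-based distros the regenerate command is grub2-mkconfig -o /boot/grub2/grub.cfg instead of update-grub):
Code:
# /etc/default/grub  (keep whatever flags are already on the line)
GRUB_CMDLINE_LINUX="... systemd.unified_cgroup_hierarchy=1"

# then regenerate the grub config and reboot
update-grub && reboot
Then I enabled resource accounting on user.slice: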
Code:
systemctl set-property user.slice TasksAccounting=1
systemctl set-property user.slice MemoryAccounting=1
systemctl set-property user.slice CPUAccounting=1
systemctl set-property user.slice BlockIOAccounting=1
systemctl daemon-reload
systemctl restart user.slice
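To double-check that the switches took effect, something like this can be used (just a sanity check):
Code:
systemctl show user.slice -p CPUAccounting -p MemoryAccounting -p TasksAccounting -p BlockIOAccounting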
I added a user test1, which got UID 1000, and created /etc/systemd/system/user-1000.slice with the following content:
Code:
[Slice]
CPUQuota=11%
MemoryLimit=1G
BlockIOReadBandwidth= / 5M
BlockIOWriteBandwidth= / 5M
TasksMax=5
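After creating the slice file I reload systemd and check what got applied. The property names below are how I recall systemctl reporting them, so treat this as a sketch; the limits seem to apply to new sessions, so the user may need to log out and back in:
Code:
systemctl daemon-reload
systemctl show user-1000.slice -p CPUQuotaPerSecUSec -p MemoryLimit -p TasksMax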
100% is one core, but I set it to 11% for the purpose of the tests.
CPU Test
Then I logged in as test1 and executed:
Code:
dd if=/dev/zero of=/dev/null
It varies a bit from 11%, but only by a tiny amount, so I can say it works:
Code:
top - 14:53:18 up 50 min, 2 users, load average: 0.22, 0.05, 0.02
Tasks: 91 total, 2 running, 89 sleeping, 0 stopped, 0 zombie
%Cpu(s): 3.3 us, 3.8 sy, 0.0 ni, 92.9 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 2992.5 total, 2430.5 free, 114.0 used, 448.0 buff/cache
MiB Swap: 512.0 total, 512.0 free, 0.0 used. 2710.4 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2239 test1 20 0 9872 720 656 R 11.3 0.0 0:02.58 dd if=/dev/zero of=/dev/null
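Apart from top, per-slice usage can also be watched live with systemd-cgtop (just an alternative way to confirm the quota is enforced):
Code:
systemd-cgtop user.slice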
Next was the memory limit; I used stress-ng here:
Code:
stress-ng --vm 2 --vm-bytes 1600M --timeout 60s --oomable
This asks the vm workers for 1600M, well over the 1G limit.
--oomable means the process will not be restarted if it is killed during the test.
It starts, the workers begin to use memory and... during the test one of the processes was killed, so the memory limit works as intended:
Code:
Oct 11 15:13:59 XXX kernel: [ 4274.971821] oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=/,mems_allowed=0,oom_memcg=/user.slice/user-1000.slice,task_memcg=/user.slice/user-1000.slice/session-6.scope,task=stress-ng-vm,pid=2893,uid=1000
Oct 11 15:13:59 XXX kernel: [ 4274.971832] Memory cgroup out of memory: Killed process 2893 (stress-ng-vm) total-vm:873488kB, anon-rss:523824kB, file-rss:1388kB, shmem-rss:24kB, UID:1000 pgtables:1676kB oom_score_adj:1000
Oct 11 15:13:59 XXX systemd[1]: session-6.scope: A process of this unit has been killed by the OOM killer.
Oct 11 15:13:59 XXX kernel: [ 4275.001236] oom_reaper: reaped process 2893 (stress-ng-vm), now anon-rss:0kB, file-rss:0kB, shmem-rss:24kB
Next was IO.
In BlockIOWriteBandwidth= / 5M, the / is the device or path; it can also be e.g. /home/, but for the purpose of the test it was /.
Code:
dd bs=1M count=256 if=/dev/zero of=test oflag=dsync
256+0 records in
256+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 55.8237 s, 4.8 MB/s
It works as it is supposed to.
I think I missed something on Tasks; the docs are not clear, but that limit does not seem to work per user (a workaround can be pam limits, see the sketch below).
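If going the pam_limits route for the process count, it could look roughly like this (a sketch; it assumes pam_limits.so is enabled in the PAM stack, which most distros do by default):
Code:
# /etc/security/limits.conf (or a snippet under /etc/security/limits.d/)
test1    hard    nproc    5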
Someone from the community can continue the work from here...
Happy Sunday!
P.S. I'm adding a link to our blog, where I published this in a less casual style, as protection against fake DMCA claims.