$#!tstemd cgroups CPU, RAM, IO, (TASKs?) made easy

DanielP

As the author of the original feature request, which I made more than a year ago (thanks for adding it to the new feedback system: https://feedback.directadmin.com/b/...d-io-limits-per-account-package-with-cgroups/), I'm posting this here as my skin in the game.

The story: Years ago, when CentOS 6 was new and cgroups were introduced for the first time, I did some experiments and cgroups seemed to work OK... but I never developed those scripts into anything serious. Yesterday I decided to see what I had and found it outdated, since systemd has since added cgroup features, but the documentation about cgroups still sucks... there is only partial info here and there on how to set up the entire thing...

Currently I'm just laying the basics... You can apply limits on a user-by-user basis, or if you use systemd 237+ (CentOS 8, Debian, Ubuntu) they can be applied to all users at once (again, there are insufficient docs on exactly how).

It mostly works. I'm missing something on Tasks, but I hope the community will help with that.

@smtalk it would be nice to see this integrated so it can be set at hosting package creation...

Or if someone can create a script for it...

So here is what I did.

I ensured that cgroups are fully enabled (on some distros they are, on some not fully) by adding systemd.unified_cgroup_hierarchy=1 to the kernel command line in /etc/default/grub, then running update-grub && reboot.
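For reference, a minimal sketch of that change on a Debian/Ubuntu-style setup (the variable name and the grub regeneration command differ per distro; EL-based systems use grub2-mkconfig instead of update-grub):

Code:
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="... systemd.unified_cgroup_hierarchy=1"

# apply and reboot
update-grub && reboot

After the reboot, accounting is turned on for the whole user.slice: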

Code:
systemctl set-property user.slice TasksAccounting=1
systemctl set-property user.slice MemoryAccounting=1
systemctl set-property user.slice CPUAccounting=1
systemctl set-property user.slice BlockIOAccounting=1
systemctl daemon-reload
systemctl restart user.slice

I added a user test1, which got UID 1000, and created /etc/systemd/system/user-1000.slice:

Code:
[Slice]
CPUQuota=11%
MemoryLimit=1G
BlockIOReadBandwidth= / 5M
BlockIOWriteBandwidth= / 5M
TasksMax=5

100% is one core, but I set it to 11% for the purpose of the tests.
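To double-check which values systemd actually picked up, you can dump the slice properties after a daemon-reload (a quick sketch; the exact property names differ slightly between cgroup v1 and v2):

Code:
systemctl daemon-reload
systemctl show user-1000.slice | grep -E 'CPUQuota|MemoryLimit|MemoryMax|TasksMax|BlockIO'
systemd-cgtop user.slice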

CPU Test
Then I logged in as test1 and executed:
Code:
dd if=/dev/zero of=/dev/null

It varies a bit around 11%, but only by a very tiny amount, so I can say it works.

Code:
top - 14:53:18 up 50 min,  2 users,  load average: 0.22, 0.05, 0.02
Tasks:  91 total,   2 running,  89 sleeping,   0 stopped,   0 zombie
%Cpu(s):  3.3 us,  3.8 sy,  0.0 ni, 92.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :   2992.5 total,   2430.5 free,    114.0 used,    448.0 buff/cache
MiB Swap:    512.0 total,    512.0 free,      0.0 used.   2710.4 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                     
   2239 test1     20   0    9872    720    656 R  11.3   0.0   0:02.58 dd if=/dev/zero of=/dev/null

Next was the memory limit. I used stress-ng here:

Code:
stress-ng --vm 2 --vm-bytes 1600M --timeout 60s  --oomable

This means the two workers will use 1600M in total against the 1G limit.

--oomable means the process will not be restarted if it is killed during the test.

It starts, the workers begin using memory, and... during the test one of the processes was killed, so the memory limit works as intended.

Code:
Oct 11 15:13:59 XXX kernel: [ 4274.971821] oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=/,mems_allowed=0,oom_memcg=/user.slice/user-1000.slice,task_memcg=/user.slice/user-1000.slice/session-6.scope,task=stress-ng-vm,pid=2893,uid=1000
Oct 11 15:13:59 XXX kernel: [ 4274.971832] Memory cgroup out of memory: Killed process 2893 (stress-ng-vm) total-vm:873488kB, anon-rss:523824kB, file-rss:1388kB, shmem-rss:24kB, UID:1000 pgtables:1676kB oom_score_adj:1000
Oct 11 15:13:59 XXX systemd[1]: session-6.scope: A process of this unit has been killed by the OOM killer.
Oct 11 15:13:59 XXX kernel: [ 4275.001236] oom_reaper: reaped process 2893 (stress-ng-vm), now anon-rss:0kB, file-rss:0kB, shmem-rss:24kB

Next was IO.

In BlockIOWriteBandwidth= / 5M, the / is the device or path; it could be /home/, but for the purpose of the test it was /.

Code:
dd bs=1M count=256 if=/dev/zero of=test oflag=dsync
256+0 records in
256+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 55.8237 s, 4.8 MB/s

It works as it's supposed to.

I think I missed something on Tasks; the docs are not clear, but it does not seem to work per user (a workaround could be pam_limits).
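For reference, a rough sketch of that pam_limits workaround: a plain per-user process cap in limits.conf instead of cgroups (values here are only for illustration):

Code:
# /etc/security/limits.conf
test1  soft  nproc  5
test1  hard  nproc  5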

Someone can continue the community work from here...

Happy Sunday
P.S. I'm adding a link to our blog where I published this in a less casual style, as protection from fake DMCA claims.
 
You could implement it manually by creating slices (user-1000.slice, user-10001.slice...) for every user even now; it is not that hard. But it would be nice for it to be available at package creation, for example, so every package can be set with different limits.
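If anyone wants to try the manual route, here is a rough sketch, assuming the account names can be taken from the directories under /usr/local/directadmin/data/users/ (the limits are just example values):

Code:
#!/bin/bash
# write a slice with default limits for every existing DirectAdmin user
for dir in /usr/local/directadmin/data/users/*/; do
    name=$(basename "$dir")
    uid=$(id -u "$name" 2>/dev/null) || continue
    cat > "/etc/systemd/system/user-${uid}.slice" <<EOF
[Slice]
CPUQuota=100%
MemoryLimit=1G
TasksMax=100
EOF
done
systemctl daemon-reload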
 
Yes, this would be lovely to add natively. It's not super hard for DirectAdmin to add this, i.e. with a check: if enabled, show the cgroups options; if not, don't show them.
 
So I've been thinking about this quite a lot. It seems fairly simple to offer user management for account-wide virtual quotas, and it would be similar to what CloudLinux offers. CloudLinux, though, simply modifies the kernel, while here we would be using super-user processes. It may not be a like-for-like implementation, but it could well be a like-for-like feature set.

Let me know if there are any special requests.
 
Yeah, a like-for-like feature set would actually be all that's needed. No real special feature request from me, apart from having a check to make sure cgroups is supported, applying it to existing users automatically (i.e. a button: "rewrite user cgroups"), and being able to set individual or all quotas to unlimited.
 
Indeed. It could potentially save people some money, and could even be viable as a standalone plugin, but I'll stick to DirectAdmin for now.
 
So I've been thinking about this quite a lot. It seems fairly simple to offer user management for account-wide virtual quotas, and it would be similar to what CloudLinux offers. CloudLinux, though, simply modifies the kernel, while here we would be using super-user processes. It may not be a like-for-like implementation, but it could well be a like-for-like feature set.

Let me know if there are any special requests.

Yes, it is similar to part of CloudLinux, especially if you use DA Bubblewrap and LiteSpeed / OLS Bubblewrap for the PHP and CGI processes (which DA enables too), but you can use all of these goodies with Debian and Ubuntu as well... Enough of these old Enterprise Linux kernels.

It would be good for DirectAdmin to resell this, as it will add direct value to their product without a lot of effort on their side and will increase overall stability for customers that enable it: not only hosting providers that want to save money, but also small web agencies that use a VPS to host their clients and do not need full hosting-provider features, or even customers with a couple of sites...

The perfect place for it would be the Create New Package menu, with creation of a new user under a given package creating the user-xxx.slice.

P.S. It will also force CloudLinux not to go crazy with ideas of increasing their prices (a check and balance on their product).
 
That:
Yes, it is similar to part of CloudLinux, especially if you use DA Bubblewrap and LiteSpeed / OLS Bubblewrap for the PHP and CGI processes (which DA enables too), but you can use all of these goodies with Debian and Ubuntu as well... Enough of these old Enterprise Linux kernels.

It would be good for DirectAdmin to resell this, as it will add direct value to their product without a lot of effort on their side and will increase overall stability for customers that enable it: not only hosting providers that want to save money, but also small web agencies that use a VPS to host their clients and do not need full hosting-provider features, or even customers with a couple of sites...

The perfect place for it would be the Create New Package menu, with creation of a new user under a given package creating the user-xxx.slice.

P.S. It will also force CloudLinux not to go crazy with ideas of increasing their prices (a check and balance on their product).
Yes, I am thinking along these lines. I see no reason not to keep it simple and just do the bare bones of keeping users within a set of limits for memory and CPU. I think the notion here really is KISS (keep it simple, stupid) and work up from there. If it is what it is supposed to be, it should give the project momentum.
 
@youds, are you just turning into a plugin-making machine... What prompted all the interest in this work?
 
To be fair, I have only 3 DirectAdmin projects on the go: this one, Netdata, and rclone. Basically, I am preparing to write a feature set which would include DirectAdmin, hence I'm getting used to the system, etc. Plus I need some open-source love for DigitalMods, and furthermore I may need these projects for my own hosting.

Basically I won't be doing very much other than programming and developing until about 2022-2025, so these are nice things to keep in the pipeline.
 
This is definitely interesting.

I had previously looked at using cgroup templates for something like this, but I couldn't get it to stabilize, and then that approach appeared to have been phased out starting with CentOS 7, so I never really revisited it.

I looked at user slices in passing with CentOS 7, never gave it a huge amount of research, didn't have a lot of good results, and eventually abandoned the project. But I don't remember doing any of your initialization steps (specifically, adding anything to the kernel boot options in grub), so that may have been what I was missing.

Will definitely have to look into this a bit more when I get time. This is good research.

For those wanting to incorporate this with DirectAdmin: it should be relatively simple to add something to /usr/local/directadmin/scripts/custom/user_create_post.sh to create a rudimentary /etc/systemd/system/user-%uid%.slice with default limits.
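A minimal sketch of such a hook, assuming DirectAdmin exports the new account name in the username environment variable (check how your version passes it); the limits are just placeholders:

Code:
#!/bin/bash
# /usr/local/directadmin/scripts/custom/user_create_post.sh (sketch)
uid=$(id -u "$username") || exit 0

cat > "/etc/systemd/system/user-${uid}.slice" <<EOF
[Slice]
CPUQuota=100%
MemoryLimit=1G
TasksMax=100
EOF

systemctl daemon-reload
exit 0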
 
For those wanting to incorporate this with DirectAdmin: it should be relatively simple to add something to /usr/local/directadmin/scripts/custom/user_create_post.sh to create a rudimentary /etc/systemd/system/user-%uid%.slice with default limits.
Yes, definitely a good place to start, though having it at the package level within DA might be a bit nicer: if you upgrade/downgrade a user's package, it gets updated automatically, and when you customize a single user you could give them custom limits from the GUI.
 
Well, I can certainly understand that.

But... if you can learn to use the command line and some basic scripting, you can make a script that accomplishes an upgrade/downgrade of limits and run it from the command line. This will greatly reduce the time spent waiting for the DirectAdmin developers to incorporate it (if they incorporate it at all, which I'm not sure is within the scope they want to provide).

This is true of any control panel, not just DirectAdmin: if you take the initiative yourself, you can write your own stuff, have it working faster, and have it work the way YOU want it to work, rather than waiting for a control panel development team to write it with a broader reach.
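As a rough sketch of that kind of command-line script, something like this should do for changing limits on an existing account (systemctl set-property persists the change as a drop-in; MemoryMax is the cgroup v2 name for the MemoryLimit setting used earlier in the thread; the default values are illustrative):

Code:
#!/bin/bash
# usage: ./set-limits.sh <username> <cpu-quota> <memory-max>
user="$1"; cpu="${2:-100%}"; mem="${3:-1G}"
uid=$(id -u "$user") || exit 1
systemctl set-property "user-${uid}.slice" CPUQuota="$cpu" MemoryMax="$mem"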
 
Well, I can certainly understand that.

But... if you can learn to use the command line and some basic scripting, you can make a script that accomplishes an upgrade/downgrade of limits and run it from the command line. This will greatly reduce the time spent waiting for the DirectAdmin developers to incorporate it (if they incorporate it at all, which I'm not sure is within the scope they want to provide).

This is true of any control panel, not just DirectAdmin: if you take the initiative yourself, you can write your own stuff, have it working faster, and have it work the way YOU want it to work, rather than waiting for a control panel development team to write it with a broader reach.

The idea of doing it at package creation is that different plans can have different limits, but doing it with a post-creation script is a good idea too, and it is the easiest way to implement this.
 
BTW, years ago I'm sure I read somewhere (maybe on their blog) about CloudLinux LVE switching to cgroups.

And I can prove it is cgroups: if you run systemd-cgtop on a CloudLinux server, it will show the LVEs :)


Code:
Path                                     Tasks   %CPU   Memory  Input/s Output/s

/                                          382  426.1    25.8G        -        -
/lve1005                                    14   56.3   492.6M        -        -
/lve1003                                     8   56.0   428.2M        -        -
/lve1049                                    10   52.7   696.6M        -        -
/lve1057                                     8   36.9    98.9M        -        -
/lve1021                                     3   30.7   155.8M        -        -
/lve1023                                     3   29.6   321.1M        -        -
/lve1061                                     3   22.5   185.2M        -        -
/lve1052                                     2   20.3   139.3M        -        -
/lve1029                                     4   15.6   157.7M        -        -
/lve1009                                     2    8.7   158.7M        -        -
/lve1050                                     2    7.5   242.2M        -        -

Edit: found it, a guest blog post by Igor Seletskiy from 2016 where he explains how they use cgroups (so we are on the right track here): https://blog.phusion.nl/2016/02/03/...tainer-technology-to-docker-and-virtuozzolxc/
 
I've never been a huge CloudLinux fan myself. I know that probably puts me in the minority (a minority of one?), but I just didn't see where the costs were justified.

When I tried CloudLinux, I wanted to set the default limits really, really low. The reason being, cheap accounts shouldn't have access to the resources of a higher-paying account. But what I found was that unless you set the limits high enough, the low limits dragged down the whole system. And once you raised the floor of the limits to the point where there was no performance drag, those limits were good enough for probably 95% of your customers. So how do you incentivize customers to upgrade to higher limits?

This is sort of what I ran into when I tried using cgroup templates (on CentOS 6... so it was a few years ago): the limits just could not be set really, really low. At least it didn't cost anything in terms of licensing, but I eventually abandoned the project. I would have some fear that user slices would end up the same way, but again, there's no licensing cost, so it doesn't hurt to try. It's one thing to set limits when the floor is good enough for 95% of your customers and there's no additional licensing cost required; it's another when you have to pay for a license and can't justify limit upgrades.

The other aspect that CloudLinux provides, CageFS, is actually a worthwhile project. But again, as you can see in the Bubblewrap chroot thread, php-fpm provides a facility for chrooting a pool's environment; get that solved, or use some other PHP-webserver connector with Bubblewrap and/or chroot capabilities, and the need for CageFS is minimized.
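For illustration, the php-fpm side of that is just a couple of directives in the pool definition (the paths below are made up; chroot and chdir are standard pool options):

Code:
; example php-fpm pool for a user, paths illustrative
[test1]
user = test1
group = test1
listen = /var/run/php-fpm-test1.sock
chroot = /home/test1
chdir = /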

There's a lot of software out there that web hosting providers pay a license for. But with a little work, you can often accomplish the same task for free, without having to pay for that license. That's not to say the licensed software isn't worth it. But the more licenses you have, the more cost is involved in running your operation. Profit is what's left after subtracting the cost of all of those licenses and hardware from the revenue you get from customers. You can increase profits either by raising prices for customers to create more revenue, or by cutting or minimizing costs.
 
A while ago CloudLinux sent out a warning not to set the limit under 1 core, as that leads to... (I have to find out why again, as I forgot). I will post it in a couple of hours...
 