High CPU on the user account, and high iowait.

Same server without raid sync running, last 6 hours.
[attachment: afbeelding.png]

Still some load, but nowhere near as bad as when raid-check was running; then both CPU and iowait went much higher.
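As an aside, tools like `iostat -x` or `vmstat` show this live; a minimal sketch that reads the cumulative iowait share straight from /proc/stat (Linux only, just an illustration of where the number comes from):

```shell
# First line of /proc/stat: cpu user nice system idle iowait irq softirq ...
read -r _ user nice system idle iowait rest < /proc/stat
total=$((user + nice + system + idle + iowait))
echo "cumulative iowait share: $((100 * iowait / total))%"
```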
 
Hi;
From what I can see in the status, the user is running the plugin yith-woocommerce-wishlist. Please check the user's logs for the word
add_to_wishlist

If the user is on the free version, he should change to the premium version; it has the option of forcing registration before creating a wishlist.
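For reference, a minimal sketch of that log check; the log path and contents below are made up for the demo, so point it at wherever your panel actually writes the domain's access log:

```shell
# Demo: count add_to_wishlist requests in an access log.
# A temp file stands in for the real log (the real path depends on your panel).
log=$(mktemp)
cat > "$log" <<'EOF'
1.2.3.4 - - [27/Nov/2024:10:01:02 +0100] "GET /?add_to_wishlist=123 HTTP/2.0" 200 512
1.2.3.4 - - [27/Nov/2024:10:01:09 +0100] "GET /shop/ HTTP/2.0" 200 8192
5.6.7.8 - - [27/Nov/2024:10:02:30 +0100] "GET /?add_to_wishlist=456 HTTP/2.0" 200 512
EOF
hits=$(grep -c 'add_to_wishlist' "$log")
echo "add_to_wishlist requests: $hits"
rm -f "$log"
```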
 
Thanks, I will ask him.
But those lines are not hit a lot, so maybe he already has the pro version. That part of the status was just an example of the normal lines I see.

I also see strange things like this:
www.christxxxxxxxxxxx.com:443GET /?nocache=17-32-46&_=1732659105277 HTTP/2.0

I tried appending that /?nocache= etc. to the domain name, but then I get a 404. No clue what this is.
 
Hi;
I'm not an expert in RAID, but here is what I can see from your output.
This is the mdstat output at this moment.
[code][root@server27: ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[0] sda1[1]
67042304 blocks super 1.2 [2/2] [UU]

md2 : active raid1 sdb3[0] sda3[1]
1884198976 blocks super 1.2 [2/2] [UU]
[============>........] check = 64.5% (1217061632/1884198976) finish=484.4min speed=22949K/sec
bitmap: 5/15 pages [20KB], 65536KB chunk

md1 : active raid1 sdb2[0] sda2[1]
2070528 blocks super 1.2 [2/2] [UU][/code]
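As a sanity check on that output: the finish estimate is just the remaining blocks divided by the current speed, which a quick sketch confirms (numbers taken from the md2 line above; mdstat uses the instantaneous speed, so it shows 484.4 min rather than exactly this):

```shell
# finish ≈ (total - done) KiB / speed (KiB/s), from the md2 check line above.
total=1884198976; done_blocks=1217061632; speed=22949
remaining=$((total - done_blocks))
mins=$(awk -v r="$remaining" -v s="$speed" 'BEGIN { printf "%.1f", r / s / 60 }')
echo "remaining: ${remaining} KiB, finish in about ${mins} min"
```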

bitmap: 5/15 pages [20KB], 65536KB chunk
Why is the bitmap enabled on md2?

Bitmaps optimize resync time after a crash, or after removing and re-adding a device.
Once the array is fully synced you could disable the bitmap on this device;
I think this is what's causing the high iowait.
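For the record, a sketch of how the bitmap is usually toggled with mdadm (only on a clean, fully synced array; these are privileged admin commands, not something to run blindly on a live server):

```shell
# Remove the internal write-intent bitmap from md2 (array must be clean):
mdadm --grow --bitmap=none /dev/md2

# Put it back later if wanted:
mdadm --grow --bitmap=internal /dev/md2
```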
 
How much memory does the server have, and how much swap? Is the server swapping heavily due to memory shortage?
 
Why is the bitmap enabled on md2?
That's default on all the raid systems we have. It's software raid created by Hetzner automatically when installing the servers.

Disabling bitmaps can give higher speed under very high load, but in case of a failure and replacement, the rebuild will be slower. I don't have enough experience with RAID to mess with it on live servers, so I'd rather keep it this way. We've never had issues with it, so changing it because of one user... I don't really dare to do that.

@exlhost Oh that's more than enough, already checked that. :)
Code:
[root@server27: ~]# free
              total        used        free      shared  buff/cache   available
Mem:       65453448     4596436    37081812      662704    23775200    59470512
Swap:      67042300        2328    67039972
 

Hi Richard,

my suggestion was meant to speed up your disks during raid-check; maybe you'd see better iowait. But I can understand your point about not messing with a live server.

Note that if one HD fails and has to be replaced, the bitmap makes no difference anyway; a rebuild onto a fresh disk is just as slow either way.


Here are some other suggestions.

Check your I/O scheduler:

cat /sys/block/sda/queue/scheduler
cat /sys/block/sdb/queue/scheduler


Check your RAID rebuild speed limits:

cat /proc/sys/dev/raid/speed_limit_max
cat /proc/sys/dev/raid/speed_limit_min (default 1000)

You can increase speed_limit_min; this will increase the speed of RAID rebuilds.
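For completeness, a sketch of how that knob is raised (needs root, resets on reboot; the 30000 here is just an example value):

```shell
# Temporarily raise the resync floor to ~30 MB/s per device:
sysctl -w dev.raid.speed_limit_min=30000
# Equivalent:
echo 30000 > /proc/sys/dev/raid/speed_limit_min
```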

Regards
 
I think we are going to split that account and divide it between 2 servers.

I've checked the scheduler:
Code:
[root@server27: ~]# cat /sys/block/sda/queue/scheduler
[mq-deadline] kyber bfq none
and it's the same for /dev/sdb.

Raid speeds min is 1000 and max is 200000. But during the high load it was like 49000 if I remember correctly.
 
[mq-deadline] is okay.
A RAID max speed of 200000 is okay.


You should test bigger values for the RAID min speed:
5000, 10000, 15000, 20000, 25000, 30000
and see whether this speeds up the raid-check time,
while watching server load and CPU.
 
I don't know if that would make a lot of difference. I had it at a minimum of 10k before, but even under high load it always runs at more than 40000.
I will check again next monday.
 
Min speed 1000 = 1,000 KB/sec; max speed 200000 = 200,000 KB/sec.
The current minimum effectively means "stop the resync whenever there's enough other I/O activity".
You probably don't want to wait that long.
 