mangelot
Verified User
Hello,
I'm wondering what kind of HDD results/configs other DirectAdmin users have for optimizing DirectAdmin I/O performance?
I can't find much info on the forum about this.
The facts: we currently use this RAID layout:
CentOS 6 (ext4)
RAID 5 (3-disk array + 1 hot spare)
Chunk size: 64 KB
Stripe size: 128 KB
Read-ahead cache: automatic (options: disabled, 256 KB, 512 KB, 1 MB, 2 MB, automatic)
Write-back cache: 16 MB (options: disabled, 1 MB, 2 MB, 4 MB, 8 MB, 16 MB, 32 MB, 64 MB, 128 MB, 256 MB, MAX MB)
A rough shell dd speed test gives around 120 MB/s.
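For comparison, here is a sketch of how such a dd speed test can be made more reproducible; the file path and sizes are placeholders, and `conv=fdatasync` forces a flush so the page cache doesn't inflate the write number:

```shell
# Sequential write test: conv=fdatasync flushes data to disk before dd
# reports throughput. In practice, use a file larger than your RAM.
dd if=/dev/zero of=./dd_testfile bs=1M count=256 conv=fdatasync

# Sequential read test. For a true disk read, drop caches first (as root):
#   sync; echo 3 > /proc/sys/vm/drop_caches
dd if=./dd_testfile of=/dev/null bs=1M

# Clean up the test file.
rm -f ./dd_testfile
```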
We are planning to upgrade to this:
CentOS 7 (ext4 or xfs, block size ??)
RAID 10 (12-disk array)
Chunk size: ??
Stripe size: ?? (data disks × chunk size = stripe size; RAID 10 mirrors halve the 12 disks to 6 data disks)
Read-ahead cache: ??
Write back cache: ??
Any suggestions / knowledge on what the best options are for us to use in combination with a DirectAdmin server?
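To illustrate how the new array's geometry could map onto filesystem parameters, here is a sketch assuming a 256 KB chunk and a 4 KB filesystem block size (both placeholders, not recommendations) with 6 data disks for the 12-disk RAID 10; `/dev/sdX1` is a placeholder device:

```shell
# Assumed geometry -- substitute your real values:
CHUNK_KB=256        # RAID chunk size
BLOCK_KB=4          # filesystem block size
DATA_DISKS=6        # RAID 10: 12 disks mirrored in pairs -> 6 data disks

# ext4 alignment hints: stride = chunk / block,
# stripe-width = stride * data disks
STRIDE=$((CHUNK_KB / BLOCK_KB))
STRIPE_WIDTH=$((STRIDE * DATA_DISKS))
echo "mkfs.ext4 -E stride=${STRIDE},stripe-width=${STRIPE_WIDTH} /dev/sdX1"

# xfs equivalent: su = stripe unit (the chunk), sw = number of data disks
echo "mkfs.xfs -d su=${CHUNK_KB}k,sw=${DATA_DISKS} /dev/sdX1"
```

With these assumed values the commands print `stride=64,stripe-width=384` for ext4 and `su=256k,sw=6` for xfs; recompute with your actual chunk size once it's decided.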