optimal hdd config directadmin

mangelot

Verified User
Joined
Jan 11, 2007
Messages
67
Location
Enschede, Netherlands
Hello,

I'm wondering what kind of HDD results/configs other DirectAdmin users have for optimizing DirectAdmin I/O performance?
I cannot find much info on the forum about this.

The facts: we currently use this RAID layout:

CentOS 6 (ext4)
RAID 5 (3-disk array + 1 hot spare)
Chunk size: 64 KB
Stripe size: 128 KB
Read-ahead cache: automatic (options: disabled, 256 KB, 512 KB, 1 MB, 2 MB, automatic)
Write-back cache: 16 MB (options: disabled, 1 MB, 2 MB, 4 MB, 8 MB, 16 MB, 32 MB, 64 MB, 128 MB, 256 MB, MAX MB)

A rough shell dd speed test gives around 120 MB/s.
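For reference, a dd test along those lines can be run as below. The path, block size, and count are just examples; `conv=fdatasync` makes dd flush to disk before reporting, so the page cache doesn't inflate the number:

```shell
# Write 64 MB of zeroes and force a flush to disk before dd reports
# throughput; without conv=fdatasync the result mostly measures RAM.
dd if=/dev/zero of=/tmp/dd-testfile bs=1M count=64 conv=fdatasync
```

Remember to remove `/tmp/dd-testfile` afterwards; for a more realistic picture, also test reads and random I/O, since dd only measures sequential writes.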

And we are planning to upgrade to this:

CentOS 7 (ext4 or xfs, block size ??)
RAID 10 (12-disk array)
Chunk size: ??
Stripe size: ?? (disks * chunk size = stripe size)
Read-ahead cache: ??
Write-back cache: ??
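The stripe-size relation in the list above can be sketched with a quick calculation. Note that in RAID 10 only half the disks carry unique data per stripe (the other half mirror them), so for a 12-disk array and a hypothetical 256 KB chunk:

```shell
# RAID 10 mirrors disks in pairs, so a 12-disk array has 6
# data-bearing disks per stripe.
data_disks=6        # 12 disks / 2 (mirroring)
chunk_kb=256        # hypothetical chunk size in KB
stripe_kb=$((data_disks * chunk_kb))
echo "stripe size: ${stripe_kb} KB"
```

The actual chunk size worth using depends on the workload, as discussed further down the thread.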

Any suggestions or knowledge about the best options for us to use in combination with a DirectAdmin server?
 
RAID and DirectAdmin are absolutely not related. What you need to do is look at your expected load, the kind of sites you plan to run, and the expected disk I/O (throughput, access pattern).
Based on that info you choose the type of RAID and type of disks. In general, RAID 10 is the most used because it has some redundancy and lacks the overhead of calculating parities.

In short: going from a slow 3-disk RAID 5 to a 12-disk RAID 10 setup (please assign 1-2 disks as hot spares) will give you at least a threefold increase in performance even if you simply stick with the RAID controller's defaults, and read speed can easily jump to 1 GB/s or more (with a decent controller, decent disks, and default settings).
 
Thanks for your answer, but it's not really what I meant.

Put it this way: how can I optimize the performance of the (complete) DirectAdmin server based on block sizes?
What's the best filesystem? The best block sizes for MySQL, httpd, and/or the MTA?

Specific fine-tuning of these could optimize the entire I/O workload of the server.
 
Well, you can't. Choosing an optimal RAID type and its parameters depends on the disk I/O you expect and the type of sites you're running. The same goes for the RAID settings and filesystem type.
Your chunk size, for example, depends on the type of files you're going to place on the array and their I/O characteristics.
Say you have a lot of videos being requested. You would make the chunk size small to make sure each file is split over as many disks as possible. On the other hand, if your I/O is mainly database access where small blocks of data are requested, you would use a larger chunk size because you want only a single disk I/O action to fetch the data.
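On a hardware controller the chunk size is set in its BIOS/management tool, but as an illustration: with Linux software RAID the same trade-off shows up as the `--chunk` option at array creation time. A hypothetical 12-disk RAID 10 with a 256 KB chunk (device names are placeholders; this command is destructive, don't run it on disks with data):

```shell
# Create a 12-disk RAID 10 array with a 256 KB chunk size.
# /dev/sd[b-m] are placeholder device names for this sketch.
mdadm --create /dev/md0 --level=10 --raid-devices=12 --chunk=256 /dev/sd[b-m]
```

The chunk size cannot be changed cheaply afterwards, which is why it's worth benchmarking before committing an array to production.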

That's why I said to stick with the defaults, but if you really want to super-optimize the array, you'll have to test what's best for you.

The effect of caching depends not only on your workload but also on whether the controller is battery-backed and how the RAID controller interacts with your disks. (Some RAID controllers even deactivate the disk cache if their battery is unavailable or broken, dragging performance down to tape level.)

So 'specific' fine-tuning will almost never optimize the entire I/O workload of a server if you run many different services, each with its own I/O characteristics.

Best is to simply record your current requests for a while and use a tool like 'ab' to replay the requests on your new server, change RAID parameters, and rerun until you find the perfect setup. (I know, it sucks...)
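As a sketch of that replay step (the host and path are placeholders for a recorded request; `ab` ships with the Apache httpd tools):

```shell
# Replay 1000 requests at a concurrency of 10 against one recorded URL;
# repeat per RAID configuration and compare the requests/sec figures.
ab -n 1000 -c 10 http://www.example.com/index.html
```

Run the same command set against each candidate RAID configuration and compare the reported requests per second and transfer rates.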
 