To software raid or not to?

eger

I plan on doing a partition scheme similar to:

150MB - /boot
2GB - swap
2GB - /tmp
Rest - /

I have 2 x 150GB drives. I can't seem to get CentOS to recognize my ICH5 RAID as one drive in the installer. So, short of buying a new RAID card (which, for 32-bit PCI SATA, are all software anyway), would it be better to set up software RAID mdX devices for my partitions, or to just run one drive and use the second for backup?

They are both 10,000 RPM drives, so I would hate to waste one as a backup if I can get a read performance gain from a mirrored software RAID.

I would be willing to go hardware raid if anyone knows of a 2 port hardware RAID 1 SATA card for normal 32-bit PCI. Though I have been looking and it seems such a thing does not exist. All are for 64-bit PCI if I am not mistaken.
 
Hi,

I've just put two new CentOS 4.4 servers online running software RAID, after spending a lot of time thinking about this same issue and trying out various setups.

I've got my partitioning pretty much the same, just a little bigger.

2 x 250GB SATA Drives setup as Linux software RAID1

100MB (md0) /boot
4GB (md1) swap
4GB (md2) /tmp
233GB (md3) /
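
For reference, creating that layout by hand comes down to four mdadm commands, roughly like the following (the installer did most of it for me, so treat this as a sketch; it assumes the drives show up as /dev/sda and /dev/sdb, each with four matching partitions of type fd, Linux raid autodetect):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # /boot
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # swap
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3   # /tmp
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda4 /dev/sdb4   # /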

So far I've been very impressed with software RAID on Linux. After trying a few different hardware RAID solutions in the past and experiencing all the driver complexities they can add, I decided to take the simpler route of many medium-spec servers with the least complexity for offering our hosting services. I think I'll be sticking with this setup in the future because, at the end of the day, it is going to be the easiest to restore quickly and cheaply if anything fails.


Les.
 
I tested a software RAID 1 setup under CentOS and I didn't see any read performance gain like you get with hardware RAID 1. With hardware, you can usually roughly double the read speed by reading from both copies of the data; however, I did not see this happening with software RAID 1.
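
If anyone wants to repeat the comparison, I just used a rough sequential read test with hdparm (the device names below are only examples; substitute your own raw drive and md device, and run each a few times since single runs vary):

hdparm -t /dev/sda    # single raw drive
hdparm -t /dev/md3    # mirrored md device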

I think I am going to go with a single 150GB 10K RPM data drive and a 320GB 7200 RPM backup drive.

I would hate to use two of my 10K RPM drives when they will not give me a performance increase.
 
Your choice may be good for you if you'd use RAID only for speed increase and not for redundancy. We use it for redundancy, so for us, it's software RAID.

Jeff
 
I would also like redundancy. However I would rather save my second fast drive and build a second server and rely on backups from the second drive should there be a drive failure.

Is it safe to say that I should be able to install the OS, install DA, and restore a fully working system from backups made with DA? Or is there much more tweaking and configuration involved that may not be backed up (such as the Apache build configuration)?
 
You can back up anything you want; you can even use rsync to back up the entire server.
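
A full-server rsync can be as simple as something like this (the destination host and paths are only examples, and you'll want to exclude the pseudo-filesystems):

rsync -av --numeric-ids --delete \
  --exclude=/proc --exclude=/sys --exclude=/dev \
  / backupserver:/backups/myserver/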

Restoring will take time. And if you upgrade at the same time as you restore, it's going to take a lot of expertise in how to manage the files.

Jeff
 
Thanks,

I think the best way will be for me to test DirectAdmin backups and simulate a disk crash while on my demo period.
 
Good idea.

Note that the admin level sysbk backups don't have a restore script; they must be restored manually.

They can be; we've done it, and we've gotten both fast and good at it, but nevertheless it's not automated.

Reseller-level and user-level backups are completely automated, but for best results you have to create the resellers before you restore the backups.

Jeff
 
if you're looking for reliability from a sw raid solution, this is what i do:

i usually use a two-drive software raid-1 solution with a mirrored /boot, a mirrored LVM volume (holding the other partitions), and two swap partitions (no need to mirror swap; i just make two to keep the drives the same)..

by doing this i have a bootable solution if one of the drives goes out, whereas normally people tend to just put /boot on one drive, which then requires a rescue cd if the primary drive fails.

using grub, after the distro installer finishes (i mostly use CentOS), you have to manually set up grub on both drives as follows:

from the grub prompt:
device (hd0) /dev/hda
root (hd0,0)
setup (hd0)

where /dev/hda is your first drive, then:

device (hd0) /dev/hdb
root (hd0,0)
setup (hd0)

where /dev/hdb is your second drive. voila!
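
if you want to sanity-check which drives actually have grub's stage1 installed, you can also run this from the same grub prompt (this assumes /boot is its own partition, so the file shows up as /grub/stage1):

find /grub/stage1

once both drives are set up, it should list both (hd0,0) and (hd1,0).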
 
This is what I was originally going to do. However I purchased 2 10K RPM drives not realizing I couldn't use a hardware RAID 1 solution for both redundancy and read performance.

Since I couldn't do this I thought it might be more cost effective for me to save the second (very expensive) drive and use another large normal drive and do regular backups to it. Should a drive failure happen I will just take the downtime hit and restore as much as possible from backups done to the normal second drive.

Does this sound like a good idea? Or is this type of setup frowned upon?
 
eger: i suppose it depends on what kind of downtime you can tolerate..
if i were you, i'd save myself the (potential) hassle and set up a sw raid-1 using both drives as i suggested above..

if you also would like an external backup solution, i would check out rlbackup or dirvish for rsync based snapshot-style backups..

BUT, you could always just go with a single drive and use rlbackup/dirvish to back up.. that's your call :p
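
to give you an idea of what those tools do under the hood, here's the bare-bones version of an rsync snapshot (paths are just examples, and rlbackup/dirvish handle the rotation and pruning for you):

TODAY=$(date +%F)
rsync -a --delete \
  --exclude=/proc --exclude=/sys --exclude=/mnt/backup \
  --link-dest=/mnt/backup/latest \
  / /mnt/backup/$TODAY/
ln -sfn /mnt/backup/$TODAY /mnt/backup/latest

each snapshot directory looks like a full copy, but unchanged files are hard links back to the previous one, so only changed files take extra space.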
 
I think I am going to rethink my software RAID setup due to the responses all favoring software RAID.

What I now plan to do is back up the current user accounts and DirectAdmin (all test accounts), rebuild the server with software RAID 1 on my 2 drives using one md0 partition and one swap partition on both drives, and then use LVM on the md0 RAID 1 partition for /, /tmp, and /boot.

Then take the opportunity to test restore functionality to the new RAID 1 system from backups.

Anyone have any thoughts on whether the software RAID 1 should be a separate RAID partition for each logical partition (e.g. md0 for /, md1 for /tmp, md2 for /boot), or just one RAID partition with LVM on it, creating the logical partitions in LVM?

I was going to go with LVM, my only reasoning being that it would be easier to manage a single RAID 1 partition should I need to rebuild or move it, and that LVM might handle a drive failure better.
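
Roughly what I have in mind for the single-RAID-partition route, as a sketch (device names are only examples, and I still need to double-check whether GRUB can boot with /boot inside LVM, so this only covers / and /tmp):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 2G -n tmp vg0
lvcreate -L 140G -n root vg0
mkfs.ext3 /dev/vg0/tmp
mkfs.ext3 /dev/vg0/root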

I will still probably test the setup and the rebuild afterwards by yanking out one of the SATA hot-swap drives during operation, waiting a while, and then putting it back in and using mdadm to check how it rebuilds.

Does this sound like a better plan than my original?
 
I would be willing to go hardware raid if anyone knows of a 2 port hardware RAID 1 SATA card for normal 32-bit PCI. Though I have been looking and it seems such a thing does not exist. All are for 64-bit PCI if I am not mistaken.

I have several 3ware 8006-2LP cards running RAID 1 on 2 SATA drives.
I like them because Fedora Core 4 has a built-in driver for this RAID card, so you can just boot up the Fedora install CD and it will recognize the card and the RAID automatically. It is a 64-bit card but can be used in a 32-bit PCI slot.
 
Try some Googling. If a RAID card requires drivers, then it's not really hardware RAID; some people call it pseudo-RAID, but it's really software RAID.

The current FC4 may have a driver for the card, but the problems we've had in the past were when we installed new kernels.

I'd recommend that if you use a RAID card that requires a driver you disable automatic kernel updates.
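
On CentOS/Fedora that usually just means adding a kernel exclude to yum's configuration, something like this in /etc/yum.conf (shown as an example; adjust to however you run updates):

[main]
exclude=kernel*

Then new kernels only go in when you deliberately install them by hand, along with the matching driver.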

Jeff
 
yes, I agree that if you need a driver to run the raid card then it is not 100% hardware raid.

However, I think that if you want to manage the raid from within the OS itself (rebuilding the array manually, checking the status of the individual disks in the array, etc.), the OS must have a way to communicate with the raid card directly, so it is impossible to completely isolate the OS from the raid card; that's why a device driver may still be needed.

Even if a raid card is not 100% hardware based, I think that as long as most of the load is handled by the CPU on the raid card instead of the system board's CPU, it is really worth getting one.

If you want 100% hardware raid, I think a raid controller that connects through the motherboard's SATA port may be a good choice, but then you lose the ability to monitor and manage the raid from within the OS.

Yes, you are right that updating the kernel may cause problems if the raid requires a driver, but I am lucky that my raid card's driver is built into the 2.6.x Linux kernel, so I haven't had problems updating my kernel so far. I hope they don't drop the support in future Linux releases.
 
Jeff hit it on the head. I don't want to 'need' a driver, although for managing the card, obviously a driver is needed.

Though my previous hardware RAID servers operate 100% without a driver, and they rebuild new drives on the fly without any intervention. This is what I was looking for in a 32-bit PCI card for SATA. I think true hardware cards for SATA just do not exist (maybe because SATA speeds have outgrown the 32-bit PCI bus?).

Also, if anyone is wondering, I went with a Linux software RAID setup + LVM and simulated a disk crash, and it worked well. However, to rebuild the drive I had to run 2 commands for each degraded RAID device:

Remove failed drive (sda3):
mdadm --manage /dev/md1 -r /dev/sda3

Re-add failed drive (sda3):
mdadm --manage /dev/md1 -a /dev/sda3

It then began rebuilding md1 after it finished md0, then continued to md2. The server stayed online while I pulled the disk and put it back in for rebuild. Good enough for me =)
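
For anyone following along, I kept an eye on the rebuild with /proc/mdstat (watch just refreshes the output every couple of seconds; the md device name is an example):

watch cat /proc/mdstat
mdadm --detail /dev/md1

The mdstat output shows the resync percentage and which array is currently rebuilding.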
 