Anything to watch out for?

truenegative

Verified User
Joined
Feb 16, 2006
Messages
153
Here is the system I'll be using:

Tyan GT20 Barebones
Opteron 170 Dual Core
4 x 1GB DDR400 ECC Registered
4 x 250GB SATA2 (RAID5)
Areca PCI-e 4port HW RAID


I'll be running CentOS 4.2 x86_64 or FreeBSD6 amd64


Any caveats, suggestions, etc?
 
tarionyx said:
Areca PCI-e 4port HW RAID
Many Linux developers and admins (including me) prefer Linux software RAID over hardware RAID.

That way we've got a known configuration on all servers.
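On Linux that usually means mdadm. Here's a rough sketch of creating a four-drive RAID5 array like yours; the device names are examples for your hardware, and note that creating an array destroys whatever is on those partitions:

```shell
# Sketch: build a 4-drive RAID5 array with mdadm (device names are examples;
# this destroys any existing data on the listed partitions).
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Watch the initial parity sync:
cat /proc/mdstat

# Record the array so it assembles at boot:
mdadm --detail --scan >> /etc/mdadm.conf
```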

tarionyx said:
Also, which do you all suggest for the OS, CentOS 4.2 or FreeBSD6 ??
That's a religious argument :p .

Jeff
 
My point, Jon, was to avoid religious argument.

But here's the kicker:

DA was originally designed and written for RedHat Linux.

The most recent free version of RedHat Linux is CentOS 4.2.

Everything else is a port.

;)

Jeff
 
Yeah seeing as DA was designed for RedHat, that is what is swaying me to use CentOS. Plus I've read a lot of reviews and a lot of people seem to like it. I've been a FreeBSD guy for a while, so I'm ok with giving a new OS a chance :)

As for the HW RAID: I think that with RAID5 and hard drives of that size, software RAID would impose a significant performance hit.

My two servers at home will have similar cards, but with 300GB and 400GB drives respectively. :)
 
Yeah, I would agree that in most situations software RAID would hold its own; however, it can push minimum processor usage during read/write operations up to 20-30%.

This page here is also very interesting for a comparison:

http://www.chemistry.wustl.edu/~gelb/castle_raid.html


The advantages I can see with hardware RAID5 are hot swap and surviving a drive failure: the array keeps running, and it can notify me by SNMP/email.

Any suggestions for a filesystem to use? ReiserFS or ext3 seem to be the best in this case.

thanks :)
 
tarionyx said:
The advantage I can see with the hardware raid5 is hot swap and supporting a drive failure, have it keep running, and possibly notify me by SNMP/email.
I don't see anything there that you can't do with software RAID. Our systems continue to run when a RAID member fails, and they notify us by email. Our hotswap servers allow us to pull the bad drive and slide in a new one of the same or larger size.
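The email part is mdadm's monitor mode; a sketch of how we'd set it up (the address is an example):

```shell
# Sketch: have mdadm watch all arrays and email on failure/degraded events.
# The address is an example; substitute your own.
mdadm --monitor --scan --daemonise --mail=admin@example.com

# Send one test message per array to confirm mail delivery actually works:
mdadm --monitor --scan --oneshot --test --mail=admin@example.com
```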
Any suggestions for a filesystem to use? ReiserFS or ext3 seem to be the best in this case.
I wish you had asked a week ago; I saw Hans Reiser on Sunday at SCALE 2006. The last time I spoke with him (around two years ago), he said there was no specific advantage to using ReiserFS on a shared webserver.

He's been working on Reiser4 for some time now, but I don't think any production kernel includes support.

Jeff
 
Hmm, I was under the impression that software RAID couldn't do hot swap or RAID5. I guess you learn something new every day (seems to be multiple times a day for me, hehe).

Enough of this, though; I don't want to feel sour for spending the $300 on a HW RAID card :p :D


That's awesome that you got to meet Mr. Reiser himself, heh. I need to get out to some of these bigger conventions. Working as a network and security specialist for IBM can take its toll on me, hehe :)
 
tarionyx said:
Hmm, I was under the impression that software RAID couldn't do hot swap or RAID5. I guess you learn something new every day (seems to be multiple times a day for me, hehe).
The Software-RAID HowTo can be found here; you might want to look at the section marked Raid-5.

Information on hot-swapping Linux software RAID can be found here.
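The usual replacement sequence looks something like this (device names are examples; check the articles for the details on your hardware):

```shell
# Sketch: replace a failed member of /dev/md0 (device names are examples).
mdadm /dev/md0 --fail /dev/sdb1      # mark it failed, if the kernel hasn't already
mdadm /dev/md0 --remove /dev/sdb1    # remove it from the array

# ...physically swap the drive and partition it to match the others...

mdadm /dev/md0 --add /dev/sdb1       # add the replacement; the rebuild starts automatically
cat /proc/mdstat                     # watch rebuild progress
```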
That's awesome that you got to meet Mr. Reiser himself, heh.
A few years ago I visited the San Gabriel Valley Linux Users Group along with some friends from The Hawthorne Center for Innovation. After the meeting and Hans' most interesting talk (which some of us even understood a bit of), it turned out that he needed transportation to dinner for himself and for his family. We were happy to oblige :) .
I need to get out to some of these bigger conventions.
Neither the monthly meeting of the San Gabriel Valley Linux Users Group, nor the SCALE expo are what you'd call big conventions :) .
Working as a network and security specialist for IBM can take its toll on me, hehe :)
Yeah but at least you're guaranteed employment for life; after all, no one ever got fired for recommending IBM :D .

Jeff
 
Haha, nice. I guess I meant ANY of the conventions; there isn't much out here in western NY :P

Thanks for all the help. I may use software RAID5 for some of my dedicated servers to save some money.
 
Give it a try, and let me know if the articles misled me, or if it really works :) .

(I only use RAID 1, myself.)

One more thing:

You'll probably have to use a non-RAID partition for /boot to get a bootloader to work. We have a fix for GRUB for RAID 1, but not for any other RAID configuration.
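For RAID 1, a common approach (not necessarily our exact fix) is to install GRUB into the MBR of both drives so the box can boot from either one. A sketch with legacy GRUB; device names are examples:

```shell
# Sketch: install legacy GRUB into the MBR of both RAID 1 members, so the
# system can still boot if either drive dies. Device names are examples.
grub --batch <<'EOF'
device (hd0) /dev/sda
root (hd0,0)
setup (hd0)
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
EOF
```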

Jeff
 
Ahh, OK, maybe that will be an advantage of the hardware RAID, since the array already appears to the system as one 750GB volume. I'll look into it.

Also, do you have a link to that thread you said you posted about which basic packages you set up?

I'll be doing the same thing since I don't live near my datacenter: I'll build the server at home, send it to the datacenter, THEN install DA, update packages, etc. :)

thanks!
 
I can't find a link right now...

I suggest installing only the base packages of the OS. DA will install the rest.

Jeff
 
jlasman said:
You'll probably have to use a non-RAID partition for /boot to get a bootloader to work. We have a fix for GRUB for RAID 1, but not for any other RAID configuration.
Jeff
I use CentOS 4.2 64-bit; I have /boot on a software RAID1 partition, and everything works fine.

This was out-of-the-box configuration when I installed it, no patches were required.
 
Did you try to boot the server with the first drive taken out?

With the second drive taken out?

That's what you need to check for.

If that doesn't work and one drive fails, you won't be able to reboot your server.

Jeff
 
jlasman said:
If that doesn't work, and if one drive fails you won't be able to reboot your server.

Jeff

I think that's what happened with my current server, the one this new one is replacing. I went to reboot, and the onboard Promise RAID crapped out on me... :(


Anyway, so just base OS packages. I'll have to install everything, change the IP, and send it out. I hope to get that done this weekend :D
 
Most inexpensive promise RAID isn't really hardware RAID; it's disguised software RAID. Really. You're much better off running Linux software RAID than Promise pseudo-hardware RAID.

See the Linux SATA RAID FAQ; though it's written specifically for SATA drives, what it says is true for most Promise controllers, including almost all the ones built into motherboards.

Also read this post.

Jeff
 
Yeah, I'm going to look into providing these servers using the Silicon Image 3114 and software RAID5 in the future.

I ran into an issue because I bought the wrong RAM (registered vs. unbuffered), so I'm waiting on the new RAM, and then I'll do the minimal CentOS install with DA on top of that.

I played around with CentOS in a VM on my laptop, and it's interesting; yum is definitely a sidestep from ports or portage, hehe.

Am I able to get a copy of DA to test in a VM, so that I can do a full test install?
 