Large partitions - new server

Duboux

Hi,

I have been browsing the forum and install.html for a good partition scheme for today's larger HDDs.

I'm thinking about a server with 16 GB RAM and a 2 TB disc, with plans for an additional 2 TB disc when it's needed.

I was thinking about the following partition scheme:

Code:
/boot     40 MB    (standard value)
swap       8 GB    (I read that beyond 8 GB of RAM the 2x rule no longer applies, and that the larger the partition, the slower it gets)
/tmp       2 GB    (mounted noexec,nosuid in /etc/fstab; see the sketch below)
/         10 GB    (= 0.5%)
/var      20 GB    (= 1%; logs and databases are stored here on Red Hat/CentOS/Fedora)
/usr      20 GB    (same as /var; DA data, source code, FrontPage, but most of all MySQL backups with the CustomBuild option)
/home   1939 GB    (rest of the drive; for user data and email, mounted nosuid in /etc/fstab)
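
For the noexec,nosuid mounts, here's a minimal /etc/fstab sketch (the device names and the ext4 choice are just assumptions for illustration):

Code:
# /etc/fstab fragment -- device names and fs type are examples
/dev/sda3  /tmp   ext4  defaults,noexec,nosuid  1 2
/dev/sda7  /home  ext4  defaults,nosuid         1 2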

When a 2nd disc is added, I was thinking of doubling the /var and /usr partitions with space taken from /home, and creating only a /home2 on the 2nd disc.
To make that easiest, I'd need to add the 2nd disc before /home drops below 40 GB of free space (the space needed for the larger /var and /usr).
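
Bringing the 2nd disc up as /home2 could look roughly like this (a sketch; /dev/sdb, GPT and a single full-disc ext4 partition are assumptions):

Code:
# partition, format and mount the new disc as /home2 (device name is an example)
parted -s /dev/sdb mklabel gpt mkpart primary ext4 0% 100%
mkfs.ext4 /dev/sdb1
mkdir /home2
mount /dev/sdb1 /home2
# and a matching /etc/fstab line:
# /dev/sdb1  /home2  ext4  defaults,nosuid  1 2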

The only problem is that the users on /home could keep increasing their usage, because they'd still have space left on their hosting accounts.


What do you guys think?

And do you have any experience with the ext4 file system on 64-bit CentOS?
 
I have a server with two hard disks atm, a 500 GB and a 2 TB (it's for testing purposes and installation scripts), and I've set it up this way:

Code:
>df
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             345G  1.2G  326G   1% /
/dev/sdb1             1.8T  1.2T  535G  69% /home
/dev/sda4             9.7G  151M  9.1G   2% /tmp
/dev/sda3              49G  2.8G   44G   7% /var
/dev/sda2              49G  2.4G   44G   6% /usr
tmpfs                1014M     0 1014M   0% /dev/shm

I suppose you should make /var and /usr 40-50 GB each, /tmp 5 GB, and give the rest to /home.

With the second disk you have two ways to go:
1 - put it all in /home2
2 - a /backup partition to hold weekly backups

This is how I would do it. If you have a remote backup server, then making a /home2 is OK; but if you have no RAID at all, I'd also suggest using the second 2 TB drive for a RAID-1 instead of additional space, and when more space is required, think about a new server or a SAN (with iSCSI, for example).

It depends on how much you expect to grow, and how fast.

Regards
 
Thanks.
And yes, I do have a remote backup server, and this server has hardware RAID.
It's actually 2x 2 TB, with space for another 2x 2 TB.
I have a RAID-1 setup.

I'm thinking about rsyncing accounts to the backup server and having it tar them there, daily.
The gzipping on my current servers really slows them down, so I'd like to avoid that.
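
A minimal sketch of that idea (host names and paths are made up; the tar runs on the backup box, so the web server never compresses anything):

Code:
# on the web server: push the accounts to the backup box (cron, daily)
rsync -a --delete /home/ backuphost:/backup/web1/home/

# on the backup server: tar the synced copy, without gzip
tar -cf /backup/web1/daily-$(date +%F).tar -C /backup/web1 home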

Why do you have so much space on / and /tmp?
 
Because it's a 500 GB disk and I have plenty of unused space :) It's just for testing, so I filled it in randomly; the only important thing is to have lots of space on /home.

Well, why don't you use a RAID-5 with 4x 2 TB then? That would be 6 TB of space and you shouldn't have any space problem at all.

I'm studying rsync backups as well, and if I remember correctly there was a way to rsync backups gzipped; I have to check on that.

Also, I suppose you can disable the gzip function on the DA backup (maybe in directadmin.conf? I don't remember) or use a script for that... rsyncing accounts should work, yes, but maybe something better can be done, don't you think?
 
Well, I was actually thinking about using as little space as possible on the backup server, while still having something like 5 days of backups.
I was thinking that a file that has been unchanged for the last 5 days should only appear once on that server.
Not sure yet how to execute this, but it's a good goal.
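
One common way to get that effect is rsync's --link-dest: each daily snapshot hard-links files that are unchanged since the previous snapshot, so an unchanged file only occupies space once no matter how many days it appears in. A rough sketch with made-up paths:

Code:
# run on the backup server; unchanged files become hard links to yesterday's copy
TODAY=/backup/web1/$(date +%F)
YESTERDAY=/backup/web1/$(date -d yesterday +%F)
rsync -a --delete --link-dest="$YESTERDAY" webserver:/home/ "$TODAY"/
# keep ~5 days: remove snapshot directories older than 5 days
find /backup/web1 -mindepth 1 -maxdepth 1 -type d -mtime +5 -exec rm -rf {} +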

About the RAID-5: I can't believe I've never thought of that.
I was just used to buying 2 discs and adding more when the time came.
Which I did on purpose this time, since prices have roughly tripled lately... -_-;
You just gave me new food for thought ;)


~~~~~~~~~~~~~~~~~~~~~~~~

edit:
Because disc prices are so high due to the flooding in Thailand, I've chosen to start out with 2 discs.
I hope I can make a 2-disc RAID-1 now and change it into a 4-disc RAID-5 later on.
Or that I can make a 2-disc RAID-5 now and add 2 more discs to it later.
(I heard a rumour this is possible, as the RAID card would treat it as if it were a RAID-1.)
I have the 3ware SAS9750-4i RAID card, btw.
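
From what I've read, the 3ware/LSI 9000-series CLI has an online migrate command for exactly this kind of level change; a sketch from memory (controller, unit and port numbers are assumptions; check tw_cli help and the 9750 manual before relying on the exact syntax):

Code:
# list the controller, units and ports first
tw_cli /c0 show
# migrate unit u0 from RAID-1 to RAID-5, pulling in the two new drives
# on ports 2 and 3 (numbers are examples for this box)
tw_cli /c0/u0 migrate type=raid5 disk=2:3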


~~~~~~~~~~~~~~~~~~~~~~~~

edit2:
Actually, now that I've thought about it even more ^_^
I'm thinking of using RAID-1 (in case of 2 discs) or RAID-10 (in case of 4 discs) for my web servers, because this is the safest way: if a disc goes down the server keeps running, and up to 50% of the discs may fail (as long as no mirror pair loses both discs). RAID-10 has the same advantages but adds read/write speed because of the striping.

And use RAID-5 for less critical servers, like backup servers, when they hold more than 4 discs, because there I can take the risk of a lower percentage (1/n × 100%) of discs being allowed to fail. Plus it allows for more storage space, and I can afford the hours it takes to rebuild onto a replacement disc.
The only thing I have to watch out for is making an external backup of all the working discs in the array before I replace a broken one, as I've read that around that time other discs may fail due to the heavy load and heat of rebuilding the new disc.
btw, this is just my 2ct on the subject; I have to say I have no experience with arrays of more than 4 discs yet.
Maybe, to be on the safer side for the backup server, I'll use RAID-6, which allows 2 discs (2/n × 100%) to fail instead of 1. See the comparison below.
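
To put numbers on that trade-off, here's the usable space and failure tolerance with 2 TB drives (just the standard RAID arithmetic):

Code:
RAID-1  (2x 2TB): 2 TB usable; survives 1 of the 2 discs failing
RAID-10 (4x 2TB): 4 TB usable; survives 1 disc per mirror pair
RAID-5  (4x 2TB): 6 TB usable; survives any 1 disc failing
RAID-6  (4x 2TB): 4 TB usable; survives any 2 discs failing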

I also use WD Black edition discs, as I believe they hold up better in RAID arrays, especially during a rebuild.

Hope this helps your thoughts ;)
 