disk drive size and fsck

rldev

Verified User
Joined
May 26, 2004
Messages
1,072
One of the problems with using large disk drives is when you have to run fsck. I can't imagine how long an fsck would take on a 200GB drive if the server crashed.

Could someone enlighten me on this matter?
 
By enlighten you, do you mean tell you you're right?

You're right.

:)

Jeff
 
Hi rldev,

What kind of file system(s) are you using on the drive? If you are using Linux, I would suggest creating ReiserFS or ext3 filesystems. These are journaling filesystems, and are much more tolerant of accidental powerdowns. I believe FreeBSD also has a feature called "soft updates" that works similarly.

If you have a journaling filesystem and a reliable UPS with server auto-shutdown, you really should not need to run fsck on your disk very often. I think the last time I ran fsck on a Linux box was about 3 years ago when I was still using RHL 5.2 (pre-ext3) and the disk was on its way out.
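The suggestion above can be tried out without risking a real disk. Here's a minimal sketch that builds an ext3 (journaled) filesystem inside an ordinary scratch file; on a real server you would point mkfs.ext3 at an empty partition (e.g. /dev/sdb1 — a placeholder name here) instead, which is destructive:

```shell
# Create a 16 MB scratch image file so nothing here touches a real disk
dd if=/dev/zero of=/tmp/ext3-demo.img bs=1M count=16

# Make an ext3 filesystem in it (-F because it's a file, not a block device)
mkfs.ext3 -F -q /tmp/ext3-demo.img

# Confirm the journal exists: "has_journal" should appear in the feature list
tune2fs -l /tmp/ext3-demo.img | grep has_journal
```

An existing ext2 filesystem can also be upgraded in place with `tune2fs -j`, which adds a journal without reformatting.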

Good luck,
Greg
 
:) LMAO! Good stuff, thanks for the file system info, Jeff.
 
glarkin said:
I think the last time I ran fsck on a Linux box was about 3 years ago when I was still using RHL 5.2 (pre-ext3) and the disk was on its way out.
You might want to run "tunefs" (see "man tunefs") to check after how much uptime a simple reboot will force an fsck.

A simple reboot after a long uptime will force an fsck that can take a long time.

Jeff
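On Linux ext2/ext3 filesystems the equivalent tool is tune2fs, and the forced check is triggered by a mount count or a time interval rather than uptime as such. A hedged sketch (/dev/sda1 is a placeholder device name):

```shell
# Inspect the settings that trigger a boot-time fsck on an ext2/ext3 filesystem
tune2fs -l /dev/sda1 | grep -Ei 'mount count|check'

# Raise the limits if surprise hour-long checks at reboot are unacceptable:
tune2fs -c 50 /dev/sda1    # force an fsck only every 50 mounts
tune2fs -i 6m /dev/sda1    # or at most once every 6 months
```

The trade-off is obvious: the less often the check is forced, the longer silent corruption can go unnoticed, so don't disable it outright.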
 
jlasman said:
You might want to run "tunefs" (see "man tunefs") to check after how much uptime a simple reboot will force an fsck.

A simple reboot after a long uptime will force an fsck that can take a long time.

Jeff

Hi Jeff,

That's right, thanks for reminding me of that!

Regards,
Greg
 
Everything is relative :)

  • Mmmm, another hardware thread! :cool:

    Until just now we've been seriously considering installing a 250Gb 7200 rpm IDE drive in our RHL DA box. The 'fsck effect' never occurred to us before! Not wishing to argue semantics, could someone please volunteer ballpark definitions of both "long uptime" and "a long time" as mentioned above by JL?

    The only vaguely related experience we have is that one of our NTFS workstations takes approximately 6 hours to complete CHKDSK /R on a very full but also very young 80Gb 7200 rpm IDE.
    jlasman said:
    A simple reboot after a long uptime will force an fsck that can take a long time.
 
fsck is a lot more efficient than chkdsk.

It's even a more interesting acronym; now you can safely yell across the room to any of your admins:

"Don't bother me about the lousy drive; fsck it!"

Time is relative, but so is server speed, and of course the time to fsck a drive is proportional to not only drive size, but system speed and the amount of data on the drive that needs to be actually fixed as opposed to just scanned.

All that said, you can expect fsck times of an hour or more under certain circumstances with huge drives.

Jeff
 
Are you enlightened yet, rldev?

  • Okay so let's recap.

    [*]long uptime
    . . . . your server has been running without a reboot for 10 days or more


    [*]reboot after a long uptime will force an fsck
    . . . . by default your server will force an fsck after your next reboot if you have a long uptime


    [*]fsck is a lot more efficient than chkdsk
    . . . . a few Windows utils are known to be a tad inefficient


    [*]a long time
    . . . . more than 10 minutes


    [*]you can expect fsck times of an hour or more under certain circumstances with huge drives
    . . . . It will take more than an hour to fsck a 200Gb drive.


    :rolleyes:
 