need to delete a file if >500MB

SupermanInNY

Verified User
Joined
Sep 28, 2004
Messages
419
Hi all,


I have a server with a runaway script that fills /var/log/httpd/error_log with 2GB of errors within a few minutes (once it hits that error).
As a result, Apache gets stuck and fails to come back online.

While the correct solution is to find the culprit script/code and fix it, the user hasn't been quick to track it down.
As a quick fix, I simply delete the error_log file and restart Apache.
Then we get a few days of quiet before going through the motions again.

I am looking to automate the quick fix, but somehow I'm not able to get a cron job to do it.

The idea is that if the file exceeds 500MB, it is already too big to be legitimate, and the infinite loop is likely already in progress.


/usr/bin # vi cleanApacheErrorLogFile.sh

#!/bin/bash
test $(stat -c \%s /var/log/httpd/test.log1.tar) -gt 500000000 && /usr/sbin/apachectl stop && rm -Rf /var/log/httpd/test.log1.tar && wall wow


/usr/bin # crontab -e

*/10 * * * * /usr/bin/rdate time-a.nist.gov >date -s
0 5 * * 0 /usr/local/sysbk/sysbk -q
* * * * * root /usr/bin/cleanApacheErrorLogFile.sh

The cron doesn't do the trick.
If I run the script manually it works fine, but not as a cron job.
(Yes, I know I'm running it against a test file; I'm still testing.)

Any pointers on what I'm missing when running it as a cron job?
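For testing, the size check in the script can be exercised against a scratch file instead of the real log. This is just a sketch: it assumes GNU stat (`-c %s` prints the size in bytes), uses a mktemp file as a stand-in for error_log, and shrinks the 500000000 threshold to a few hundred bytes purely to trigger the branch:

```shell
#!/bin/sh
# Hypothetical stand-in for the real check: a mktemp scratch file instead
# of /var/log/httpd/error_log, and a tiny threshold so the branch fires.
# Assumes GNU stat (-c %s prints size in bytes).
log=$(mktemp)
dd if=/dev/zero of="$log" bs=1024 count=1 2>/dev/null   # 1 KiB dummy log

threshold=500    # bytes here; the real script uses 500000000
if [ "$(stat -c %s "$log")" -gt "$threshold" ]; then
    echo "over threshold: would stop apache, rm the log, and wall"
fi
rm -f "$log"
```

Running the same comparison by hand on the real path first (`stat -c %s /var/log/httpd/error_log`) is a quick way to confirm the number you're comparing against.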


Thanks for any insight.

-Alon.
 
Use this one:
Code:
* * * * * find /var/log/httpd/error_log -size +500M -exec rm '{}' \; 2>/dev/null

Be sure to copy it right.

EDIT: for the record, you only need to specify the username in /etc/crontab. Running `crontab -e' opens the per-user crontab file, whose entries should contain 6 fields (minute, hour, day/month, month, day/week, command), not 7 (..., user, command).
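Side by side, the two formats look like this (script path taken from the post above):

```shell
# user crontab (edited with `crontab -e'): 6 fields, no username
* * * * * /usr/bin/cleanApacheErrorLogFile.sh

# /etc/crontab (and /etc/cron.d/*): 7 fields, username before the command
* * * * * root /usr/bin/cleanApacheErrorLogFile.sh
```

With the 7-field form in a user crontab, cron tries to run `root` as the command, which silently fails.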
 

1. A missing piece from this code is the "service httpd stop". Keep in mind that the file is still open and may not be deletable. How do I add that bit of code?

2. I could throw it into /etc/cron.d/; would that be better?

thanks,

-Alon.
 
1. A missing piece from this code is the "service httpd stop". Keep in mind that the file is still open and may not be deletable. How do I add that bit of code?
Not quite true; on Linux you can delete a file at any time, even while a process has it open.
The known problem is instead that Apache stops logging to that path, because it still holds the old inode open (or because it doesn't open the log in append mode; I read about it but can't remember). There is a known workaround: do a "graceful" restart of httpd (kill -HUP).
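A quick shell demonstration of this, using a scratch file and a file descriptor standing in for Apache's open log handle (the path and fd number are illustrative only):

```shell
#!/bin/sh
# Simulate a daemon holding a log open: fd 3 plays the role of Apache's
# handle on error_log.
log=$(mktemp)
exec 3>>"$log"          # "daemon" opens the log
echo "entry 1" >&3
rm "$log"               # unlink: the name is gone immediately...
echo "entry 2" >&3      # ...but writes to the old inode still succeed
[ ! -e "$log" ] && echo "name removed, fd still writable"
exec 3>&-               # only when the fd closes is the space freed
```

This is why `rm` alone doesn't make Apache start a fresh log: the daemon keeps writing to the now-nameless inode until it reopens the path, hence the graceful restart.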
2. I could throw it into /etc/cron.d/; would that be better?
Sure, just create /etc/cron.d/error_log with this content:
Code:
# erase /var/log/httpd/error_log when too large and reload Apache

* * * * * root find /var/log/httpd/error_log -size +500M -exec rm '{}' \; -exec /usr/sbin/apachectl graceful \; 2>/dev/null

/etc/cron.d is just an evolution of /etc/crontab; as you can see here, you will have to specify the username.

EDIT: sorry, almost forgot: be sure to `chmod +x' it as it won't run otherwise.
 
This is great!
It worked like a charm :), but I decided not to use the graceful option, as it was generating lots of GGGGGGG entries in the httpd status and lots of 'unavailable's.
Instead I used restart, and it does the trick very nicely.
How do I add a logging option?

* * * * * root find /var/log/httpd/test.log1.tar -size +500M -exec rm '{}' \; -exec /usr/sbin/apachectl restart \; echo "cron error_log was initiated and httpd was restarted on date" >> /var/log/httpd/CleanApacheRestarts.log \; 2>/dev/null


I tried this, but it did not run for me.
Also, how do I add a date/time stamp instead of the word date?

thanks,

-Alon.
 
Use this:
Code:
MAILTO=your@address
* * * * * root find /var/log/httpd/error_log -size +500M -exec rm '{}' \; -exec /usr/sbin/apachectl restart \; -exec sh -c 'echo $(date): $1 erased and httpd restarted |tee -a /var/log/httpd/CleanApacheRestarts.log' {} \; 2>/dev/null

This will write the log and send you an Email directly from cron when the action occurs. Just change "|tee -a" with ">>" if you don't want any Email.
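The `tee -a` versus `>>` distinction can be seen in isolation (scratch file via mktemp; the message text is just the one from the cron line above):

```shell
#!/bin/sh
# Why `tee -a' produces a cron mail and `>>' doesn't: tee copies the
# line to stdout (which cron mails to MAILTO) as well as to the file.
logfile=$(mktemp)
echo "erased and httpd restarted" | tee -a "$logfile"   # file + stdout
echo "erased and httpd restarted" >> "$logfile"         # file only, silent
wc -l < "$logfile"    # the file gets both lines either way
rm -f "$logfile"
```

Cron mails anything a job prints to stdout/stderr, so routing the message through `tee` gives you the log entry and the notification in one step.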
Glad to be helpful.
 


1. cron.d file permissions: checked.
2. log file > threshold: yes.
3. log file path is correct.
4. Still not running.


:/var/log/httpd # ll
total 1502952
-rw-r--r-- 1 root root 53236 Nov 29 20:28 access_log
-rw-r--r-- 1 root root 73029 Nov 20 13:38 access_log.1
...
-rw-r--r-- 1 root root 768143360 Nov 30 02:38 test.log1.tar
-rw-r--r-- 1 root root 768143360 Nov 29 03:38 test.log.tar

/var/log/httpd # vi /etc/cron.d/error_log

# erase /var/log/httpd/error_log when too large and reload Apache

# * * * * * root find /var/log/httpd/error_log -size +500M -exec rm '{}' \; -exec /usr/sbin/apachectl restart \; 2>/dev/null
# * * * * * root find /var/log/httpd/test.log1.tar -size +500M -exec rm '{}' \; -exec /usr/sbin/apachectl restart \; cat "cron error_log was initiated and httpd was restarted on " date >> /var/log/httpd/CleanApacheRestarts.log \; 2>/dev/null


[email protected]
# * * * * * root find /var/log/httpd/error_log -size +500M -exec rm '{}' \; -exec /usr/sbin/apachectl restart \; -exec sh -c 'echo $(date): $1 erased and httpd restarted |tee -a /var/log/httpd/CleanApacheRestarts.log' {} \; 2>/dev/null


* * * * * root find /var/log/httpd/test.log1.tar -size +500M -exec rm '{}' \; -exec /usr/sbin/apachectl restart \; -exec sh -c 'echo $(date): $1 erased and httpd restarted |tee -a /var/log/httpd/CleanApacheRestarts.log' {} \; 2>/dev/null


/var/log/httpd # ll /etc/cron.d/error_log
-rwxr-xr-x 1 root root 937 Nov 30 02:59 /etc/cron.d/error_log
 
Some implementations of cron (I guess all but vixie-cron) need a restart/reload of crond to pick up new or modified crontab files.

Do `/etc/init.d/crond reload' or `/etc/init.d/cron reload', wait 1 minute, then read the last lines (`tail') of /var/log/cron, /var/log/messages, or /var/log/syslog to check whether the command has been executed.
The command works, I just tested it. That must be it.
 



Yes, that resolved the problem!! (service crond restart)
Strange... I was sure that the file in cron.d was re-read from scratch every time, but I guess crond keeps it cached in memory?

Thank you so much.
Huge help!

-Alon.
 
For what it's worth; if you need to do a quick delete and restart, you can do this:

The following is NOT all the steps that need to be done; just an outline.

1. stop apache
2. mv (rename) the log file
3. create (touch) a new log file
4. restart apache
5. remove (rm -f) the renamed file

Don't forget that you have to do this all (except for the final remove) almost immediately; otherwise DirectAdmin will find apache not running and restart it.
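The outline above can be sketched as a small function. Paths and the apachectl commands are placeholders; the stop/start commands are passed in as parameters so the slow delete happens only after the service is back up:

```shell
#!/bin/sh
# Sketch of the rotate-then-delete outline; stop_cmd/start_cmd are
# placeholders for whatever stops and starts Apache on your system.
rotate_log() {
    log=$1 stop_cmd=$2 start_cmd=$3
    $stop_cmd                # 1. stop apache
    mv "$log" "$log.old"     # 2. rename: instant on the same filesystem
    : > "$log"               # 3. create a fresh empty log (touch)
    $start_cmd               # 4. restart apache before the slow delete
    rm -f "$log.old"         # 5. remove the big renamed file last
}

# hypothetical usage:
# rotate_log /var/log/httpd/error_log \
#     "/usr/sbin/apachectl stop" "/usr/sbin/apachectl start"
```

Because the rename and touch are near-instant, the window where Apache is down stays small even when the old file takes a long time to delete.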

Jeff
 

* * * * * root find /var/log/httpd/error_log -size +500M -exec rm '{}' \; -exec /usr/sbin/apachectl restart \; -exec sh -c 'echo $(date): $1 erased and httpd restarted |tee -a /var/log/httpd/CleanApacheRestarts.log' {} \; 2>/dev/null

This script accomplishes all of this automatically in one step.
There is no problem with DA restarting Apache.
The problem is that the log file is so big that DA attempts, and fails, to restart Apache. Once the file is removed, whether it's DA or the script that restarts Apache doesn't really matter. The error_log file is by then already removed, and a new one can be written.

-Alon.
 
Since removing a large file can take a long time and delay the Apache restart, it's better to move the file and recreate it, then remove the renamed file later, after Apache has restarted.

There's more than one way to skin a cat :).

Jeff
 

That's why we use supercomputers :)

I tested it several times with large files, attempting to mimic the same behaviour.
I don't believe the file is built up in less than a minute (for 2GB of logs). I've set the threshold to 500MB, so there should be ample time for the removal and then the restart of Apache.

Once Apache is restarted, the infinite loop causing the problem is broken, and from then on the error_log file doesn't grow any more and stays under 500MB. If it is still above 500MB, then after a minute Apache will be restarted again and a fresh error_log file will be created.
So the worst that can happen is two consecutive one-minute runs, and then everything should be in perfect order.
This problem happens once or twice a week, so given such a fairly low rate of occurrence, 4 minutes a week (at worst) is acceptable downtime for the user.
 