running out of file handles for logs...

webquarry (Verified User; joined Mar 19, 2004; 182 messages)
Has anyone had this happen to them?

Is it even possible? ;)

It has not happened to me, but it occurs to me that Apache can only open so many log files, and three are opened for each domain and each subdomain.

Has anyone hit the limit?
 
As far as I know this is not a limit of Apache but rather of your server resources (CPU/RAM/etc.).
Apache can handle as many requests and log handles as your server resources can support without crashing or becoming slow. This limit would vary from machine to machine depending on specs.
 
Agreed.

But I was under the impression that Apache has its own internal limit, kinda like the 250 child limit that DA has raised to 500 or so. Does anyone know where that limit is, or am I not remembering it right?
 
Apache does have some built-in safeguards that might help keep an overload caused by Apache from bringing down the entire server. These values can be changed in the /etc/httpd/conf/httpd.conf file. Setting the values too low may result in hanging connections and slow responses, while setting them too high can overwhelm your server and bring it to its knees. (err, segfault? :p )
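For anyone wondering which values those are, here is a rough sketch of the relevant prefork directives; the numbers are only examples, not recommendations, so tune them to your own hardware:

# /etc/httpd/conf/httpd.conf -- example values only
StartServers          5
MinSpareServers       5
MaxSpareServers      10
MaxClients          150     # hard ceiling on simultaneous children
MaxRequestsPerChild 1000    # recycle a child after this many requests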
 
256 descriptors per process is a lot. You can probably solve the problem by lowering "MaxRequestsPerChild" in the /etc/httpd/conf/httpd.conf file (don't forget to restart Apache).
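Something along these lines (500 is only an example value; pick whatever suits your traffic):

MaxRequestsPerChild 500
# then restart: apachectl restart   (or: service httpd restart)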

Jeff
 
256 descriptors is indeed a lot IF Apache only opens the log files it needs per request. I suspect opening and closing all those files would be a tad slow, though, and I fear that it just opens them all. (!)

Anyone know more about this?
 
If you read the link to the linux section (search for the word linux here), you might reach the conclusion I did...

That shutting down and restarting child processes before they have the opportunity to use that many descriptors might be a better way of handling it.

The other options are (a) recompile a lot of the base OS (and make it close to impossible to do automatic updates), or (b) rework the system so everything gets logged to one set of logs, do a logrotate every night, and then a log split (you'll have to do it every night to keep Webalizer up to date) before running Webalizer.
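If anyone wants to experiment with option (b), the nightly job would look roughly like the sketch below; the script names, paths, and Webalizer config are assumptions (split-logfile ships with the Apache source), so adjust them for your own setup:

# httpd.conf: log every vhost to one combined file, with the vhost name first
LogFormat "%v %h %l %u %t \"%r\" %>s %b" vhost_common
CustomLog /var/log/httpd/access_log vhost_common

# nightly cron job: rotate, split the rotated log per vhost, then run Webalizer
logrotate /etc/logrotate.d/httpd
split-logfile < /var/log/httpd/access_log.1
webalizer -c /etc/webalizer.conf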

Jeff
 
Terminating children before they open too many logs would certainly help IF Apache only opens new logs as they are needed. Is that indeed how Apache operates, or does it open everything it might need at launch so it can handle requests faster (at the expense of a slightly slower launch time per child)?
 
If each child opened up every file it might need as it started up, that would probably be slower than opening up files as it needed them.

And it would be quite inefficient and waste a lot of resources. Why should every instance of httpd open up hundreds of files when chances are it would never be called upon to use them?

My educated guess, based on experience, is that it opens files as it needs them.

I've never seen any behavior to prove otherwise.

If the original poster tries my suggestion and it doesn't work, that will disprove my suggestion.

Jeff
 
I have about 300 domains on my server. In certain situations more than 30 instances of httpd are running simultaneously. Every instance opens log files for ALL (sub)domains (300 x 30 x 6 file handles), which is weird, because why should a program open an error log file if there aren't any errors to log.....
A side effect is that the memory required for one instance of httpd is about 42 MB. As a result my server often breaks down during peak hours.
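(That works out to roughly 300 x 6 = 1,800 log descriptors per child, and 30 x 42 MB is about 1.26 GB of httpd processes, so the breakdowns are not surprising.)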

My solution:

I found out that all domains had the option "SSL Access" set. This option is not useful in my case, as my clients cannot use this feature (no IP-based domains), so I switched it off for all of them.

The result is that httpd now uses "only" 10 MB per instance, and the number of open file handles has decreased drastically.

Try the command "/usr/sbin/lsof" as superuser and be amazed....
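For example (the "apache" user name is just a guess for a typical setup; substitute whatever user your httpd runs as, and a real child PID):

/usr/sbin/lsof -u apache | wc -l     # everything the apache user has open
/usr/sbin/lsof -p <pid of one httpd child> | grep /var/log/httpd/domains | wc -l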
 
That fits with how I was told Apache operates. The goal is to make it as fast as possible, so at the launch of each child it opens everything it might need, so it doesn't have to waste any time opening files later. Of course this makes launching slower, which is why it always keeps a number of "spare" children on hand to handle spikes in requests.

With three hundred domains, that means each child has, what, two file handles per domain, plus all the appropriate libraries. At any rate you have definitely passed the 256 descriptor/process mark, so it would appear that Linux is dynamically growing that limit for you.

That's good news. I wonder what the "real" limit is. Anyone else stuff more than 300 domains on a server? How much RAM is in the server?
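For what it's worth, the limits are easy to look up on a stock Linux box (the exact defaults vary by kernel and distribution):

ulimit -n                    # per-process open file limit in the current shell
cat /proc/sys/fs/file-max    # system-wide maximum number of file handles
cat /proc/sys/fs/file-nr     # handles currently allocated / free / maximum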
 
In fact, there are 3 handles per (sub)domain, example:
/var/log/httpd/domains/xxxxxx.com.uuuuu.log
/var/log/httpd/domains/xxxxxx.com.uuuuu.error.log
/var/log/httpd/domains/xxxxxx.com.uuuuu.bytes

And in my case there were 6 (for port 80 and port 443).

The server in trouble has 512 MB, and I've ordered 512 MB more, but I think I can do without it.

If necessary, I will remove the (per-domain) error log files and do the splitting in a separate process. But, again, I think it is a design error to open error log files if no errors have been detected.

Anyone else have SSL on without reason? Or is it just me being careless...
 