apache original config?

jmstacey

I discovered the original httpd.conf file in /usr/local/directadmin/data/templates
and was curious about how it differed from my current one,
so I compared the two and updated mine to match the default.

MaxClients was set to 150 in my conf but 400 in the default one, and KeepAlive was off for me. I changed both and everything seemed fine, except for a jump in load from .10 to .40.
After a few days it started causing problems, and eventually Apache just became unreliable and would freeze the server, so I'd end up rebooting via APC. Eventually Apache couldn't even be shut down without an entire server reboot. I reset everything back to my original config and it's back to normal now.

I'm just wondering if anybody else has played around with this. How does your config compare to the one DA originally used to set up your system (assuming that's what it is)? How did your server handle the changes, if there were any?
I'm assuming the differences exist because the httpd.conf in the templates directory gets updated when DirectAdmin is updated. And if that really is the template that would be used if I were to reinstall, my system would be very unstable.
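For reference, the two directives in question look roughly like this (only the lines that differed; the surrounding httpd.conf content is omitted):

    # my running httpd.conf
    KeepAlive Off
    MaxClients 150

    # the DA template in /usr/local/directadmin/data/templates
    KeepAlive On
    MaxClients 400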
 
Hello,

We increased the MaxClients setting because many servers were reaching 150 quite easily. As for KeepAlive being turned on, that setting is supposed to greatly *improve* performance, because an Apache child is not spawned for each connection. Instead, the same child can handle many requests from the same user, which reduces the overhead of forking new Apache processes. I'm not sure why it would increase the load on your system; we haven't noticed any problems on our servers with it.

John
 
First I turned KeepAlive on only.

100 processes, average 15% usage, 0.36 load avg.
It went up to
200 processes, average 9% usage, 0.6 load avg.

Apache server status goes from around 15 idle servers down to a constant zero, and the CPU usage shown in server status increases from an average of .5% up to 15%. It also seems like HTTP requests are taking longer, so I ran some tests using
/usr/sbin/ab against my phpmyadmin/index.php.
I ran several tests and the average it could handle was 0.20 requests per second, which is pathetic and disgraceful. I turned KeepAlive off, ran the same tests, and this time it scored an average of 90 requests per second.
I then set MaxClients to 450 and ran the test again; this scored an average of 92 requests per second, and process count and CPU usage went back to what they were when it was set to 150.
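For anyone wanting to repeat the test, a typical ab invocation looks something like this (the request count, concurrency level, and hostname below are only placeholders; the -k flag makes ab reuse connections the way KeepAlive does):

    # 1000 requests, 10 concurrent, plain connections
    /usr/sbin/ab -n 1000 -c 10 http://localhost/phpmyadmin/index.php

    # same test, but reusing connections via HTTP KeepAlive
    /usr/sbin/ab -k -n 1000 -c 10 http://localhost/phpmyadmin/index.php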

Together they are a deadly combo that makes the server unresponsive. Any ideas about the extreme performance loss when KeepAlive is activated?
 
Turning on KeepAlive for highly dynamic sites is useless and consumes more resources than necessary. Only consider using KeepAlive if your website is mostly static, with lots of images. Also, the default KeepAlive timeout of 15 seconds is a bit long, so you may want to limit that to 5 seconds or so, and the maximum requests per KeepAlive session should probably be bumped up to 500 or more for better performance.
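In httpd.conf that advice would look roughly like this (a sketch, adjust to your own setup):

    KeepAlive On
    # drop the persistent connection after 5 seconds of inactivity instead of 15
    KeepAliveTimeout 5
    # allow up to 500 requests over a single kept-alive connection
    MaxKeepAliveRequests 500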
 
I found my problem and a solution when I encountered constant segfault errors. TurckMMCache was causing them; once I disabled it, the problem was solved, although there are now about 350 idle processes. I'm not sure if that's hurting anything yet, though server load did go up to 1.6.
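In case anyone else runs into this: disabling TurckMMCache usually just means commenting out its line in php.ini and restarting Apache. The extension path below is only an example; yours will differ:

    ; comment out the accelerator to disable it, then restart Apache
    ;zend_extension="/usr/local/lib/php/extensions/mmcache.so"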
 