PHP-FPM nightmares

I've decided to rent a beefy server for this site... They are prepared to pay for something good, e.g. 32 GB RAM, 8 cores... might be overkill, but...

Now, I'm just an Apache, PHP, MySQL, ProFTPD guy... so can I ask you all: what configuration would be best?
 
Well W3TC didn't do anything to help
Have you tried combining its page cache (basic), object cache, and database cache?
Object cache can slow down some sites (typically article-heavy sites), but your client's site is a forum (highly dynamic), so I think the object cache will give it an advantage.
 
I just moved from Apache to nginx + PHP-FPM, and it feels faster with nginx.
If you activate a cache plugin, nginx should take advantage of it for your client's site.

I saw some articles saying LiteSpeed is faster, and the LiteSpeed web server developers also made their own cache plugin for WordPress, but I have no experience speed-testing it.


They didn't include nginx + W3TC in those articles; maybe they left some things out.
nginx + W3TC is a free alternative when compared with LiteSpeed Web Server.
 
This is more Time-To-First-Byte related, but for slow WordPress sites in particular, we've had a noticeable improvement by switching from Apache to nginx, installing the wp-rocket WordPress plugin, and editing our nginx config as follows:


For permalinks to keep working after switching to nginx, you'll need to edit nginx_php.conf:

Code:
cd /usr/local/directadmin/data/templates/custom
cp ../nginx_php.conf .
nano /usr/local/directadmin/data/templates/custom/nginx_php.conf

# Add the following lines at the bottom:

location / {
    try_files $uri $uri/ /index.php?$args;
}

# Rewrite the configs:

chown diradmin:diradmin nginx_php.conf
cd /usr/local/directadmin/custombuild
./build rewrite_confs
 
I see nginx doesn't support .htaccess. What's the alternative to it, as in, letting the client make changes without me needing to edit configs?
 
It's hard to say how you can fix it.

A simple way to help is using a CDN, like Cloudflare.
 
It does go through the CF proxy, but no CDN...

It took me a few days to set up the new server... and guess what... some bits are slow, and the rest is fast... My client might've pinpointed it to MegaMenu... Anyone else use it with the latest WP?

I go out of my way to help my clients; yes, I'm a one-man show, but I enjoy helping people if I can... even weekends!

This has been a nightmare!
 
The site is actually fast now; it's just the max_input_vars limit for the menu... It's set at 1500, and I even tried 15000 as a joke... Still doesn't solve anything.

(No, there's no suhosin installed 'yet')
 
I am using apache_nginx with PHP 7.4 and OPcache as a base for most sites.

Some VPSes run W3TC with Redis for the bigger WooCommerce sites. A lot of the documentation I have read says to use W3TC. If you took it off, I would add it back and go through each section slowly. Also, it won't help immediately; it will take 20 minutes or so to ramp up.

Do you have Redis installed?
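
If you're not sure, a quick check from the shell (assuming redis-cli is installed alongside the server):

Code:
# ping the local Redis server; a healthy install replies "PONG"
redis-cli ping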

I also tuned MariaDB 10.4 with mysqltuner.
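
For anyone following along, a minimal sketch of running it, per the mysqltuner docs (it's a standalone Perl script):

Code:
# download the script and run it against the local MariaDB/MySQL server
wget http://mysqltuner.pl/ -O mysqltuner.pl
perl mysqltuner.pl

Review its suggestions before applying anything; it only reports, it doesn't change your config.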

On the sites I have, max_input_vars is 3000, which is the recommended standard for WP.
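
If you need to set that explicitly, a minimal sketch, assuming a stock DirectAdmin PHP layout (adjust phpXX to your installed version and confirm which php.ini your build actually loads):

Code:
; in /usr/local/phpXX/lib/php.ini
max_input_vars = 3000

Then restart PHP-FPM so the new limit takes effect.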
 
Yes, optimization needs time to generate the cache, no matter which cache method you use, as @bdacus01 said.
The best cache method will vary with the type of your client's site.
You should test every method that fits the site type and wait a while to see the changes.

Also, pre-gzipped cache and asset files will help reduce CPU usage for each request; nginx makes this easy with http_gzip_static_module (see the sketch below).
In Apache, W3TC will create .htaccess rules for it; you can copy the rules it creates in the wp-content/cache/minify/ folder to your custom asset folder and make small modifications.
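
A minimal nginx sketch of serving pre-compressed assets, assuming nginx was built with --with-http_gzip_static_module and that .gz twins sit next to the original files:

Code:
# serve style.css.gz instead of gzipping style.css on every request
location ~* \.(css|js|svg)$ {
    gzip_static on;                     # try the pre-compressed .gz file first
    expires 30d;                        # static assets can be cached for a long time
    add_header Cache-Control "public";
}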

For a fast setup, I would recommend using Disk: Basic, because your client's site is highly dynamic (a forum site); the basic cache creates a cached fragment for each part of your site design.
Disk: Enhanced creates a full HTML page for each URL, so it's better suited to a highly static site like a blog or documentation.
 
I'm bumping this thread... Every few weeks the server comes to a halt; well, it won't serve pages until I restart Apache... When this happens and I look at the DA Service Monitor, Apache has loads of processes (near 2 GB) and PHP-FPM has loads of processes (near 3 GB).

I'm at a loss... One site on an 8-core, 32 GB RAM server!

My mpm_event settings are:
Code:
<IfModule mpm_event_module>
    StartServers              6
    MinSpareThreads          32
    MaxSpareThreads         128
    ThreadsPerChild          64
    ServerLimit              32
    MaxRequestWorkers      2048
    MaxConnectionsPerChild 10000
</IfModule>
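
As a sanity check on those numbers: with mpm_event, MaxRequestWorkers is capped at ServerLimit × ThreadsPerChild, and 32 × 64 = 2048 exactly matches the MaxRequestWorkers value, so the limits are at least internally consistent.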
 
Ok, I am linking the site (hope it's fine to do so):


It's behind Cloudflare, but still having issues. Sometimes it reports the host as down.

The site is heavy with content, so the loading speed is affected even though it is a beefy server...

Anyone see what it could be? I am really tearing my hair out...
 
Do you see any errors in

Code:
/usr/local/phpXX/var/log/php-fpm.log

i.e. errors like this:

Code:
WARNING: [pool XXX] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers)

You can also make some PHP-FPM-specific config changes, which I haven't seen mentioned here, in

Code:
/usr/local/phpXX/etc/php-fpm.conf
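
For example, a minimal sketch of one pool's process-manager settings (illustrative values only, not recommendations; on DirectAdmin the per-user pool config may live in a separate include file):

Code:
; process-manager settings for one pool
pm = dynamic
pm.max_children = 40        ; hard cap on worker processes for this pool
pm.start_servers = 8        ; workers spawned when the pool starts
pm.min_spare_servers = 4    ; always keep at least this many idle
pm.max_spare_servers = 12   ; reap idle workers above this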
 
Only this one:
Code:
 WARNING: [pool xxx] server reached max_children setting (10), consider raising it
I'll increase it.

Have upped it to 40 from 10; might be overkill, but...
 
This warning is pretty important. From the moment max_children is exceeded, your websites become unavailable. You can quickly make them available again by restarting PHP-FPM. You should increase max_children according to how much RAM your server has. One PHP-FPM process usually uses about 50-80 MB of RAM. Don't increase it by too much, or your server could run out of memory when serving many requests.
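
As a rough worked example (my own numbers, assuming ~70 MB per worker): if you budget 8 GB of this 32 GB box for PHP-FPM and leave the rest for MariaDB, Apache, and the OS, that's roughly 8192 / 70 ≈ 117 workers, so max_children = 40 is well within safe territory, with headroom to raise it if the warning comes back.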
 