pm = ondemand
pm.max_children = 70
pm.process_idle_timeout = 60
pm.max_requests = 1000

pm = dynamic
pm.max_children = 70
pm.start_servers = 20
pm.min_spare_servers = 20
pm.max_spare_servers = 50
pm.max_requests = 1000

opcache.memory_consumption=256
opcache.interned_strings_buffer=8
opcache.max_accelerated_files=10099
opcache.revalidate_freq=60
opcache.validate_timestamps=1
opcache.fast_shutdown=1 ; note: this directive was removed in PHP 7.2 (fast shutdown is always used since then)
opcache.enable_cli=1
opcache.validate_permission=1

pm = dynamic
pm.max_children = 400
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
pm.max_requests = 0

opcache.revalidate_freq=0
opcache.validate_timestamps=1
opcache.validate_root=1

opcache.validate_root prevents name collisions in chroot'ed environments; enable it in every chroot'ed environment so that scripts cannot reach cached files outside the chroot.
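To show where that directive fits, here is a minimal sketch of a chroot'ed FPM pool plus the matching opcache settings. The pool name and paths are made-up placeholders, not taken from the thread:

```ini
; Hypothetical chroot'ed pool (pool name and paths are examples only)
[site1]
prefix = /var/www/site1
chroot = $prefix

; php.ini / conf.d: make the shared opcache safe across chroots
opcache.validate_root=1        ; avoid name collisions between chroots
opcache.validate_permission=1  ; re-check file permissions against the current user
```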
opcache.memory_consumption=256
opcache.interned_strings_buffer=8
opcache.max_accelerated_files=10099
opcache.revalidate_freq=60
opcache.validate_timestamps=1
opcache.fast_shutdown=1
opcache.enable_cli=1
opcache.validate_permission=1
opcache.validate_root=1
opcache.use_cwd=1
opcache.revalidate_path=1
opcache.enable_file_override=1

I guess I can disable opcache.enable_cli when I am using PHP-FPM, since PHP-FPM runs over FastCGI rather than the CLI?

I have tested opcache, and it greatly improves performance. Now I only need to figure out how much memory and how many files I should allow for:

Code:
opcache.memory_consumption
opcache.max_accelerated_files

Because I don't think opcache does much good if there is not enough memory for all the sites. There are 400+ WordPress sites on one of the servers, plus some others like Drupal. How much memory and how many files should I budget for opcache per WordPress site, approximately?

Of course it depends, but usually nothing more than 256 MB will bring an improvement. The problem is that the larger the cache is, the more costly it is to maintain, so at some point you start wasting RAM and CPU time caching something that is never used. When the memory is full, opcache frees space by removing the oldest scripts from the cache. If a file is hit very frequently, it "restarts" as the newest entry, so frequently accessed files (the files on the busy sites) remain in the cache, while rarely accessed files eventually drop out. The deal is basically this: do you need to keep a file that is accessed two or three times per hour persistently in the cache? It benefits nothing; in fact it is bad, because it just wastes RAM and (a little) CPU time, so better to let it drop out.
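One way to check whether those two limits are sized right is to inspect the live cache from PHP itself with opcache_get_status() (a real function; the opcache_report() helper and its output format below are my own sketch, not a standard API):

```php
<?php
// Sketch: summarize the array returned by opcache_get_status(false)
// into a short sizing report. opcache_report() is a hypothetical helper.
function opcache_report(array $status): string
{
    $mem   = $status['memory_usage'];
    $stats = $status['opcache_statistics'];
    return sprintf(
        "used %.1f MB, free %.1f MB, wasted %.1f MB; %d scripts / %d keys; hit rate %.2f%%",
        $mem['used_memory'] / 1048576,    // bytes -> MB
        $mem['free_memory'] / 1048576,
        $mem['wasted_memory'] / 1048576,
        $stats['num_cached_scripts'],
        $stats['max_cached_keys'],
        $stats['opcache_hit_rate']
    );
}

// On a live server, in a SAPI where opcache is actually enabled:
if (function_exists('opcache_get_status')) {
    $status = opcache_get_status(false);  // false = skip the per-script list
    if ($status !== false) {
        echo opcache_report($status), "\n";
    }
}
```

Run it from the web SAPI (or the CLI with `php -d opcache.enable_cli=1`) on the busy server: if used_memory keeps approaching opcache.memory_consumption, or num_cached_scripts approaches max_cached_keys, the limits are too low for that workload.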