There are tons of articles on how to optimize a FastCGI setup. Yet, all of them talk about settings for a single site. The situation is much different in shared web hosting, where hundreds of sites run on one server. Understanding & optimizing mod_fcgid settings can make the difference between fast, stable PHP hosting and frequent 503 & 500 errors. Unlike suPHP, which executes a new PHP process for each request, FCGID starts a group of processes per customer and re-uses them for subsequent PHP requests.
mod_fcgid is an option in the majority of hosting control panels, yet most control panels don’t alter the default settings. And the default settings in mod_fcgid are made for a single site. You can find more info on mod_fcgid settings here:
When setting up mod_fcgid, you should care about several configuration options:
FcgidMinProcessesPerClass — should always be 0
“PerClass” means per user in shared hosting, and 0 means there may be no processes at all for a particular user if that user is not active.
FcgidMaxProcessesPerClass — default 100, which means a single customer can have 100 PHP requests served at the same time. That in turn means there can be 100 PHP processes just for that user. This is way too high for shared hosting. I would recommend values anywhere from 8 to 20. Note: if more requests come in at the same time, they will be queued, not rejected.
FcgidMaxProcesses — this is the total number of processes FCGID will start for all users. This is what will prevent OOM issues. The more RAM you have, the higher you can set this value.
If you set this value too low, you will get 500 errors, as FCGID will not be able to create new processes to serve requests. The right value also depends on the size of your PHP processes (which in turn depends on the extensions you have enabled for PHP), as the larger the process, the sooner you will hit OOM. You can try the following numbers depending on your RAM: 8 GB — about 150; 16 GB — about 300.
Also, make sure you monitor the Apache error logs. If you see “can’t apply process slot” errors, it means you are hitting the FcgidMaxProcesses limit.
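A minimal sketch of such monitoring: count how often the process-slot error appears in the Apache error log. The log path is an assumption — adjust it for your distro (e.g. /var/log/httpd/error_log on RHEL-based systems).

```shell
# Count how many times mod_fcgid ran out of process slots
# (i.e. how often the FcgidMaxProcesses limit was hit).
# ERROR_LOG path is an assumption -- point it at your Apache error log.
ERROR_LOG="${ERROR_LOG:-/var/log/apache2/error.log}"
grep -c "can't apply process slot" "$ERROR_LOG"
```

If the count grows steadily, raise FcgidMaxProcesses (RAM permitting) or shorten the idle/lifetime settings below.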
FcgidIdleTimeout — default 300, the number of seconds a process can stay idle before it gets killed. The higher the number, the lower the CPU usage, but the higher the chance of hitting the FcgidMaxProcesses limit (as processes live longer). I would recommend starting at 60.
FcgidIdleScanInterval — this should be adjusted as well, to about half of FcgidIdleTimeout.
FcgidProcessLifeTime — default 3600 (1 hour), should be anywhere from 120 to 300 seconds (double the idle timeout). It is there to make sure that processes eventually get killed once they pass their lifetime, even if they never stay idle long enough to hit FcgidIdleTimeout.
The shorter idle timeout/process life time, the less chance that you will hit FcgidMaxProcesses limit, but the more load you will put on the system.
Most values can be the same from server to server, but you will want to adjust FcgidMaxProcesses depending on the amount of RAM you have.
Example FCGID settings for shared hosts:
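A sketch combining the values recommended above; the FcgidMaxProcesses value assumes a server with about 16 GB of RAM, so scale it per the RAM guidance earlier:

```apache
<IfModule mod_fcgid.c>
    # Keep no idle processes around for inactive users
    FcgidMinProcessesPerClass   0
    # Cap concurrent PHP processes per customer (8-20 recommended)
    FcgidMaxProcessesPerClass   10
    # Global process cap; assumes ~16 GB RAM -- scale to your server
    FcgidMaxProcesses           300
    # Kill processes idle for more than 60 seconds
    FcgidIdleTimeout            60
    # Scan for idle processes at half the idle timeout
    FcgidIdleScanInterval       30
    # Hard lifetime cap: double the idle timeout
    FcgidProcessLifeTime        120
</IfModule>
```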
This article was written by CloudLinux. Please view their website at http://www.cloudlinux.com