Nginx + PHP FASTCGI FAILS - how to debug ?
r at roze.lv
Fri Feb 12 17:08:01 MSK 2010
> So, both php-cgi and php-fpm appear to not behave well with nginx.
First of all - when running php in fastcgi mode you should understand that the webserver is more or less taken out of the picture.
The php fastcgi service is a standalone process which listens on a port or socket and doesn't actually care which webserver is
used, as long as it can 'talk fastcgi'. Of course there are some differences in how the webservers handle a particular
situation or error - lighttpd, for example, disables an unreachable backend for some (configurable) time and doesn't try to
reconnect until that time has passed.
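For concreteness, the nginx side of this wiring is just a fastcgi_pass pointing at whatever port or socket the php service listens on. A minimal sketch (the address, socket path and param names below are the common defaults, not taken from the original post):

```
location ~ \.php$ {
    include       fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass  127.0.0.1:9000;   # or: unix:/var/run/php-fpm.sock
}
```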
But the basic principles which have worked for us (on a pretty big site with thousands of requests per second) are these:
1. Use PHP-FPM! ( http://php-fpm.org/ ) While at the moment it requires you to patch and build your php from source (rather than
installing a distro package), even the php developers themselves have admitted that the current php process manager is basically shite.
And for good reason they have at last included php-fpm in the 5.3 tree (still experimental, though).
Without php-fpm you basically have no control over your fastcgi processes (except 'PHP_FCGI_MAX_REQUESTS'): the master process
can die, children can get stuck in infinite loops, or they can eat all of your ram in case of leaking code / extensions.
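To illustrate the contrast: a bare php-cgi FastCGI service is typically started with just two environment knobs, and that is all the process management you get without php-fpm (a sketch - the values and the address are illustrative):

```shell
# The only process-management knobs plain php-cgi offers:
export PHP_FCGI_CHILDREN=4        # number of children the master forks
export PHP_FCGI_MAX_REQUESTS=500  # a child exits (and is respawned) after this many requests

# Bind the FastCGI service to a TCP port (or a unix socket path):
#   php-cgi -b 127.0.0.1:9000
```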
2. The typical problem we have encountered when php pages suddenly stop processing is that either all the forked children are
running some long (unintended) scripts (the built-in max_execution_time doesn't always work as expected, if at all) or they
have simply hung, so the master process has no free children to assign the incoming request to.
That's why you should:
- spawn more than just a few children. While the typical approach is to go by cpu core count, we have found that a
multiplier of 3 - 4x works better, as php code usually tends to spend more time waiting on external resources (DBs etc) than
burning cpu.
- use the great features of php-fpm to monitor which scripts take too long to execute and kill those that run away.
For example, we log (with a backtrace) requests taking more than 30 seconds to compute and forcibly kill those taking
longer than a minute.
This has helped us to find all the infinite loops and other weird issues created by php coders, and also by opcode accelerators
like eAccelerator (see http://www.eaccelerator.net/ticket/381 for example).
- tune the process_control_timeout and emergency_restart_threshold settings so the php children get respawned in case of memory
leaks/errors or when a child gets stuck.
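Put together, the thresholds described above might look like this in a php-fpm config. This is a sketch in today's ini-style syntax (the php-fpm of that era used an XML config file, and the exact values beyond the 30s/60s mentioned above are illustrative):

```ini
[global]
emergency_restart_threshold = 10   ; restart the master if 10 children die with
emergency_restart_interval  = 1m   ; SIGSEGV/SIGBUS within one minute
process_control_timeout     = 10s  ; how long a child may take to react to master signals

[www]
pm.max_children           = 16     ; ~4x cores on a 4-core box
request_slowlog_timeout   = 30s    ; backtrace requests running longer than 30s...
slowlog                   = /var/log/php-fpm.slow.log
request_terminate_timeout = 60s    ; ...and kill those running longer than a minute
```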
3. At last, debugging what a child is actually doing is easily done with 'strace'. Use 'ps aux' to see the process numbers and
which php child is eating cpu, then attach with 'strace -p [pid]' and you can take a look at what the process is doing (if it
is doing anything at all).
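A hypothetical session following those steps (the PID 12345 is illustrative; the process may show up as php-cgi or php-fpm depending on your setup):

```shell
# Step 1: list php children sorted by CPU usage, busiest first.
# The pattern '[p]hp' matches php-cgi/php-fpm without matching the grep itself.
ps aux --sort=-%cpu | grep '[p]hp' || echo "no php children running"

# Step 2: attach to the suspect PID from the list above. strace prints every
# system call the child makes, so you can see whether it is spinning in a
# loop, blocked on a socket, or waiting on the database. Ctrl-C detaches
# without killing the process.
#   strace -p 12345
```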