fastcgi_read_timeout with PHP backend

Maxim Dounin mdounin at mdounin.ru
Mon May 27 10:19:47 UTC 2013


Hello!

On Sat, May 25, 2013 at 01:01:32PM -0400, B.R. wrote:

> Hello,
> 
> I am trying to understand how fastcgi_read_timeout works in Nginx.
> 
> Here is what I want to do:
> I list files (a few MB each) in a remote location and copy them one by
> one (in a loop) to the local disk through PHP.
> I do not know how many files I need to copy, so I do not know the
> total amount of time the script needs to finish its execution. What I
> can ensure is a processing-time limit per file.
> I would like my script not to be forcefully interrupted by either side
> (PHP or Nginx) before completion.
> 
> 
> What I did so far:
> - PHP has a 'max_execution_time' of 30s (the default). In the loop copying
> files, I use the set_time_limit() function to reset the limit before
> each file copy, so each file copy has 30s to complete: more than enough!
> 
> - The problem seems to lie on the Nginx side, with the
> 'fastcgi_read_timeout' configuration entry.
> I can't predict the maximum time I need, and I would like not to use
> way-off values such as 2 weeks or 1 year there. ;o)
> What I understood from the documentation
> <http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_read_timeout>
> is that the timeout is reinitialized after a successful read: am I right?

Yes.
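
For reference, the directive in question is set per location. A minimal
sketch of the relevant config (the socket path and location pattern are
assumptions, not taken from this thread):

```
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/var/run/php-fpm.sock;

    # Timeout between two successive reads from the FastCGI backend;
    # it is reset after every successful read, so a script that keeps
    # producing output never hits it.
    fastcgi_read_timeout 60s;
}
```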

> The challenge is now to cut any buffering occurring on the PHP side and let
> Nginx manage it (since Nginx only buffers after content has been read
> from the backend). Here is what I did:
> * PHP's zlib.output_compression is deactivated by default in PHP
> * I deactivated PHP's output_buffering (default is 4096 bytes)
> * I call PHP's flush() function at the end of each iteration of the
> copying loop, after a message is written to the output
> 
> 
> Current state:
> * The script still seems to be cut after the expiration of the
> 'fastcgi_read_timeout' limit (confirmed by the error-log entry 'upstream
> timed out (110: Connection timed out) while reading upstream')
> * The PHP loop is entered several times since multiple files have been
> copied
> * The output sent to the browser is cut before any output from the loop
> appears
> 
> It seems that there is still some unwanted buffering on the PHP side.
> I also note that PHP's flush() function doesn't seem to work, since the
> output in the browser doesn't contain any message written after each file
> copy.

There is buffering on the nginx side, too, which may prevent the last 
part of the response from appearing in the output as seen by a browser.  
It doesn't explain why the read timeout isn't reset, though.
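
If the goal is to defeat buffering on the PHP side, both the userland
output buffer and the SAPI buffer have to be flushed on every iteration.
A rough sketch (the file list and copy step are placeholders, not the
original script):

```php
<?php
// Placeholder list; the real script discovers the remote files at runtime.
$files = ['a.dat', 'b.dat'];

// Drop any userland output buffers so echo goes straight to the SAPI.
while (ob_get_level() > 0) {
    ob_end_flush();
}

foreach ($files as $file) {
    set_time_limit(30);                        // fresh 30s budget per file
    copy("ftp://remote/$file", "/tmp/$file");  // placeholder copy step

    echo "copied $file\n";
    flush();                                   // push the SAPI output buffer
}
```

Even then, php-fpm may coalesce small writes into full FastCGI records;
padding each message out to a few kilobytes, or (in later nginx versions)
sending an 'X-Accel-Buffering: no' response header, are common
workarounds worth verifying against your particular versions.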

> Am I misunderstanding something about Nginx here (especially about the
> 'fastcgi_read_timeout' directive)?

Your understanding looks correct.

> Have you any intel/piece of advice on the matter?

You may try looking into the debug log, see 
http://nginx.org/en/docs/debugging_log.html, and/or running tcpdump 
between nginx and php.  That should help you examine what nginx 
actually receives from php.
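
A minimal sketch of the debug-log setup (it requires an nginx binary
built with --with-debug; the log path is an assumption):

```
error_log /var/log/nginx/debug.log debug;
```

For the wire-level view, something like 'tcpdump -i lo -A port 9000'
(assuming php-fpm listens on 127.0.0.1:9000) captures the FastCGI
records exactly as nginx sees them.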

-- 
Maxim Dounin
http://nginx.org/en/donation.html
