fastcgi_read_timeout with PHP backend
B.R.
reallfqq-nginx at yahoo.fr
Mon May 27 01:31:32 UTC 2013
No ideas?
---
*B. R.*
On Sat, May 25, 2013 at 1:01 PM, B.R. <reallfqq-nginx at yahoo.fr> wrote:
> Hello,
>
> I am trying to understand how fastcgi_read_timeout works in Nginx.
>
> Here is what I want to do:
> I list files (a few MB each) in a remote location and copy them one by one,
> in a loop, to the local disk through PHP.
> I do not know the number of files I need to copy, thus I do not know the
> total amount of time the script needs to finish its execution. What I do
> know is that I can ensure a processing time limit per file.
> I would like my script not to be forcefully interrupted by either side
> (PHP or Nginx) before completion.
>
>
> What I did so far:
> - PHP has a 'max_execution_time' of 30s (the default?). In the loop copying
> files, I use the set_time_limit() function to reinitialize the limit before
> each file copy, hence each file has 30s of processing time: way more than enough!
>
> - The problem seems to lie on the Nginx side, with the
> 'fastcgi_read_timeout' configuration directive (sketched just below).
> I cannot know in advance what maximum time I will need, and I would rather
> not use way-off values such as 2 weeks or 1 year there. ;o)
> What I understood from the documentation
> <http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_read_timeout>
> is that the timeout is reinitialized after a successful read: am I right?
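>
> For reference, here is a minimal sketch of the kind of location block I am
> talking about; the PHP-FPM address (127.0.0.1:9000) and the 300s value are
> only placeholders, not my actual setup:
>
>     location ~ \.php$ {
>         include fastcgi_params;
>         fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
>         fastcgi_pass 127.0.0.1:9000;
>         # The directive in question; the value here is just an example
>         fastcgi_read_timeout 300s;
>     }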
>
> The challenge is now to cut any buffering occurring on the PHP side and
> let Nginx manage it (since the buffering will occur after the content has
> been read from the backend). Here is what I did:
> * PHP's zlib.output_compression is deactivated by default
> * I deactivated PHP's output_buffering (the default is 4096 bytes)
> * I am calling the PHP flush() function at the end of each iteration of the
> copying loop, after a message is written to the output (a sketch of the
> loop follows this list)
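>
> To make the structure concrete, here is a stripped-down sketch of that loop;
> list_remote_files() and the destination directory are placeholders, not my
> real code:
>
>     <?php
>     // Assumes output_buffering = Off and zlib.output_compression = Off,
>     // as described in the list above.
>
>     $files = list_remote_files(); // hypothetical helper listing the remote files
>
>     foreach ($files as $src) {
>         // Give each file copy its own 30-second execution budget.
>         set_time_limit(30);
>
>         copy($src, '/local/dir/' . basename($src));
>
>         // Report progress and try to push it to the client right away.
>         echo 'Copied ' . basename($src) . "\n";
>         flush();
>     }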
>
>
> Current state:
> * The script still seems to be cut off after the expiration of the
> 'fastcgi_read_timeout' limit (confirmed by the error log entry 'upstream
> timed out (110: Connection timed out) while reading upstream')
> * The PHP loop is entered several times since multiple files have been
> copied
> * The output sent to the browser is cut before any output from the loop
> appears
>
> It seems that there is still some unwanted buffering on the PHP side.
> I also note that PHP's flush() function does not seem to work, since the
> output in the browser does not contain any of the messages written after
> each file copy.
>
> Am I misunderstanding something about Nginx here (especially about the
> 'fastcgi_read_timeout' directive)?
> Do you have any insight or advice on the matter?
>
> Thanks,
> ---
> *B. R.*
>