fastcgi_read_timeout with PHP backend

Steve Holdoway steve at greengecko.co.nz
Mon May 27 01:46:04 UTC 2013


Write a script that lists the remote files, checks for the
existence of each file locally, and copies it if it doesn't exist. That way
no internal loop is used - use a different exit code to indicate whether
a file was copied, or none were ready.

That way you scale down to a single file transfer per invocation. There's
nothing to be gained from looping internally - performance-wise, that is.
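A sketch of that approach as a small shell function (the directory paths are placeholders; in practice the listing would come from the distant host, e.g. via ssh or rsync):

```shell
# copy_next: copy at most ONE file from $1 (remote listing) to $2 (local dir).
# Returns 0 if a file was copied, 1 if none were ready.
# The caller re-invokes (e.g. from cron or a wrapper) until it gets 1.
copy_next() {
    remote_dir=$1
    local_dir=$2
    for f in "$remote_dir"/*; do
        [ -e "$f" ] || continue          # glob matched nothing
        name=$(basename "$f")
        if [ ! -e "$local_dir/$name" ]; then
            cp "$f" "$local_dir/$name"   # copy exactly one file, then stop
            return 0                     # signal: one file copied
        fi
    done
    return 1                             # signal: nothing left to copy
}
```

Each invocation does at most one transfer, so no single run ever approaches a timeout, and the exit code tells the caller whether to run it again.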

Steve

On Sun, 2013-05-26 at 21:31 -0400, B.R. wrote:
> No ideas?
> 
> ---
> B. R.
> 
> 
> On Sat, May 25, 2013 at 1:01 PM, B.R. <reallfqq-nginx at yahoo.fr> wrote:
>         Hello,
>         
>         
>         I am trying to understand how fastcgi_read_timeout works in
>         Nginx.
>         
>         
>         Here is what I want to do:
>         
>         I list files (a few MB each) in a remote location and copy
>         them one by one (in a loop) to the local disk through PHP.
>         
>         I do not know how many files I need to copy, thus I do not
>         know the total amount of time the script needs to finish its
>         execution. What I do know is that I can ensure a processing
>         time limit per file.
>         
>         I would like my script not to be forcefully interrupted by
>         either side (PHP or Nginx) before completion.
>         
>         
>         
>         What I did so far:
>         
>         - PHP has a 'max_execution_time' of 30s (the default?). In the
>         loop copying files, I use the set_time_limit() function to
>         reinitialize the limit before each file copy, so each file
>         copy has 30s to complete: more than enough!
>         
>         
>         - The problem seems to lie on the Nginx side, with the
>         'fastcgi_read_timeout' configuration entry.
>         
>         I can't know in advance the maximum time I need, and I would
>         like not to use way-off values such as 2 weeks or 1 year
>         there. ;o)
>         
>         What I understood from the documentation is that the timeout
>         is reinitialized after a successful read: am I right?
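For reference, a sketch of the relevant directive in an nginx location block (the socket path and value here are illustrative, not from the original configuration). fastcgi_read_timeout bounds the time between two successive reads from the backend, not the total response time, so it is indeed reset each time nginx receives data:

```nginx
location ~ \.php$ {
    include              fastcgi_params;
    fastcgi_pass         unix:/var/run/php-fpm.sock;  # illustrative path
    # Maximum time between two successive reads from the backend;
    # the timer restarts after each successful read.
    fastcgi_read_timeout 60s;
}
```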
>         
>         
>         The challenge is now to disable any buffering occurring on the
>         PHP side and let Nginx manage it (since the buffering will
>         occur after content is read from the backend). Here is what I
>         did:
>         
>         * PHP's zlib.output_compression is deactivated by default in
>         PHP
>         
>         * I deactivated PHP's output_buffering (default is 4096 bytes)
>         
>         * I am using the PHP flush() function at the end of each
>         iteration of the copying loop, after a message is written to
>         the output
>         
>         
>         
>         Current state:
>         
>         * The script still seems to be cut off after the expiration of
>         the 'fastcgi_read_timeout' limit (confirmed by the error log
>         entry 'upstream timed out (110: Connection timed out) while
>         reading upstream')
>         
>         * The PHP loop is entered several times since multiple files
>         have been copied
>         
>         * The output sent to the browser is cut before any output from
>         the loop appears
>         
>         
>         It seems that there is still some unwanted buffering on the
>         PHP side.
>         
>         I also note that PHP's flush() function doesn't seem to work,
>         since the output in the browser doesn't contain any message
>         written after each file copy.
>         
>         
>         Am I misunderstanding something about Nginx here (especially
>         about the 'fastcgi_read_timeout' directive)?
>         
>         Do you have any intel/piece of advice on the matter?
>         
>         Thanks,
>         
>         ---
>         B. R.
> 
> 
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-- 
Steve Holdoway BSc(Hons) MNZCS <steve at greengecko.co.nz>
http://www.greengecko.co.nz
MSN: steve at greengecko.co.nz
Skype: sholdowa


