fastcgi_read_timeout with PHP backend

B.R. reallfqq-nginx at yahoo.fr
Mon May 27 02:38:52 UTC 2013


One way or another, even if an external script is called, PHP will need to
wait for the script's completion, making parallelization impossible or at
least useless (since waiting for the return code of an external script is
still blocking).
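
To make the point concrete - a minimal sketch, where copy_files.sh is a
hypothetical external script - PHP blocks on exec() until the command
exits, so the FastCGI response stalls for the whole duration either way:

    <?php
    $output = array();
    $exitCode = 0;
    // exec() only returns once the external command has exited,
    // so nothing is sent to Nginx in the meantime.
    exec('/usr/local/bin/copy_files.sh 2>&1', $output, $exitCode);
    echo ($exitCode === 0) ? "copy succeeded\n" : "copy failed\n";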

I am not trying to find a workaround; I need to know how
fastcgi_read_timeout works (if I understood it properly), whether I
properly disabled PHP buffering in my example case, and how to control
those timeouts.
I'd like to address the central problem here, not close my eyes to it.
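
For reference, here is the shape of the loop I am testing with - a
trimmed-down sketch, where $files and copy_one_file() stand in for my real
listing and copy code:

    <?php
    // Drop any output buffers PHP has already opened; note that
    // output_buffering itself can only be disabled in php.ini, not at
    // runtime (zlib.output_compression is already off by default).
    while (ob_get_level() > 0) {
        ob_end_flush();
    }

    foreach ($files as $file) {        // $files: hypothetical file list
        set_time_limit(30);            // restart the 30s limit per copy
        $ok = copy_one_file($file);    // hypothetical copy helper
        echo ($ok ? 'OK   ' : 'FAIL ') . $file . "\n";
        flush();                       // push output towards Nginx so its
                                       // fastcgi_read_timeout counter resets
    }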

---
*B. R.*


On Sun, May 26, 2013 at 10:24 PM, Steve Holdoway <steve at greengecko.co.nz> wrote:

> Surely, you're still serialising the transfer with a loop?
>
> On Sun, 2013-05-26 at 22:11 -0400, B.R. wrote:
> > Thanks for your answer.
> >
> > I didn't go into specifics because my problem doesn't lie in the
> > application-level logic.
> >
> > What you describe is what my script does already.
> >
> >
> > However, in this particular case I have 16 files, each weighing a few
> > MB, which need to be transferred back at once.
> >
> >
> > PHP allocates 30s to each loop iteration (more than enough to copy the
> > file + echo some output message about successful/failed completion).
> >
> > Nginx cuts the execution after the fastcgi_read_timeout time, even with
> > my efforts to cut down any buffering on the PHP side (which should force
> > the output to be sent to Nginx and reinitialize the timeout counter).
> >
> > That Nginx action is the center of my attention right now. How can I
> > get rid of it in a scalable fashion (i.e. no fastcgi_read_timeout =
> > 9999999)?
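> >
> > (The closest thing I have to a scalable stopgap is scoping a larger -
> > but still bounded - timeout to the one location that needs it, instead
> > of raising it globally. A sketch, with /copy.php and the PHP-FPM socket
> > path as placeholders:
> >
> >     location = /copy.php {
> >         include fastcgi_params;
> >         fastcgi_pass unix:/var/run/php5-fpm.sock;
> >         fastcgi_read_timeout 300s;  # bounded, not 9999999
> >     }
> >
> > But that still caps the total run time, which is what I want to avoid.)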
> > ---
> > B. R.
> >
> >
> >
> >
> > On Sun, May 26, 2013 at 9:46 PM, Steve Holdoway
> > <steve at greengecko.co.nz> wrote:
> >         Write a script that lists the remote files, then checks for the
> >         existence of each file locally, and copies it if it doesn't
> >         exist. That way no internal loop is used - use a different exit
> >         code to note whether one was copied, or there were none ready.
> >
> >         That way you scale to a single file transfer. There's nothing
> >         to be gained from looping internally - performance-wise, that
> >         is.
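> >
> >         Something along these lines - an untested sketch, with the
> >         directory paths as placeholders:
> >
> >         <?php
> >         // Copy the first remote file that is missing locally, then
> >         // exit: code 0 = copied one, code 1 = nothing left to copy.
> >         $remote = '/mnt/remote';   // placeholder source
> >         $local  = '/var/data';     // placeholder destination
> >         foreach (scandir($remote) as $name) {
> >             if ($name === '.' || $name === '..') continue;
> >             if (!file_exists("$local/$name")) {
> >                 copy("$remote/$name", "$local/$name");
> >                 exit(0);
> >             }
> >         }
> >         exit(1);
> >
> >         The caller just re-runs it until it returns 1, so no single
> >         invocation lives longer than one file's copy time.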
> >
> >         Steve
> >
> >         On Sun, 2013-05-26 at 21:31 -0400, B.R. wrote:
> >         > No ideas?
> >         >
> >         > ---
> >         > B. R.
> >         >
> >         >
> >         > On Sat, May 25, 2013 at 1:01 PM, B.R.
> >         > <reallfqq-nginx at yahoo.fr> wrote:
> >         >         Hello,
> >         >
> >         >
> >         >         I am trying to understand how fastcgi_read_timeout
> >         >         works in Nginx.
> >         >
> >         >
> >         >         Here is what I want to do:
> >         >
> >         >         I list files (a few MB each) in a remote location,
> >         >         which I copy one by one (in a loop) to the local disk
> >         >         through PHP.
> >         >
> >         >         I do not know the number of files I need to copy,
> >         >         thus I do not know the total amount of time the
> >         >         script needs to finish its execution. What I do know
> >         >         is that I can ensure a processing time limit per file.
> >         >
> >         >         I would like my script not to be forcefully
> >         >         interrupted by either side (PHP or Nginx) before
> >         >         completion.
> >         >
> >         >
> >         >
> >         >         What I did so far:
> >         >
> >         >         - PHP has a 'max_execution_time' of 30s (the
> >         >         default?). In the loop copying files, I use the
> >         >         set_time_limit() function to reinitialize the limit
> >         >         before each file copy, hence each file copy has 30s
> >         >         to complete: more than enough!
> >         >
> >         >
> >         >         - The problem seems to lie on the Nginx side, with
> >         >         the 'fastcgi_read_timeout' configuration directive.
> >         >
> >         >         I can't tell what maximum time I need, and I would
> >         >         rather not use way-off values such as 2 weeks or 1
> >         >         year there. ;o)
> >         >
> >         >         What I understood from the documentation is that the
> >         >         timeout is reinitialized after a successful read: am
> >         >         I right?
> >         >
> >         >
> >         >         The challenge is now to cut any buffering occurring
> >         >         on the PHP side and let Nginx manage it (since its
> >         >         buffering occurs after content is read from the
> >         >         backend). Here is what I did:
> >         >
> >         >         * PHP's zlib.output_compression is deactivated by
> >         >         default
> >         >
> >         >         * I deactivated PHP's output_buffering (the default
> >         >         is 4096 bytes)
> >         >
> >         >         * I am using PHP's flush() function at the end of
> >         >         each iteration of the copying loop, after a message
> >         >         is written to the output
> >         >
> >         >
> >         >
> >         >         Current state:
> >         >
> >         >         * The script still seems to be cut after the
> >         >         expiration of the 'fastcgi_read_timeout' limit
> >         >         (confirmed by the error log entry 'upstream timed out
> >         >         (110: Connection timed out) while reading upstream')
> >         >
> >         >         * The PHP loop is entered several times, since
> >         >         multiple files have been copied
> >         >
> >         >         * The output sent to the browser is cut before any
> >         >         output from the loop appears
> >         >
> >         >
> >         >         It seems that there is still some unwanted buffering
> >         >         on the PHP side.
> >         >
> >         >         I also note that PHP's flush() function doesn't seem
> >         >         to work, since the output in the browser doesn't
> >         >         contain any message written after each file copy.
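> >         >
> >         >         A classic way to test whether a fixed-size buffer is
> >         >         swallowing the messages - just a sketch, assuming a
> >         >         4 KB buffer somewhere in the chain - is to pad each
> >         >         message past that size before flushing:
> >         >
> >         >         echo str_repeat(' ', 4096) . "copied $file\n";
> >         >         flush();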
> >         >
> >         >
> >         >         Am I misunderstanding something about Nginx here
> >         >         (especially about the 'fastcgi_read_timeout'
> >         >         directive)?
> >         >
> >         >         Do you have any intel/piece of advice on the matter?
> >         >
> >         >         Thanks,
> >         >
> >         >         ---
> >         >         B. R.
> >         >
> >         >
> >
> >
> >
> >
>
> --
> Steve Holdoway BSc(Hons) MNZCS <steve at greengecko.co.nz>
> http://www.greengecko.co.nz
> MSN: steve at greengecko.co.nz
> Skype: sholdowa
>
>

