fastcgi timeout at big requests

Michael Shadle mike503 at gmail.com
Fri Apr 10 19:54:04 MSD 2009


On Fri, Apr 10, 2009 at 5:04 AM, Robert Gabriel <lists at ruby-forum.com> wrote:
> I have nginx 0.6.36 with php-fastcgi. I'm using SquirrelMail and have a
> mail that is 25M, no attachments, just a text message that big. I'm trying
> to read it, but fastcgi ends up with:
> 2009/04/10 13:55:35 [error] 22626#0: *537 recv() failed (104: Connection
> reset by peer) while reading response header from upstream, client:
>
> Earlier, the problem was that PHP didn't have enough memory or
> max_execution_time was too low. I fixed that and also set
> keepalive_timeout to 32, but it still dies anyway. Is it possible that
> fastcgi is limited in how big a request can be, or something like that?
>
> How could I set up nginx and/or PHP to be able to read that mail?

I would modify SquirrelMail to use X-Accel-Redirect so it isn't using
readfile() or whatever it is that keeps PHP busy churning through 25M
attachments :)
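
Roughly, the pattern looks like this, assuming the message or attachment
ends up in a file on disk that nginx can reach (the location name and the
path below are placeholders, not SquirrelMail's real storage directory):

    # nginx: an internal-only location that nginx serves directly;
    # clients cannot request it themselves
    location /protected-mail/ {
        internal;
        alias /var/local/squirrelmail/attach/;  # wherever the file lives
    }

Then, instead of calling readfile(), the PHP script just sends headers
along the lines of:

    header('Content-Type: text/plain');
    header('X-Accel-Redirect: /protected-mail/' . $filename);
    exit;

and nginx takes over delivery of the file itself, so PHP finishes
immediately instead of shoveling 25M through the fastcgi connection.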




