fastcgi timeout at big requests

Maxim Dounin mdounin at
Fri Apr 10 16:39:06 MSD 2009


On Fri, Apr 10, 2009 at 02:04:09PM +0200, Robert Gabriel wrote:

> I have nginx 0.6.36 with php-fastcgi. I'm using SquirrelMail and have a
> mail that is 25M, no attachment, just a text mail that big. I'm trying to
> read it, but fastcgi ends up with:
> 2009/04/10 13:55:35 [error] 22626#0: *537 recv() failed (104: Connection
> reset by peer) while reading response header from upstream, client:

From the error message it seems that php died even before it was 
able to send a header to nginx.

> Before, the problem was that php didn't have enough memory or
> max_execution_time was too low. I modified that and set
> keepalive_timeout to 32, but it still dies even like this.

keepalive_timeout is a completely unrelated setting; it's only used 
by nginx for client connections.
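To make the distinction concrete, here is a minimal sketch of where these directives live (the backend address and timeout values are assumptions for illustration, not the poster's actual config): keepalive_timeout governs idle client keep-alive connections, while the fastcgi_* timeouts are what apply to the upstream FastCGI connection.

```nginx
http {
    keepalive_timeout 32;    # client-side keep-alive only; unrelated to FastCGI

    server {
        listen 80;

        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;   # assumed FastCGI backend address
            fastcgi_connect_timeout 60s;   # connecting to the backend
            fastcgi_send_timeout    60s;   # sending the request to it
            fastcgi_read_timeout    300s;  # waiting for its response
            include fastcgi_params;
        }
    }
}
```

Note that none of these timeouts would help here anyway: "Connection reset by peer" means the backend closed the connection, not that nginx gave up waiting.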

> Is it possible
> fastcgi is limited in how big a request can be, or something?

No, at least not in the protocol itself.

> How could I setup up nginx and/or php to be able to read that mail?

Try looking into php.  It seems that it either dies due to errors 
or you haven't raised its limits enough.
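As a starting point, these are the php.ini limits that commonly kill large requests. The values below are hedged examples, not a prescription; rendering a 25M mail in PHP can easily need several times the message size in memory.

```ini
; Example values only -- tune to your workload.
memory_limit       = 128M   ; peak memory for one request
max_execution_time = 120    ; seconds a script may run
post_max_size      = 64M    ; only matters for uploads/POST bodies
```

Also check the php error log (or php-cgi's stderr): if the process is segfaulting or hitting a hard limit, the exact reason will usually appear there rather than in the nginx log.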

Maxim Dounin
