fastcgi timeout at big requests

Robert Gabriel lists at ruby-forum.com
Fri Apr 10 17:24:18 MSD 2009


Maxim Dounin wrote:
> Hello!
> 
> On Fri, Apr 10, 2009 at 02:04:09PM +0200, Robert Gabriel wrote:
> 
>> I have nginx 0.6.36 with php-fastcgi. I'm using SquirrelMail and have a
>> mail that is 25M, no attachment, just a text mail that big. I'm trying to
>> read it, but fastcgi ends up with
>> 2009/04/10 13:55:35 [error] 22626#0: *537 recv() failed (104: Connection
>> reset by peer) while reading response header from upstream, client:
> 
> From the error message it seems that php died even before it was
> able to send headers to nginx.
> 
>> Before, the problem was that php didn't have enough memory or
>> max_execution_time was too low. I modified that and set
>> keepalive_timeout to 32, but it still dies even so.
> 
> keepalive_timeout is a completely unrelated thing; it's only used by
> nginx for client connections.
> 
>> Is it possible
>> fastcgi is limited in how big the request can be, or something?
> 
> No, at least not the protocol itself.
> 
>> How could I setup up nginx and/or php to be able to read that mail?
> 
> Try looking into php.  It seems that it just dies due to errors, or
> you haven't tuned its limits enough.
> 
> Maxim Dounin

Like I said before, in the logs at the beginning I got this:

PHP Fatal error:  Allowed memory size of 67108864 bytes exhausted

I modified this to 128M and it didn't complain anymore, but it did 
complain about max_execution_time, and it looked like this:

PHP Fatal error:  Maximum execution time of 60 seconds exceeded

I modified this to 240 seconds, and after that I didn't get any more 
errors from php, but I still got the errors in nginx's error_log and it 
just didn't do anything... Also, the php-cgi process just hung in the 
background at 100% CPU usage. I had to stop fastcgi and then kill the 
process that was using 100% CPU.
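For reference, the two php.ini changes described above amount to something like this (directive names are standard; the values are just what I ended up with):

```ini
; php.ini -- limits raised as described above
memory_limit = 128M        ; was 64M (67108864 bytes), which was exhausted
max_execution_time = 240   ; was 60 seconds, which was exceeded
```

Even with both raised, the symptom just moved from a PHP fatal error to php-cgi spinning at 100% CPU.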

So I really don't know what else to tune in php.ini.
-- 
Posted via http://www.ruby-forum.com/.
More information about the nginx mailing list