Client body buffering with FastCGI
Maxim Dounin
mdounin at mdounin.ru
Thu Feb 17 20:05:06 MSK 2011
Hello!
On Thu, Feb 17, 2011 at 09:48:24AM -0500, Maxim Khitrov wrote:
> I'm trying to configure AjaXplorer, a PHP/Ajax file manager, to work
> behind nginx 0.8.54 on FreeBSD 7.3. The problem I'm running into is
> the inability to upload files more than ~64 MB in size. Ideally, I'd
> like to bump that limit up to 1 GB. I realize that HTTP is not ideal
> for this, but other transfer methods are not an option.
>
> PHP and nginx are both configured to accept 1 GB POST requests. As far
> as I can tell, nginx buffers the contents of the entire upload to disk
> before forwarding the request to the FastCGI process. This data is
> then read from disk and written back to disk by PHP. The whole
> write/read/write cycle is causing a timeout, first in nginx, and then
> in the PHP process (though there may also be some other problem that I
> haven't figured out yet).
Setting bigger timeouts should help. All timeouts in nginx are
configurable (proxy_connect_timeout, proxy_send_timeout,
proxy_read_timeout - and similar ones for other backend modules,
i.e. fastcgi_* for FastCGI).
Though it sounds strange that nginx times out while writing the
request to php, as it should reset the timer on any write operation.
The timeout may instead happen after the request has been written
(a read timeout) - i.e. if php takes too long to process the
request - but then you have to enlarge it anyway.
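For a FastCGI backend, the relevant directives would look roughly
like this (a minimal sketch - the backend address and the timeout
values are illustrative, not recommendations):

```nginx
# Assumptions: PHP runs as a FastCGI server on 127.0.0.1:9000;
# timeout values here are examples only - tune them to your workload.
http {
    client_max_body_size 1g;            # allow request bodies up to 1 GB

    server {
        location ~ \.php$ {
            fastcgi_pass            127.0.0.1:9000;
            fastcgi_connect_timeout 60s;   # establishing connection to backend
            fastcgi_send_timeout    300s;  # between two successive writes to backend
            fastcgi_read_timeout    300s;  # between two successive reads from backend
            include                 fastcgi_params;
        }
    }
}
```

Note that fastcgi_send_timeout and fastcgi_read_timeout apply
between successive I/O operations, not to the whole request, which
is why a timeout during the body write itself would be unexpected.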
Which message do you see in nginx error log?
> For now, I'm curious whether there is a way to bypass the disk buffer
> and have nginx start sending the request as soon as it has all the
> headers? PHP can then buffer the entire request in memory and begin
> processing it as soon as the last byte is received.
No.
Maxim Dounin