Client body buffering with FastCGI

Maxim Khitrov max at mxcrypt.com
Thu Feb 17 22:46:01 MSK 2011


On Thu, Feb 17, 2011 at 2:43 PM, Igor Sysoev <igor at sysoev.ru> wrote:
> On Feb 17, 2011, at 22:38 , Maxim Khitrov wrote:
>
>> On Thu, Feb 17, 2011 at 1:47 PM, Maxim Khitrov <max at mxcrypt.com> wrote:
>>> On Thu, Feb 17, 2011 at 12:05 PM, Maxim Dounin <mdounin at mdounin.ru> wrote:
>>>> Hello!
>>>>
>>>> On Thu, Feb 17, 2011 at 09:48:24AM -0500, Maxim Khitrov wrote:
>>>>
>>>>> I'm trying to configure AjaXplorer, a PHP/Ajax file manager, to work
>>>>> behind nginx 0.8.54 on FreeBSD 7.3. The problem I'm running into is
>>>>> the inability to upload files more than ~64 MB in size. Ideally, I'd
>>>>> like to bump that limit up to 1 GB. I realize that HTTP is not ideal
>>>>> for this, but other transfer methods are not an option.
>>>>>
>>>>> PHP and nginx are both configured to accept 1 GB POST requests. As far
>>>>> as I can tell, nginx buffers the contents of the entire upload to disk
>>>>> before forwarding the request to the FastCGI process. This data is
>>>>> then read from disk and written back to disk by PHP. The whole
>>>>> write/read/write cycle is causing a timeout, first in nginx, and then
>>>>> in the PHP process (though there may also be some other problem that I
>>>>> haven't figured out yet).
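
For reference, the relevant settings look roughly like this (a sketch
only; the values mirror the limits described above, and the temp path
matches the one that shows up in the error log further down):

    # nginx.conf
    client_max_body_size   1g;              # allow 1 GB request bodies
    client_body_temp_path  /srv/upload/tmp; # where bodies are buffered

    ; php.ini
    post_max_size       = 1G
    upload_max_filesize = 1G
    memory_limit        = 1G
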
>>>>
>>>> Setting bigger timeouts should help.  All timeouts in nginx are
>>>> configurable (proxy_connect_timeout, proxy_send_timeout,
>>>> proxy_read_timeout - and similar ones for other backend modules).
>>>>
>>>> Though it sounds strange that nginx times out while writing the
>>>> request to php, since it should reset the timer on each write
>>>> operation.  A timeout may instead occur after the request has been
>>>> written (a read timeout) - i.e. if php takes too long to process
>>>> the request - but then you have to enlarge that timeout anyway.
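
For the FastCGI backend the analogous directives are the fastcgi_*
ones; a sketch with the 60 second values used in the tests below:

    fastcgi_connect_timeout  60s;
    fastcgi_send_timeout     60s;  # timer for writing the request to PHP
    fastcgi_read_timeout     60s;  # timer for reading the response back
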
>>>
>>> I think the timeouts are a side effect; the problem seems to be
>>> between nginx and the FastCGI unix socket. I just ran two quick tests.
>>> All possible timeouts for PHP and nginx have been set to 60 seconds,
>>> memory and POST size limits are at 1 GB.
>>>
>>> First, I uploaded a 90 MB file. All went well - the upload finished in
>>> ~3 seconds, PHP took ~8 seconds to copy it to the final destination.
>>> So 11 seconds total from the time that I hit 'upload' until I got a
>>> success notification.
>>>
>>> Next, I tried to upload a 100 MB file. The upload took ~4 seconds, but
>>> then nothing... The server sat for 1 minute with CPU 100% idle. After
>>> that, nginx timed out. I had these 2 messages in the error log:
>>>
>>> 2011/02/17 13:14:21 [warn] 68428#0: *8 a client request body is
>>> buffered to a temporary file /srv/upload/tmp/0000000002
>>> 2011/02/17 13:15:25 [error] 68428#0: *8 upstream timed out (60:
>>> Operation timed out) while sending request to upstream
>>>
>>> As soon as the second message appeared, the PHP process began
>>> executing, copying 20 MB of the uploaded data to the final
>>> destination. The remaining 80 MB never made it. In my other tests, the
>>> amount of data saved varied between 20 and 60 MB.
>>>
>>> In other words, it looks like nginx receives the entire request and
>>> begins writing it out to the FastCGI socket. After copying a portion
>>> of the data, the transfer stalls. Nginx then times out and closes
>>> the socket, which causes PHP to begin executing this partially
>>> received request. I verified that the AjaXplorer code is not
>>> executed until nginx times out, so that software is not the problem;
>>> the fault lies with PHP, nginx, or the operating system.
>>>
>>> Any ideas on what could be preventing the entire request from being
>>> written out to the FastCGI socket? I have error_log set to 'debug',
>>> but the two messages above are all I'm getting.
>>>
>>> - Max
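
One caveat about the debug log mentioned above: error_log only emits
debug-level messages if nginx was built with --with-debug, so it may
be worth confirming the build first (an aside; I have not re-checked
my own build yet):

    nginx -V 2>&1 | grep -o with-debug
    # if present, debug output is enabled by:
    #   error_log  /var/log/nginx/error.log  debug;
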
>>
>> I think I managed to solve the problem, though not to find out what
>> causes it. I decided to see what would happen if the FastCGI server
>> listened on 127.0.0.1 rather than /tmp/php.sock. After making the
>> switch, I was able to upload 256 MB of data in 56 seconds without
>> any problems (repeated this 5 times just to be sure).
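
The only change was the fastcgi_pass target (a sketch; 9000 here
stands in for whatever port the FastCGI server actually listens on):

    # before: FastCGI over a unix domain socket
    fastcgi_pass  unix:/tmp/php.sock;

    # after: FastCGI over loopback TCP
    fastcgi_pass  127.0.0.1:9000;
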
>>
>> Could there be a problem with how nginx opens unix sockets that would
>> cause some of the data for large requests to be lost?
>
> I believe it's a unix socket issue, not an nginx one.

Understood, thanks.
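
If so, the FreeBSD unix stream socket buffers may be worth a look;
they default to fairly small values and are tunable (an assumption on
my part - I have not yet tested whether raising them changes anything):

    # FreeBSD sysctls bounding unix stream socket buffers
    sysctl net.local.stream.sendspace
    sysctl net.local.stream.recvspace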

- Max


