upstream prematurely closed connection while reading response header from upstream
Jiri Horky
jiri.horky at gmail.com
Thu Oct 16 19:35:14 UTC 2014
Hi,
thanks for the quick response. I tried it with nginx/1.7.6 but
unfortunately, the errors still show up. However, I have not yet
confirmed that they come with the same packet trace, though I strongly
suspect they do. I will confirm that, hopefully tomorrow. Is there
anything else I should try?
Regards
Jiri Horky
On 10/16/2014 03:36 PM, Maxim Dounin wrote:
> Hello!
>
> On Thu, Oct 16, 2014 at 10:17:15AM +0200, Jiri Horky wrote:
>
>> Hi list,
>>
>> we are seeing sporadic nginx errors "upstream prematurely closed
>> connection while reading response header from upstream" with nginx/1.6.2,
>> which seem to be caused by some kind of race condition.
>> For debugging purposes we set up only one upstream server, on a public IP
>> address of the same machine as nginx; there is no keepalive configured
>> between nginx and the upstream server. The upstream HTTP server is
>> written in a way that it forcibly closes the connection when the
>> response status code is 303. This may be part of the problem as well.
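For illustration, a minimal Python sketch of an upstream with the behaviour
described above; the bind address, port 8888 and the fixed 303 response are
assumptions for the sketch, not the actual server:

    import socket

    HOST, PORT = "0.0.0.0", 8888  # assumed bind address/port

    def serve():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(128)
        while True:
            conn, _ = srv.accept()
            conn.recv(65536)  # read the request; headers are ignored here
            conn.sendall(
                b"HTTP/1.1 303 See Other\r\n"
                b"Location: /next\r\n"
                b"Content-Length: 0\r\n"
                b"Connection: close\r\n\r\n"
            )
            conn.close()  # close right away; the FIN may race the peer's parsing

    if __name__ == "__main__":
        serve()

Closing immediately after sendall() is what lets the upstream's FIN go out
before the proxy side has necessarily finished reading and parsing the headers.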
> [...]
>
>> We tracked down that this only happens when the FIN packet from the
>> upstream server reaches nginx before nginx has finished parsing the
>> response (headers), and thus before nginx closes the connection
>> itself. For example, this packet order will trigger the problem:
>> No.    Time       Source   SrcPrt  Destination  Protocol  Length  Info
>> 25571 10.297569 1.1.1.1 35481 1.1.1.1 TCP 76 35481 > 8888 [SYN] Seq=0 Win=3072 Len=0 MSS=16396 SACK_PERM=1 TSval=1902164528 TSecr=0 WS=8192
>> 25572 10.297580 1.1.1.1 8888 1.1.1.1 TCP 76 8888 > 35481 [SYN, ACK] Seq=0 Ack=1 Win=3072 Len=0 MSS=16396 SACK_PERM=1 TSval=1902164528 TSecr=1902164528 WS=8192
>> 25573 10.297589 1.1.1.1 35481 1.1.1.1 TCP 68 35481 > 8888 [ACK] Seq=1 Ack=1 Win=8192 Len=0 TSval=1902164528 TSecr=1902164528
>> 25574 10.297609 1.1.1.1 35481 1.1.1.1 HTTP 1533 GET / HTTP/1.0
>> 25575 10.297617 1.1.1.1 8888 1.1.1.1 TCP 68 8888 > 35481 [ACK] Seq=1 Ack=1466 Win=8192 Len=0 TSval=1902164528 TSecr=1902164528
>> 25596 10.323092 1.1.1.1 8888 1.1.1.1 HTTP 480 HTTP/1.1 303 See Other
>> 25597 10.323106 1.1.1.1 35481 1.1.1.1 TCP 68 35481 > 8888 [ACK] Seq=1466 Ack=413 Win=8192 Len=0 TSval=1902164554 TSecr=1902164554
>> 25598 10.323161 1.1.1.1 8888 1.1.1.1 TCP 68 8888 > 35481 [FIN, ACK] Seq=413 Ack=1466 Win=8192 Len=0 TSval=1902164554 TSecr=1902164554
>> 25599 10.323167 1.1.1.1 35481 1.1.1.1 TCP 68 35481 > 8888 [FIN, ACK] Seq=1466 Ack=413 Win=8192 Len=0 TSval=1902164554 TSecr=1902164554
>> 25600 10.323180 1.1.1.1 8888 1.1.1.1 TCP 68 8888 > 35481 [ACK] Seq=414 Ack=1467 Win=8192 Len=0 TSval=1902164554 TSecr=1902164554
>> 25601 10.323189 1.1.1.1 35481 1.1.1.1 TCP 68 35481 > 8888 [ACK] Seq=1467 Ack=414 Win=8192 Len=0 TSval=1902164554 TSecr=1902164554
>>
>> Note that the upstream HTTP server (port 8888) sends its FIN packet
>> sooner than nginx (port 35481 in this case) does.
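For reference, a hypothetical Python probe that mimics the proxy-style
HTTP/1.0 request and reads until EOF; against an upstream like the sketch
above it shows the 303 response immediately followed by the upstream's FIN,
matching packets 25596 and 25598 in the trace (host, port and Host header
are assumptions):

    import socket

    def probe(host="127.0.0.1", port=8888):
        s = socket.create_connection((host, port))
        s.sendall(b"GET / HTTP/1.0\r\nHost: example.test\r\n\r\n")
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:      # empty read == FIN from the upstream
                break
            chunks.append(data)
        s.close()
        print(b"".join(chunks).decode(errors="replace"))

    if __name__ == "__main__":
        probe()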
> Looking into the packet trace I suspect this commit may be
> relevant to your case:
>
> http://hg.nginx.org/nginx/rev/9d3a9c45fc43
>
> Please test with nginx 1.7.3+ to see if it helps.
>