nginx and upstream Content-Length

Maxim Dounin mdounin at
Fri Sep 30 09:59:07 UTC 2011


On Fri, Sep 30, 2011 at 11:11:53AM +0300, Yaniv Aknin wrote:

> In a recent thread on the uwsgi mailing list[1], I began suspecting that
> nginx will not honor an upstream's Content-Length header. i.e., if an
> upstream mentions a Content-Length of 1,000 bytes, but the connection is
> broken after 500 bytes, nginx will still happily serve this entity with a
> 200 OK status.

Status code 200 is irrelevant - as it's generally not possible to 
know in advance (i.e. before the status line and headers have 
already been sent) whether the connection will be broken.
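To make the framing problem concrete, here is a minimal sketch (not nginx code; the byte counts are the hypothetical ones from the report) of a backend response that declares more bytes than it delivers, plus the client-side check that would detect the truncation from the declared Content-Length:

```python
# Sketch: a backend declares Content-Length: 1000 but the connection
# drops after 500 body bytes.  A client (or proxy) can only detect the
# truncation by comparing bytes actually received against the header.

def build_truncated_response(declared=1000, actual=500):
    # Backend claims `declared` bytes; connection is broken after `actual`.
    headers = (
        "HTTP/1.0 200 OK\r\n"
        f"Content-Length: {declared}\r\n"
        "\r\n"
    ).encode("ascii")
    return headers + b"x" * actual  # ...and the connection closes here

def body_is_truncated(raw):
    head, _, body = raw.partition(b"\r\n\r\n")
    for line in head.split(b"\r\n"):
        if line.lower().startswith(b"content-length:"):
            declared = int(line.split(b":", 1)[1])
            return len(body) < declared
    return False  # no Content-Length: framing alone cannot tell

print(body_is_truncated(build_truncated_response()))        # True
print(body_is_truncated(build_truncated_response(500, 500)))  # False
```

Note that the 200 status and headers have already gone out on the wire before the shortfall is observable, which is exactly why the status code cannot reflect the failure.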

> This may be a known bug in nginx, I wanted to be certain I indeed understand
> it correctly and raise the attention to it on the nginx mailing list -
> because I think this is a very serious bug with potentially disastrous
> consequences, as I describe below.
> I was able to confirm this both for uwsgi_pass and proxy_pass; if the
> upstream sets a Content-Length and then breaks the connection before that
> length was achieved, nginx will pass this onwards to the client.
> Furthermore, since the upstream protocol is HTTP 1.0 but the nginx-client
> protocol is HTTP 1.1 (with keepalive), the request will simply not terminate,
> because the client can't tell that the server has nothing more to send and
> nginx will not break the connection, despite the fact its connection with
> the upstream was broken and there's no chance this request will ever be
> fulfilled.
> Things get far worse with gzip compression on - nginx will remove the
> Content-Length header sent by the client and replace it with chunked
> encoding - /incorrect chunked encoding/, that will make the client believe
> it has the full entity, even though it has only a part of this.
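The gzip case is worse because chunked framing carries its own end-of-entity marker. A sketch (simplified encoder/decoder, not nginx's implementation) of why re-framing truncated data as well-formed chunks hides the loss from the client:

```python
# Sketch: if a proxy takes the 500 bytes it did receive, wraps them in
# valid chunked encoding, and emits the terminating 0-length chunk, a
# chunked decoder accepts the entity as complete with no error at all.

def chunk_encode(data, size=128):
    out = b""
    for i in range(0, len(data), size):
        piece = data[i:i + size]
        out += format(len(piece), "x").encode() + b"\r\n" + piece + b"\r\n"
    return out + b"0\r\n\r\n"  # terminator: "entity is complete"

def chunk_decode(stream):
    body = b""
    while True:
        line, _, stream = stream.partition(b"\r\n")
        n = int(line, 16)
        if n == 0:
            return body  # clean end of entity; nothing to complain about
        body, stream = body + stream[:n], stream[n + 2:]

truncated = b"x" * 500          # only half of the original 1000 bytes
wire = chunk_encode(truncated)
assert chunk_decode(wire) == truncated  # decoder is perfectly happy
```

With a plain Content-Length the client at least has a number to compare against; once the original header is stripped and the partial body is re-framed as correctly terminated chunks, no such signal survives.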

Yes, this is a known problem.  The upstream module expects the 
backend to behave properly, and if it misbehaves (or, more 
importantly, the connection is broken for some reason) bad 
things may happen.

The upstream module's code needs careful auditing to fix this.  
It's somewhere in my TODO list (though not very high).

Maxim Dounin
