[PATCH] Proxy: set u->keepalive also if the whole body has already been read.
Jan Prachař
jan.prachar at gmail.com
Tue Sep 22 09:40:09 UTC 2020
On Po, 2020-09-21 at 22:07 +0300, Maxim Dounin wrote:
> @@ -1915,7 +1916,7 @@
> > || u->headers_in.status_n == NGX_HTTP_NOT_MODIFIED
> > || ctx->head
> > || (!u->headers_in.chunked
> > - && u->headers_in.content_length_n == 0))
> > + && u->headers_in.content_length_n <= u->buffer.last-u->buffer.pos))
> > {
> > u->keepalive = !u->headers_in.connection_close;
> > }
>
> I can't say I like the idea of connection cacheability to depend
> on timing and buffer size factors.
>
> Further, the "<=" comparison looks wrong: we shouldn't cache
> connections if there are more data than expected.
Hello, I changed it in the following patch, in case you would like to use it. The condition is now stricter, though: previously, responses with Content-Length: 0 and a non-empty upstream buffer were still kept in the connection cache. (An illustrative sketch of the buffer accounting follows the patch below.)
# HG changeset patch
# User Jan Prachař <jan.prachar at gmail.com>
# Date 1600710589 -7200
# Mon Sep 21 19:49:49 2020 +0200
# Node ID e35b529b03781e64912e0d8a72bd0f957dc08cd2
# Parent 052ecc68d35038b1b4adde12efe6249a92055f09
set u->keepalive also if the whole body has already been read
diff -r 052ecc68d350 -r e35b529b0378 src/http/modules/ngx_http_proxy_module.c
--- a/src/http/modules/ngx_http_proxy_module.c Wed Sep 16 18:26:25 2020 +0300
+++ b/src/http/modules/ngx_http_proxy_module.c Mon Sep 21 19:49:49 2020 +0200
@@ -1905,7 +1905,8 @@
}
/*
- * set u->keepalive if response has no body; this allows to keep
+ * set u->keepalive if response has no body or if the whole body
+ * has been already read to u->buffer; this allows to keep
* connections alive in case of r->header_only or X-Accel-Redirect
*/
@@ -1915,7 +1916,7 @@
|| u->headers_in.status_n == NGX_HTTP_NOT_MODIFIED
|| ctx->head
|| (!u->headers_in.chunked
- && u->headers_in.content_length_n == 0))
+ && u->headers_in.content_length_n == u->buffer.last - u->buffer.pos))
{
u->keepalive = !u->headers_in.connection_close;
}
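To make the difference concrete, here is a minimal, self-contained sketch; it is not nginx code. The fake_upstream_t type and its fields are hypothetical stand-ins that merely mirror u->headers_in.content_length_n and u->buffer.pos / u->buffer.last, and it only illustrates how the old (== 0) and new (== buffered bytes) checks behave in the two corner cases discussed above.

#include <stdio.h>
#include <stdint.h>

/* Hypothetical stand-ins for the nginx fields involved:
 * content_length_n mirrors u->headers_in.content_length_n,
 * pos/last mirror u->buffer.pos and u->buffer.last. */
typedef struct {
    int64_t        content_length_n;  /* body length announced by upstream */
    unsigned char *pos;               /* first unread byte in the buffer   */
    unsigned char *last;              /* one past the last byte received   */
} fake_upstream_t;

/* Old check: keepalive only when the response declares no body at all. */
static int old_check(const fake_upstream_t *u)
{
    return u->content_length_n == 0;
}

/* New check: keepalive when exactly the whole declared body is already
 * buffered; neither less (body still pending) nor more (extra data after
 * the body) qualifies. */
static int new_check(const fake_upstream_t *u)
{
    return u->content_length_n == u->last - u->pos;
}

int main(void)
{
    unsigned char buf[16] = "hello";

    /* Whole 5-byte body already read into the buffer. */
    fake_upstream_t u = { 5, buf, buf + 5 };
    printf("old: %d, new: %d\n", old_check(&u), new_check(&u));  /* old: 0, new: 1 */

    /* Content-Length: 0 but stray bytes sitting in the buffer. */
    u.content_length_n = 0;
    printf("old: %d, new: %d\n", old_check(&u), new_check(&u));  /* old: 1, new: 0 */

    return 0;
}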