Hi,
Some background: nginx 1.9.2, used as a cache, can get into a state
where it stops evicting objects and eventually stops caching altogether,
without being able to recover. This happens when the disk is full.
Consider the following nginx.conf fragment:
proxy_cache_path /cache/nginx levels=1:2
keys_zone=c3:4096m max_size=8500g
inactive=30d use_temp_path=on;
proxy_temp_path /cache/nginx-tmp 1 2;
The disk is filled because the workers have been fetching the data from
the backend faster than the cache manager is able to evict:
$ df -h | grep cache
/dev/sdb1 8.7T 8.7T 16M 100% /cache
tmpfs 2.0G 0 2.0G 0% /cache/nginx-tmp
Since /cache and /cache/nginx-tmp are separate mount points, nginx has to
perform a copy instead of a rename. The copy fails with ENOSPC, but
ngx_ext_rename_file() does not clean up the failed target. At this
point, based on ngx_http_file_cache_sh_t::size, the cache manager believes
that the 8.5 TB threshold has not been crossed, and nginx fails to recover.
Please find the patch attached.
--
Mindaugas
Hi developers,
I am using nginx with an OpenSSL engine (SafeNet Luna), which is a
wrapper over PKCS#11.
The handles returned by ENGINE_load_private_key() cannot be used in
child processes (i.e., the workers) due to PKCS#11 restrictions, which
causes SSL connection errors.
The private key seems to be loaded in ngx_ssl_certificate(); is there
a way to tell nginx to call this function once per child process?
Thanks
Hello,
We have a use case where we need to discard the request body before proxying
the request to the upstream. To do this we call ngx_http_discard_request_body(),
but that function uses r->headers_in.content_length_n to store the amount of
data nginx still expects to receive, so it won't be 0 until nginx has read all
bytes from the client. As a result, if proxy_request_buffering is set to off,
nginx ends up sending a non-zero Content-Length header to the upstream without
a body.
The following patch fixes this behavior.
# HG changeset patch
# User Daniil Bondarev <bondarev(a)amazon.com>
# Date 1438119116 25200
# Node ID ddefee93b698b9261a147a08f42a07810efa2dab
# Parent 341e4303d25be159d4773b819d0ec055ba711afb
Set Content-Length to 0 when proxying requests with discarded body
diff -r 341e4303d25b -r ddefee93b698 src/http/modules/ngx_http_proxy_module.c
--- a/src/http/modules/ngx_http_proxy_module.c Thu Jul 16 14:20:48 2015 +0300
+++ b/src/http/modules/ngx_http_proxy_module.c Tue Jul 28 14:31:56 2015 -0700
@@ -1221,6 +1221,9 @@
ctx->internal_body_length = body_len;
len += body_len;
+ } else if (r->discard_body) {
+ ctx->internal_body_length = 0;
+
} else if (r->headers_in.chunked && r->reading_body) {
ctx->internal_body_length = -1;
ctx->internal_chunked = 1;
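With the patch applied, the body-length selection reduces to roughly the
following (a simplified standalone restatement of the branch logic, not the
actual module code; the function and parameter names are mine):

```c
/* Pick the body length the proxy module advertises upstream:
 * the buffered length if the body was read, 0 if it was
 * discarded, -1 (meaning "chunked") if it is still being read. */
static long long
proxy_body_length(int have_body, long long body_len,
                  int discard_body, int chunked_reading)
{
    if (have_body) {
        return body_len;
    }

    if (discard_body) {
        return 0;           /* the patch: discarded body => length 0 */
    }

    if (chunked_reading) {
        return -1;          /* length unknown, send chunked */
    }

    return 0;
}
```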
Hi list:
This is v1 of a patchset implementing SSL Dynamic Record Sizing, inspired
by the Google Front End (https://www.igvita.com/2013/10/24/optimizing-tls-record-size-and-buffering-…).
There are 3 conditions which, when true at the same time, make SSL_write
send small records over the link: hard-coded to 1400 bytes at this time so
each record fits into the MTU. We send at most 3 of these small records, to
reduce framing overhead when serving large objects; that is enough for the
browser to discover the other dependencies at the top of the HTML file. If
the buffer chain is smaller than 4096 bytes, it does not justify the
overhead of sending small records. After 60 seconds of idle time (also
hard-coded at this moment), the cycle starts over.
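The gating logic can be sketched as a small pure function (constants
hard-coded as in the patch; the names are mine, not from the nginx source):

```c
#include <stddef.h>

#define IDLE_MSEC   (1000 * 60)   /* 60s idle resets the cycle    */
#define SMALL_REC   1400          /* fits a typical MTU           */
#define MIN_BUFFER  4096          /* below this, not worth it     */
#define MAX_SMALL   3             /* at most 3 small records      */

/* Return the number of bytes to hand to SSL_write(): a small
 * record early in a fresh burst, the full buffer otherwise. */
static size_t
ssl_record_size(size_t buffered, unsigned long idle_msec,
                unsigned records_sent)
{
    if (idle_msec > IDLE_MSEC
        && records_sent < MAX_SMALL
        && buffered > MIN_BUFFER)
    {
        return SMALL_REC;
    }

    return buffered;
}
```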
Any comments are welcome.
Regards
YM
hg export tip
# HG changeset patch
# User YM Chen <gzchenym(a)126.com>
# Date 1430828974 -28800
# Node ID 31bfe6403c340bdc4c04e8e87721736c07bceef8
# Parent 162b2d27d4e1ce45bb9217d6958348c64f726a28
[RFC] event/openssl: Add dynamic record size support for serving SSL traffic
SSL Dynamic Record Sizing is a long sought after feature for websites that serve
huge amounts of encrypted traffic. The rationale behind this is that an SSL record
should not overflow the congestion window at the beginning of the slow-start period;
by keeping early records small, the browser can decode the first SSL record within
1 RTT and establish other connections to fetch the resources that are referenced
at the top of the HTML file.
diff -r 162b2d27d4e1 -r 31bfe6403c34 src/event/ngx_event_openssl.c
--- a/src/event/ngx_event_openssl.c Wed Apr 29 14:59:02 2015 +0300
+++ b/src/event/ngx_event_openssl.c Tue May 05 20:29:34 2015 +0800
@@ -1508,6 +1508,11 @@
ngx_uint_t flush;
ssize_t send, size;
ngx_buf_t *buf;
+ ngx_msec_t last_sent_timer_diff;
+ ngx_uint_t loop_count;
+
+ last_sent_timer_diff = ngx_current_msec - c->ssl->last_write_msec;
+ loop_count = 0;
if (!c->ssl->buffer) {
@@ -1517,7 +1522,13 @@
continue;
}
- n = ngx_ssl_write(c, in->buf->pos, in->buf->last - in->buf->pos);
+ size = in->buf->last - in->buf->pos;
+
+ if(last_sent_timer_diff > 1000*60 && loop_count < 3 && size > 4096) {
+ size = 1400;
+ }
+
+ n = ngx_ssl_write(c, in->buf->pos, size);
if (n == NGX_ERROR) {
return NGX_CHAIN_ERROR;
@@ -1532,8 +1543,11 @@
if (in->buf->pos == in->buf->last) {
in = in->next;
}
+
+ loop_count ++;
}
+ c->ssl->last_write_msec = ngx_current_msec;
return in;
}
@@ -1614,9 +1628,14 @@
if (size == 0) {
buf->flush = 0;
c->buffered &= ~NGX_SSL_BUFFERED;
+ c->ssl->last_write_msec = ngx_current_msec;
return in;
}
+ if(last_sent_timer_diff > 1000*60 && loop_count < 3 && size > 4096) {
+ size = 1400;
+ }
+
n = ngx_ssl_write(c, buf->pos, size);
if (n == NGX_ERROR) {
@@ -1633,14 +1652,18 @@
break;
}
- flush = 0;
-
- buf->pos = buf->start;
- buf->last = buf->start;
+ if(buf->last == buf->pos) {
+ flush = 0;
+
+ buf->pos = buf->start;
+ buf->last = buf->start;
+ }
if (in == NULL || send == limit) {
break;
}
+
+ loop_count++;
}
buf->flush = flush;
@@ -1652,6 +1675,7 @@
c->buffered &= ~NGX_SSL_BUFFERED;
}
+ c->ssl->last_write_msec = ngx_current_msec;
return in;
}
diff -r 162b2d27d4e1 -r 31bfe6403c34 src/event/ngx_event_openssl.h
--- a/src/event/ngx_event_openssl.h Wed Apr 29 14:59:02 2015 +0300
+++ b/src/event/ngx_event_openssl.h Tue May 05 20:29:34 2015 +0800
@@ -51,6 +51,8 @@
ngx_buf_t *buf;
size_t buffer_size;
+ ngx_msec_t last_write_msec;
+
ngx_connection_handler_pt handler;
ngx_event_handler_pt saved_read_handler;
Hi all,
You may or may not be aware that we have recently made some changes to
the Nginx GitHub account and trees (https://github.com/nginx). Before I
go into details I should make it clear that the primary location for
Nginx code will be the Mercurial repositories (http://hg.nginx.org/).
The GitHub trees are more for convenience to the community.
On to the details:
The original GitHub tree basically contained the release tarballs
extracted. This has been moved to
https://github.com/nginx/nginx-releases and has been deprecated with no
more updates. If there is demand to resurrect it we can do this but for
now it will sit idle for people who have already forked it.
There is a new GitHub tree which is a mirror of the main Mercurial
repository: https://github.com/nginx/nginx
This is currently updated hourly from one of my servers here in the UK.
It cannot accept pull requests and any pull request will be
automatically closed with instructions on how to contribute (another
hourly script does this).
We hope this helps developers who are more familiar with git than
Mercurial to access the bleeding-edge code. If anyone has any questions,
please feel free to send them to me.
Happy Tuesday everyone! :)
Kind Regards
--
Andrew Hutchings (LinuxJedi)
Senior Developer Advocate
Nginx Inc.