Hello!
On Tue, Sep 18, 2018 at 08:12:20AM -0400, Thomas Ward wrote:
> Downstream in Ubuntu, it has been proposed to demote pcre3 and
> use pcre2 instead as it is newer.
> https://trac.nginx.org/nginx/ticket/720 shows it was marked 4
> years ago that NGINX does not support pcre2. Are there any
> plans to use pcre2 instead of pcre3?
There are no immediate plans.
When we last checked, there were no problems with PCRE, but PCRE2
wasn't available in most distributions we support, making the
switch mostly meaningless.
Also, it looks like PCRE2 is still not supported even by Exim,
which is the parent project of PCRE and PCRE2:
https://bugs.exim.org/show_bug.cgi?id=1878
As such, adding PCRE2 support to nginx looks premature.
--
Maxim Dounin
http://mdounin.ru/
# HG changeset patch
# User Peter Shchuchkin <peters(a)yandex.ru>
# Date 1540737213 -10800
# Sun Oct 28 17:33:33 2018 +0300
# Node ID 70c0d476999d9b893c644606624134248ac7abad
# Parent 874d47ac871a4b62fbe0ff5d469a8ad7bc5a4160
Allow using nodelay=N semantics in limit_req configuration
This allows using reasonably low limits without forcing a delay on normal
users.
In addition to the standard "burst=A nodelay" form, the following form of
limit_req may now be used:
burst=A nodelay=B, where 0 <= B <= burst
burst=A nodelay=0 means the same as just "burst=A"
burst=A nodelay=A means the same as "burst=A nodelay"
burst=A nodelay=B means that the first B requests over the configured rate
are not delayed and further requests are delayed. The delay is calculated
against the excess over B, so the (B+1)-th request has an effective excess
of 1 (see the worked example below).
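For example (rate and values are hypothetical): with rate=10r/s, burst=20
and nodelay=10, a request arriving with an excess of 15 is delayed by
(15 - 10) / 10 s = 500 ms, whereas without the nodelay= parameter it would
be delayed by 15 / 10 s = 1500 ms.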
When limit_req is used with nodelay, the responsibility for limiting the
request rate is on the client.
If a client does not want to, or cannot, limit its rate correctly, it will
get 503 errors and you will get numerous messages in the error and access
logs.
When limit_req is used without nodelay, every request that arrives faster
than the expected rate is delayed. This is not always convenient. Sometimes
you want to allow a normal client to make a bunch of requests as fast as
possible while still keeping a configured limit on the request rate. The two
existing forms are contrasted below.
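For reference, a sketch of the two existing forms (zone name, rate and
values are made up):

    limit_req_zone $binary_remote_addr zone=perip:10m rate=5r/s;

    # every request above 5 r/s is delayed, requests above burst=20
    # are rejected with 503
    limit_req zone=perip burst=20;

    # the whole burst of 20 is served immediately, requests above it
    # are rejected with 503
    limit_req zone=perip burst=20 nodelay;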
With the new semantics you can get the best of both worlds. By specifying
burst=A nodelay=B you allow clients to make B requests without any delay (and
without warnings in the error log). Once a client exceeds B requests, further
requests are delayed, effectively limiting the client's request rate to the
configured limit without returning 503 errors. Thus one can ensure maximum
speed for clients with the expected usage profile and limit all other clients
to a certain rate without errors.
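As an illustration, a hypothetical configuration using the proposed syntax
(zone name, rate and limits are made up; requires this patch):

    limit_req_zone $binary_remote_addr zone=perip:10m rate=5r/s;

    server {
        location / {
            # the first 10 requests above 5 r/s are served immediately,
            # the next 10 (up to burst=20) are delayed to 5 r/s,
            # anything beyond burst=20 is rejected with 503
            limit_req zone=perip burst=20 nodelay=10;
        }
    }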
diff -r 874d47ac871a -r 70c0d476999d src/http/modules/ngx_http_limit_req_module.c
--- a/src/http/modules/ngx_http_limit_req_module.c Fri Oct 19 13:50:36 2018 +0800
+++ b/src/http/modules/ngx_http_limit_req_module.c Sun Oct 28 17:33:33 2018 +0300
@@ -499,12 +499,11 @@
excess = *ep;
- if (excess == 0 || (*limit)->nodelay) {
+ if (excess == 0 || ((ngx_uint_t) excess < (*limit)->nodelay)) {
max_delay = 0;
-
} else {
ctx = (*limit)->shm_zone->data;
- max_delay = excess * 1000 / ctx->rate;
+ max_delay = (excess - (*limit)->nodelay) * 1000 / ctx->rate;
}
while (n--) {
@@ -544,11 +543,16 @@
ctx->node = NULL;
- if (limits[n].nodelay) {
+ /*
+ * Delay when:
+ * excess > 0, nodelay = 0
+ * excess > 0, nodelay > 0, excess >= nodelay
+ */
+ if ((ngx_uint_t) excess < limits[n].nodelay) {
continue;
}
- delay = excess * 1000 / ctx->rate;
+ delay = (excess - limits[n].nodelay) * 1000 / ctx->rate;
if (delay > max_delay) {
max_delay = delay;
@@ -875,7 +879,7 @@
{
ngx_http_limit_req_conf_t *lrcf = conf;
- ngx_int_t burst;
+ ngx_int_t burst, nodelay_val;
ngx_str_t *value, s;
ngx_uint_t i, nodelay;
ngx_shm_zone_t *shm_zone;
@@ -885,6 +889,7 @@
shm_zone = NULL;
burst = 0;
+ nodelay_val = -1;
nodelay = 0;
for (i = 1; i < cf->args->nelts; i++) {
@@ -915,7 +920,20 @@
continue;
}
- if (ngx_strcmp(value[i].data, "nodelay") == 0) {
+ if (ngx_strncmp(value[i].data, "nodelay=", 8) == 0) {
+
+ nodelay_val = ngx_atoi(value[i].data + 8, value[i].len - 8);
+ nodelay = 1;
+ if (nodelay_val < 0) {
+ ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
+ "invalid nodelay \"%V\" - "
+ "must not be less then 0",
+ &value[i]);
+ return NGX_CONF_ERROR;
+ }
+
+ continue;
+ } else if (ngx_strcmp(value[i].data, "nodelay") == 0) {
nodelay = 1;
continue;
}
@@ -925,6 +943,23 @@
return NGX_CONF_ERROR;
}
+ if (nodelay) {
+ /* nodelay without explicit value */
+ if (nodelay_val < 0) {
+ nodelay_val = burst;
+ }
+
+ if (nodelay_val > burst) {
+ ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
+ "invalid nodelay value: %i - "
+ "must not be greater then burst: %i",
+ nodelay_val, burst);
+ return NGX_CONF_ERROR;
+ }
+ } else {
+ nodelay_val = 0;
+ }
+
if (shm_zone == NULL) {
ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
"\"%V\" must have \"zone\" parameter",
@@ -956,7 +991,7 @@
limit->shm_zone = shm_zone;
limit->burst = burst * 1000;
- limit->nodelay = nodelay;
+ limit->nodelay = nodelay_val * 1000;
return NGX_CONF_OK;
}
details: http://hg.nginx.org/nginx/rev/de50fa05fbeb
branches:
changeset: 7374:de50fa05fbeb
user: Maxim Dounin <mdounin(a)mdounin.ru>
date: Wed Oct 31 16:49:39 2018 +0300
description:
Cache: fixed minimum cache keys zone size limit.
The size of a shared memory zone must be at least two pages: one page
for the slab allocator's internal data, and another page for actual
allocations.
Using a hardcoded 8192 instead is wrong, as there are systems with page
sizes other than 4096.
Note well that two pages is usually too low as well. In particular, the
cache is likely to use two allocations of different sizes for global
structures, and at least four pages will be needed to properly allocate
cache nodes. Except in a few very special cases, nginx with a keys zone
of just two pages won't be able to start. Other uses of shared memory
impose a minimum of 8 pages, which provides some room for global
allocations. This patch doesn't try to address that, though.
Inspired by ticket #1665.
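For illustration, assuming 4 KB pages (paths and zone names are made up):

    # smaller than two pages: now rejected when the configuration is parsed
    proxy_cache_path /var/cache/nginx/a keys_zone=tiny:4k;

    # passes the new check, but will normally still be too small to start
    proxy_cache_path /var/cache/nginx/b keys_zone=small:8k;

    # a more realistic size; one megabyte stores about 8 thousand keys
    proxy_cache_path /var/cache/nginx/c keys_zone=cache:1m;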
diffstat:
src/http/ngx_http_file_cache.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diffs (12 lines):
diff --git a/src/http/ngx_http_file_cache.c b/src/http/ngx_http_file_cache.c
--- a/src/http/ngx_http_file_cache.c
+++ b/src/http/ngx_http_file_cache.c
@@ -2427,7 +2427,7 @@ ngx_http_file_cache_set_slot(ngx_conf_t
s.data = p;
size = ngx_parse_size(&s);
- if (size > 8191) {
+ if (size >= (ssize_t) (2 * ngx_pagesize)) {
continue;
}
}