Hello!
On Tue, Sep 18, 2018 at 08:12:20AM -0400, Thomas Ward wrote:
> Downstream in Ubuntu, it has been proposed to demote pcre3 and
> use pcre2 instead as it is newer.
> https://trac.nginx.org/nginx/ticket/720 shows it was noted four
> years ago that NGINX does not support pcre2. Are there any
> plans to use pcre2 instead of pcre3?
There are no immediate plans.
When we last checked, there were no problems with PCRE, but PCRE2
wasn't available in most distributions we support, making the
switch mostly meaningless.
Also, it looks like PCRE2 is still not supported even by Exim,
which is the parent project of PCRE and PCRE2:
https://bugs.exim.org/show_bug.cgi?id=1878
As such, adding PCRE2 support to nginx looks premature.
--
Maxim Dounin
http://mdounin.ru/
Hello,
we're running a gRPC service behind an NGINX load balancer and we
often see the connection being shut down with a GOAWAY frame.
Since our request deadlines are quite short and it takes some time to
re-establish a connection, this often leads to many failed requests.
After analyzing the situation we realized that it is caused by the
"http2_max_requests" directive, which defaults to 1000. After setting it
to a ridiculously high value, our problems disappeared.
While there probably are use cases where limiting the lifetime of a
connection makes sense, we were wondering why it is not possible to
disable this behavior and let a connection just live as long as
possible.
The following patch makes that possible by setting http2_max_requests to 0.
Please let me know what you think.
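For illustration, with the patch applied the limit could be disabled like
this (the 0-as-unlimited semantics is what this patch proposes, not current
stock nginx behavior):

```nginx
server {
    listen 443 ssl http2;

    # 0 = no per-connection request limit with this patch;
    # stock nginx defaults to 1000
    http2_max_requests 0;
}
```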
Thanks,
Michael
# HG changeset patch
# User Michael Würtinger <michael.wuertinger(a)egym.com>
# Date 1561569724 -7200
# Wed Jun 26 19:22:04 2019 +0200
# Node ID d703d79897320bfc743b2ea6421e301985b2c7e4
# Parent 35ea9229c71a9207a24e51f327e1749e3accb26c
HTTP/2: allow unlimited number of requests in connection
Allow disabling the "http2_max_requests" limit by setting it to 0. This
enables very long-lived HTTP/2 connections, which is desirable for some
use cases such as gRPC.
diff -r 35ea9229c71a -r d703d7989732 src/http/v2/ngx_http_v2.c
--- a/src/http/v2/ngx_http_v2.c Tue Jun 25 15:19:45 2019 +0300
+++ b/src/http/v2/ngx_http_v2.c Wed Jun 26 19:22:04 2019 +0200
@@ -1173,7 +1173,9 @@
ngx_http_v2_set_dependency(h2c, node, depend, excl);
}
- if (h2c->connection->requests >= h2scf->max_requests) {
+ if (h2scf->max_requests > 0
+ && h2c->connection->requests >= h2scf->max_requests)
+ {
h2c->goaway = 1;
if (ngx_http_v2_send_goaway(h2c, NGX_HTTP_V2_NO_ERROR) == NGX_ERROR) {
@@ -4514,7 +4516,7 @@
h2scf = ngx_http_get_module_srv_conf(h2c->http_connection->conf_ctx,
ngx_http_v2_module);
- if (h2c->idle++ > 10 * h2scf->max_requests) {
+ if (h2c->idle++ > 10 * h2scf->max_requests && h2scf->max_requests > 0) {
ngx_log_error(NGX_LOG_INFO, h2c->connection->log, 0,
"http2 flood detected");
ngx_http_v2_finalize_connection(h2c, NGX_HTTP_V2_NO_ERROR);
--
<https://egym.com>
When generating hashed data for "HTTP Basic" login auth protection, using bcrypt as the hash algorithm, one can vary the resultant hash strength by varying bcrypt's $cost, e.g.
php -r "echo password_hash('$my_pass', PASSWORD_BCRYPT, ['cost' => $cost]) . PHP_EOL;"
Of course, increased $cost requires increased hashing time.
E.g., on my desktop, the hashing times vary with cost as,
cost time
5 0m0.043s
6 0m0.055s
7 0m0.059s
8 0m0.075s
9 0m0.081s
10 0m0.110s
11 0m0.169s
12 0m0.285s
13 0m0.518s
14 0m0.785s
15 0m1.945s
16 0m3.782s
17 0m7.512s
18 0m14.973s
19 0m29.903s
20 0m59.735s
21 1m59.418s
22 3m58.792s
...
For site login usage, does *client* login time vary at all with the hash $cost?
Other than the initial, one-time hash generation, is there any login-performance reason NOT to use the highest hash $cost?
The ngx_http_slice_parse_content_range function assumes that the parsed buffer is null-terminated. Since the buffer is an ngx_str_t, that assumption is false; if the buffer happens to be null-terminated, it is simply a matter of luck, not design.
In particular, if the headers_out.content_range ngx_str_t was allocated in ngx_http_range_filter_module, then the buffer was allocated without a null terminator by ngx_pnalloc.
Because the buffer is not null-terminated, ngx_http_slice_parse_content_range may return NGX_ERROR after the buffer was successfully parsed or, if the caller is unfortunate, trigger a random memory access failure.
I've written a replacement function that uses the length of the ngx_str_t as a guard condition. This code works and passes all of the unit tests.
How should I submit the replacement?
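To make the idea concrete, here is a self-contained sketch of length-guarded parsing of a "bytes START-END/TOTAL" value (an illustration of the approach, not my actual replacement function, and with a minimal struct standing in for ngx_str_t):

```c
#include <stddef.h>
#include <string.h>

typedef struct {            /* minimal stand-in for nginx's ngx_str_t */
    size_t          len;
    unsigned char  *data;
} str_t;

/* Parse "bytes START-END/TOTAL" without assuming a NUL terminator:
 * every dereference is guarded by the remaining length. */
int parse_content_range(str_t *cr, long *start, long *end, long *total)
{
    static const char  prefix[] = "bytes ";
    unsigned char     *p, *last;
    long               vals[3] = { 0, 0, 0 };
    int                i;

    if (cr->len < sizeof(prefix) - 1
        || memcmp(cr->data, prefix, sizeof(prefix) - 1) != 0)
    {
        return -1;
    }

    p = cr->data + sizeof(prefix) - 1;
    last = cr->data + cr->len;

    for (i = 0; i < 3; i++) {
        if (p == last || *p < '0' || *p > '9') {
            return -1;                    /* a number must follow */
        }

        while (p < last && *p >= '0' && *p <= '9') {
            vals[i] = vals[i] * 10 + (*p++ - '0');
        }

        if (i == 0 && (p == last || *p++ != '-')) {
            return -1;
        }

        if (i == 1 && (p == last || *p++ != '/')) {
            return -1;
        }
    }

    if (p != last) {
        return -1;                        /* trailing garbage */
    }

    *start = vals[0];
    *end = vals[1];
    *total = vals[2];

    return 0;
}
```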
Carey Gister
415-310-5304
Hi there!
In my day job I'm helping to get applications from traditional
environments running in cloud environments. Cloud native applications
are just "normal" applications, but there are a few properties that
they should satisfy (apart from resiliency and scalability).
For logging this boils down to what is prescribed by the 12-factor
app: The log output should be a continuous stream, i.e. simply log to
the terminal.
Now, as of today, at least in Debian-based container images, the
default behavior of Nginx is to write to /var/log/nginx/access.log and
/var/log/nginx/error.log. We try to compensate for this by
making those files symbolic links to /dev/stdout and /dev/stderr.
We also do this because there seem to be cases where a log entry
is written _before_ logging is configured via the Nginx configuration file.
From my perspective it would be advantageous to have Nginx write to
the terminal by default (i.e. no hardcoded log file locations) and
allow overriding this behavior via the Nginx configuration file.
Is there any reason why the default behavior is not that way yet?
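For reference, the per-deployment override we currently rely on looks
roughly like this (a sketch; the question above is only about the
compiled-in defaults):

```nginx
# send logs to the container's standard streams instead of files
error_log /dev/stderr info;

http {
    access_log /dev/stdout;
}
```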
Peter