From mdounin at mdounin.ru Sat Apr 1 20:11:38 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 1 Apr 2023 23:11:38 +0300 Subject: [PATCH] HTTP: Add new uri_normalization_percent_decode option In-Reply-To: References: Message-ID: Hello! On Thu, Mar 30, 2023 at 05:19:08PM +0000, Michael Kourlas via nginx-devel wrote: > Hello, > > Thanks again for your comments. > > > This implies, basically, that there are 3 forms of the request > > URI: 1) fully encoded, as in $request_uri, 2) fully decoded, as in > > $uri now, and 3) "all-except-percent-and-reserved". To implement this > > correctly, it needs clear definition when each form is used, and > > it is going to be a non-trivial task to do this safely. > > I agree. A simple way to do this would be to make percent-decoding customizable > on a per-directive basis. The core use case I was hoping to support is > preserving encoded reserved characters in location matching (basically what was > proposed in [1]), so that is what I would like to focus on in a reworked > version of this patch. > > I propose the following: > > (1) The addition of a new variable called $uri_encoded_percent_and_reserved. As > discussed, this variable is a special version of the normalized URI ($uri) > that preserves any percent-encoded "%" or reserved characters. > > (2) Every transformation applied to $uri (e.g. from the "rewrite" directive, > internal redirects, etc.) is automatically applied to > $uri_encoded_percent_and_reserved as well. > > If this raises performance concerns, a new flag could be added to enable or > disable the availability of $uri_encoded_percent_and_reserved. You suggest that transformations of $uri are "automatically applied" to the non-fully-decoded variant. Consider the following rewrite: rewrite ^/(.*) /$1 break; Assuming request to "GET /foo%2fbar/", what $uri_encoded_percent_and_reserved do you expect after each of these rewrites? Similarly, consider the following rewrite: rewrite ^/foo/(.*) /$1 break; What $uri_encoded_percent_and_reserved is expected after the rewrite? > (3) The addition of a new optional parameter to the URI form of "location" > blocks called "match-source": > > location [ = | ~ | ~* | ^~ ] uri [match-source=uri|uri-encoded-percent-and-reserved] { > ... > } > > For example: > > location ~ ^/api/objects/[^/]+/subobjects(/.*)?$ match-source=uri-encoded-percent-and-reserved { > ... > } > > "match-source=uri" is the default and the current behaviour. When > "uri-encoded-percent-and-reserved" is used, the location matching for that > block uses $uri_encoded_percent_and_reserved rather than $uri. Nested location > blocks are not affected (unless they also use > "uri-encoded-percent-and-reserved"). > > In future it would be possible to use a similar pattern with other directives > that use $uri, such as "proxy_pass", but that can be done as part of a separate > patch. > > If you think this is a sensible approach, I will submit a revised patch > implementing it. Consider the following configuration: location /foo%2fbar/ match-source=uri-encoded-percent-and-reserved { ... } location /foo/bar/ match-source=uri { ... } The question is: which location is expected to be matched for the request "GET /foo%2fbar/"? Other questions include: - Which location is expected to be matched for the request "GET /foo%2Fbar/" (note that it is exactly equivalent to "GET /foo%2fbar/"). - Assuming static handling in the locations, what happens with the request "GET /foo%2fbar/..%2fbazz"? 
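For reference, a sketch of how the two equivalent request forms from the questions above would normalize under the proposal (assuming the new variable preserves the hex-digit case exactly as received):

    request line         $uri          $uri_encoded_percent_and_reserved
    GET /foo%2fbar/      /foo/bar/     /foo%2fbar/
    GET /foo%2Fbar/      /foo/bar/     /foo%2Fbar/
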
Note that the behaviour does not seem to be obvious, and it is an open question if it can be clarified to be safe. -- Maxim Dounin http://mdounin.ru/ From jordanc.carter at outlook.com Sun Apr 2 17:57:03 2023 From: jordanc.carter at outlook.com (J Carter) Date: Sun, 2 Apr 2023 18:57:03 +0100 Subject: [PATCH] Added keepalive_async_fails command Message-ID: Hello, I've also attached an example nginx.conf and test script that simulates the asynchronous close events. Two different test cases can be found within that, one with path /1 for single peer upstream and /2 for multi-peer. You should see 2 upstream addresses repeated in a row per-upstream-server in the access log by default, as it fails through the cached connections & next performs next upstream tries. Any feedback would be appreciated. # HG changeset patch # User jordanc.carter at outlook.com # Date 1680457073 -3600 # Sun Apr 02 18:37:53 2023 +0100 # Node ID 9ec4d7a8cdf6cdab00d09dff75fa6045f6f5533f # Parent 5f1d05a21287ba0290dd3a17ad501595b442a194 Added keepalive_async_fails command to keepalive load balancer module. This value determines the number suspected keepalive race events per-upstream-try that will be tolerated before a subsequent network connection error is considered a true failure. diff -r 5f1d05a21287 -r 9ec4d7a8cdf6 src/event/ngx_event_connect.h --- a/src/event/ngx_event_connect.h Tue Mar 28 18:01:54 2023 +0300 +++ b/src/event/ngx_event_connect.h Sun Apr 02 18:37:53 2023 +0100 @@ -17,6 +17,7 @@ #define NGX_PEER_KEEPALIVE 1 #define NGX_PEER_NEXT 2 #define NGX_PEER_FAILED 4 +#define NGX_PEER_ASYNC_FAILED 8 typedef struct ngx_peer_connection_s ngx_peer_connection_t; @@ -41,6 +42,7 @@ ngx_str_t *name; ngx_uint_t tries; + ngx_uint_t async_fails; ngx_msec_t start_time; ngx_event_get_peer_pt get; diff -r 5f1d05a21287 -r 9ec4d7a8cdf6 src/http/modules/ngx_http_upstream_keepalive_module.c --- a/src/http/modules/ngx_http_upstream_keepalive_module.c Tue Mar 28 18:01:54 2023 +0300 +++ b/src/http/modules/ngx_http_upstream_keepalive_module.c Sun Apr 02 18:37:53 2023 +0100 @@ -13,6 +13,7 @@ typedef struct { ngx_uint_t max_cached; ngx_uint_t requests; + ngx_uint_t max_async_fails; ngx_msec_t time; ngx_msec_t timeout; @@ -108,6 +109,13 @@ offsetof(ngx_http_upstream_keepalive_srv_conf_t, requests), NULL }, + { ngx_string("keepalive_async_fails"), + NGX_HTTP_UPS_CONF|NGX_CONF_TAKE1, + ngx_conf_set_num_slot, + NGX_HTTP_SRV_CONF_OFFSET, + offsetof(ngx_http_upstream_keepalive_srv_conf_t, max_async_fails), + NULL }, + ngx_null_command }; @@ -160,6 +168,7 @@ ngx_conf_init_msec_value(kcf->time, 3600000); ngx_conf_init_msec_value(kcf->timeout, 60000); ngx_conf_init_uint_value(kcf->requests, 1000); + ngx_conf_init_uint_value(kcf->max_async_fails, 2); if (kcf->original_init_upstream(cf, us) != NGX_OK) { return NGX_ERROR; @@ -320,6 +329,21 @@ u = kp->upstream; c = pc->connection; + if (state & NGX_PEER_ASYNC_FAILED) { + pc->async_fails++; + + if (pc->async_fails == 2) { + pc->async_fails = 0; + state = NGX_PEER_FAILED; + + } else { + pc->tries++; + } + goto invalid; + } + + pc->async_fails = 0; + if (state & NGX_PEER_FAILED || c == NULL || c->read->eof @@ -529,6 +553,8 @@ conf->time = NGX_CONF_UNSET_MSEC; conf->timeout = NGX_CONF_UNSET_MSEC; conf->requests = NGX_CONF_UNSET_UINT; + conf->max_async_fails = NGX_CONF_UNSET_UINT; + return conf; } diff -r 5f1d05a21287 -r 9ec4d7a8cdf6 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Tue Mar 28 18:01:54 2023 +0300 +++ b/src/http/ngx_http_upstream.c Sun Apr 02 18:37:53 2023 +0100 @@ -4317,6 
+4317,8 @@ { state = NGX_PEER_NEXT; + } else if (u->peer.cached && ft_type == NGX_HTTP_UPSTREAM_FT_ERROR) { + state = NGX_PEER_ASYNC_FAILED; } else { state = NGX_PEER_FAILED; } @@ -4330,11 +4332,6 @@ "upstream timed out"); } - if (u->peer.cached && ft_type == NGX_HTTP_UPSTREAM_FT_ERROR) { - /* TODO: inform balancer instead */ - u->peer.tries++; - } - switch (ft_type) { case NGX_HTTP_UPSTREAM_FT_TIMEOUT: @@ -4421,7 +4418,6 @@ return; } #endif - ngx_http_upstream_finalize_request(r, u, status); return; } diff -r 5f1d05a21287 -r 9ec4d7a8cdf6 src/http/ngx_http_upstream_round_robin.c --- a/src/http/ngx_http_upstream_round_robin.c Tue Mar 28 18:01:54 2023 +0300 +++ b/src/http/ngx_http_upstream_round_robin.c Sun Apr 02 18:37:53 2023 +0100 @@ -297,6 +297,7 @@ r->upstream->peer.get = ngx_http_upstream_get_round_robin_peer; r->upstream->peer.free = ngx_http_upstream_free_round_robin_peer; r->upstream->peer.tries = ngx_http_upstream_tries(rrp->peers); + r->upstream->peer.async_fails = 0; #if (NGX_HTTP_SSL) r->upstream->peer.set_session = ngx_http_upstream_set_round_robin_peer_session; @@ -418,6 +419,7 @@ r->upstream->peer.get = ngx_http_upstream_get_round_robin_peer; r->upstream->peer.free = ngx_http_upstream_free_round_robin_peer; r->upstream->peer.tries = ngx_http_upstream_tries(rrp->peers); + r->upstream->peer.async_fails = 0; #if (NGX_HTTP_SSL) r->upstream->peer.set_session = ngx_http_upstream_empty_set_session; r->upstream->peer.save_session = ngx_http_upstream_empty_save_session; @@ -459,7 +461,10 @@ rrp->current = peer; - } else { + } else if (pc->async_fails > 0) { + peer = rrp->current; + } + else { /* there are several peers */ @@ -615,18 +620,7 @@ ngx_http_upstream_rr_peers_rlock(rrp->peers); ngx_http_upstream_rr_peer_lock(rrp->peers, peer); - if (rrp->peers->single) { - - peer->conns--; - - ngx_http_upstream_rr_peer_unlock(rrp->peers, peer); - ngx_http_upstream_rr_peers_unlock(rrp->peers); - - pc->tries = 0; - return; - } - - if (state & NGX_PEER_FAILED) { + if (state & NGX_PEER_FAILED && !rrp->peers->single) { now = ngx_time(); peer->fails++; -------------- next part -------------- #!/bin/sh sudo nginx -s stop; sudo nginx; sleep 1; seq 25 | xargs -I{} -P 25 curl localhost/$1 sleep 10; sudo iptables -A INPUT -p tcp --destination-port 8080 -j REJECT --reject-with tcp-reset sudo iptables -A INPUT -p tcp --destination-port 8081 -j REJECT --reject-with tcp-reset sudo iptables -A INPUT -p tcp --destination-port 8082 -j REJECT --reject-with tcp-reset sleep 5; curl localhost/$1 sleep 3; sudo iptables -D INPUT -p tcp --destination-port 8080 -j REJECT --reject-with tcp-reset sudo iptables -D INPUT -p tcp --destination-port 8081 -j REJECT --reject-with tcp-reset sudo iptables -D INPUT -p tcp --destination-port 8082 -j REJECT --reject-with tcp-reset tail -n 10 /usr/local/nginx/logs/access.log -------------- next part -------------- worker_processes 1; error_log logs/error.log info; events { worker_connections 1024; } http { log_format main '$time_local $http_drop $upstream_addr $upstream_status'; upstream backend1 { server 127.0.0.1:8081; keepalive 32; #keepalive_async_fails 3; } upstream backend2 { server 127.0.0.1:8080; server 127.0.0.1:8081; server 127.0.0.1:8082; keepalive 32; #keepalive_async_fails 3; } proxy_http_version 1.1; proxy_set_header Connection ""; server { listen 80; access_log logs/access.log main; location =/1 { proxy_pass http://backend1; } location =/2 { #proxy_next_upstream_tries 2; proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504 http_403 
http_404 http_429 non_idempotent; proxy_bind 127.0.0.2; proxy_pass http://backend2; } } server { listen 8080; listen 8081; listen 8082; access_log off; error_log off; if ($http_drop) { return 444; } location / { #slow it down, allows keepalives to be established on 80; proxy_connect_timeout 5s; error_page 504 = @return-200; #blackhole ip proxy_pass http://198.51.100.1:9999; } location @return-200 { return 200 "OK"; } } } From jordanc.carter at outlook.com Sun Apr 2 18:31:16 2023 From: jordanc.carter at outlook.com (J Carter) Date: Sun, 2 Apr 2023 19:31:16 +0100 Subject: [PATCH] Added keepalive_async_fails command In-Reply-To: References: Message-ID: re-sending the patch as an attachment as the formatting is still weird, and fixed typo I spotted.. On 02/04/2023 18:57, J Carter wrote: > Hello, > > I've also attached an example nginx.conf and test script that > simulates the asynchronous close events. > Two different test cases can be found within that, one with path /1 > for single peer upstream and /2 for multi-peer. > > You should see 2 upstream addresses repeated in a row > per-upstream-server in the access log by default, as it fails > through the cached connections & next performs next upstream tries. > > Any feedback would be appreciated. > > # HG changeset patch > # User jordanc.carter at outlook.com > # Date 1680457073 -3600 > #      Sun Apr 02 18:37:53 2023 +0100 > # Node ID 9ec4d7a8cdf6cdab00d09dff75fa6045f6f5533f > # Parent  5f1d05a21287ba0290dd3a17ad501595b442a194 > Added keepalive_async_fails command to keepalive load balancer module. > This value determines the number suspected keepalive race events > per-upstream-try that will be tolerated before a subsequent network > connection > error is considered a true failure. > > diff -r 5f1d05a21287 -r 9ec4d7a8cdf6 src/event/ngx_event_connect.h > --- a/src/event/ngx_event_connect.h    Tue Mar 28 18:01:54 2023 +0300 > +++ b/src/event/ngx_event_connect.h    Sun Apr 02 18:37:53 2023 +0100 > @@ -17,6 +17,7 @@ >  #define NGX_PEER_KEEPALIVE           1 >  #define NGX_PEER_NEXT                2 >  #define NGX_PEER_FAILED              4 > +#define NGX_PEER_ASYNC_FAILED        8 > > >  typedef struct ngx_peer_connection_s  ngx_peer_connection_t; > @@ -41,6 +42,7 @@ >      ngx_str_t                       *name; > >      ngx_uint_t                       tries; > +    ngx_uint_t                       async_fails; >      ngx_msec_t                       start_time; > >      ngx_event_get_peer_pt            get; > diff -r 5f1d05a21287 -r 9ec4d7a8cdf6 > src/http/modules/ngx_http_upstream_keepalive_module.c > --- a/src/http/modules/ngx_http_upstream_keepalive_module.c    Tue Mar > 28 18:01:54 2023 +0300 > +++ b/src/http/modules/ngx_http_upstream_keepalive_module.c    Sun Apr > 02 18:37:53 2023 +0100 > @@ -13,6 +13,7 @@ >  typedef struct { >      ngx_uint_t                         max_cached; >      ngx_uint_t                         requests; > +    ngx_uint_t                         max_async_fails; >      ngx_msec_t                         time; >      ngx_msec_t                         timeout; > > @@ -108,6 +109,13 @@ >        offsetof(ngx_http_upstream_keepalive_srv_conf_t, requests), >        NULL }, > > +     { ngx_string("keepalive_async_fails"), > +      NGX_HTTP_UPS_CONF|NGX_CONF_TAKE1, > +      ngx_conf_set_num_slot, > +      NGX_HTTP_SRV_CONF_OFFSET, > +      offsetof(ngx_http_upstream_keepalive_srv_conf_t, max_async_fails), > +      NULL }, > + >        ngx_null_command >  }; > > @@ -160,6 +168,7 @@ >      
ngx_conf_init_msec_value(kcf->time, 3600000); >      ngx_conf_init_msec_value(kcf->timeout, 60000); >      ngx_conf_init_uint_value(kcf->requests, 1000); > +    ngx_conf_init_uint_value(kcf->max_async_fails, 2); > >      if (kcf->original_init_upstream(cf, us) != NGX_OK) { >          return NGX_ERROR; > @@ -320,6 +329,21 @@ >      u = kp->upstream; >      c = pc->connection; > > +    if (state & NGX_PEER_ASYNC_FAILED) { > +        pc->async_fails++; > + > +        if (pc->async_fails == 2) { > +            pc->async_fails = 0; > +            state = NGX_PEER_FAILED; > + > +        } else { > +            pc->tries++; > +        } > +        goto invalid; > +    } > + > +    pc->async_fails = 0; > + >      if (state & NGX_PEER_FAILED >          || c == NULL >          || c->read->eof > @@ -529,6 +553,8 @@ >      conf->time = NGX_CONF_UNSET_MSEC; >      conf->timeout = NGX_CONF_UNSET_MSEC; >      conf->requests = NGX_CONF_UNSET_UINT; > +    conf->max_async_fails = NGX_CONF_UNSET_UINT; > + > >      return conf; >  } > diff -r 5f1d05a21287 -r 9ec4d7a8cdf6 src/http/ngx_http_upstream.c > --- a/src/http/ngx_http_upstream.c    Tue Mar 28 18:01:54 2023 +0300 > +++ b/src/http/ngx_http_upstream.c    Sun Apr 02 18:37:53 2023 +0100 > @@ -4317,6 +4317,8 @@ >          { >              state = NGX_PEER_NEXT; > > +        } else if (u->peer.cached && ft_type == > NGX_HTTP_UPSTREAM_FT_ERROR) { > +            state = NGX_PEER_ASYNC_FAILED; >          } else { >              state = NGX_PEER_FAILED; >          } > @@ -4330,11 +4332,6 @@ >                        "upstream timed out"); >      } > > -    if (u->peer.cached && ft_type == NGX_HTTP_UPSTREAM_FT_ERROR) { > -        /* TODO: inform balancer instead */ > -        u->peer.tries++; > -    } > - >      switch (ft_type) { > >      case NGX_HTTP_UPSTREAM_FT_TIMEOUT: > @@ -4421,7 +4418,6 @@ >              return; >          } >  #endif > - >          ngx_http_upstream_finalize_request(r, u, status); >          return; >      } > diff -r 5f1d05a21287 -r 9ec4d7a8cdf6 > src/http/ngx_http_upstream_round_robin.c > --- a/src/http/ngx_http_upstream_round_robin.c    Tue Mar 28 18:01:54 > 2023 +0300 > +++ b/src/http/ngx_http_upstream_round_robin.c    Sun Apr 02 18:37:53 > 2023 +0100 > @@ -297,6 +297,7 @@ >      r->upstream->peer.get = ngx_http_upstream_get_round_robin_peer; >      r->upstream->peer.free = ngx_http_upstream_free_round_robin_peer; >      r->upstream->peer.tries = ngx_http_upstream_tries(rrp->peers); > +    r->upstream->peer.async_fails = 0; >  #if (NGX_HTTP_SSL) >      r->upstream->peer.set_session = > ngx_http_upstream_set_round_robin_peer_session; > @@ -418,6 +419,7 @@ >      r->upstream->peer.get = ngx_http_upstream_get_round_robin_peer; >      r->upstream->peer.free = ngx_http_upstream_free_round_robin_peer; >      r->upstream->peer.tries = ngx_http_upstream_tries(rrp->peers); > +    r->upstream->peer.async_fails = 0; >  #if (NGX_HTTP_SSL) >      r->upstream->peer.set_session = ngx_http_upstream_empty_set_session; >      r->upstream->peer.save_session = > ngx_http_upstream_empty_save_session; > @@ -459,7 +461,10 @@ > >          rrp->current = peer; > > -    } else { > +    } else if (pc->async_fails > 0) { > +        peer = rrp->current; > +    } > +    else { > >          /* there are several peers */ > > @@ -615,18 +620,7 @@ >      ngx_http_upstream_rr_peers_rlock(rrp->peers); >      ngx_http_upstream_rr_peer_lock(rrp->peers, peer); > > -    if (rrp->peers->single) { > - > -        peer->conns--; > - > -        
ngx_http_upstream_rr_peer_unlock(rrp->peers, peer); > -        ngx_http_upstream_rr_peers_unlock(rrp->peers); > - > -        pc->tries = 0; > -        return; > -    } > - > -    if (state & NGX_PEER_FAILED) { > +    if (state & NGX_PEER_FAILED && !rrp->peers->single) { >          now = ngx_time(); > >          peer->fails++; > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx-devel -------------- next part -------------- # HG changeset patch # User jordanc.carter at outlook.com # Date 1680459126 -3600 # Sun Apr 02 19:12:06 2023 +0100 # Node ID 4295bf4613e155e063914ac10f898dbe98ae4a54 # Parent 5f1d05a21287ba0290dd3a17ad501595b442a194 Added keepalive_async_fails command to keepalive load balancer module. This value determines the number suspected keepalive race events per-upstream-try that will be tolerated before a subsequent network connection error is considered a true failure. diff -r 5f1d05a21287 -r 4295bf4613e1 src/event/ngx_event_connect.h --- a/src/event/ngx_event_connect.h Tue Mar 28 18:01:54 2023 +0300 +++ b/src/event/ngx_event_connect.h Sun Apr 02 19:12:06 2023 +0100 @@ -17,6 +17,7 @@ #define NGX_PEER_KEEPALIVE 1 #define NGX_PEER_NEXT 2 #define NGX_PEER_FAILED 4 +#define NGX_PEER_ASYNC_FAILED 8 typedef struct ngx_peer_connection_s ngx_peer_connection_t; @@ -41,6 +42,7 @@ ngx_str_t *name; ngx_uint_t tries; + ngx_uint_t async_fails; ngx_msec_t start_time; ngx_event_get_peer_pt get; diff -r 5f1d05a21287 -r 4295bf4613e1 src/http/modules/ngx_http_upstream_keepalive_module.c --- a/src/http/modules/ngx_http_upstream_keepalive_module.c Tue Mar 28 18:01:54 2023 +0300 +++ b/src/http/modules/ngx_http_upstream_keepalive_module.c Sun Apr 02 19:12:06 2023 +0100 @@ -13,6 +13,7 @@ typedef struct { ngx_uint_t max_cached; ngx_uint_t requests; + ngx_uint_t max_async_fails; ngx_msec_t time; ngx_msec_t timeout; @@ -108,6 +109,13 @@ offsetof(ngx_http_upstream_keepalive_srv_conf_t, requests), NULL }, + { ngx_string("keepalive_async_fails"), + NGX_HTTP_UPS_CONF|NGX_CONF_TAKE1, + ngx_conf_set_num_slot, + NGX_HTTP_SRV_CONF_OFFSET, + offsetof(ngx_http_upstream_keepalive_srv_conf_t, max_async_fails), + NULL }, + ngx_null_command }; @@ -160,6 +168,7 @@ ngx_conf_init_msec_value(kcf->time, 3600000); ngx_conf_init_msec_value(kcf->timeout, 60000); ngx_conf_init_uint_value(kcf->requests, 1000); + ngx_conf_init_uint_value(kcf->max_async_fails, 2); if (kcf->original_init_upstream(cf, us) != NGX_OK) { return NGX_ERROR; @@ -320,6 +329,21 @@ u = kp->upstream; c = pc->connection; + if (state & NGX_PEER_ASYNC_FAILED) { + pc->async_fails++; + + if (pc->async_fails == kp->conf->max_async_fails) { + pc->async_fails = 0; + state = NGX_PEER_FAILED; + + } else { + pc->tries++; + } + goto invalid; + } + + pc->async_fails = 0; + if (state & NGX_PEER_FAILED || c == NULL || c->read->eof @@ -529,6 +553,8 @@ conf->time = NGX_CONF_UNSET_MSEC; conf->timeout = NGX_CONF_UNSET_MSEC; conf->requests = NGX_CONF_UNSET_UINT; + conf->max_async_fails = NGX_CONF_UNSET_UINT; + return conf; } diff -r 5f1d05a21287 -r 4295bf4613e1 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Tue Mar 28 18:01:54 2023 +0300 +++ b/src/http/ngx_http_upstream.c Sun Apr 02 19:12:06 2023 +0100 @@ -4317,6 +4317,8 @@ { state = NGX_PEER_NEXT; + } else if (u->peer.cached && ft_type == NGX_HTTP_UPSTREAM_FT_ERROR) { + state = NGX_PEER_ASYNC_FAILED; } else { state = NGX_PEER_FAILED; } @@ -4330,11 +4332,6 @@ "upstream timed out"); } - if (u->peer.cached && 
ft_type == NGX_HTTP_UPSTREAM_FT_ERROR) { - /* TODO: inform balancer instead */ - u->peer.tries++; - } - switch (ft_type) { case NGX_HTTP_UPSTREAM_FT_TIMEOUT: @@ -4421,7 +4418,6 @@ return; } #endif - ngx_http_upstream_finalize_request(r, u, status); return; } diff -r 5f1d05a21287 -r 4295bf4613e1 src/http/ngx_http_upstream_round_robin.c --- a/src/http/ngx_http_upstream_round_robin.c Tue Mar 28 18:01:54 2023 +0300 +++ b/src/http/ngx_http_upstream_round_robin.c Sun Apr 02 19:12:06 2023 +0100 @@ -297,6 +297,7 @@ r->upstream->peer.get = ngx_http_upstream_get_round_robin_peer; r->upstream->peer.free = ngx_http_upstream_free_round_robin_peer; r->upstream->peer.tries = ngx_http_upstream_tries(rrp->peers); + r->upstream->peer.async_fails = 0; #if (NGX_HTTP_SSL) r->upstream->peer.set_session = ngx_http_upstream_set_round_robin_peer_session; @@ -418,6 +419,7 @@ r->upstream->peer.get = ngx_http_upstream_get_round_robin_peer; r->upstream->peer.free = ngx_http_upstream_free_round_robin_peer; r->upstream->peer.tries = ngx_http_upstream_tries(rrp->peers); + r->upstream->peer.async_fails = 0; #if (NGX_HTTP_SSL) r->upstream->peer.set_session = ngx_http_upstream_empty_set_session; r->upstream->peer.save_session = ngx_http_upstream_empty_save_session; @@ -459,7 +461,10 @@ rrp->current = peer; - } else { + } else if (pc->async_fails > 0) { + peer = rrp->current; + } + else { /* there are several peers */ @@ -615,18 +620,7 @@ ngx_http_upstream_rr_peers_rlock(rrp->peers); ngx_http_upstream_rr_peer_lock(rrp->peers, peer); - if (rrp->peers->single) { - - peer->conns--; - - ngx_http_upstream_rr_peer_unlock(rrp->peers, peer); - ngx_http_upstream_rr_peers_unlock(rrp->peers); - - pc->tries = 0; - return; - } - - if (state & NGX_PEER_FAILED) { + if (state & NGX_PEER_FAILED && !rrp->peers->single) { now = ngx_time(); peer->fails++; From mdounin at mdounin.ru Mon Apr 3 01:42:21 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 3 Apr 2023 04:42:21 +0300 Subject: [PATCH] Added keepalive_async_fails command In-Reply-To: References: Message-ID: Hello! On Sun, Apr 02, 2023 at 06:57:03PM +0100, J Carter wrote: > I've also attached an example nginx.conf and test script that > simulates the asynchronous close events. > Two different test cases can be found within that, one with path > /1 for single peer upstream and /2 for multi-peer. > > You should see 2 upstream addresses repeated in a row > per-upstream-server in the access log by default, as it fails > through the cached connections & next performs next upstream > tries. > > Any feedback would be appreciated. > > # HG changeset patch > # User jordanc.carter at outlook.com > # Date 1680457073 -3600 > # Sun Apr 02 18:37:53 2023 +0100 > # Node ID 9ec4d7a8cdf6cdab00d09dff75fa6045f6f5533f > # Parent 5f1d05a21287ba0290dd3a17ad501595b442a194 > Added keepalive_async_fails command to keepalive load balancer module. > This value determines the number suspected keepalive race events > per-upstream-try that will be tolerated before a subsequent network connection > error is considered a true failure. It looks like you are trying to address issues with re-trying requests to upstream servers when errors not distinguishable from asynchronous close events happens, as outlined in ticket #2421 (https://trac.nginx.org/nginx/ticket/2421). I don't think that introducing another directive for this is a good idea. Rather, I would consider modifying the behaviour to better behave in case of such errors. 
In particular, the following approaches look promising: - Allow at most one additional try. - Allow an additional try only if there is a single peer, so normally request is not going to be retried at all. - Don't use cached connections if an error considered to be an asynchronous close event happens. Given that since nginx 1.15.3 we have keepalive_timeout in the upstream blocks to mitigate potential asynchronous close events, even something simple like combination of (1) and (2) might be good enough. With (3), things are going to be as correct as it can be. [...] -- Maxim Dounin http://mdounin.ru/ From jordanc.carter at outlook.com Mon Apr 3 05:15:02 2023 From: jordanc.carter at outlook.com (J Carter) Date: Mon, 3 Apr 2023 06:15:02 +0100 Subject: [PATCH] Added keepalive_async_fails command In-Reply-To: References: Message-ID: Hello Maxim, Thank you for the feedback. I think the points you made are fair - a new directive is possibly overkill for this issue. A single peer going through all of it's (many) cached connections when there is is a non-asynchronous close connection error is where I've personally seen the current behavior be most problematic - so this make sense and your direction would still resolve this. Point 3 would increase overhead/add latency for the average case, as a new connection would need to be created (assuming additional cached connections are likely to exist, and to not fail). I'd prefer to avoid that unless there are strong objections, although implementing this would be fairly trivial if there are. However, for point 2 -  I do have one question that I'd like your opinion on regarding multi-peer upstreams - should a (suspected) asynchronous close event count as a failure in terms of the logic in upstream_round_robin's free? In lieu of double trying for multi-peer, it seems like it may be desirable to avoid counting these as 'real' failures given all the effects imparted through passive health checks  - such as triggering a failure increment (and/or timeout) as well as adjusting weights downwards if the weighs are set for that upstream peer. On the other hand, the ambiguity in the cause of the error means not counting failures at all for connection errors that involve a cached connection. On 03/04/2023 02:42, Maxim Dounin wrote: > Hello! > > On Sun, Apr 02, 2023 at 06:57:03PM +0100, J Carter wrote: > >> I've also attached an example nginx.conf and test script that >> simulates the asynchronous close events. >> Two different test cases can be found within that, one with path >> /1 for single peer upstream and /2 for multi-peer. >> >> You should see 2 upstream addresses repeated in a row >> per-upstream-server in the access log by default, as it fails >> through the cached connections & next performs next upstream >> tries. >> >> Any feedback would be appreciated. >> >> # HG changeset patch >> # User jordanc.carter at outlook.com >> # Date 1680457073 -3600 >> # Sun Apr 02 18:37:53 2023 +0100 >> # Node ID 9ec4d7a8cdf6cdab00d09dff75fa6045f6f5533f >> # Parent 5f1d05a21287ba0290dd3a17ad501595b442a194 >> Added keepalive_async_fails command to keepalive load balancer module. >> This value determines the number suspected keepalive race events >> per-upstream-try that will be tolerated before a subsequent network connection >> error is considered a true failure. 
> It looks like you are trying to address issues with re-trying > requests to upstream servers when errors not distinguishable from > asynchronous close events happens, as outlined in ticket #2421 > (https://trac.nginx.org/nginx/ticket/2421). > > I don't think that introducing another directive for this is a > good idea. Rather, I would consider modifying the behaviour to > better behave in case of such errors. In particular, the > following approaches look promising: > > - Allow at most one additional try. > > - Allow an additional try only if there is a single peer, so > normally request is not going to be retried at all. > > - Don't use cached connections if an error considered to be an > asynchronous close event happens. > > Given that since nginx 1.15.3 we have keepalive_timeout in the > upstream blocks to mitigate potential asynchronous close events, > even something simple like combination of (1) and (2) might be > good enough. With (3), things are going to be as correct as it > can be. > > [...] > From Michael.Kourlas at solace.com Mon Apr 3 18:33:21 2023 From: Michael.Kourlas at solace.com (Michael Kourlas) Date: Mon, 3 Apr 2023 18:33:21 +0000 Subject: [PATCH] HTTP: Add new uri_normalization_percent_decode option In-Reply-To: References: Message-ID: Hello, Thanks again for your feedback. > Consider the following rewrite: > > rewrite ^/(.*) /$1 break; > > Assuming request to "GET /foo%2fbar/", what > $uri_encoded_percent_and_reserved do you expect after each of > these rewrites? I do not think that rewrite does anything in practice. Following the rewrite, I would expect $uri to remain unchanged at its current value of "/foo/bar/" and $uri_encoded_percent_and_reserved to similarly remain unchanged at its current value of "/foo%2fbar/". > Similarly, consider the following rewrite: > > rewrite ^/foo/(.*) /$1 break; > > What $uri_encoded_percent_and_reserved is expected after the > rewrite? In this case the regular expression matches $uri but not $uri_encoded_percent_and_reserved. One could say that this just means that only $uri is updated, but that has the potential to cause confusion when a flag is used that changes the control flow (unless the user explicitly opts into this behaviour). This could be addressed by adding a "match-source" optional argument to "rewrite" with three values (and a default of "uri"): * "uri" - rewrite directive matches and changes $uri only * "uri_encoded_percent_and_reserved" - rewrite directive matches and changes $uri_encoded_percent_and_reserved only * "all" - rewrite directive matches and changes both (if only one is matched, directive is not applied) It might also be a good idea to add "uri_encoded_percent_and_reserved_regex" and "uri_encoded_percent_and_reserved_replacement" arguments to be used with "all", so that it is possible to use the same directive and flag even when needing to perform slightly different rewrites for $uri versus $uri_encoded_percent_and_reserved. > Consider the following configuration: > > location /foo%2fbar/ match-source=uri-encoded-percent-and-reserved { > ... > } > > location /foo/bar/ match-source=uri { > ... > } > > The question is: which location is expected to be matched for the > request "GET /foo%2fbar/"? Both blocks match their respective variables. Since the first block has the longest matching prefix, I expect it will be selected. > Which location is expected to be matched for the request "GET > /foo%2Fbar/" (note that it is exactly equivalent to "GET > /foo%2fbar/"). 
Only the second block matches its variable, so I expect it will be selected. Although paths are generally case sensitive, a percent-encoded character is not supposed to be, so this behaviour is unfortunate. One possibility is to automatically use case-insensitive matching for any part of a location prefix using "match-source=uri-encoded-percent-and-reserved" that is a percent encoded "%" or reserved character. Alternatively this behaviour could be documented with an instruction to use regular expressions instead. > Assuming static handling in the locations, what happens with the > request "GET /foo%2fbar/..%2fbazz"? The first block would be used. However, $uri would be used for static handling, so the path "/foo/bazz" would be looked up on the filesystem, not "/foo%2fbar/..%2fbazz". > Note that the behaviour does not seem to be obvious, and it is an > open question if it can be clarified to be safe. Fair enough. I am certainly happy to continue making changes to my proposal to address the specific concerns you raise. However, are you saying that you have broader overall concerns about safety, complexity, etc. that make a patch implementing the proposal unlikely to be accepted, even if all specific concerns are addressed? Thanks, Michael Kourlas ________________________________ Confidentiality notice This e-mail message and any attachment hereto contain confidential information which may be privileged and which is intended for the exclusive use of its addressee(s). If you receive this message in error, please inform sender immediately and destroy any copy thereof. Furthermore, any disclosure, distribution or copying of this message and/or any attachment hereto without the consent of the sender is strictly prohibited. Thank you. From mdounin at mdounin.ru Tue Apr 4 03:26:06 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 4 Apr 2023 06:26:06 +0300 Subject: [PATCH] HTTP: Add new uri_normalization_percent_decode option In-Reply-To: References: Message-ID: Hello! On Mon, Apr 03, 2023 at 06:33:21PM +0000, Michael Kourlas via nginx-devel wrote: > > Consider the following rewrite: > > > > rewrite ^/(.*) /$1 break; > > > > Assuming request to "GET /foo%2fbar/", what > > $uri_encoded_percent_and_reserved do you expect after each of > > these rewrites? > > I do not think that rewrite does anything in practice. Following the rewrite, > I would expect $uri to remain unchanged at its current value of "/foo/bar/" and > $uri_encoded_percent_and_reserved to similarly remain unchanged at its current > value of "/foo%2fbar/". How do you expect this to be detected by the code? Direct comparison of the new URI as generated by the rewrite with the old URI? Bonus question: what happens with the following rewrite: rewrite ^/(.*) /bazz/$1 break; It is basically equivalent to the above except it certainly changes the URI. > > Similarly, consider the following rewrite: > > > > rewrite ^/foo/(.*) /$1 break; > > > > What $uri_encoded_percent_and_reserved is expected after the > > rewrite? > > In this case the regular expression matches $uri but not > $uri_encoded_percent_and_reserved. One could say that this just means that only > $uri is updated, but that has the potential to cause confusion when a flag is > used that changes the control flow (unless the user explicitly opts into this > behaviour). 
> > This could be addressed by adding a "match-source" optional argument to > "rewrite" with three values (and a default of "uri"): > * "uri" - rewrite directive matches and changes $uri only > * "uri_encoded_percent_and_reserved" - rewrite directive matches and changes > $uri_encoded_percent_and_reserved only > * "all" - rewrite directive matches and changes both (if only one is matched, > directive is not applied) > > It might also be a good idea to add "uri_encoded_percent_and_reserved_regex" > and "uri_encoded_percent_and_reserved_replacement" arguments to be used with > "all", so that it is possible to use the same directive and flag even when > needing to perform slightly different rewrites for $uri versus > $uri_encoded_percent_and_reserved. So, your original suggestion that "every transformation applied to $uri ... is automatically applied to $uri_encoded_percent_and_reserved as well" is no longer relevant, correct? Considering "match-source=uri", how do you expect the resulting $uri_encoded_percent_and_reserved to be set? From the description it looks like it is expected to remain unchanged, though for the above example this will result in $uri being "/bar/" and $uri_encoded_percent_and_reserved being "/foo%2fbar/", which looks certainly wrong. Is it really the intention? The same question applies to "match-source=uri_encoded_percent_and_reserved". > > Consider the following configuration: > > > > location /foo%2fbar/ match-source=uri-encoded-percent-and-reserved { > > ... > > } > > > > location /foo/bar/ match-source=uri { > > ... > > } > > > > The question is: which location is expected to be matched for the > > request "GET /foo%2fbar/"? > > Both blocks match their respective variables. Since the first block has the > longest matching prefix, I expect it will be selected. That's not how prefix locations work: they are matched based on the longest prefix, and order of locations does not matter (and not even preserved/known during matching, since matching uses a prefix tree). So your suggestion implies that ordered matching should be introduced for prefix locations, correct? > > Which location is expected to be matched for the request "GET > > /foo%2Fbar/" (note that it is exactly equivalent to "GET > > /foo%2fbar/"). > > Only the second block matches its variable, so I expect it will be selected. > > Although paths are generally case sensitive, a percent-encoded character is not > supposed to be, so this behaviour is unfortunate. One possibility is to > automatically use case-insensitive matching for any part of a location prefix > using "match-source=uri-encoded-percent-and-reserved" that is a percent > encoded "%" or reserved character. > > Alternatively this behaviour could be documented with an instruction to use > regular expressions instead. You mean, only allow "match-source=uri-encoded-percent-and-reserved" for locations given by regular expressions, correct? > > Assuming static handling in the locations, what happens with the > > request "GET /foo%2fbar/..%2fbazz"? > > The first block would be used. However, $uri would be used for static > handling, so the path "/foo/bazz" would be looked up on the filesystem, not > "/foo%2fbar/..%2fbazz". So it can be easily used to bypass security restrictions, such as in: location /foo%2fbar/ match-source=uri-encoded-percent-and-reserved { } location /admin/ { allow 127.0.0.1; deny all; } Note that the request "GET /foo%2fbar/..%2f..%2fadmin/secret" is going to access protected files in /admin/, while it shouldn't be allowed to. 
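To spell out the normalization involved (a sketch of the usual percent-decoding and dot-segment removal):

    /foo%2fbar/..%2f..%2fadmin/secret
        -> /foo/bar/../../admin/secret     (%2f decoded to "/")
        -> /admin/secret                   ("/../" segments resolved)

The prefix location is selected against the still-encoded form, so the request never reaches the "location /admin/" restrictions, while static file handling works on the decoded form and ends up serving a file under /admin/.
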
> > Note that the behaviour does not seem to be obvious, and it is an > > open question if it can be clarified to be safe. > > Fair enough. I am certainly happy to continue making changes to my proposal to > address the specific concerns you raise. However, are you saying that you have > broader overall concerns about safety, complexity, etc. that make a patch > implementing the proposal unlikely to be accepted, even if all specific > concerns are addressed? I've just asked some question on how your proposal is expected to work - and summarized that the behaviour suggested is certainly not obvious, since there are lots of questions on how it is expected to behave in various edge cases. But I certainly agree that there are issues with safety and complexity in the proposal. And I don't think that addressing any specific concerns will help here, since addressing them seem to result in increased complexity and more concerns. It might be a good idea to start from scratch instead of trying to fix the proposal. Just in case, below is the summary of what I think about the topic. Short version: All solutions I'm aware of suck. Don't use encoded slashes in URIs, it hurts. Long version: First of all, forget about "reserved characters". The only character that really matters is slash ("/"), as this is the only character which is indeed reserved in URI path nginx works with. There are 3 basic approaches to handling encoded slashes in URIs, mostly outlined in Apache's AllowEncodedSlashes directive (see https://httpd.apache.org/docs/2.4/mod/core.html#allowencodedslashes for details): 1. Reject them. This what Apache does by default (though with questionable error code). 2. Decode them and assume equivalent to non-encoded slashes. This is what nginx does. Apache does something similar with "AllowEncodedSlashes On", but given the note in the directive description it looks like it does not implement the "assume equivalent" part. 3. Do not decode them and expect encoded slashes are corrupted if URI is re-encoded. This also implies that if URI is not re-encoded but proxied as is, restrictions like in "location /admin/ { deny all; }" can be bypassed if slashes are decoded by the backend server. All of the approaches have their pros and cons, with the reject one being the safest, while decode and do not decode resulting in different forms of corruptions, and not decoding resulting in potential security issues on proxying. I don't think that trying to combine different approaches in the normal location matching is going to work. Rather, this will result in unmanageable complexity and will be unsafe, as in the above proposal. In nginx, the only implemented approach is "decode". It does, however, also provides the original request URI in the $request_uri variable, so the "reject" approach can be trivially implemented in the configuration: if ($request_uri ~* "%2f") { return 400; } Similarly, as already mentioned in #2225, one can do something like this: if ($request_uri ~ "^/api/objects/[^/]+/subobjects(/.*)?$") { ... } Further, nginx makes it possible to proxy the request without any URI modifications, as in "proxy_pass http://upstream;" (note no URI component after the upstream server name), so requests with encoded slashes can be inspected and proxied without corruption. This might be already enough for most, if not all, tasks involving encoded slashes: they can be handled without corruption if things are properly configured. Given the above options, this is more than usually available. 
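Put together, a minimal configuration sketch of this approach (the upstream name and the API prefix here are illustrative only):

    upstream backend {
        server 127.0.0.1:8080;
    }

    server {
        listen 80;

        location /api/objects/ {
            # inspect the original, still-encoded request line
            if ($request_uri !~ "^/api/objects/[^/]+/subobjects(/.*)?$") {
                return 404;
            }

            # no URI component after the upstream name: the request is
            # passed to the backend as received, encoded slashes intact
            proxy_pass http://backend;
        }
    }
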
Note that $request_uri matching doesn't imply various normalizations nginx usually does on URI before matching locations, such as decoding encoded characters, so something like "GET /%61pi/objects/..." won't be matched. This can be improved by providing an additional variable, or by a smart enough urldecode() function (see ticket #52), or by an arbitrary normalization written in an embedded language, such as Perl or njs. Or it might not worth the effort though, and just rejecting such non-normalized requests would be a good enough solution for most use cases. Similarly, it might be possible to explicitly implement the "do not decode" approach, with something like "decode_slashes off;" (similarly to "merge_slashes off;"). I tend to think that it is going to be just another "never use it, it is not secure" directive, much like "merge_slashes", and the mere existence of this directive is not going to help anyone. (Also, it might be actually a good idea to remove "merge_slashes" instead.) Bonus game: The dot (".") character is not reserved in URIs, yet it has special meaning when used in "/./" and "/../" constructs. And escaping dots won't help: since "." is not reserved, "/%2e/" is exactly equivalent to "/./", and it is in turn equivalent to "/". As such, APIs designed with arbitrary object names in URIs and assuming escaping is enough simply won't be able to work for these names. Hope this helps. -- Maxim Dounin http://mdounin.ru/ From e.grebenshchikov at nginx.com Tue Apr 4 16:17:22 2023 From: e.grebenshchikov at nginx.com (=?iso-8859-1?q?Eugene_Grebenschikov?=) Date: Tue, 04 Apr 2023 09:17:22 -0700 Subject: [PATCH] Tests: fixed warning in case of a closed stream Message-ID: <1197c152215b640aef0b.1680625042@DHNVMN3.> # HG changeset patch # User Eugene Grebenschikov # Date 1680624896 25200 # Tue Apr 04 09:14:56 2023 -0700 # Node ID 1197c152215b640aef0bdd3c3072f686298347b3 # Parent 0351dee227a8341e442feeb03920a46b259adeb5 Tests: fixed warning in case of a closed stream. diff -r 0351dee227a8 -r 1197c152215b lib/Test/Nginx/Stream.pm --- a/lib/Test/Nginx/Stream.pm Tue Mar 28 01:36:32 2023 +0400 +++ b/lib/Test/Nginx/Stream.pm Tue Apr 04 09:14:56 2023 -0700 @@ -65,8 +65,8 @@ $s->blocking(0); while (IO::Select->new($s)->can_write($extra{write_timeout} || 1.5)) { my $n = $s->syswrite($message); + last unless $n; log_out(substr($message, 0, $n)); - last unless $n; $message = substr($message, $n); last unless length $message; From mdounin at mdounin.ru Wed Apr 5 00:16:32 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 5 Apr 2023 03:16:32 +0300 Subject: [PATCH] Tests: fixed warning in case of a closed stream In-Reply-To: <1197c152215b640aef0b.1680625042@DHNVMN3.> References: <1197c152215b640aef0b.1680625042@DHNVMN3.> Message-ID: Hello! On Tue, Apr 04, 2023 at 09:17:22AM -0700, Eugene Grebenschikov wrote: > # HG changeset patch > # User Eugene Grebenschikov > # Date 1680624896 25200 > # Tue Apr 04 09:14:56 2023 -0700 > # Node ID 1197c152215b640aef0bdd3c3072f686298347b3 > # Parent 0351dee227a8341e442feeb03920a46b259adeb5 > Tests: fixed warning in case of a closed stream. 
> > diff -r 0351dee227a8 -r 1197c152215b lib/Test/Nginx/Stream.pm > --- a/lib/Test/Nginx/Stream.pm Tue Mar 28 01:36:32 2023 +0400 > +++ b/lib/Test/Nginx/Stream.pm Tue Apr 04 09:14:56 2023 -0700 > @@ -65,8 +65,8 @@ > $s->blocking(0); > while (IO::Select->new($s)->can_write($extra{write_timeout} || 1.5)) { > my $n = $s->syswrite($message); > + last unless $n; > log_out(substr($message, 0, $n)); > - last unless $n; > > $message = substr($message, $n); > last unless length $message; Looks good to me. -- Maxim Dounin http://mdounin.ru/ From xeioex at nginx.com Wed Apr 5 05:51:34 2023 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Wed, 05 Apr 2023 05:51:34 +0000 Subject: [njs] VM: removed vm->global_items. Message-ID: details: https://hg.nginx.org/njs/rev/b2cacf654542 branches: changeset: 2079:b2cacf654542 user: Dmitry Volyntsev date: Tue Apr 04 22:17:26 2023 -0700 description: VM: removed vm->global_items. diffstat: src/njs_vm.c | 15 ++++++++------- src/njs_vm.h | 1 - 2 files changed, 8 insertions(+), 8 deletions(-) diffs (71 lines): diff -r 8dcea0ba0bf8 -r b2cacf654542 src/njs_vm.c --- a/src/njs_vm.c Wed Mar 29 20:28:33 2023 -0700 +++ b/src/njs_vm.c Tue Apr 04 22:17:26 2023 -0700 @@ -145,6 +145,7 @@ njs_vm_destroy(njs_vm_t *vm) njs_int_t njs_vm_compile(njs_vm_t *vm, u_char **start, u_char *end) { + size_t global_items; njs_int_t ret; njs_str_t ast; njs_chb_t chain; @@ -156,6 +157,8 @@ njs_vm_compile(njs_vm_t *vm, u_char **st vm->codes = NULL; + global_items = (vm->global_scope != NULL) ? vm->global_scope->items : 0; + ret = njs_parser_init(vm, &parser, vm->global_scope, &vm->options.file, *start, end, 0); if (njs_slow_path(ret != NJS_OK)) { @@ -202,9 +205,7 @@ njs_vm_compile(njs_vm_t *vm, u_char **st return NJS_ERROR; } - vm->global_scope = scope; - - if (scope->items > vm->global_items) { + if (scope->items > global_items) { global = vm->levels[NJS_LEVEL_GLOBAL]; new = njs_scope_make(vm, scope->items); @@ -215,8 +216,8 @@ njs_vm_compile(njs_vm_t *vm, u_char **st vm->levels[NJS_LEVEL_GLOBAL] = new; if (global != NULL) { - while (vm->global_items != 0) { - vm->global_items--; + while (global_items != 0) { + global_items--; *new++ = *global++; } @@ -228,7 +229,7 @@ njs_vm_compile(njs_vm_t *vm, u_char **st vm->start = generator.code_start; vm->variables_hash = &scope->variables; - vm->global_items = scope->items; + vm->global_scope = scope; if (vm->options.disassemble) { njs_disassembler(vm); @@ -351,7 +352,7 @@ njs_vm_clone(njs_vm_t *vm, njs_external_ goto fail; } - global = njs_scope_make(nvm, nvm->global_items); + global = njs_scope_make(nvm, nvm->global_scope->items); if (njs_slow_path(global == NULL)) { goto fail; } diff -r 8dcea0ba0bf8 -r b2cacf654542 src/njs_vm.h --- a/src/njs_vm.h Wed Mar 29 20:28:33 2023 -0700 +++ b/src/njs_vm.h Tue Apr 04 22:17:26 2023 -0700 @@ -126,7 +126,6 @@ struct njs_vm_s { njs_arr_t *scope_absolute; njs_value_t **levels[NJS_LEVEL_MAX]; - size_t global_items; njs_external_ptr_t external; From xeioex at nginx.com Wed Apr 5 05:51:36 2023 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Wed, 05 Apr 2023 05:51:36 +0000 Subject: [njs] VM: removed vm->variables_hash. Message-ID: details: https://hg.nginx.org/njs/rev/294b69d82ea4 branches: changeset: 2080:294b69d82ea4 user: Dmitry Volyntsev date: Tue Apr 04 22:19:48 2023 -0700 description: VM: removed vm->variables_hash. 
diffstat: src/njs_builtin.c | 4 ++-- src/njs_shell.c | 7 ++++--- src/njs_vm.c | 1 - src/njs_vm.h | 1 - 4 files changed, 6 insertions(+), 7 deletions(-) diffs (67 lines): diff -r b2cacf654542 -r 294b69d82ea4 src/njs_builtin.c --- a/src/njs_builtin.c Tue Apr 04 22:17:26 2023 -0700 +++ b/src/njs_builtin.c Tue Apr 04 22:19:48 2023 -0700 @@ -588,7 +588,7 @@ njs_vm_expression_completions(njs_vm_t * var_node.key = (uintptr_t) lhq.value; - node = njs_rbtree_find(vm->variables_hash, &var_node.node); + node = njs_rbtree_find(&vm->global_scope->variables, &var_node.node); if (njs_slow_path(node == NULL)) { return NULL; } @@ -1018,7 +1018,7 @@ njs_global_this_prop_handler(njs_vm_t *v var_node.key = (uintptr_t) lhq.value; - rb_node = njs_rbtree_find(vm->variables_hash, &var_node.node); + rb_node = njs_rbtree_find(&vm->global_scope->variables, &var_node.node); if (rb_node == NULL) { return NJS_DECLINED; } diff -r b2cacf654542 -r 294b69d82ea4 src/njs_shell.c --- a/src/njs_shell.c Tue Apr 04 22:17:26 2023 -0700 +++ b/src/njs_shell.c Tue Apr 04 22:19:48 2023 -0700 @@ -1205,8 +1205,8 @@ njs_completion_generator(const char *tex cmpl->length = njs_strlen(text); cmpl->suffix_completions = NULL; - if (vm->variables_hash != NULL) { - cmpl->node = njs_rbtree_min(vm->variables_hash); + if (vm->global_scope != NULL) { + cmpl->node = njs_rbtree_min(&vm->global_scope->variables); } } @@ -1214,7 +1214,8 @@ next: switch (cmpl->phase) { case NJS_COMPLETION_VAR: - variables = vm->variables_hash; + variables = (vm->global_scope != NULL) ? &vm->global_scope->variables + : NULL; if (variables == NULL) { njs_next_phase(cmpl); diff -r b2cacf654542 -r 294b69d82ea4 src/njs_vm.c --- a/src/njs_vm.c Tue Apr 04 22:17:26 2023 -0700 +++ b/src/njs_vm.c Tue Apr 04 22:19:48 2023 -0700 @@ -228,7 +228,6 @@ njs_vm_compile(njs_vm_t *vm, u_char **st njs_scope_value_set(vm, njs_scope_global_this_index(), &vm->global_value); vm->start = generator.code_start; - vm->variables_hash = &scope->variables; vm->global_scope = scope; if (vm->options.disassemble) { diff -r b2cacf654542 -r 294b69d82ea4 src/njs_vm.h --- a/src/njs_vm.h Tue Apr 04 22:17:26 2023 -0700 +++ b/src/njs_vm.h Tue Apr 04 22:19:48 2023 -0700 @@ -132,7 +132,6 @@ struct njs_vm_s { njs_native_frame_t *top_frame; njs_frame_t *active_frame; - njs_rbtree_t *variables_hash; njs_lvlhsh_t keywords_hash; njs_lvlhsh_t values_hash; From xeioex at nginx.com Wed Apr 5 05:51:38 2023 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Wed, 05 Apr 2023 05:51:38 +0000 Subject: [njs] Tests: improved shell_test portability to different environments. Message-ID: details: https://hg.nginx.org/njs/rev/da32f93aa990 branches: changeset: 2081:da32f93aa990 user: Dmitry Volyntsev date: Tue Apr 04 22:19:49 2023 -0700 description: Tests: improved shell_test portability to different environments. 
diffstat: auto/expect | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (11 lines): diff -r 294b69d82ea4 -r da32f93aa990 auto/expect --- a/auto/expect Tue Apr 04 22:19:48 2023 -0700 +++ b/auto/expect Tue Apr 04 22:19:49 2023 -0700 @@ -22,6 +22,7 @@ if [ $njs_found = yes -a $NJS_HAVE_READL shell_test: njs test/shell_test.exp INPUTRC=test/inputrc PATH=$NJS_BUILD_DIR:\$(PATH) \ + LANG=en_US.UTF-8 TERM= \ expect -f test/shell_test.exp END From arut at nginx.com Thu Apr 6 11:41:33 2023 From: arut at nginx.com (=?iso-8859-1?q?Roman_Arutyunyan?=) Date: Thu, 06 Apr 2023 15:41:33 +0400 Subject: [PATCH] Stream: allow waiting on a blocked QUIC stream (ticket #2479) Message-ID: <078d2beff084108a10b6.1680781293@arut-laptop> # HG changeset patch # User Roman Arutyunyan # Date 1680781188 -14400 # Thu Apr 06 15:39:48 2023 +0400 # Branch quic # Node ID 078d2beff084108a10b6b0549d1696561cdee141 # Parent f68fdb01714121017a91a60370c074e59b730239 Stream: allow waiting on a blocked QUIC stream (ticket #2479). Previously, waiting on a shared connection was not allowed, because the only type of such connection was plain UDP. However, QUIC stream connections are also shared since they share socket descriptor with the listen connection. Meanwhile, it's perfectly normal to wait on such connections. The issue manifested itself with stream write errors when the amount of data exceeded stream buffer size or flow control. Now no error is triggered and Stream write module is allowed to wait for buffer space to become available. diff --git a/src/stream/ngx_stream_write_filter_module.c b/src/stream/ngx_stream_write_filter_module.c --- a/src/stream/ngx_stream_write_filter_module.c +++ b/src/stream/ngx_stream_write_filter_module.c @@ -277,7 +277,12 @@ ngx_stream_write_filter(ngx_stream_sessi *out = chain; if (chain) { - if (c->shared) { + if (c->shared +#if (NGX_STREAM_QUIC) + && c->quic == NULL +#endif + ) + { ngx_log_error(NGX_LOG_ALERT, c->log, 0, "shared connection is busy"); return NGX_ERROR; From Michael.Kourlas at solace.com Thu Apr 6 14:26:26 2023 From: Michael.Kourlas at solace.com (Michael Kourlas) Date: Thu, 6 Apr 2023 14:26:26 +0000 Subject: [PATCH] HTTP: Add new uri_normalization_percent_decode option In-Reply-To: References: Message-ID: Hello, Thanks for your lengthy explanation -- it's much appreciated. I'll find a way to support my use case in the upstream server instead. Best, Michael Kourlas ________________________________ Confidentiality notice This e-mail message and any attachment hereto contain confidential information which may be privileged and which is intended for the exclusive use of its addressee(s). If you receive this message in error, please inform sender immediately and destroy any copy thereof. Furthermore, any disclosure, distribution or copying of this message and/or any attachment hereto without the consent of the sender is strictly prohibited. Thank you. 
From pluknet at nginx.com Thu Apr 6 16:17:07 2023 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 6 Apr 2023 20:17:07 +0400 Subject: [PATCH] Stream: allow waiting on a blocked QUIC stream (ticket #2479) In-Reply-To: <078d2beff084108a10b6.1680781293@arut-laptop> References: <078d2beff084108a10b6.1680781293@arut-laptop> Message-ID: <4534CC2F-A9DB-46EE-95C2-BBC9182F2EF0@nginx.com> > On 6 Apr 2023, at 15:41, Roman Arutyunyan wrote: > > # HG changeset patch > # User Roman Arutyunyan > # Date 1680781188 -14400 > # Thu Apr 06 15:39:48 2023 +0400 > # Branch quic > # Node ID 078d2beff084108a10b6b0549d1696561cdee141 > # Parent f68fdb01714121017a91a60370c074e59b730239 > Stream: allow waiting on a blocked QUIC stream (ticket #2479). > > Previously, waiting on a shared connection was not allowed, because the only > type of such connection was plain UDP. However, QUIC stream connections are > also shared since they share socket descriptor with the listen connection. > Meanwhile, it's perfectly normal to wait on such connections. > > The issue manifested itself with stream write errors when the amount of data > exceeded stream buffer size or flow control. Now no error is triggered > and Stream write module is allowed to wait for buffer space to become available. > > diff --git a/src/stream/ngx_stream_write_filter_module.c b/src/stream/ngx_stream_write_filter_module.c > --- a/src/stream/ngx_stream_write_filter_module.c > +++ b/src/stream/ngx_stream_write_filter_module.c > @@ -277,7 +277,12 @@ ngx_stream_write_filter(ngx_stream_sessi > *out = chain; > > if (chain) { > - if (c->shared) { > + if (c->shared > +#if (NGX_STREAM_QUIC) > + && c->quic == NULL > +#endif > + ) > + { > ngx_log_error(NGX_LOG_ALERT, c->log, 0, > "shared connection is busy"); > return NGX_ERROR; Looks good. -- Sergey Kandaurov From pluknet at nginx.com Fri Apr 7 15:05:09 2023 From: pluknet at nginx.com (=?iso-8859-1?q?Sergey_Kandaurov?=) Date: Fri, 07 Apr 2023 19:05:09 +0400 Subject: [PATCH] Tests: stick ssl_sni_reneg.t with TLSv1.2 Message-ID: # HG changeset patch # User Sergey Kandaurov # Date 1680879680 -14400 # Fri Apr 07 19:01:20 2023 +0400 # Node ID f1f9fe0d4d7e2cda34cfb85721e4888f5991df49 # Parent 1197c152215b640aef0bdd3c3072f686298347b3 Tests: stick ssl_sni_reneg.t with TLSv1.2. To make it run after enabling TLSv1.3 by default in nginx 1.23.4. diff --git a/ssl_sni_reneg.t b/ssl_sni_reneg.t --- a/ssl_sni_reneg.t +++ b/ssl_sni_reneg.t @@ -55,6 +55,7 @@ http { ssl_certificate_key localhost.key; ssl_certificate localhost.crt; + ssl_protocols TLSv1.2; server { listen 127.0.0.1:8080 ssl; @@ -93,13 +94,6 @@ foreach my $name ('localhost') { } $t->run(); - -{ - my (undef, $ssl) = get_ssl_socket(8080); - plan(skip_all => "TLS 1.3 forbids renegotiation") - if Net::SSLeay::version($ssl) > 0x0303; -} - $t->plan(8); ############################################################################### From mdounin at mdounin.ru Sat Apr 8 21:05:48 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 9 Apr 2023 00:05:48 +0300 Subject: [PATCH] Tests: stick ssl_sni_reneg.t with TLSv1.2 In-Reply-To: References: Message-ID: Hello! On Fri, Apr 07, 2023 at 07:05:09PM +0400, Sergey Kandaurov wrote: > # HG changeset patch > # User Sergey Kandaurov > # Date 1680879680 -14400 > # Fri Apr 07 19:01:20 2023 +0400 > # Node ID f1f9fe0d4d7e2cda34cfb85721e4888f5991df49 > # Parent 1197c152215b640aef0bdd3c3072f686298347b3 > Tests: stick ssl_sni_reneg.t with TLSv1.2. > > To make it run after enabling TLSv1.3 by default in nginx 1.23.4. 
> > diff --git a/ssl_sni_reneg.t b/ssl_sni_reneg.t > --- a/ssl_sni_reneg.t > +++ b/ssl_sni_reneg.t > @@ -55,6 +55,7 @@ http { > > ssl_certificate_key localhost.key; > ssl_certificate localhost.crt; > + ssl_protocols TLSv1.2; > > server { > listen 127.0.0.1:8080 ssl; > @@ -93,13 +94,6 @@ foreach my $name ('localhost') { > } > > $t->run(); > - > -{ > - my (undef, $ssl) = get_ssl_socket(8080); > - plan(skip_all => "TLS 1.3 forbids renegotiation") > - if Net::SSLeay::version($ssl) > 0x0303; > -} > - > $t->plan(8); > > ############################################################################### Looks good. -- Maxim Dounin http://mdounin.ru/ From jordanc.carter at outlook.com Sun Apr 9 20:52:46 2023 From: jordanc.carter at outlook.com (J Carter) Date: Sun, 9 Apr 2023 21:52:46 +0100 Subject: [PATCH] Asynchronous close event handling for single peer upstreams Message-ID: Hello, resubmitting with the changes suggested. # HG changeset patch # User jordanc.carter at outlook.com # Date 1681067934 -3600 # Sun Apr 09 20:18:54 2023 +0100 # Node ID c8dcf584b36505e42bd2ea2965c1020069adb677 # Parent 5f1d05a21287ba0290dd3a17ad501595b442a194 Asynchronous close event handling for single peer upstreams Limits single peer upstreams to a single retry when consecutive asynchronous close events are encountered. diff -r 5f1d05a21287 -r c8dcf584b365 src/event/ngx_event_connect.h --- a/src/event/ngx_event_connect.h Tue Mar 28 18:01:54 2023 +0300 +++ b/src/event/ngx_event_connect.h Sun Apr 09 20:18:54 2023 +0100 @@ -17,6 +17,7 @@ #define NGX_PEER_KEEPALIVE 1 #define NGX_PEER_NEXT 2 #define NGX_PEER_FAILED 4 +#define NGX_PEER_ASYNC_FAILED 8 typedef struct ngx_peer_connection_s ngx_peer_connection_t; @@ -64,6 +65,7 @@ unsigned transparent:1; unsigned so_keepalive:1; unsigned down:1; + unsigned async_failed:1; /* ngx_connection_log_error_e */ unsigned log_error:2; diff -r 5f1d05a21287 -r c8dcf584b365 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Tue Mar 28 18:01:54 2023 +0300 +++ b/src/http/ngx_http_upstream.c Sun Apr 09 20:18:54 2023 +0100 @@ -4317,6 +4317,9 @@ { state = NGX_PEER_NEXT; + } else if (u->peer.cached && ft_type == NGX_HTTP_UPSTREAM_FT_ERROR) { + state = NGX_PEER_FAILED | NGX_PEER_ASYNC_FAILED; + } else { state = NGX_PEER_FAILED; } @@ -4330,11 +4333,6 @@ "upstream timed out"); } - if (u->peer.cached && ft_type == NGX_HTTP_UPSTREAM_FT_ERROR) { - /* TODO: inform balancer instead */ - u->peer.tries++; - } - switch (ft_type) { case NGX_HTTP_UPSTREAM_FT_TIMEOUT: @@ -4421,7 +4419,6 @@ return; } #endif - ngx_http_upstream_finalize_request(r, u, status); return; } diff -r 5f1d05a21287 -r c8dcf584b365 src/http/ngx_http_upstream_round_robin.c --- a/src/http/ngx_http_upstream_round_robin.c Tue Mar 28 18:01:54 2023 +0300 +++ b/src/http/ngx_http_upstream_round_robin.c Sun Apr 09 20:18:54 2023 +0100 @@ -616,14 +616,14 @@ ngx_http_upstream_rr_peer_lock(rrp->peers, peer); if (rrp->peers->single) { - - peer->conns--; + pc->tries = 0; - ngx_http_upstream_rr_peer_unlock(rrp->peers, peer); - ngx_http_upstream_rr_peers_unlock(rrp->peers); + if (state & NGX_PEER_ASYNC_FAILED && pc->async_failed == 0) { + pc->tries = 2; + pc->async_failed = 1; + } - pc->tries = 0; - return; + goto cleanup; } if (state & NGX_PEER_FAILED) { @@ -659,6 +659,7 @@ } } + cleanup: peer->conns--; ngx_http_upstream_rr_peer_unlock(rrp->peers, peer); -------------- next part -------------- A non-text attachment was scrubbed... 
Name: hgexport Type: application/octet-stream Size: 3002 bytes Desc: not available URL: From arut at nginx.com Mon Apr 10 11:47:35 2023 From: arut at nginx.com (=?iso-8859-1?q?Roman_Arutyunyan?=) Date: Mon, 10 Apr 2023 15:47:35 +0400 Subject: [PATCH] QUIC: removed TLSv1.3 requirement from README Message-ID: # HG changeset patch # User Roman Arutyunyan # Date 1681127095 -14400 # Mon Apr 10 15:44:55 2023 +0400 # Branch quic # Node ID b14b0c9887fbf22e24bd0d0449a261ced466f78c # Parent 9ea62b6250f225578f703da5e230853a7a84df7d QUIC: removed TLSv1.3 requirement from README. TLSv1.3 is enabled by default since d1cf09451ae8. diff --git a/README b/README --- a/README +++ b/README @@ -119,10 +119,6 @@ 3. Configuration ssl_early_data on; - Make sure that TLS 1.3 is configured which is required for QUIC: - - ssl_protocols TLSv1.3; - To enable GSO (Generic Segmentation Offloading): quic_gso on; @@ -175,7 +171,6 @@ Example configuration: ssl_certificate certs/example.com.crt; ssl_certificate_key certs/example.com.key; - ssl_protocols TLSv1.3; location / { # required for browsers to direct them into quic port From mdounin at mdounin.ru Mon Apr 10 15:29:40 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 10 Apr 2023 18:29:40 +0300 Subject: [PATCH] QUIC: removed TLSv1.3 requirement from README In-Reply-To: References: Message-ID: Hello! On Mon, Apr 10, 2023 at 03:47:35PM +0400, Roman Arutyunyan wrote: > # HG changeset patch > # User Roman Arutyunyan > # Date 1681127095 -14400 > # Mon Apr 10 15:44:55 2023 +0400 > # Branch quic > # Node ID b14b0c9887fbf22e24bd0d0449a261ced466f78c > # Parent 9ea62b6250f225578f703da5e230853a7a84df7d > QUIC: removed TLSv1.3 requirement from README. > > TLSv1.3 is enabled by default since d1cf09451ae8. > > diff --git a/README b/README > --- a/README > +++ b/README > @@ -119,10 +119,6 @@ 3. Configuration > > ssl_early_data on; > > - Make sure that TLS 1.3 is configured which is required for QUIC: > - > - ssl_protocols TLSv1.3; > - > To enable GSO (Generic Segmentation Offloading): > > quic_gso on; > @@ -175,7 +171,6 @@ Example configuration: > > ssl_certificate certs/example.com.crt; > ssl_certificate_key certs/example.com.key; > - ssl_protocols TLSv1.3; > > location / { > # required for browsers to direct them into quic port Looks good. -- Maxim Dounin http://mdounin.ru/ From pluknet at nginx.com Mon Apr 10 15:43:41 2023 From: pluknet at nginx.com (Sergey Kandaurov) Date: Mon, 10 Apr 2023 19:43:41 +0400 Subject: [PATCH] QUIC: removed TLSv1.3 requirement from README In-Reply-To: References: Message-ID: <55D9AC0C-BB8F-4DEE-8D98-AAEB8964757F@nginx.com> > On 10 Apr 2023, at 15:47, Roman Arutyunyan wrote: > > # HG changeset patch > # User Roman Arutyunyan > # Date 1681127095 -14400 > # Mon Apr 10 15:44:55 2023 +0400 > # Branch quic > # Node ID b14b0c9887fbf22e24bd0d0449a261ced466f78c > # Parent 9ea62b6250f225578f703da5e230853a7a84df7d > QUIC: removed TLSv1.3 requirement from README. > > TLSv1.3 is enabled by default since d1cf09451ae8. Please use the "README" prefix for consistency. > > diff --git a/README b/README > --- a/README > +++ b/README > @@ -119,10 +119,6 @@ 3. 
Configuration > > ssl_early_data on; > > - Make sure that TLS 1.3 is configured which is required for QUIC: > - > - ssl_protocols TLSv1.3; > - > To enable GSO (Generic Segmentation Offloading): > > quic_gso on; > @@ -175,7 +171,6 @@ Example configuration: > > ssl_certificate certs/example.com.crt; > ssl_certificate_key certs/example.com.key; > - ssl_protocols TLSv1.3; > > location / { > # required for browsers to direct them into quic port Looks good, as well. I've had pondering to retain the mention of TLSv1.3 in some form, but 1) with merge approaching, README will be evicted anyway soon 2) there is a configuration check for TLSv1.3 -- Sergey Kandaurov From xeioex at nginx.com Mon Apr 10 16:54:28 2023 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Mon, 10 Apr 2023 16:54:28 +0000 Subject: [njs] Version 0.7.12. Message-ID: details: https://hg.nginx.org/njs/rev/a1faa64d4972 branches: changeset: 2082:a1faa64d4972 user: Dmitry Volyntsev date: Mon Apr 10 09:50:19 2023 -0700 description: Version 0.7.12. diffstat: CHANGES | 17 +++++++++++++++++ 1 files changed, 17 insertions(+), 0 deletions(-) diffs (24 lines): diff -r da32f93aa990 -r a1faa64d4972 CHANGES --- a/CHANGES Tue Apr 04 22:19:49 2023 -0700 +++ b/CHANGES Mon Apr 10 09:50:19 2023 -0700 @@ -1,3 +1,20 @@ +Changes with njs 0.7.12 10 Apr 2023 + + nginx modules: + + *) Bugfix: fixed Headers() constructor in Fetch API. + + Core: + + *) Feature: added Hash.copy() method in "crypto" module. + + *) Feature: added "zlib" module. + + *) Improvement: added support for export {name as default} + statement. + + *) Bugfix: fixed Number constructor according to the spec. + Changes with njs 0.7.11 9 Mar 2023 nginx modules: From xeioex at nginx.com Mon Apr 10 16:54:30 2023 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Mon, 10 Apr 2023 16:54:30 +0000 Subject: [njs] Added tag 0.7.12 for changeset a1faa64d4972 Message-ID: details: https://hg.nginx.org/njs/rev/a421d49d1d5c branches: changeset: 2083:a421d49d1d5c user: Dmitry Volyntsev date: Mon Apr 10 09:53:24 2023 -0700 description: Added tag 0.7.12 for changeset a1faa64d4972 diffstat: .hgtags | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (8 lines): diff -r a1faa64d4972 -r a421d49d1d5c .hgtags --- a/.hgtags Mon Apr 10 09:50:19 2023 -0700 +++ b/.hgtags Mon Apr 10 09:53:24 2023 -0700 @@ -61,3 +61,4 @@ 0000000000000000000000000000000000000000 0000000000000000000000000000000000000000 0.7.10 3a1b46d51f040f5e7b9b81c3b2b312a2d272f0a3 0.7.10 26dd3824b9f343e2768609c1b673f788e3a5e154 0.7.11 +a1faa64d4972020413fd168e2b542bcc150819c0 0.7.12 From maxim at nginx.com Mon Apr 10 21:11:44 2023 From: maxim at nginx.com (Maxim Konovalov) Date: Mon, 10 Apr 2023 14:11:44 -0700 Subject: [PATCH] QUIC: removed TLSv1.3 requirement from README In-Reply-To: References: Message-ID: On 10.04.2023 04:47, Roman Arutyunyan wrote: > # HG changeset patch > # User Roman Arutyunyan > # Date 1681127095 -14400 > # Mon Apr 10 15:44:55 2023 +0400 > # Branch quic > # Node ID b14b0c9887fbf22e24bd0d0449a261ced466f78c > # Parent 9ea62b6250f225578f703da5e230853a7a84df7d > QUIC: removed TLSv1.3 requirement from README. > > TLSv1.3 is enabled by default since d1cf09451ae8. > > diff --git a/README b/README > --- a/README > +++ b/README > @@ -119,10 +119,6 @@ 3. Configuration > > ssl_early_data on; > > - Make sure that TLS 1.3 is configured which is required for QUIC: > - > - ssl_protocols TLSv1.3; > - > To enable GSO (Generic Segmentation Offloading): > [...] Well, TLSv1.3 is still required. 
You just don't need to add it to the list of ssl_protocols. I would remove it from the config example but keep a note that QUIC relies on TLSv1.3. -- Maxim Konovalov From pluknet at nginx.com Mon Apr 10 21:39:23 2023 From: pluknet at nginx.com (=?iso-8859-1?q?Sergey_Kandaurov?=) Date: Tue, 11 Apr 2023 01:39:23 +0400 Subject: [PATCH] QUIC: fixed OpenSSL compat layer with OpenSSL master branch Message-ID: # HG changeset patch # User Sergey Kandaurov # Date 1681162552 -14400 # Tue Apr 11 01:35:52 2023 +0400 # Branch quic # Node ID e058f1f9d40f185e19098d65a47e3be128d4cb46 # Parent 9ea62b6250f225578f703da5e230853a7a84df7d QUIC: fixed OpenSSL compat layer with OpenSSL master branch. The layer is enabled as a fallback if the QUIC support is configured and the BoringSSL API wasn't detected, or when using the --with-openssl option, also compatible with QuicTLS and LibreSSL. For the latter, the layer is assumed to be present if QUIC was requested, so it needs to be undefined to prevent QUIC API redefinition as appropriate. A previously used approach to test the TLSEXT_TYPE_quic_transport_parameters macro doesn't work with OpenSSL 3.2 master branch where this macro appeared with incompatible QUIC API. To fix the build there, the test is revised to pass only for QuicTLS and LibreSSL. diff --git a/src/event/quic/ngx_event_quic_openssl_compat.h b/src/event/quic/ngx_event_quic_openssl_compat.h --- a/src/event/quic/ngx_event_quic_openssl_compat.h +++ b/src/event/quic/ngx_event_quic_openssl_compat.h @@ -7,7 +7,8 @@ #ifndef _NGX_EVENT_QUIC_OPENSSL_COMPAT_H_INCLUDED_ #define _NGX_EVENT_QUIC_OPENSSL_COMPAT_H_INCLUDED_ -#ifdef TLSEXT_TYPE_quic_transport_parameters +#if defined SSL_R_MISSING_QUIC_TRANSPORT_PARAMETERS_EXTENSION \ + || defined LIBRESSL_VERSION_NUMBER #undef NGX_QUIC_OPENSSL_COMPAT #else From mdounin at mdounin.ru Tue Apr 11 13:48:13 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Apr 2023 16:48:13 +0300 Subject: nginx-1.24.0 draft Message-ID: Hello! Below are patches for the nginx-1.24.0 release, and corresponding changes to the site. # HG changeset patch # User Maxim Dounin # Date 1681177300 -10800 # Tue Apr 11 04:41:40 2023 +0300 # Branch stable-1.24 # Node ID 05cf7574d94bb980428cbb63aa488631c24b8000 # Parent 5f1d05a21287ba0290dd3a17ad501595b442a194 Stable branch. diff --git a/src/core/nginx.h b/src/core/nginx.h --- a/src/core/nginx.h +++ b/src/core/nginx.h @@ -9,8 +9,8 @@ #define _NGINX_H_INCLUDED_ -#define nginx_version 1023004 -#define NGINX_VERSION "1.23.4" +#define nginx_version 1024000 +#define NGINX_VERSION "1.24.0" #define NGINX_VER "nginx/" NGINX_VERSION #ifdef NGX_BUILD # HG changeset patch # User Maxim Dounin # Date 1681177534 -10800 # Tue Apr 11 04:45:34 2023 +0300 # Branch stable-1.24 # Node ID 420f96a6f7ac612b2b11750139cf8f4959803717 # Parent 05cf7574d94bb980428cbb63aa488631c24b8000 nginx-1.24.0-RELEASE diff --git a/docs/xml/nginx/changes.xml b/docs/xml/nginx/changes.xml --- a/docs/xml/nginx/changes.xml +++ b/docs/xml/nginx/changes.xml @@ -5,6 +5,20 @@ + + + + +Стабильная ветка 1.24.x. + + +1.24.x stable branch. 
+ + + + + + # HG changeset patch # User Maxim Dounin # Date 1681177534 -10800 # Tue Apr 11 04:45:34 2023 +0300 # Branch stable-1.24 # Node ID a4bbb03659dbc4a71cfa5a4dc5e00889ef76d2e6 # Parent 420f96a6f7ac612b2b11750139cf8f4959803717 release-1.24.0 tag diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -472,3 +472,4 @@ a63d0a70afea96813ba6667997bc7d68b5863f0d aa901551a7ebad1e8b0f8c11cb44e3424ba29707 release-1.23.2 ff3afd1ce6a6b65057741df442adfaa71a0e2588 release-1.23.3 ac779115ed6ee4f3039e9aea414a54e560450ee2 release-1.23.4 +420f96a6f7ac612b2b11750139cf8f4959803717 release-1.24.0 Site changes: # HG changeset patch # User Maxim Dounin # Date 1681178829 -10800 # Tue Apr 11 05:07:09 2023 +0300 # Node ID 583e46a19af473885e8f2d43fa6b31ed0f238892 # Parent b9ba7c498d95156861857809bc205aea1e8b445a nginx-1.24.0 diff --git a/text/en/CHANGES b/text/en/CHANGES-1.24 copy from text/en/CHANGES copy to text/en/CHANGES-1.24 --- a/text/en/CHANGES +++ b/text/en/CHANGES-1.24 @@ -1,4 +1,9 @@ +Changes with nginx 1.24.0 11 Apr 2023 + + *) 1.24.x stable branch. + + Changes with nginx 1.23.4 28 Mar 2023 *) Change: now TLSv1.3 protocol is enabled by default. diff --git a/text/ru/CHANGES.ru b/text/ru/CHANGES.ru-1.24 copy from text/ru/CHANGES.ru copy to text/ru/CHANGES.ru-1.24 --- a/text/ru/CHANGES.ru +++ b/text/ru/CHANGES.ru-1.24 @@ -1,4 +1,9 @@ +Изменения в nginx 1.24.0 11.04.2023 + + *) Стабильная ветка 1.24.x. + + Изменения в nginx 1.23.4 28.03.2023 *) Изменение: теперь протокол TLSv1.3 разрешён по умолчанию. diff --git a/xml/index.xml b/xml/index.xml --- a/xml/index.xml +++ b/xml/index.xml @@ -7,6 +7,23 @@ + + +nginx-1.24.0 +stable version has been released, +incorporating new features and bug fixes from the 1.23.x mainline branch — +including +improved handling of multiple header lines with identical names, +memory usage optimization in configurations with SSL proxying, +better sanity checking of the + directive +protocol parameters, +TLSv1.3 +protocol enabled by default, +and more. + + + njs-0.7.12 diff --git a/xml/versions.xml b/xml/versions.xml --- a/xml/versions.xml +++ b/xml/versions.xml @@ -10,6 +10,14 @@ + + + + + + + + @@ -18,7 +26,7 @@ - + -- Maxim Dounin http://mdounin.ru/ From pluknet at nginx.com Tue Apr 11 14:05:14 2023 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 11 Apr 2023 18:05:14 +0400 Subject: nginx-1.24.0 draft In-Reply-To: References: Message-ID: > On 11 Apr 2023, at 17:48, Maxim Dounin wrote: > > Hello! > > Below are patches for the nginx-1.24.0 release, and corresponding > changes to the site. > > # HG changeset patch > # User Maxim Dounin > # Date 1681177300 -10800 > # Tue Apr 11 04:41:40 2023 +0300 > # Branch stable-1.24 > # Node ID 05cf7574d94bb980428cbb63aa488631c24b8000 > # Parent 5f1d05a21287ba0290dd3a17ad501595b442a194 > Stable branch. 
> > diff --git a/src/core/nginx.h b/src/core/nginx.h > --- a/src/core/nginx.h > +++ b/src/core/nginx.h > @@ -9,8 +9,8 @@ > #define _NGINX_H_INCLUDED_ > > > -#define nginx_version 1023004 > -#define NGINX_VERSION "1.23.4" > +#define nginx_version 1024000 > +#define NGINX_VERSION "1.24.0" > #define NGINX_VER "nginx/" NGINX_VERSION > > #ifdef NGX_BUILD > # HG changeset patch > # User Maxim Dounin > # Date 1681177534 -10800 > # Tue Apr 11 04:45:34 2023 +0300 > # Branch stable-1.24 > # Node ID 420f96a6f7ac612b2b11750139cf8f4959803717 > # Parent 05cf7574d94bb980428cbb63aa488631c24b8000 > nginx-1.24.0-RELEASE > > diff --git a/docs/xml/nginx/changes.xml b/docs/xml/nginx/changes.xml > --- a/docs/xml/nginx/changes.xml > +++ b/docs/xml/nginx/changes.xml > @@ -5,6 +5,20 @@ > > > > + > + > + > + > +Стабильная ветка 1.24.x. > + > + > +1.24.x stable branch. > + > + > + > + > + > + > > > > # HG changeset patch > # User Maxim Dounin > # Date 1681177534 -10800 > # Tue Apr 11 04:45:34 2023 +0300 > # Branch stable-1.24 > # Node ID a4bbb03659dbc4a71cfa5a4dc5e00889ef76d2e6 > # Parent 420f96a6f7ac612b2b11750139cf8f4959803717 > release-1.24.0 tag > > diff --git a/.hgtags b/.hgtags > --- a/.hgtags > +++ b/.hgtags > @@ -472,3 +472,4 @@ a63d0a70afea96813ba6667997bc7d68b5863f0d > aa901551a7ebad1e8b0f8c11cb44e3424ba29707 release-1.23.2 > ff3afd1ce6a6b65057741df442adfaa71a0e2588 release-1.23.3 > ac779115ed6ee4f3039e9aea414a54e560450ee2 release-1.23.4 > +420f96a6f7ac612b2b11750139cf8f4959803717 release-1.24.0 > > > Site changes: > > # HG changeset patch > # User Maxim Dounin > # Date 1681178829 -10800 > # Tue Apr 11 05:07:09 2023 +0300 > # Node ID 583e46a19af473885e8f2d43fa6b31ed0f238892 > # Parent b9ba7c498d95156861857809bc205aea1e8b445a > nginx-1.24.0 > > diff --git a/text/en/CHANGES b/text/en/CHANGES-1.24 > copy from text/en/CHANGES > copy to text/en/CHANGES-1.24 > --- a/text/en/CHANGES > +++ b/text/en/CHANGES-1.24 > @@ -1,4 +1,9 @@ > > +Changes with nginx 1.24.0 11 Apr 2023 > + > + *) 1.24.x stable branch. > + > + > Changes with nginx 1.23.4 28 Mar 2023 > > *) Change: now TLSv1.3 protocol is enabled by default. > diff --git a/text/ru/CHANGES.ru b/text/ru/CHANGES.ru-1.24 > copy from text/ru/CHANGES.ru > copy to text/ru/CHANGES.ru-1.24 > --- a/text/ru/CHANGES.ru > +++ b/text/ru/CHANGES.ru-1.24 > @@ -1,4 +1,9 @@ > > +Изменения в nginx 1.24.0 11.04.2023 > + > + *) Стабильная ветка 1.24.x. > + > + > Изменения в nginx 1.23.4 28.03.2023 > > *) Изменение: теперь протокол TLSv1.3 разрешён по умолчанию. > diff --git a/xml/index.xml b/xml/index.xml > --- a/xml/index.xml > +++ b/xml/index.xml > @@ -7,6 +7,23 @@ > > > > + wrong date > + > +nginx-1.24.0 > +stable version has been released, > +incorporating new features and bug fixes from the 1.23.x mainline branch — > +including > +improved handling of multiple header lines with identical names, > +memory usage optimization in configurations with SSL proxying, > +better sanity checking of the > + directive > +protocol parameters, > +TLSv1.3 > +protocol enabled by default, > +and more. worth to mention TLS tickets keys rotation? 
> + > + > + > > > njs-0.7.12 > diff --git a/xml/versions.xml b/xml/versions.xml > --- a/xml/versions.xml > +++ b/xml/versions.xml > @@ -10,6 +10,14 @@ > > > > + > + > + > + > + > + > + > + > > > > @@ -18,7 +26,7 @@ > > > > - > + > > > > -- Sergey Kandaurov From arut at nginx.com Tue Apr 11 14:29:45 2023 From: arut at nginx.com (Roman Arutyunyan) Date: Tue, 11 Apr 2023 18:29:45 +0400 Subject: [PATCH] QUIC: removed TLSv1.3 requirement from README In-Reply-To: References: Message-ID: <20230411142945.xqfevqj5crsml5b5@N00W24XTQX> Hi, On Mon, Apr 10, 2023 at 02:11:44PM -0700, Maxim Konovalov wrote: > On 10.04.2023 04:47, Roman Arutyunyan wrote: > > # HG changeset patch > > # User Roman Arutyunyan > > # Date 1681127095 -14400 > > # Mon Apr 10 15:44:55 2023 +0400 > > # Branch quic > > # Node ID b14b0c9887fbf22e24bd0d0449a261ced466f78c > > # Parent 9ea62b6250f225578f703da5e230853a7a84df7d > > QUIC: removed TLSv1.3 requirement from README. > > > > TLSv1.3 is enabled by default since d1cf09451ae8. > > > > diff --git a/README b/README > > --- a/README > > +++ b/README > > @@ -119,10 +119,6 @@ 3. Configuration > > ssl_early_data on; > > - Make sure that TLS 1.3 is configured which is required for QUIC: > > - > > - ssl_protocols TLSv1.3; > > - > > To enable GSO (Generic Segmentation Offloading): > [...] > > Well, TLSv1.3 is still required. You just don't need to add it to the list > of ssl_protocols. I would remove it from the config example but keep a note > that QUIC relies on TLSv1.3. We can keep a note, but I'd like to avoid the directive following the note. -- Roman Arutyunyan -------------- next part -------------- # HG changeset patch # User Roman Arutyunyan # Date 1681223360 -14400 # Tue Apr 11 18:29:20 2023 +0400 # Branch quic # Node ID 8347620e0e762c5dea99247dc70fbbffd0c6b175 # Parent 9ea62b6250f225578f703da5e230853a7a84df7d README: revised TLSv1.3 requirement for QUIC. TLSv1.3 is enabled by default since d1cf09451ae8. diff --git a/README b/README --- a/README +++ b/README @@ -119,10 +119,6 @@ 3. Configuration ssl_early_data on; - Make sure that TLS 1.3 is configured which is required for QUIC: - - ssl_protocols TLSv1.3; - To enable GSO (Generic Segmentation Offloading): quic_gso on; @@ -135,6 +131,8 @@ 3. Configuration quic_host_key ; + QUIC requires TLSv1.3 protocol, which is enabled by the default + by "ssl_protocols" directive. By default, GSO Linux-specific optimization [10] is disabled. Enable it in case a corresponding network interface is configured to @@ -175,7 +173,6 @@ Example configuration: ssl_certificate certs/example.com.crt; ssl_certificate_key certs/example.com.key; - ssl_protocols TLSv1.3; location / { # required for browsers to direct them into quic port From mdounin at mdounin.ru Tue Apr 11 14:29:57 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Apr 2023 17:29:57 +0300 Subject: nginx-1.24.0 draft In-Reply-To: References: Message-ID: Hello! On Tue, Apr 11, 2023 at 06:05:14PM +0400, Sergey Kandaurov wrote: > > On 11 Apr 2023, at 17:48, Maxim Dounin wrote: > > > > Hello! > > > > Below are patches for the nginx-1.24.0 release, and corresponding > > changes to the site. > > > > # HG changeset patch > > # User Maxim Dounin > > # Date 1681177300 -10800 > > # Tue Apr 11 04:41:40 2023 +0300 > > # Branch stable-1.24 > > # Node ID 05cf7574d94bb980428cbb63aa488631c24b8000 > > # Parent 5f1d05a21287ba0290dd3a17ad501595b442a194 > > Stable branch. 
> > > > diff --git a/src/core/nginx.h b/src/core/nginx.h > > --- a/src/core/nginx.h > > +++ b/src/core/nginx.h > > @@ -9,8 +9,8 @@ > > #define _NGINX_H_INCLUDED_ > > > > > > -#define nginx_version 1023004 > > -#define NGINX_VERSION "1.23.4" > > +#define nginx_version 1024000 > > +#define NGINX_VERSION "1.24.0" > > #define NGINX_VER "nginx/" NGINX_VERSION > > > > #ifdef NGX_BUILD > > # HG changeset patch > > # User Maxim Dounin > > # Date 1681177534 -10800 > > # Tue Apr 11 04:45:34 2023 +0300 > > # Branch stable-1.24 > > # Node ID 420f96a6f7ac612b2b11750139cf8f4959803717 > > # Parent 05cf7574d94bb980428cbb63aa488631c24b8000 > > nginx-1.24.0-RELEASE > > > > diff --git a/docs/xml/nginx/changes.xml b/docs/xml/nginx/changes.xml > > --- a/docs/xml/nginx/changes.xml > > +++ b/docs/xml/nginx/changes.xml > > @@ -5,6 +5,20 @@ > > > > > > > > + > > + > > + > > + > > +Стабильная ветка 1.24.x. > > + > > + > > +1.24.x stable branch. > > + > > + > > + > > + > > + > > + > > > > > > > > # HG changeset patch > > # User Maxim Dounin > > # Date 1681177534 -10800 > > # Tue Apr 11 04:45:34 2023 +0300 > > # Branch stable-1.24 > > # Node ID a4bbb03659dbc4a71cfa5a4dc5e00889ef76d2e6 > > # Parent 420f96a6f7ac612b2b11750139cf8f4959803717 > > release-1.24.0 tag > > > > diff --git a/.hgtags b/.hgtags > > --- a/.hgtags > > +++ b/.hgtags > > @@ -472,3 +472,4 @@ a63d0a70afea96813ba6667997bc7d68b5863f0d > > aa901551a7ebad1e8b0f8c11cb44e3424ba29707 release-1.23.2 > > ff3afd1ce6a6b65057741df442adfaa71a0e2588 release-1.23.3 > > ac779115ed6ee4f3039e9aea414a54e560450ee2 release-1.23.4 > > +420f96a6f7ac612b2b11750139cf8f4959803717 release-1.24.0 > > > > > > Site changes: > > > > # HG changeset patch > > # User Maxim Dounin > > # Date 1681178829 -10800 > > # Tue Apr 11 05:07:09 2023 +0300 > > # Node ID 583e46a19af473885e8f2d43fa6b31ed0f238892 > > # Parent b9ba7c498d95156861857809bc205aea1e8b445a > > nginx-1.24.0 > > > > diff --git a/text/en/CHANGES b/text/en/CHANGES-1.24 > > copy from text/en/CHANGES > > copy to text/en/CHANGES-1.24 > > --- a/text/en/CHANGES > > +++ b/text/en/CHANGES-1.24 > > @@ -1,4 +1,9 @@ > > > > +Changes with nginx 1.24.0 11 Apr 2023 > > + > > + *) 1.24.x stable branch. > > + > > + > > Changes with nginx 1.23.4 28 Mar 2023 > > > > *) Change: now TLSv1.3 protocol is enabled by default. > > diff --git a/text/ru/CHANGES.ru b/text/ru/CHANGES.ru-1.24 > > copy from text/ru/CHANGES.ru > > copy to text/ru/CHANGES.ru-1.24 > > --- a/text/ru/CHANGES.ru > > +++ b/text/ru/CHANGES.ru-1.24 > > @@ -1,4 +1,9 @@ > > > > +Изменения в nginx 1.24.0 11.04.2023 > > + > > + *) Стабильная ветка 1.24.x. > > + > > + > > Изменения в nginx 1.23.4 28.03.2023 > > > > *) Изменение: теперь протокол TLSv1.3 разрешён по умолчанию. > > diff --git a/xml/index.xml b/xml/index.xml > > --- a/xml/index.xml > > +++ b/xml/index.xml > > @@ -7,6 +7,23 @@ > > > > > > > > + > > wrong date Fixed, thanks. > > + > > +nginx-1.24.0 > > +stable version has been released, > > +incorporating new features and bug fixes from the 1.23.x mainline branch — > > +including > > +improved handling of multiple header lines with identical names, > > +memory usage optimization in configurations with SSL proxying, > > +better sanity checking of the > > + directive > > +protocol parameters, > > +TLSv1.3 > > +protocol enabled by default, > > +and more. > > worth to mention TLS tickets keys rotation? Yes, sure. 
diff --git a/xml/index.xml b/xml/index.xml --- a/xml/index.xml +++ b/xml/index.xml @@ -7,7 +7,7 @@ - + nginx-1.24.0 stable version has been released, @@ -20,6 +20,10 @@ better sanity checking of the protocol parameters, TLSv1.3 protocol enabled by default, +automatic rotation of TLS session tickets encryption keys +when using shared memory in the + +directive, and more. -- Maxim Dounin http://mdounin.ru/ From pluknet at nginx.com Tue Apr 11 14:36:51 2023 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 11 Apr 2023 18:36:51 +0400 Subject: nginx-1.24.0 draft In-Reply-To: References: Message-ID: <33FF34EA-74C7-4915-A95D-7F8F8BEFBFD9@nginx.com> > On 11 Apr 2023, at 18:29, Maxim Dounin wrote: > > Hello! > > On Tue, Apr 11, 2023 at 06:05:14PM +0400, Sergey Kandaurov wrote: > >>> On 11 Apr 2023, at 17:48, Maxim Dounin wrote: >>> >>> Hello! >>> >>> Below are patches for the nginx-1.24.0 release, and corresponding >>> changes to the site. >>> >>> [..] >>> >>> Site changes: >>> >>> # HG changeset patch >>> # User Maxim Dounin >>> # Date 1681178829 -10800 >>> # Tue Apr 11 05:07:09 2023 +0300 >>> # Node ID 583e46a19af473885e8f2d43fa6b31ed0f238892 >>> # Parent b9ba7c498d95156861857809bc205aea1e8b445a >>> nginx-1.24.0 >>> >>> diff --git a/text/en/CHANGES b/text/en/CHANGES-1.24 >>> copy from text/en/CHANGES >>> copy to text/en/CHANGES-1.24 >>> --- a/text/en/CHANGES >>> +++ b/text/en/CHANGES-1.24 >>> @@ -1,4 +1,9 @@ >>> >>> +Changes with nginx 1.24.0 11 Apr 2023 >>> + >>> + *) 1.24.x stable branch. >>> + >>> + >>> Changes with nginx 1.23.4 28 Mar 2023 >>> >>> *) Change: now TLSv1.3 protocol is enabled by default. >>> diff --git a/text/ru/CHANGES.ru b/text/ru/CHANGES.ru-1.24 >>> copy from text/ru/CHANGES.ru >>> copy to text/ru/CHANGES.ru-1.24 >>> --- a/text/ru/CHANGES.ru >>> +++ b/text/ru/CHANGES.ru-1.24 >>> @@ -1,4 +1,9 @@ >>> >>> +Изменения в nginx 1.24.0 11.04.2023 >>> + >>> + *) Стабильная ветка 1.24.x. >>> + >>> + >>> Изменения в nginx 1.23.4 28.03.2023 >>> >>> *) Изменение: теперь протокол TLSv1.3 разрешён по умолчанию. >>> diff --git a/xml/index.xml b/xml/index.xml >>> --- a/xml/index.xml >>> +++ b/xml/index.xml >>> @@ -7,6 +7,23 @@ >>> >>> >>> >>> + >> >> wrong date > > Fixed, thanks. > >>> + >>> +nginx-1.24.0 >>> +stable version has been released, >>> +incorporating new features and bug fixes from the 1.23.x mainline branch — >>> +including >>> +improved handling of multiple header lines with identical names, >>> +memory usage optimization in configurations with SSL proxying, >>> +better sanity checking of the >>> + directive >>> +protocol parameters, >>> +TLSv1.3 >>> +protocol enabled by default, >>> +and more. >> >> worth to mention TLS tickets keys rotation? > > Yes, sure. > > diff --git a/xml/index.xml b/xml/index.xml > --- a/xml/index.xml > +++ b/xml/index.xml > @@ -7,7 +7,7 @@ > > > > - > + > > nginx-1.24.0 > stable version has been released, > @@ -20,6 +20,10 @@ better sanity checking of the > protocol parameters, > TLSv1.3 > protocol enabled by default, > +automatic rotation of TLS session tickets encryption keys > +when using shared memory in the > + > +directive, > and more. > > > Good for me, thanks. 
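As a usage note for the ticket keys change mentioned in the news entry: the automatic rotation applies when session data is kept in shared memory. A minimal sketch, with the cache name, size and timeout chosen arbitrarily:

    server {
        listen 443 ssl;

        ssl_certificate     example.com.crt;
        ssl_certificate_key example.com.key;

        # TLS session ticket encryption keys are rotated automatically
        # when the cache resides in shared memory
        ssl_session_cache   shared:SSL:10m;
        ssl_session_timeout 1h;
    }
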
-- Sergey Kandaurov From maxim at nginx.com Tue Apr 11 14:45:50 2023 From: maxim at nginx.com (Maxim Konovalov) Date: Tue, 11 Apr 2023 07:45:50 -0700 Subject: [PATCH] QUIC: removed TLSv1.3 requirement from README In-Reply-To: <20230411142945.xqfevqj5crsml5b5@N00W24XTQX> References: <20230411142945.xqfevqj5crsml5b5@N00W24XTQX> Message-ID: On 11.04.2023 07:29, Roman Arutyunyan wrote: > Hi, > > On Mon, Apr 10, 2023 at 02:11:44PM -0700, Maxim Konovalov wrote: >> On 10.04.2023 04:47, Roman Arutyunyan wrote: >>> # HG changeset patch >>> # User Roman Arutyunyan >>> # Date 1681127095 -14400 >>> # Mon Apr 10 15:44:55 2023 +0400 >>> # Branch quic >>> # Node ID b14b0c9887fbf22e24bd0d0449a261ced466f78c >>> # Parent 9ea62b6250f225578f703da5e230853a7a84df7d >>> QUIC: removed TLSv1.3 requirement from README. >>> >>> TLSv1.3 is enabled by default since d1cf09451ae8. >>> >>> diff --git a/README b/README >>> --- a/README >>> +++ b/README >>> @@ -119,10 +119,6 @@ 3. Configuration >>> ssl_early_data on; >>> - Make sure that TLS 1.3 is configured which is required for QUIC: >>> - >>> - ssl_protocols TLSv1.3; >>> - >>> To enable GSO (Generic Segmentation Offloading): >> [...] >> >> Well, TLSv1.3 is still required. You just don't need to add it to the list >> of ssl_protocols. I would remove it from the config example but keep a note >> that QUIC relies on TLSv1.3. > > We can keep a note, but I'd like to avoid the directive following the note. > Looks good! -- Maxim Konovalov From mdounin at mdounin.ru Tue Apr 11 14:58:42 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Apr 2023 17:58:42 +0300 Subject: nginx-1.24.0 draft In-Reply-To: <33FF34EA-74C7-4915-A95D-7F8F8BEFBFD9@nginx.com> References: <33FF34EA-74C7-4915-A95D-7F8F8BEFBFD9@nginx.com> Message-ID: Hello! On Tue, Apr 11, 2023 at 06:36:51PM +0400, Sergey Kandaurov wrote: > > > On 11 Apr 2023, at 18:29, Maxim Dounin wrote: > > > > Hello! > > > > On Tue, Apr 11, 2023 at 06:05:14PM +0400, Sergey Kandaurov wrote: > > > >>> On 11 Apr 2023, at 17:48, Maxim Dounin wrote: > >>> > >>> Hello! > >>> > >>> Below are patches for the nginx-1.24.0 release, and corresponding > >>> changes to the site. > >>> > >>> [..] > >>> > >>> Site changes: > >>> > >>> # HG changeset patch > >>> # User Maxim Dounin > >>> # Date 1681178829 -10800 > >>> # Tue Apr 11 05:07:09 2023 +0300 > >>> # Node ID 583e46a19af473885e8f2d43fa6b31ed0f238892 > >>> # Parent b9ba7c498d95156861857809bc205aea1e8b445a > >>> nginx-1.24.0 > >>> > >>> diff --git a/text/en/CHANGES b/text/en/CHANGES-1.24 > >>> copy from text/en/CHANGES > >>> copy to text/en/CHANGES-1.24 > >>> --- a/text/en/CHANGES > >>> +++ b/text/en/CHANGES-1.24 > >>> @@ -1,4 +1,9 @@ > >>> > >>> +Changes with nginx 1.24.0 11 Apr 2023 > >>> + > >>> + *) 1.24.x stable branch. > >>> + > >>> + > >>> Changes with nginx 1.23.4 28 Mar 2023 > >>> > >>> *) Change: now TLSv1.3 protocol is enabled by default. > >>> diff --git a/text/ru/CHANGES.ru b/text/ru/CHANGES.ru-1.24 > >>> copy from text/ru/CHANGES.ru > >>> copy to text/ru/CHANGES.ru-1.24 > >>> --- a/text/ru/CHANGES.ru > >>> +++ b/text/ru/CHANGES.ru-1.24 > >>> @@ -1,4 +1,9 @@ > >>> > >>> +Изменения в nginx 1.24.0 11.04.2023 > >>> + > >>> + *) Стабильная ветка 1.24.x. > >>> + > >>> + > >>> Изменения в nginx 1.23.4 28.03.2023 > >>> > >>> *) Изменение: теперь протокол TLSv1.3 разрешён по умолчанию. > >>> diff --git a/xml/index.xml b/xml/index.xml > >>> --- a/xml/index.xml > >>> +++ b/xml/index.xml > >>> @@ -7,6 +7,23 @@ > >>> > >>> > >>> > >>> + > >> > >> wrong date > > > > Fixed, thanks. 
> > > >>> + > >>> +nginx-1.24.0 > >>> +stable version has been released, > >>> +incorporating new features and bug fixes from the 1.23.x mainline branch — > >>> +including > >>> +improved handling of multiple header lines with identical names, > >>> +memory usage optimization in configurations with SSL proxying, > >>> +better sanity checking of the > >>> + directive > >>> +protocol parameters, > >>> +TLSv1.3 > >>> +protocol enabled by default, > >>> +and more. > >> > >> worth to mention TLS tickets keys rotation? > > > > Yes, sure. > > > > diff --git a/xml/index.xml b/xml/index.xml > > --- a/xml/index.xml > > +++ b/xml/index.xml > > @@ -7,7 +7,7 @@ > > > > > > > > - > > + > > > > nginx-1.24.0 > > stable version has been released, > > @@ -20,6 +20,10 @@ better sanity checking of the > > protocol parameters, > > TLSv1.3 > > protocol enabled by default, > > +automatic rotation of TLS session tickets encryption keys > > +when using shared memory in the > > + > > +directive, > > and more. > > > > > > > > Good for me, thanks. Pushed to: http://mdounin.ru/hg/nginx http://mdounin.ru/hg/nginx.org Release files: http://mdounin.ru/temp/nginx-1.24.0.tar.gz http://mdounin.ru/temp/nginx-1.24.0.tar.gz.asc http://mdounin.ru/temp/nginx-1.24.0.zip http://mdounin.ru/temp/nginx-1.24.0.zip.asc -- Maxim Dounin http://mdounin.ru/ From thresh at nginx.com Tue Apr 11 15:55:40 2023 From: thresh at nginx.com (Konstantin Pavlov) Date: Tue, 11 Apr 2023 15:55:40 +0000 Subject: [nginx] Stable branch. Message-ID: details: https://hg.nginx.org/nginx/rev/05cf7574d94b branches: stable-1.24 changeset: 8157:05cf7574d94b user: Maxim Dounin date: Tue Apr 11 04:41:40 2023 +0300 description: Stable branch. diffstat: src/core/nginx.h | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines): diff -r 5f1d05a21287 -r 05cf7574d94b src/core/nginx.h --- a/src/core/nginx.h Tue Mar 28 18:01:54 2023 +0300 +++ b/src/core/nginx.h Tue Apr 11 04:41:40 2023 +0300 @@ -9,8 +9,8 @@ #define _NGINX_H_INCLUDED_ -#define nginx_version 1023004 -#define NGINX_VERSION "1.23.4" +#define nginx_version 1024000 +#define NGINX_VERSION "1.24.0" #define NGINX_VER "nginx/" NGINX_VERSION #ifdef NGX_BUILD From thresh at nginx.com Tue Apr 11 15:55:43 2023 From: thresh at nginx.com (Konstantin Pavlov) Date: Tue, 11 Apr 2023 15:55:43 +0000 Subject: [nginx] nginx-1.24.0-RELEASE Message-ID: details: https://hg.nginx.org/nginx/rev/420f96a6f7ac branches: stable-1.24 changeset: 8158:420f96a6f7ac user: Maxim Dounin date: Tue Apr 11 04:45:34 2023 +0300 description: nginx-1.24.0-RELEASE diffstat: docs/xml/nginx/changes.xml | 14 ++++++++++++++ 1 files changed, 14 insertions(+), 0 deletions(-) diffs (24 lines): diff -r 05cf7574d94b -r 420f96a6f7ac docs/xml/nginx/changes.xml --- a/docs/xml/nginx/changes.xml Tue Apr 11 04:41:40 2023 +0300 +++ b/docs/xml/nginx/changes.xml Tue Apr 11 04:45:34 2023 +0300 @@ -5,6 +5,20 @@ + + + + +Стабильная ветка 1.24.x. + + +1.24.x stable branch. 
+ + + + + + From thresh at nginx.com Tue Apr 11 15:55:46 2023 From: thresh at nginx.com (Konstantin Pavlov) Date: Tue, 11 Apr 2023 15:55:46 +0000 Subject: [nginx] release-1.24.0 tag Message-ID: details: https://hg.nginx.org/nginx/rev/a4bbb03659db branches: stable-1.24 changeset: 8159:a4bbb03659db user: Maxim Dounin date: Tue Apr 11 04:45:34 2023 +0300 description: release-1.24.0 tag diffstat: .hgtags | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (8 lines): diff -r 420f96a6f7ac -r a4bbb03659db .hgtags --- a/.hgtags Tue Apr 11 04:45:34 2023 +0300 +++ b/.hgtags Tue Apr 11 04:45:34 2023 +0300 @@ -472,3 +472,4 @@ a63d0a70afea96813ba6667997bc7d68b5863f0d aa901551a7ebad1e8b0f8c11cb44e3424ba29707 release-1.23.2 ff3afd1ce6a6b65057741df442adfaa71a0e2588 release-1.23.3 ac779115ed6ee4f3039e9aea414a54e560450ee2 release-1.23.4 +420f96a6f7ac612b2b11750139cf8f4959803717 release-1.24.0 From xeioex at nginx.com Wed Apr 12 01:42:37 2023 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Wed, 12 Apr 2023 01:42:37 +0000 Subject: [njs] Version bump. Message-ID: details: https://hg.nginx.org/njs/rev/46c0af7318b8 branches: changeset: 2084:46c0af7318b8 user: Dmitry Volyntsev date: Mon Apr 10 23:06:29 2023 -0700 description: Version bump. diffstat: src/njs.h | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines): diff -r a421d49d1d5c -r 46c0af7318b8 src/njs.h --- a/src/njs.h Mon Apr 10 09:53:24 2023 -0700 +++ b/src/njs.h Mon Apr 10 23:06:29 2023 -0700 @@ -11,8 +11,8 @@ #include -#define NJS_VERSION "0.7.12" -#define NJS_VERSION_NUMBER 0x00070c +#define NJS_VERSION "0.8.0" +#define NJS_VERSION_NUMBER 0x000800 #include /* STDOUT_FILENO, STDERR_FILENO */ From xeioex at nginx.com Wed Apr 12 01:42:39 2023 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Wed, 12 Apr 2023 01:42:39 +0000 Subject: [njs] Simplified functional stack unwinding. Message-ID: details: https://hg.nginx.org/njs/rev/29ddc56f7aa5 branches: changeset: 2085:29ddc56f7aa5 user: Dmitry Volyntsev date: Mon Apr 10 23:06:34 2023 -0700 description: Simplified functional stack unwinding. diffstat: src/njs_function.c | 73 +++++++++++++---------------------------------------- src/njs_function.h | 18 ------------- src/njs_vm.c | 5 +-- src/njs_vm.h | 3 +- src/njs_vmcode.c | 9 ++---- 5 files changed, 24 insertions(+), 84 deletions(-) diffs (235 lines): diff -r 46c0af7318b8 -r 29ddc56f7aa5 src/njs_function.c --- a/src/njs_function.c Mon Apr 10 23:06:29 2023 -0700 +++ b/src/njs_function.c Mon Apr 10 23:06:34 2023 -0700 @@ -620,7 +620,7 @@ njs_function_native_call(njs_vm_t *vm) { njs_int_t ret; njs_function_t *function; - njs_native_frame_t *native, *previous; + njs_native_frame_t *native; njs_function_native_t call; native = vm->top_frame; @@ -656,17 +656,9 @@ njs_function_native_call(njs_vm_t *vm) return ret; } - if (ret == NJS_DECLINED) { - return NJS_OK; - } - - previous = njs_function_previous_frame(native); + njs_vm_scopes_restore(vm, native); - njs_vm_scopes_restore(vm, native, previous); - - if (!native->skip) { - *native->retval = vm->retval; - } + *native->retval = vm->retval; njs_function_frame_free(vm, native); @@ -700,20 +692,10 @@ njs_function_frame_invoke(njs_vm_t *vm, void njs_function_frame_free(njs_vm_t *vm, njs_native_frame_t *native) { - njs_native_frame_t *previous; - - do { - previous = native->previous; - - /* GC: free frame->local, etc. 
*/ - - if (native->size != 0) { - vm->spare_stack_size += native->size; - njs_mp_free(vm->mem_pool, native); - } - - native = previous; - } while (native->skip); + if (native->size != 0) { + vm->spare_stack_size += native->size; + njs_mp_free(vm->mem_pool, native); + } } @@ -1236,10 +1218,9 @@ static njs_int_t njs_function_prototype_call(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, njs_index_t unused) { - njs_int_t ret; - njs_function_t *function; - const njs_value_t *this; - njs_native_frame_t *frame; + njs_int_t ret; + njs_value_t retval; + const njs_value_t *this; if (!njs_is_function(&args[0])) { njs_type_error(vm, "\"this\" argument is not a function"); @@ -1255,24 +1236,15 @@ njs_function_prototype_call(njs_vm_t *vm nargs = 0; } - frame = vm->top_frame; - - /* Skip the "call" method frame. */ - frame->skip = 1; - - function = njs_function(&args[0]); - - ret = njs_function_frame(vm, function, this, &args[2], nargs, 0); + ret = njs_function_call(vm, njs_function(&args[0]), this, &args[2], nargs, + &retval); if (njs_slow_path(ret != NJS_OK)) { return ret; } - ret = njs_function_frame_invoke(vm, frame->retval); - if (njs_slow_path(ret != NJS_OK)) { - return ret; - } + njs_value_assign(&vm->retval, &retval); - return NJS_DECLINED; + return NJS_OK; } @@ -1282,8 +1254,7 @@ njs_function_prototype_apply(njs_vm_t *v { int64_t i, length; njs_int_t ret; - njs_frame_t *frame; - njs_value_t *this, *arr_like; + njs_value_t retval, *this, *arr_like; njs_array_t *arr; njs_function_t *func; @@ -1332,22 +1303,14 @@ njs_function_prototype_apply(njs_vm_t *v activate: - /* Skip the "apply" method frame. */ - vm->top_frame->skip = 1; - - frame = (njs_frame_t *) vm->top_frame; - - ret = njs_function_frame(vm, func, this, args, length, 0); + ret = njs_function_call(vm, func, this, args, length, &retval); if (njs_slow_path(ret != NJS_OK)) { return ret; } - ret = njs_function_frame_invoke(vm, frame->native.retval); - if (njs_slow_path(ret != NJS_OK)) { - return ret; - } + njs_value_assign(&vm->retval, &retval); - return NJS_DECLINED; + return NJS_OK; } diff -r 46c0af7318b8 -r 29ddc56f7aa5 src/njs_function.h --- a/src/njs_function.h Mon Apr 10 23:06:29 2023 -0700 +++ b/src/njs_function.h Mon Apr 10 23:06:34 2023 -0700 @@ -65,9 +65,6 @@ struct njs_native_frame_s { uint8_t native; /* 1 bit */ /* Function is called as constructor with "new" keyword. */ uint8_t ctor; /* 1 bit */ - - /* Skip the Function.call() and Function.apply() methods frames. 
*/ - uint8_t skip; /* 1 bit */ }; @@ -161,21 +158,6 @@ njs_function_frame(njs_vm_t *vm, njs_fun } -njs_inline njs_native_frame_t * -njs_function_previous_frame(njs_native_frame_t *frame) -{ - njs_native_frame_t *previous; - - do { - previous = frame->previous; - frame = previous; - - } while (frame->skip); - - return frame; -} - - njs_inline njs_int_t njs_function_call(njs_vm_t *vm, njs_function_t *function, const njs_value_t *this, const njs_value_t *args, diff -r 46c0af7318b8 -r 29ddc56f7aa5 src/njs_vm.c --- a/src/njs_vm.c Mon Apr 10 23:06:29 2023 -0700 +++ b/src/njs_vm.c Mon Apr 10 23:06:34 2023 -0700 @@ -442,12 +442,11 @@ njs_vm_invoke(njs_vm_t *vm, njs_function void -njs_vm_scopes_restore(njs_vm_t *vm, njs_native_frame_t *native, - njs_native_frame_t *previous) +njs_vm_scopes_restore(njs_vm_t *vm, njs_native_frame_t *native) { njs_frame_t *frame; - vm->top_frame = previous; + vm->top_frame = native->previous; if (native->function->native) { return; diff -r 46c0af7318b8 -r 29ddc56f7aa5 src/njs_vm.h --- a/src/njs_vm.h Mon Apr 10 23:06:29 2023 -0700 +++ b/src/njs_vm.h Mon Apr 10 23:06:34 2023 -0700 @@ -239,8 +239,7 @@ struct njs_vm_shared_s { }; -void njs_vm_scopes_restore(njs_vm_t *vm, njs_native_frame_t *frame, - njs_native_frame_t *previous); +void njs_vm_scopes_restore(njs_vm_t *vm, njs_native_frame_t *frame); njs_int_t njs_builtin_objects_create(njs_vm_t *vm); njs_int_t njs_builtin_objects_clone(njs_vm_t *vm, njs_value_t *global); diff -r 46c0af7318b8 -r 29ddc56f7aa5 src/njs_vmcode.c --- a/src/njs_vmcode.c Mon Apr 10 23:06:29 2023 -0700 +++ b/src/njs_vmcode.c Mon Apr 10 23:06:34 2023 -0700 @@ -1853,7 +1853,7 @@ error: lambda_call = (native == &vm->active_frame->native); - njs_vm_scopes_restore(vm, native, previous); + njs_vm_scopes_restore(vm, native); if (native->size != 0) { vm->spare_stack_size += native->size; @@ -2674,8 +2674,7 @@ njs_function_new_object(njs_vm_t *vm, nj static njs_jump_off_t njs_vmcode_return(njs_vm_t *vm, njs_value_t *invld, njs_value_t *retval) { - njs_frame_t *frame; - njs_native_frame_t *previous; + njs_frame_t *frame; frame = (njs_frame_t *) vm->top_frame; @@ -2688,9 +2687,7 @@ njs_vmcode_return(njs_vm_t *vm, njs_valu } } - previous = njs_function_previous_frame(&frame->native); - - njs_vm_scopes_restore(vm, &frame->native, previous); + njs_vm_scopes_restore(vm, &frame->native); *frame->native.retval = *retval; From pluknet at nginx.com Wed Apr 12 12:55:48 2023 From: pluknet at nginx.com (=?iso-8859-1?q?Sergey_Kandaurov?=) Date: Wed, 12 Apr 2023 16:55:48 +0400 Subject: [PATCH 0 of 2] certificate compression Message-ID: Notably, long certificate chains are compressed better, with zlib demonstrating a slightly worse ratio. no zlib brotli zstd 1 .973 .964 .954 2 .907 .881 .877 3 .877 .853 .849 4 .856 .837 .836 5 .842 .827 .827 6 .835 .821 .822 Further, using ECDSA certificates (which itself produces Certificate TLS messages of a smaller size compared to RSA, apparently due to "using keys with small public key representations" (c) RFC 9001) allows to achieve better compression results. Applied to QUIC handshake, this may conserve an additional round trip when using long certificate chains with a not yet validated address. Testing on self-signed certificates demonstrates an additional round trip on a 5th RSA and 11th ECDSA certificate, real results may vary. 
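For the measurements below, the proposed feature is enabled per virtual server with the directive introduced in patch 1 of this series; a minimal sketch, with addresses and certificate paths as placeholders:

    server {
        listen 443 ssl;

        ssl_certificate     certs/example.com.crt;
        ssl_certificate_key certs/example.com.key;

        # off by default; the negotiated compression algorithm depends on
        # the TLS library's built-in support
        ssl_certificate_compression on;
    }
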
=== rsa === server datagrams sent w/ compression cert msg ratio 1 1252 177 1252 167 .98 2 1252 865 1252 747 .91 3 1252 1252 369 1252 1252 123 .88 4 1252 1252 1057 1252 1252 672 .86 5 1252 1252 1252 - 561 1252 1252 1210 .84 6 1252 1252 1252 - 1248 1252 1252 1252 - 578 .84 === ecdsa === 1 1200 1200 .90 2 1200 1200 .65 3 1252 178 1200 .56 4 1252 470 1200 .51 5 1252 760 1200 .48 6 1252 1053 1252 111 .47 7 1252 1252 158 1252 218 .45 8 1252 1252 450 1252 322 .44 9 1252 1252 740 1252 426 .43 A 1252 1252 1033 1252 529 .42 B 1252 1252 1252 - 139 1252 631 .42 C 1252 1252 1252 - 431 1252 737 .41 Feedback is welcome. From pluknet at nginx.com Wed Apr 12 12:55:49 2023 From: pluknet at nginx.com (=?iso-8859-1?q?Sergey_Kandaurov?=) Date: Wed, 12 Apr 2023 16:55:49 +0400 Subject: [PATCH 1 of 2] SSL: support for TLSv1.3 certificate compression (RFC 8879) In-Reply-To: References: Message-ID: <06458cd5733cd2ffaa4e.1681304149@enoparse.local> # HG changeset patch # User Sergey Kandaurov # Date 1681304029 -14400 # Wed Apr 12 16:53:49 2023 +0400 # Node ID 06458cd5733cd2ffaa4e2d26d357524a0934a7eb # Parent 5f1d05a21287ba0290dd3a17ad501595b442a194 SSL: support for TLSv1.3 certificate compression (RFC 8879). Certificates are precompressed using the "ssl_certificate_compression" directive, disabled by default. A negotiated certificate-compression algorithm depends on the OpenSSL library builtin support. diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -847,6 +847,29 @@ ngx_ssl_password_callback(char *buf, int ngx_int_t +ngx_ssl_certificate_compression(ngx_conf_t *cf, ngx_ssl_t *ssl, + ngx_uint_t enable) +{ + if (!enable) { + return NGX_OK; + } + +#ifdef TLSEXT_comp_cert_none + + if (SSL_CTX_compress_certs(ssl->ctx, 0)) { + return NGX_OK; + } + +#endif + + ngx_log_error(NGX_LOG_WARN, ssl->log, 0, + "\"ssl_certificate_compression\" ignored, not supported"); + + return NGX_OK; +} + + +ngx_int_t ngx_ssl_ciphers(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *ciphers, ngx_uint_t prefer_server_ciphers) { diff --git a/src/event/ngx_event_openssl.h b/src/event/ngx_event_openssl.h --- a/src/event/ngx_event_openssl.h +++ b/src/event/ngx_event_openssl.h @@ -189,6 +189,8 @@ ngx_int_t ngx_ssl_certificate(ngx_conf_t ngx_str_t *cert, ngx_str_t *key, ngx_array_t *passwords); ngx_int_t ngx_ssl_connection_certificate(ngx_connection_t *c, ngx_pool_t *pool, ngx_str_t *cert, ngx_str_t *key, ngx_array_t *passwords); +ngx_int_t ngx_ssl_certificate_compression(ngx_conf_t *cf, ngx_ssl_t *ssl, + ngx_uint_t enable); ngx_int_t ngx_ssl_ciphers(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *ciphers, ngx_uint_t prefer_server_ciphers); diff --git a/src/http/modules/ngx_http_ssl_module.c b/src/http/modules/ngx_http_ssl_module.c --- a/src/http/modules/ngx_http_ssl_module.c +++ b/src/http/modules/ngx_http_ssl_module.c @@ -121,6 +121,13 @@ static ngx_command_t ngx_http_ssl_comma 0, NULL }, + { ngx_string("ssl_certificate_compression"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_HTTP_SRV_CONF_OFFSET, + offsetof(ngx_http_ssl_srv_conf_t, certificate_compression), + NULL }, + { ngx_string("ssl_dhparam"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_CONF_TAKE1, ngx_conf_set_str_slot, @@ -581,6 +588,7 @@ ngx_http_ssl_create_srv_conf(ngx_conf_t sscf->enable = NGX_CONF_UNSET; sscf->prefer_server_ciphers = NGX_CONF_UNSET; + sscf->certificate_compression = NGX_CONF_UNSET; sscf->early_data = NGX_CONF_UNSET; sscf->reject_handshake = 
NGX_CONF_UNSET; sscf->buffer_size = NGX_CONF_UNSET_SIZE; @@ -628,6 +636,9 @@ ngx_http_ssl_merge_srv_conf(ngx_conf_t * ngx_conf_merge_value(conf->prefer_server_ciphers, prev->prefer_server_ciphers, 0); + ngx_conf_merge_value(conf->certificate_compression, + prev->certificate_compression, 0); + ngx_conf_merge_value(conf->early_data, prev->early_data, 0); ngx_conf_merge_value(conf->reject_handshake, prev->reject_handshake, 0); @@ -791,6 +802,13 @@ ngx_http_ssl_merge_srv_conf(ngx_conf_t * { return NGX_CONF_ERROR; } + + if (ngx_ssl_certificate_compression(cf, &conf->ssl, + conf->certificate_compression) + != NGX_OK) + { + return NGX_CONF_ERROR; + } } conf->ssl.buffer_size = conf->buffer_size; diff --git a/src/http/modules/ngx_http_ssl_module.h b/src/http/modules/ngx_http_ssl_module.h --- a/src/http/modules/ngx_http_ssl_module.h +++ b/src/http/modules/ngx_http_ssl_module.h @@ -20,6 +20,7 @@ typedef struct { ngx_ssl_t ssl; ngx_flag_t prefer_server_ciphers; + ngx_flag_t certificate_compression; ngx_flag_t early_data; ngx_flag_t reject_handshake; diff --git a/src/mail/ngx_mail_ssl_module.c b/src/mail/ngx_mail_ssl_module.c --- a/src/mail/ngx_mail_ssl_module.c +++ b/src/mail/ngx_mail_ssl_module.c @@ -111,6 +111,13 @@ static ngx_command_t ngx_mail_ssl_comma 0, NULL }, + { ngx_string("ssl_certificate_compression"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_ssl_conf_t, certificate_compression), + NULL }, + { ngx_string("ssl_dhparam"), NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, ngx_conf_set_str_slot, @@ -329,6 +336,7 @@ ngx_mail_ssl_create_conf(ngx_conf_t *cf) scf->passwords = NGX_CONF_UNSET_PTR; scf->conf_commands = NGX_CONF_UNSET_PTR; scf->prefer_server_ciphers = NGX_CONF_UNSET; + scf->certificate_compression = NGX_CONF_UNSET; scf->verify = NGX_CONF_UNSET_UINT; scf->verify_depth = NGX_CONF_UNSET_UINT; scf->builtin_session_cache = NGX_CONF_UNSET; @@ -359,6 +367,9 @@ ngx_mail_ssl_merge_conf(ngx_conf_t *cf, ngx_conf_merge_value(conf->prefer_server_ciphers, prev->prefer_server_ciphers, 0); + ngx_conf_merge_value(conf->certificate_compression, + prev->certificate_compression, 0); + ngx_conf_merge_bitmask_value(conf->protocols, prev->protocols, (NGX_CONF_BITMASK_SET |NGX_SSL_TLSv1|NGX_SSL_TLSv1_1 @@ -467,6 +478,13 @@ ngx_mail_ssl_merge_conf(ngx_conf_t *cf, return NGX_CONF_ERROR; } + if (ngx_ssl_certificate_compression(cf, &conf->ssl, + conf->certificate_compression) + != NGX_OK) + { + return NGX_CONF_ERROR; + } + if (conf->verify) { if (conf->client_certificate.len == 0 && conf->verify != 3) { diff --git a/src/mail/ngx_mail_ssl_module.h b/src/mail/ngx_mail_ssl_module.h --- a/src/mail/ngx_mail_ssl_module.h +++ b/src/mail/ngx_mail_ssl_module.h @@ -22,6 +22,7 @@ typedef struct { ngx_flag_t enable; ngx_flag_t prefer_server_ciphers; + ngx_flag_t certificate_compression; ngx_ssl_t ssl; diff --git a/src/stream/ngx_stream_ssl_module.c b/src/stream/ngx_stream_ssl_module.c --- a/src/stream/ngx_stream_ssl_module.c +++ b/src/stream/ngx_stream_ssl_module.c @@ -114,6 +114,13 @@ static ngx_command_t ngx_stream_ssl_com 0, NULL }, + { ngx_string("ssl_certificate_compression"), + NGX_STREAM_MAIN_CONF|NGX_STREAM_SRV_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_STREAM_SRV_CONF_OFFSET, + offsetof(ngx_stream_ssl_conf_t, certificate_compression), + NULL }, + { ngx_string("ssl_dhparam"), NGX_STREAM_MAIN_CONF|NGX_STREAM_SRV_CONF|NGX_CONF_TAKE1, ngx_conf_set_str_slot, @@ -674,6 +681,7 @@ ngx_stream_ssl_create_conf(ngx_conf_t *c 
scf->passwords = NGX_CONF_UNSET_PTR; scf->conf_commands = NGX_CONF_UNSET_PTR; scf->prefer_server_ciphers = NGX_CONF_UNSET; + scf->certificate_compression = NGX_CONF_UNSET; scf->verify = NGX_CONF_UNSET_UINT; scf->verify_depth = NGX_CONF_UNSET_UINT; scf->builtin_session_cache = NGX_CONF_UNSET; @@ -702,6 +710,9 @@ ngx_stream_ssl_merge_conf(ngx_conf_t *cf ngx_conf_merge_value(conf->prefer_server_ciphers, prev->prefer_server_ciphers, 0); + ngx_conf_merge_value(conf->certificate_compression, + prev->certificate_compression, 0); + ngx_conf_merge_bitmask_value(conf->protocols, prev->protocols, (NGX_CONF_BITMASK_SET |NGX_SSL_TLSv1|NGX_SSL_TLSv1_1 @@ -828,6 +839,13 @@ ngx_stream_ssl_merge_conf(ngx_conf_t *cf { return NGX_CONF_ERROR; } + + if (ngx_ssl_certificate_compression(cf, &conf->ssl, + conf->certificate_compression) + != NGX_OK) + { + return NGX_CONF_ERROR; + } } if (conf->verify) { diff --git a/src/stream/ngx_stream_ssl_module.h b/src/stream/ngx_stream_ssl_module.h --- a/src/stream/ngx_stream_ssl_module.h +++ b/src/stream/ngx_stream_ssl_module.h @@ -18,6 +18,7 @@ typedef struct { ngx_msec_t handshake_timeout; ngx_flag_t prefer_server_ciphers; + ngx_flag_t certificate_compression; ngx_ssl_t ssl; From pluknet at nginx.com Wed Apr 12 12:55:50 2023 From: pluknet at nginx.com (=?iso-8859-1?q?Sergey_Kandaurov?=) Date: Wed, 12 Apr 2023 16:55:50 +0400 Subject: [PATCH 2 of 2] SSL: support for TLSv1.3 certificate compression with BoringSSL In-Reply-To: References: Message-ID: <09a8a2f9aa68656ee45f.1681304150@enoparse.local> # HG changeset patch # User Sergey Kandaurov # Date 1681304032 -14400 # Wed Apr 12 16:53:52 2023 +0400 # Node ID 09a8a2f9aa68656ee45fd90119d4402c6f707a6f # Parent 06458cd5733cd2ffaa4e2d26d357524a0934a7eb SSL: support for TLSv1.3 certificate compression with BoringSSL. Certificates are compressed with zlib and cached in SSL context exdata on the first callback invocation. 
diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -9,6 +9,10 @@ #include #include +#if (defined TLSEXT_cert_compression_zlib && NGX_ZLIB) +#include +#endif + #define NGX_SSL_PASSWORD_BUFFER_SIZE 4096 @@ -24,6 +28,10 @@ static EVP_PKEY *ngx_ssl_load_certificat ngx_str_t *key, ngx_array_t *passwords); static int ngx_ssl_password_callback(char *buf, int size, int rwflag, void *userdata); +#if (defined TLSEXT_cert_compression_zlib && NGX_ZLIB) +static int ngx_ssl_cert_compression_callback(ngx_ssl_conn_t *ssl_conn, + CBB *out, const uint8_t *in, size_t in_len); +#endif static int ngx_ssl_verify_callback(int ok, X509_STORE_CTX *x509_store); static void ngx_ssl_info_callback(const ngx_ssl_conn_t *ssl_conn, int where, int ret); @@ -137,6 +145,7 @@ int ngx_ssl_ocsp_index; int ngx_ssl_certificate_index; int ngx_ssl_next_certificate_index; int ngx_ssl_certificate_name_index; +int ngx_ssl_certificate_comp_index; int ngx_ssl_stapling_index; @@ -247,6 +256,14 @@ ngx_ssl_init(ngx_log_t *log) return NGX_ERROR; } + ngx_ssl_certificate_comp_index = SSL_CTX_get_ex_new_index(0, NULL, NULL, + NULL, NULL); + if (ngx_ssl_certificate_comp_index == -1) { + ngx_ssl_error(NGX_LOG_ALERT, log, 0, + "SSL_CTX_get_ex_new_index() failed"); + return NGX_ERROR; + } + ngx_ssl_stapling_index = X509_get_ex_new_index(0, NULL, NULL, NULL, NULL); if (ngx_ssl_stapling_index == -1) { @@ -280,6 +297,14 @@ ngx_ssl_create(ngx_ssl_t *ssl, ngx_uint_ return NGX_ERROR; } + if (SSL_CTX_set_ex_data(ssl->ctx, ngx_ssl_certificate_comp_index, NULL) + == 0) + { + ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, + "SSL_CTX_set_ex_data() failed"); + return NGX_ERROR; + } + ssl->buffer_size = NGX_SSL_BUFSIZE; /* client side options */ @@ -860,6 +885,15 @@ ngx_ssl_certificate_compression(ngx_conf return NGX_OK; } +#elif (defined TLSEXT_cert_compression_zlib && NGX_ZLIB) + + if (SSL_CTX_add_cert_compression_alg(ssl->ctx, TLSEXT_cert_compression_zlib, + ngx_ssl_cert_compression_callback, + NULL)) + { + return NGX_OK; + } + #endif ngx_log_error(NGX_LOG_WARN, ssl->log, 0, @@ -869,6 +903,49 @@ ngx_ssl_certificate_compression(ngx_conf } +#if (defined TLSEXT_cert_compression_zlib && NGX_ZLIB) + +static int +ngx_ssl_cert_compression_callback(ngx_ssl_conn_t *ssl_conn, CBB *out, + const uint8_t *in, size_t in_len) +{ + SSL_CTX *ssl_ctx; + ngx_str_t *comp; + ngx_connection_t *c; + + ssl_ctx = SSL_get_SSL_CTX(ssl_conn); + comp = SSL_CTX_get_ex_data(ssl_ctx, ngx_ssl_certificate_comp_index); + + if (comp == NULL) { + c = ngx_ssl_get_connection((ngx_ssl_conn_t *) ssl_conn); + + comp = ngx_alloc(sizeof(ngx_str_t), c->log); + if (comp == NULL) { + return 0; + } + + comp->len = compressBound(in_len); + comp->data = ngx_alloc(comp->len, c->log); + if (comp->data == NULL) { + ngx_free(comp); + return 0; + } + + if (compress(comp->data, &comp->len, in, in_len) != Z_OK) { + ngx_free(comp->data); + ngx_free(comp); + return 0; + } + + SSL_CTX_set_ex_data(ssl_ctx, ngx_ssl_certificate_comp_index, comp); + } + + return CBB_add_bytes(out, comp->data, comp->len); +} + +#endif + + ngx_int_t ngx_ssl_ciphers(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *ciphers, ngx_uint_t prefer_server_ciphers) @@ -4832,7 +4909,8 @@ ngx_ssl_cleanup_ctx(void *data) { ngx_ssl_t *ssl = data; - X509 *cert, *next; + X509 *cert, *next; + ngx_str_t *comp; cert = SSL_CTX_get_ex_data(ssl->ctx, ngx_ssl_certificate_index); @@ -4842,6 +4920,13 @@ ngx_ssl_cleanup_ctx(void *data) cert = next; } + comp = 
SSL_CTX_get_ex_data(ssl->ctx, ngx_ssl_certificate_comp_index); + + if (comp != NULL) { + ngx_free(comp->data); + ngx_free(comp); + } + SSL_CTX_free(ssl->ctx); } From pluknet at nginx.com Wed Apr 12 13:44:29 2023 From: pluknet at nginx.com (=?iso-8859-1?q?Sergey_Kandaurov?=) Date: Wed, 12 Apr 2023 17:44:29 +0400 Subject: [PATCH] Added stream modules realip and ssl_preread to win32 builds Message-ID: # HG changeset patch # User Sergey Kandaurov # Date 1681306935 -14400 # Wed Apr 12 17:42:15 2023 +0400 # Node ID bdfbd7ed2433d1a68d466f353983829b17f6df1f # Parent 5f1d05a21287ba0290dd3a17ad501595b442a194 Added stream modules realip and ssl_preread to win32 builds. diff --git a/misc/GNUmakefile b/misc/GNUmakefile --- a/misc/GNUmakefile +++ b/misc/GNUmakefile @@ -75,6 +75,8 @@ win32: --with-http_slice_module \ --with-mail \ --with-stream \ + --with-stream_realip_module \ + --with-stream_ssl_preread_module \ --with-openssl=$(OBJS)/lib/$(OPENSSL) \ --with-openssl-opt="no-asm no-tests -D_WIN32_WINNT=0x0501" \ --with-http_ssl_module \ From xeioex at nginx.com Thu Apr 13 01:28:44 2023 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Thu, 13 Apr 2023 01:28:44 +0000 Subject: [njs] VM: removed unused NJS_VMCODE_OBJECT_COPY instruction. Message-ID: details: https://hg.nginx.org/njs/rev/b2bd614ce046 branches: changeset: 2086:b2bd614ce046 user: Dmitry Volyntsev date: Wed Apr 12 18:26:40 2023 -0700 description: VM: removed unused NJS_VMCODE_OBJECT_COPY instruction. diffstat: src/njs_disassembler.c | 2 - src/njs_vmcode.c | 56 -------------------------------------------------- src/njs_vmcode.h | 1 - 3 files changed, 0 insertions(+), 59 deletions(-) diffs (110 lines): diff -r 29ddc56f7aa5 -r b2bd614ce046 src/njs_disassembler.c --- a/src/njs_disassembler.c Mon Apr 10 23:06:34 2023 -0700 +++ b/src/njs_disassembler.c Wed Apr 12 18:26:40 2023 -0700 @@ -29,8 +29,6 @@ static njs_code_name_t code_names[] = { njs_str("REGEXP ") }, { NJS_VMCODE_TEMPLATE_LITERAL, sizeof(njs_vmcode_template_literal_t), njs_str("TEMPLATE LITERAL") }, - { NJS_VMCODE_OBJECT_COPY, sizeof(njs_vmcode_object_copy_t), - njs_str("OBJECT COPY ") }, { NJS_VMCODE_FUNCTION_COPY, sizeof(njs_vmcode_function_copy_t), njs_str("FUNCTION COPY ") }, diff -r 29ddc56f7aa5 -r b2bd614ce046 src/njs_vmcode.c --- a/src/njs_vmcode.c Mon Apr 10 23:06:34 2023 -0700 +++ b/src/njs_vmcode.c Wed Apr 12 18:26:40 2023 -0700 @@ -20,8 +20,6 @@ static njs_jump_off_t njs_vmcode_argumen static njs_jump_off_t njs_vmcode_regexp(njs_vm_t *vm, u_char *pc); static njs_jump_off_t njs_vmcode_template_literal(njs_vm_t *vm, njs_value_t *inlvd1, njs_value_t *inlvd2); -static njs_jump_off_t njs_vmcode_object_copy(njs_vm_t *vm, njs_value_t *value, - njs_value_t *invld); static njs_jump_off_t njs_vmcode_function_copy(njs_vm_t *vm, njs_value_t *value, njs_index_t retval); @@ -206,7 +204,6 @@ njs_vmcode_interpreter(njs_vm_t *vm, u_c NJS_GOTO_ROW(NJS_VMCODE_LEFT_SHIFT), NJS_GOTO_ROW(NJS_VMCODE_RIGHT_SHIFT), NJS_GOTO_ROW(NJS_VMCODE_UNSIGNED_RIGHT_SHIFT), - NJS_GOTO_ROW(NJS_VMCODE_OBJECT_COPY), NJS_GOTO_ROW(NJS_VMCODE_TEMPLATE_LITERAL), NJS_GOTO_ROW(NJS_VMCODE_PROPERTY_IN), NJS_GOTO_ROW(NJS_VMCODE_PROPERTY_DELETE), @@ -874,23 +871,6 @@ NEXT_LBL; njs_set_uint32(retval, njs_number_to_uint32(num) >> u32); NEXT; - CASE (NJS_VMCODE_OBJECT_COPY): - njs_vmcode_debug_opcode(); - - njs_vmcode_operand(vm, vmcode->operand2, value1); - - ret = njs_vmcode_object_copy(vm, value1, NULL); - - if (njs_slow_path(ret < 0 && ret >= NJS_PREEMPT)) { - goto error; - } - - njs_vmcode_operand(vm, vmcode->operand1, 
retval); - njs_release(vm, retval); - *retval = vm->retval; - - BREAK; - CASE (NJS_VMCODE_TEMPLATE_LITERAL): njs_vmcode_debug_opcode(); @@ -2042,42 +2022,6 @@ njs_vmcode_template_literal(njs_vm_t *vm static njs_jump_off_t -njs_vmcode_object_copy(njs_vm_t *vm, njs_value_t *value, njs_value_t *invld) -{ - njs_object_t *object; - njs_function_t *function; - - switch (value->type) { - - case NJS_OBJECT: - object = njs_object_value_copy(vm, value); - if (njs_slow_path(object == NULL)) { - return NJS_ERROR; - } - - break; - - case NJS_FUNCTION: - function = njs_function_value_copy(vm, value); - if (njs_slow_path(function == NULL)) { - return NJS_ERROR; - } - - break; - - default: - break; - } - - vm->retval = *value; - - njs_retain(value); - - return sizeof(njs_vmcode_object_copy_t); -} - - -static njs_jump_off_t njs_vmcode_function_copy(njs_vm_t *vm, njs_value_t *value, njs_index_t retidx) { njs_value_t *retval; diff -r 29ddc56f7aa5 -r b2bd614ce046 src/njs_vmcode.h --- a/src/njs_vmcode.h Mon Apr 10 23:06:34 2023 -0700 +++ b/src/njs_vmcode.h Wed Apr 12 18:26:40 2023 -0700 @@ -92,7 +92,6 @@ enum { NJS_VMCODE_LEFT_SHIFT, NJS_VMCODE_RIGHT_SHIFT, NJS_VMCODE_UNSIGNED_RIGHT_SHIFT, - NJS_VMCODE_OBJECT_COPY, NJS_VMCODE_TEMPLATE_LITERAL, NJS_VMCODE_PROPERTY_IN, NJS_VMCODE_PROPERTY_DELETE, From xeioex at nginx.com Thu Apr 13 01:28:46 2023 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Thu, 13 Apr 2023 01:28:46 +0000 Subject: [njs] VM: simplified NJS_VMCODE_TEMPLATE_LITERAL instruction. Message-ID: details: https://hg.nginx.org/njs/rev/5665eebfd00c branches: changeset: 2087:5665eebfd00c user: Dmitry Volyntsev date: Wed Apr 12 18:26:42 2023 -0700 description: VM: simplified NJS_VMCODE_TEMPLATE_LITERAL instruction. diffstat: src/njs_vmcode.c | 40 +++++++++++++--------------------------- 1 files changed, 13 insertions(+), 27 deletions(-) diffs (78 lines): diff -r b2bd614ce046 -r 5665eebfd00c src/njs_vmcode.c --- a/src/njs_vmcode.c Wed Apr 12 18:26:40 2023 -0700 +++ b/src/njs_vmcode.c Wed Apr 12 18:26:42 2023 -0700 @@ -19,7 +19,7 @@ static njs_jump_off_t njs_vmcode_functio static njs_jump_off_t njs_vmcode_arguments(njs_vm_t *vm, u_char *pc); static njs_jump_off_t njs_vmcode_regexp(njs_vm_t *vm, u_char *pc); static njs_jump_off_t njs_vmcode_template_literal(njs_vm_t *vm, - njs_value_t *inlvd1, njs_value_t *inlvd2); + njs_value_t *retval); static njs_jump_off_t njs_vmcode_function_copy(njs_vm_t *vm, njs_value_t *value, njs_index_t retval); @@ -874,18 +874,14 @@ NEXT_LBL; CASE (NJS_VMCODE_TEMPLATE_LITERAL): njs_vmcode_debug_opcode(); - value2 = (njs_value_t *) vmcode->operand1; - - ret = njs_vmcode_template_literal(vm, NULL, value2); + njs_vmcode_operand(vm, vmcode->operand1, retval); + + ret = njs_vmcode_template_literal(vm, retval); if (njs_slow_path(ret < 0 && ret >= NJS_PREEMPT)) { goto error; } - njs_vmcode_operand(vm, vmcode->operand1, retval); - njs_release(vm, retval); - *retval = vm->retval; - BREAK; CASE (NJS_VMCODE_PROPERTY_IN): @@ -1987,11 +1983,9 @@ njs_vmcode_regexp(njs_vm_t *vm, u_char * static njs_jump_off_t -njs_vmcode_template_literal(njs_vm_t *vm, njs_value_t *invld1, - njs_value_t *retval) +njs_vmcode_template_literal(njs_vm_t *vm, njs_value_t *retval) { njs_array_t *array; - njs_value_t *value; njs_jump_off_t ret; static const njs_function_t concat = { @@ -1999,22 +1993,14 @@ njs_vmcode_template_literal(njs_vm_t *vm .u.native = njs_string_prototype_concat }; - value = njs_scope_valid_value(vm, (njs_index_t) retval); - - if (!njs_is_primitive(value)) { - array = njs_array(value); - - ret = 
njs_function_frame(vm, (njs_function_t *) &concat, - &njs_string_empty, array->start, - array->length, 0); - if (njs_slow_path(ret != NJS_OK)) { - return ret; - } - - ret = njs_function_frame_invoke(vm, value); - if (njs_slow_path(ret != NJS_OK)) { - return ret; - } + njs_assert(njs_is_array(retval)); + + array = njs_array(retval); + + ret = njs_function_call(vm, (njs_function_t *) &concat, &njs_string_empty, + array->start, array->length, retval); + if (njs_slow_path(ret != NJS_OK)) { + return ret; } return sizeof(njs_vmcode_template_literal_t); From mdounin at mdounin.ru Mon Apr 17 03:31:26 2023 From: mdounin at mdounin.ru (=?utf-8?q?Maxim_Dounin?=) Date: Mon, 17 Apr 2023 06:31:26 +0300 Subject: [PATCH 02 of 11] Tests: removed unneeded require from proxy_ssl_keepalive.t In-Reply-To: References: Message-ID: <6f0148ef1991d92a003c.1681702286@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1681702250 -10800 # Mon Apr 17 06:30:50 2023 +0300 # Node ID 6f0148ef1991d92a003c8529c8cce9a8dd49e706 # Parent a01b7d84f4355073a00f43760fc512e03b4452c3 Tests: removed unneeded require from proxy_ssl_keepalive.t. diff --git a/proxy_ssl_keepalive.t b/proxy_ssl_keepalive.t --- a/proxy_ssl_keepalive.t +++ b/proxy_ssl_keepalive.t @@ -22,9 +22,6 @@ use Test::Nginx; select STDERR; $| = 1; select STDOUT; $| = 1; -eval { require IO::Socket::SSL; }; -plan(skip_all => 'IO::Socket::SSL not installed') if $@; - my $t = Test::Nginx->new()->has(qw/http http_ssl proxy upstream_keepalive/) ->has_daemon('openssl')->plan(3) ->write_file_expand('nginx.conf', <<'EOF'); From mdounin at mdounin.ru Mon Apr 17 03:31:24 2023 From: mdounin at mdounin.ru (=?utf-8?q?Maxim_Dounin?=) Date: Mon, 17 Apr 2023 06:31:24 +0300 Subject: [PATCH 00 of 11] SSL tests simplified Message-ID: Hello! The following patch series simplifies various SSL tests by converting them to a common infrastructure based on IO::Socket::SSL. In particular, this ensures that various operations are properly guarded with timeouts and properly handle SIGPIPEs (which is important when testing with intentionally broken malloc(), see 96:ecff5407867c). -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Mon Apr 17 03:31:25 2023 From: mdounin at mdounin.ru (=?utf-8?q?Maxim_Dounin?=) Date: Mon, 17 Apr 2023 06:31:25 +0300 Subject: [PATCH 01 of 11] Tests: SIGPIPE handling in mail tests In-Reply-To: References: Message-ID: # HG changeset patch # User Maxim Dounin # Date 1681702248 -10800 # Mon Apr 17 06:30:48 2023 +0300 # Node ID a01b7d84f4355073a00f43760fc512e03b4452c3 # Parent 36a4563f7f005184547575f5ac4f22ef53a59c72 Tests: SIGPIPE handling in mail tests. In contrast to http tests, mail tests generally do not try to handle SIGPIPE when writing to a socket, and instead rely on $SIG{PIPE} being set at the start of the test (see 96:ecff5407867c). Fixed some tests which don't do this. 
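For context, the convention these tests are expected to follow is a single handler set near the top of the script; a minimal sketch (illustrative, not part of the patch):

    # Set once near the top of the test script.  Without it, a write to a
    # socket that nginx has already closed delivers SIGPIPE and terminates
    # the whole script; with it, the write simply fails with EPIPE and the
    # test can carry on and report results.
    local $SIG{PIPE} = 'IGNORE';

    # Later writes can then just be checked (or ignored) as usual, e.g.:
    # print $s "QUIT\r\n" or warn "write failed: $!";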
diff --git a/mail_capability.t b/mail_capability.t --- a/mail_capability.t +++ b/mail_capability.t @@ -25,6 +25,8 @@ use Test::Nginx::SMTP; select STDERR; $| = 1; select STDOUT; $| = 1; +local $SIG{PIPE} = 'IGNORE'; + my $t = Test::Nginx->new()->has(qw/mail mail_ssl imap pop3 smtp/) ->has_daemon('openssl')->plan(17); diff --git a/mail_error_log.t b/mail_error_log.t --- a/mail_error_log.t +++ b/mail_error_log.t @@ -26,6 +26,8 @@ use Test::Nginx::IMAP; select STDERR; $| = 1; select STDOUT; $| = 1; +local $SIG{PIPE} = 'IGNORE'; + plan(skip_all => 'win32') if $^O eq 'MSWin32'; my $t = Test::Nginx->new()->has(qw/mail imap http rewrite/); diff --git a/mail_ssl.t b/mail_ssl.t --- a/mail_ssl.t +++ b/mail_ssl.t @@ -25,6 +25,8 @@ use Test::Nginx::SMTP; select STDERR; $| = 1; select STDOUT; $| = 1; +local $SIG{PIPE} = 'IGNORE'; + eval { require Net::SSLeay; Net::SSLeay::load_error_strings(); diff --git a/mail_ssl_conf_command.t b/mail_ssl_conf_command.t --- a/mail_ssl_conf_command.t +++ b/mail_ssl_conf_command.t @@ -22,6 +22,8 @@ use Test::Nginx; select STDERR; $| = 1; select STDOUT; $| = 1; +local $SIG{PIPE} = 'IGNORE'; + eval { require Net::SSLeay; Net::SSLeay::load_error_strings(); diff --git a/mail_ssl_session_reuse.t b/mail_ssl_session_reuse.t --- a/mail_ssl_session_reuse.t +++ b/mail_ssl_session_reuse.t @@ -23,6 +23,8 @@ use Test::Nginx; select STDERR; $| = 1; select STDOUT; $| = 1; +local $SIG{PIPE} = 'IGNORE'; + eval { require Net::SSLeay; Net::SSLeay::load_error_strings(); From mdounin at mdounin.ru Mon Apr 17 03:31:27 2023 From: mdounin at mdounin.ru (=?utf-8?q?Maxim_Dounin?=) Date: Mon, 17 Apr 2023 06:31:27 +0300 Subject: [PATCH 03 of 11] Tests: added has_feature() tests for IO::Socket::SSL In-Reply-To: References: Message-ID: # HG changeset patch # User Maxim Dounin # Date 1681702252 -10800 # Mon Apr 17 06:30:52 2023 +0300 # Node ID f704912ed09f3494a815709710c3744b0adca50b # Parent 6f0148ef1991d92a003c8529c8cce9a8dd49e706 Tests: added has_feature() tests for IO::Socket::SSL. The following distinct features are supported: - "socket_ssl", which requires IO::Socket::SSL and also implies existence of the IO::Socket::SSL::SSL_VERIFY_NONE() symbol. It is used by most of the tests. - "socket_ssl_sni", which requires IO::Socket::SSL with the can_client_sni() function (1.84), and SNI support available in Net::SSLeay and the OpenSSL library being used. Used by ssl_sni.t, ssl_sni_sessions.t, stream_ssl_preread.t. Additional Net::SSLeay testing is believed to be unneeded and was removed. - "socket_ssl_alpn", which requires IO::Socket::SSL with ALPN support (2.009), and ALPN support in Net::SSLeay and the OpenSSL library being used. Used by h2_ssl.t, h2_ssl_verify_client.t, stream_ssl_alpn.t, stream_ssl_preread_alpn.t. - "socket_ssl_sslversion", which requires IO::Socket::SSL with the get_sslversion() and get_sslversion_int() methods (1.964). Used by mail_imap_ssl.t. - "socket_ssl_reused", which requires IO::Socket::SSL with the get_session_reused() method (2.057). To be used in the following patches. This makes it possible to simplify and unify various SSL tests.
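As a usage sketch (hypothetical snippet, not taken from the series), a test lists the base requirement in has() and guards optional checks with has_feature():

    # Skip the whole test unless IO::Socket::SSL is present and usable.
    my $t = Test::Nginx->new()->has(qw/http http_ssl socket_ssl/)
        ->has_daemon('openssl')->plan(3);

    # Guard individual checks that additionally need client-side ALPN.
    SKIP: {
    skip 'no ALPN support in IO::Socket::SSL', 1
        unless $t->has_feature('socket_ssl_alpn');

    # ... ALPN-dependent checks go here ...
    }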
diff --git a/h2_ssl.t b/h2_ssl.t --- a/h2_ssl.t +++ b/h2_ssl.t @@ -25,7 +25,7 @@ use Test::Nginx::HTTP2; select STDERR; $| = 1; select STDOUT; $| = 1; -my $t = Test::Nginx->new()->has(qw/http http_ssl http_v2/) +my $t = Test::Nginx->new()->has(qw/http http_ssl http_v2 socket_ssl_alpn/) ->has_daemon('openssl'); $t->write_file_expand('nginx.conf', <<'EOF'); @@ -55,15 +55,6 @@ http { EOF -eval { require IO::Socket::SSL; die if $IO::Socket::SSL::VERSION < 1.56; }; -plan(skip_all => 'IO::Socket::SSL version >= 1.56 required') if $@; - -eval { IO::Socket::SSL->can_alpn() or die; }; -plan(skip_all => 'IO::Socket::SSL with OpenSSL ALPN support required') if $@; - -eval { exists &Net::SSLeay::P_alpn_selected or die; }; -plan(skip_all => 'Net::SSLeay with OpenSSL ALPN support required') if $@; - $t->write_file('openssl.conf', < 'IO::Socket::SSL not installed') if $@; - -my $t = Test::Nginx->new()->has(qw/http http_ssl http_v2 proxy cache/) +my $t = Test::Nginx->new() + ->has(qw/http http_ssl http_v2 proxy cache socket_ssl/) ->has_daemon('openssl'); $t->write_file_expand('nginx.conf', <<'EOF'); diff --git a/h2_ssl_variables.t b/h2_ssl_variables.t --- a/h2_ssl_variables.t +++ b/h2_ssl_variables.t @@ -23,12 +23,7 @@ use Test::Nginx::HTTP2; select STDERR; $| = 1; select STDOUT; $| = 1; -eval { require IO::Socket::SSL; }; -plan(skip_all => 'IO::Socket::SSL not installed') if $@; -eval { IO::Socket::SSL::SSL_VERIFY_NONE(); }; -plan(skip_all => 'IO::Socket::SSL too old') if $@; - -my $t = Test::Nginx->new()->has(qw/http http_ssl http_v2 rewrite/) +my $t = Test::Nginx->new()->has(qw/http http_ssl http_v2 rewrite socket_ssl/) ->has_daemon('openssl')->plan(8); $t->write_file_expand('nginx.conf', <<'EOF'); diff --git a/h2_ssl_verify_client.t b/h2_ssl_verify_client.t --- a/h2_ssl_verify_client.t +++ b/h2_ssl_verify_client.t @@ -23,14 +23,7 @@ use Test::Nginx::HTTP2; select STDERR; $| = 1; select STDOUT; $| = 1; -eval { require IO::Socket::SSL; }; -plan(skip_all => 'IO::Socket::SSL not installed') if $@; -eval { IO::Socket::SSL->can_client_sni() or die; }; -plan(skip_all => 'IO::Socket::SSL with OpenSSL SNI support required') if $@; -eval { IO::Socket::SSL->can_alpn() or die; }; -plan(skip_all => 'OpenSSL ALPN support required') if $@; - -my $t = Test::Nginx->new()->has(qw/http http_ssl sni http_v2/) +my $t = Test::Nginx->new()->has(qw/http http_ssl sni http_v2 socket_ssl_alpn/) ->has_daemon('openssl'); $t->write_file_expand('nginx.conf', <<'EOF'); diff --git a/lib/Test/Nginx.pm b/lib/Test/Nginx.pm --- a/lib/Test/Nginx.pm +++ b/lib/Test/Nginx.pm @@ -241,6 +241,31 @@ sub has_feature($) { return $^O ne 'MSWin32'; } + if ($feature =~ /^socket_ssl/) { + eval { require IO::Socket::SSL; }; + return 0 if $@; + eval { IO::Socket::SSL::SSL_VERIFY_NONE(); }; + return 0 if $@; + if ($feature eq 'socket_ssl') { + return 1; + } + if ($feature eq 'socket_ssl_sni') { + eval { IO::Socket::SSL->can_client_sni() or die; }; + return !$@; + } + if ($feature eq 'socket_ssl_alpn') { + eval { IO::Socket::SSL->can_alpn() or die; }; + return !$@; + } + if ($feature eq 'socket_ssl_sslversion') { + return IO::Socket::SSL->can('get_sslversion'); + } + if ($feature eq 'socket_ssl_reused') { + return IO::Socket::SSL->can('get_session_reused'); + } + return 0; + } + return 0; } diff --git a/mail_imap_ssl.t b/mail_imap_ssl.t --- a/mail_imap_ssl.t +++ b/mail_imap_ssl.t @@ -26,14 +26,10 @@ use Test::Nginx::IMAP; select STDERR; $| = 1; select STDOUT; $| = 1; -eval { require IO::Socket::SSL; }; -plan(skip_all => 'IO::Socket::SSL not installed') if 
$@; -eval { IO::Socket::SSL::SSL_VERIFY_NONE(); }; -plan(skip_all => 'IO::Socket::SSL too old') if $@; - local $SIG{PIPE} = 'IGNORE'; -my $t = Test::Nginx->new()->has(qw/mail mail_ssl imap http rewrite/) +my $t = Test::Nginx->new() + ->has(qw/mail mail_ssl imap http rewrite socket_ssl_sslversion/) ->has_daemon('openssl')->plan(13) ->write_file_expand('nginx.conf', <<'EOF'); @@ -215,12 +211,10 @@ my $s = Test::Nginx::IMAP->new(PeerAddr my ($cipher, $sslversion); -if ($IO::Socket::SSL::VERSION >= 1.964) { - $s = get_ssl_socket(8143); - $cipher = $s->get_cipher(); - $sslversion = $s->get_sslversion(); - $sslversion =~ s/_/./; -} +$s = get_ssl_socket(8143); +$cipher = $s->get_cipher(); +$sslversion = $s->get_sslversion(); +$sslversion =~ s/_/./; undef $s; @@ -239,10 +233,6 @@ like($f, qr!^on:SUCCESS:(/?CN=2.example. like($f, qr!^on:SUCCESS:(/?CN=3.example.com):\1:\w+:\w+:[^:]+:s5$!m, 'log - trusted cert'); -SKIP: { -skip 'IO::Socket::SSL version >= 1.964 required', 1 - if $IO::Socket::SSL::VERSION < 1.964; - TODO: { local $TODO = 'not yet' unless $t->has_version('1.21.2'); @@ -251,8 +241,6 @@ like($f, qr|^$cipher:$sslversion$|m, 'lo } -} - ############################################################################### sub get_ssl_socket { diff --git a/mail_resolver.t b/mail_resolver.t --- a/mail_resolver.t +++ b/mail_resolver.t @@ -23,14 +23,9 @@ use Test::Nginx::SMTP; select STDERR; $| = 1; select STDOUT; $| = 1; -eval { require IO::Socket::SSL; }; -plan(skip_all => 'IO::Socket::SSL not installed') if $@; -eval { IO::Socket::SSL::SSL_VERIFY_NONE(); }; -plan(skip_all => 'IO::Socket::SSL too old') if $@; - local $SIG{PIPE} = 'IGNORE'; -my $t = Test::Nginx->new()->has(qw/mail mail_ssl smtp http rewrite/) +my $t = Test::Nginx->new()->has(qw/mail mail_ssl smtp http rewrite socket_ssl/) ->has_daemon('openssl')->plan(11) ->write_file_expand('nginx.conf', <<'EOF'); diff --git a/proxy_ssl.t b/proxy_ssl.t --- a/proxy_ssl.t +++ b/proxy_ssl.t @@ -21,11 +21,9 @@ use Test::Nginx; select STDERR; $| = 1; select STDOUT; $| = 1; -eval { require IO::Socket::SSL; }; -plan(skip_all => 'IO::Socket::SSL not installed') if $@; - -my $t = Test::Nginx->new()->has(qw/http proxy http_ssl/)->has_daemon('openssl') - ->plan(8)->write_file_expand('nginx.conf', <<'EOF'); +my $t = Test::Nginx->new()->has(qw/http proxy http_ssl socket_ssl/) + ->has_daemon('openssl')->plan(8) + ->write_file_expand('nginx.conf', <<'EOF'); %%TEST_GLOBALS%% diff --git a/ssl.t b/ssl.t --- a/ssl.t +++ b/ssl.t @@ -25,12 +25,7 @@ use Test::Nginx; select STDERR; $| = 1; select STDOUT; $| = 1; -eval { require IO::Socket::SSL; }; -plan(skip_all => 'IO::Socket::SSL not installed') if $@; -eval { IO::Socket::SSL::SSL_VERIFY_NONE(); }; -plan(skip_all => 'IO::Socket::SSL too old') if $@; - -my $t = Test::Nginx->new()->has(qw/http http_ssl rewrite proxy/) +my $t = Test::Nginx->new()->has(qw/http http_ssl rewrite proxy socket_ssl/) ->has_daemon('openssl')->plan(21); $t->write_file_expand('nginx.conf', <<'EOF'); diff --git a/ssl_certificate_chain.t b/ssl_certificate_chain.t --- a/ssl_certificate_chain.t +++ b/ssl_certificate_chain.t @@ -22,12 +22,7 @@ use Test::Nginx; select STDERR; $| = 1; select STDOUT; $| = 1; -eval { require IO::Socket::SSL; }; -plan(skip_all => 'IO::Socket::SSL not installed') if $@; -eval { IO::Socket::SSL::SSL_VERIFY_NONE(); }; -plan(skip_all => 'IO::Socket::SSL too old') if $@; - -my $t = Test::Nginx->new()->has(qw/http http_ssl/) +my $t = Test::Nginx->new()->has(qw/http http_ssl socket_ssl/) ->has_daemon('openssl')->plan(3); 
$t->write_file_expand('nginx.conf', <<'EOF'); diff --git a/ssl_client_escaped_cert.t b/ssl_client_escaped_cert.t --- a/ssl_client_escaped_cert.t +++ b/ssl_client_escaped_cert.t @@ -22,12 +22,7 @@ use Test::Nginx; select STDERR; $| = 1; select STDOUT; $| = 1; -eval { require IO::Socket::SSL; }; -plan(skip_all => 'IO::Socket::SSL not installed') if $@; -eval { IO::Socket::SSL::SSL_VERIFY_NONE(); }; -plan(skip_all => 'IO::Socket::SSL too old') if $@; - -my $t = Test::Nginx->new()->has(qw/http http_ssl rewrite/) +my $t = Test::Nginx->new()->has(qw/http http_ssl rewrite socket_ssl/) ->has_daemon('openssl')->plan(3); $t->write_file_expand('nginx.conf', <<'EOF'); diff --git a/ssl_crl.t b/ssl_crl.t --- a/ssl_crl.t +++ b/ssl_crl.t @@ -22,12 +22,7 @@ use Test::Nginx; select STDERR; $| = 1; select STDOUT; $| = 1; -eval { require IO::Socket::SSL; }; -plan(skip_all => 'IO::Socket::SSL not installed') if $@; -eval { IO::Socket::SSL::SSL_VERIFY_NONE(); }; -plan(skip_all => 'IO::Socket::SSL too old') if $@; - -my $t = Test::Nginx->new()->has(qw/http http_ssl/) +my $t = Test::Nginx->new()->has(qw/http http_ssl socket_ssl/) ->has_daemon('openssl')->plan(3); $t->write_file_expand('nginx.conf', <<'EOF'); diff --git a/ssl_curve.t b/ssl_curve.t --- a/ssl_curve.t +++ b/ssl_curve.t @@ -22,12 +22,7 @@ use Test::Nginx; select STDERR; $| = 1; select STDOUT; $| = 1; -eval { require IO::Socket::SSL; }; -plan(skip_all => 'IO::Socket::SSL not installed') if $@; -eval { IO::Socket::SSL::SSL_VERIFY_NONE(); }; -plan(skip_all => 'IO::Socket::SSL too old') if $@; - -my $t = Test::Nginx->new()->has(qw/http http_ssl rewrite/) +my $t = Test::Nginx->new()->has(qw/http http_ssl rewrite socket_ssl/) ->has_daemon('openssl'); $t->{_configure_args} =~ /OpenSSL (\d+)/; diff --git a/ssl_password_file.t b/ssl_password_file.t --- a/ssl_password_file.t +++ b/ssl_password_file.t @@ -25,14 +25,9 @@ use Test::Nginx; select STDERR; $| = 1; select STDOUT; $| = 1; -eval { require IO::Socket::SSL; }; -plan(skip_all => 'IO::Socket::SSL not installed') if $@; -eval { IO::Socket::SSL::SSL_VERIFY_NONE(); }; -plan(skip_all => 'IO::Socket::SSL too old') if $@; - plan(skip_all => 'win32') if $^O eq 'MSWin32'; -my $t = Test::Nginx->new()->has(qw/http http_ssl rewrite/) +my $t = Test::Nginx->new()->has(qw/http http_ssl rewrite socket_ssl/) ->has_daemon('openssl'); $t->plan(3)->write_file_expand('nginx.conf', <<'EOF'); diff --git a/ssl_proxy_protocol.t b/ssl_proxy_protocol.t --- a/ssl_proxy_protocol.t +++ b/ssl_proxy_protocol.t @@ -24,12 +24,7 @@ use Test::Nginx; select STDERR; $| = 1; select STDOUT; $| = 1; -eval { require IO::Socket::SSL; }; -plan(skip_all => 'IO::Socket::SSL not installed') if $@; -eval { IO::Socket::SSL::SSL_VERIFY_NONE(); }; -plan(skip_all => 'IO::Socket::SSL too old') if $@; - -my $t = Test::Nginx->new()->has(qw/http http_ssl access realip/) +my $t = Test::Nginx->new()->has(qw/http http_ssl access realip socket_ssl/) ->has_daemon('openssl'); $t->write_file_expand('nginx.conf', <<'EOF')->plan(18); diff --git a/ssl_proxy_upgrade.t b/ssl_proxy_upgrade.t --- a/ssl_proxy_upgrade.t +++ b/ssl_proxy_upgrade.t @@ -29,12 +29,8 @@ use Test::Nginx; select STDERR; $| = 1; select STDOUT; $| = 1; -eval { require IO::Socket::SSL; }; -plan(skip_all => 'IO::Socket::SSL not installed') if $@; -eval { IO::Socket::SSL::SSL_VERIFY_NONE(); }; -plan(skip_all => 'IO::Socket::SSL too old') if $@; - -my $t = Test::Nginx->new()->has(qw/http proxy http_ssl/)->has_daemon('openssl') +my $t = Test::Nginx->new()->has(qw/http proxy http_ssl socket_ssl/) + 
->has_daemon('openssl') ->write_file_expand('nginx.conf', <<'EOF')->plan(30); %%TEST_GLOBALS%% diff --git a/ssl_reject_handshake.t b/ssl_reject_handshake.t --- a/ssl_reject_handshake.t +++ b/ssl_reject_handshake.t @@ -22,12 +22,8 @@ use Test::Nginx; select STDERR; $| = 1; select STDOUT; $| = 1; -eval { require IO::Socket::SSL; }; -plan(skip_all => 'IO::Socket::SSL not installed') if $@; -eval { IO::Socket::SSL->can_client_sni() or die; }; -plan(skip_all => 'IO::Socket::SSL with OpenSSL SNI support required') if $@; - -my $t = Test::Nginx->new()->has(qw/http http_ssl sni/)->has_daemon('openssl'); +my $t = Test::Nginx->new()->has(qw/http http_ssl sni socket_ssl/) + ->has_daemon('openssl'); $t->write_file_expand('nginx.conf', <<'EOF'); diff --git a/ssl_session_reuse.t b/ssl_session_reuse.t --- a/ssl_session_reuse.t +++ b/ssl_session_reuse.t @@ -23,12 +23,7 @@ use Test::Nginx; select STDERR; $| = 1; select STDOUT; $| = 1; -eval { require IO::Socket::SSL; }; -plan(skip_all => 'IO::Socket::SSL not installed') if $@; -eval { IO::Socket::SSL::SSL_VERIFY_NONE(); }; -plan(skip_all => 'IO::Socket::SSL too old') if $@; - -my $t = Test::Nginx->new()->has(qw/http http_ssl rewrite/) +my $t = Test::Nginx->new()->has(qw/http http_ssl rewrite socket_ssl/) ->has_daemon('openssl')->plan(8); $t->write_file_expand('nginx.conf', <<'EOF'); diff --git a/ssl_sni.t b/ssl_sni.t --- a/ssl_sni.t +++ b/ssl_sni.t @@ -22,8 +22,8 @@ use Test::Nginx; select STDERR; $| = 1; select STDOUT; $| = 1; -my $t = Test::Nginx->new()->has(qw/http http_ssl sni rewrite/) - ->has_daemon('openssl') +my $t = Test::Nginx->new()->has(qw/http http_ssl sni rewrite socket_ssl_sni/) + ->has_daemon('openssl')->plan(8) ->write_file_expand('nginx.conf', <<'EOF'); %%TEST_GLOBALS%% @@ -79,25 +79,6 @@ http { EOF -eval { require IO::Socket::SSL; die if $IO::Socket::SSL::VERSION < 1.56; }; -plan(skip_all => 'IO::Socket::SSL version >= 1.56 required') if $@; - -eval { - if (IO::Socket::SSL->can('can_client_sni')) { - IO::Socket::SSL->can_client_sni() or die; - } -}; -plan(skip_all => 'IO::Socket::SSL with OpenSSL SNI support required') if $@; - -eval { - my $ctx = Net::SSLeay::CTX_new() or die; - my $ssl = Net::SSLeay::new($ctx) or die; - Net::SSLeay::set_tlsext_host_name($ssl, 'example.org') == 1 or die; -}; -plan(skip_all => 'Net::SSLeay with OpenSSL SNI support required') if $@; - -$t->plan(8); - $t->write_file('openssl.conf', <new()->has(qw/http http_ssl sni rewrite/); - -$t->has_daemon('openssl')->write_file_expand('nginx.conf', <<'EOF'); +my $t = Test::Nginx->new()->has(qw/http http_ssl sni rewrite socket_ssl_sni/) + ->has_daemon('openssl') + ->write_file_expand('nginx.conf', <<'EOF'); %%TEST_GLOBALS%% @@ -87,23 +87,6 @@ http { EOF -eval { require IO::Socket::SSL; die if $IO::Socket::SSL::VERSION < 1.56; }; -plan(skip_all => 'IO::Socket::SSL version >= 1.56 required') if $@; - -eval { - if (IO::Socket::SSL->can('can_client_sni')) { - IO::Socket::SSL->can_client_sni() or die; - } -}; -plan(skip_all => 'IO::Socket::SSL with OpenSSL SNI support required') if $@; - -eval { - my $ctx = Net::SSLeay::CTX_new() or die; - my $ssl = Net::SSLeay::new($ctx) or die; - Net::SSLeay::set_tlsext_host_name($ssl, 'example.org') == 1 or die; -}; -plan(skip_all => 'Net::SSLeay with OpenSSL SNI support required') if $@; - $t->write_file('openssl.conf', < 'IO::Socket::SSL not installed') if $@; -eval { IO::Socket::SSL::SSL_VERIFY_NONE(); }; -plan(skip_all => 'IO::Socket::SSL too old') if $@; - -my $t = Test::Nginx->new()->has(qw/http http_ssl/)->has_daemon('openssl'); 
+my $t = Test::Nginx->new()->has(qw/http http_ssl socket_ssl/) + ->has_daemon('openssl'); plan(skip_all => 'LibreSSL') if $t->has_module('LibreSSL'); diff --git a/stream_js_fetch_https.t b/stream_js_fetch_https.t --- a/stream_js_fetch_https.t +++ b/stream_js_fetch_https.t @@ -23,12 +23,9 @@ use Test::Nginx::Stream qw/ stream /; select STDERR; $| = 1; select STDOUT; $| = 1; -eval { require IO::Socket::SSL; }; -plan(skip_all => 'IO::Socket::SSL not installed') if $@; -eval { IO::Socket::SSL::SSL_VERIFY_NONE(); }; -plan(skip_all => 'IO::Socket::SSL too old') if $@; - -my $t = Test::Nginx->new()->has(qw/http http_ssl rewrite stream stream_return/) +my $t = Test::Nginx->new() + ->has(qw/http http_ssl rewrite stream stream_return socket_ssl/) + ->has_daemon('openssl') ->write_file_expand('nginx.conf', <<'EOF'); %%TEST_GLOBALS%% diff --git a/stream_proxy_protocol_ssl.t b/stream_proxy_protocol_ssl.t --- a/stream_proxy_protocol_ssl.t +++ b/stream_proxy_protocol_ssl.t @@ -24,11 +24,8 @@ use Test::Nginx qw/ :DEFAULT http_end /; select STDERR; $| = 1; select STDOUT; $| = 1; -eval { require IO::Socket::SSL; }; -plan(skip_all => 'IO::Socket::SSL not installed') if $@; - -my $t = Test::Nginx->new()->has(qw/stream stream_ssl/)->has_daemon('openssl') - ->plan(2); +my $t = Test::Nginx->new()->has(qw/stream stream_ssl socket_ssl/) + ->has_daemon('openssl')->plan(2); $t->write_file_expand('nginx.conf', <<'EOF'); diff --git a/stream_ssl_alpn.t b/stream_ssl_alpn.t --- a/stream_ssl_alpn.t +++ b/stream_ssl_alpn.t @@ -23,8 +23,10 @@ use Test::Nginx::Stream qw/ stream /; select STDERR; $| = 1; select STDOUT; $| = 1; -my $t = Test::Nginx->new()->has(qw/stream stream_ssl stream_return/) - ->has_daemon('openssl')->write_file_expand('nginx.conf', <<'EOF'); +my $t = Test::Nginx->new() + ->has(qw/stream stream_ssl stream_return socket_ssl_alpn/) + ->has_daemon('openssl') + ->write_file_expand('nginx.conf', <<'EOF'); %%TEST_GLOBALS%% @@ -51,15 +53,6 @@ stream { EOF -eval { require IO::Socket::SSL; die if $IO::Socket::SSL::VERSION < 1.56; }; -plan(skip_all => 'IO::Socket::SSL version >= 1.56 required') if $@; - -eval { IO::Socket::SSL->can_alpn() or die; }; -plan(skip_all => 'IO::Socket::SSL with OpenSSL ALPN support required') if $@; - -eval { exists &Net::SSLeay::P_alpn_selected or die; }; -plan(skip_all => 'Net::SSLeay with OpenSSL ALPN support required') if $@; - $t->write_file('openssl.conf', <new()->has(qw/stream stream_map stream_ssl_preread/) - ->has(qw/stream_ssl stream_return/)->has_daemon('openssl') + ->has(qw/stream_ssl stream_return socket_ssl_sni/) + ->has_daemon('openssl')->plan(13) ->write_file_expand('nginx.conf', <<'EOF'); %%TEST_GLOBALS%% @@ -107,25 +108,6 @@ stream { EOF -eval { require IO::Socket::SSL; die if $IO::Socket::SSL::VERSION < 1.56; }; -plan(skip_all => 'IO::Socket::SSL version >= 1.56 required') if $@; - -eval { - if (IO::Socket::SSL->can('can_client_sni')) { - IO::Socket::SSL->can_client_sni() or die; - } -}; -plan(skip_all => 'IO::Socket::SSL with OpenSSL SNI support required') if $@; - -eval { - my $ctx = Net::SSLeay::CTX_new() or die; - my $ssl = Net::SSLeay::new($ctx) or die; - Net::SSLeay::set_tlsext_host_name($ssl, 'example.org') == 1 or die; -}; -plan(skip_all => 'Net::SSLeay with OpenSSL SNI support required') if $@; - -$t->plan(13); - $t->write_file('openssl.conf', <new()->has(qw/stream stream_map stream_ssl_preread/) - ->has(qw/stream_ssl stream_return/)->has_daemon('openssl') + ->has(qw/stream_ssl stream_return socket_ssl_alpn/) + ->has_daemon('openssl')->plan(5) 
->write_file_expand('nginx.conf', <<'EOF'); %%TEST_GLOBALS%% @@ -75,17 +76,6 @@ stream { EOF -eval { require IO::Socket::SSL; die if $IO::Socket::SSL::VERSION < 1.56; }; -plan(skip_all => 'IO::Socket::SSL version >= 1.56 required') if $@; - -eval { IO::Socket::SSL->can_alpn() or die; }; -plan(skip_all => 'IO::Socket::SSL with OpenSSL ALPN support required') if $@; - -eval { exists &Net::SSLeay::P_alpn_selected or die; }; -plan(skip_all => 'Net::SSLeay with OpenSSL ALPN support required') if $@; - -$t->plan(5); - $t->write_file('openssl.conf', < 'IO::Socket::SSL not installed') if $@; -eval { IO::Socket::SSL::SSL_VERIFY_NONE(); }; -plan(skip_all => 'IO::Socket::SSL too old') if $@; - my $t = Test::Nginx->new()->has(qw/stream stream_return stream_realip/) - ->has(qw/stream_ssl/)->has_daemon('openssl') + ->has(qw/stream_ssl socket_ssl/) + ->has_daemon('openssl') ->write_file_expand('nginx.conf', <<'EOF'); %%TEST_GLOBALS%% From mdounin at mdounin.ru Mon Apr 17 03:31:28 2023 From: mdounin at mdounin.ru (=?utf-8?q?Maxim_Dounin?=) Date: Mon, 17 Apr 2023 06:31:28 +0300 Subject: [PATCH 04 of 11] Tests: fixed server_tokens tests for build names with spaces In-Reply-To: References: Message-ID: <605cab711606724e5879.1681702288@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1681702253 -10800 # Mon Apr 17 06:30:53 2023 +0300 # Node ID 605cab711606724e5879e8a81d5d21797e5ddcfb # Parent f704912ed09f3494a815709710c3744b0adca50b Tests: fixed server_tokens tests for build names with spaces. Build names can contain spaces, and previously used pattern, "--build=(\S+)", failed to properly match such build names. Instead, now we simply test that some build name is provided in the Server header. Further, the $t->has_module() method is now used to check if a build name is set instead of directly testing the $t->{_configure_args} internal field. diff --git a/h2_server_tokens.t b/h2_server_tokens.t --- a/h2_server_tokens.t +++ b/h2_server_tokens.t @@ -106,7 +106,7 @@ like(header_server('/on/200'), qr/^$re$/ like(header_server('/on/404'), qr/^$re$/, 'http2 tokens on 404'); like(body('/on/404'), $re, 'http2 tokens on 404 body'); -$re = qr/$re \Q($1)\E/ if $t->{_configure_args} =~ /--build=(\S+)/; +$re = qr/$re \(.*\)/ if $t->has_module('--build='); like(header_server('/b/200'), qr/^$re$/, 'http2 tokens build 200'); like(header_server('/b/404'), qr/^$re$/, 'http2 tokens build 404'); diff --git a/server_tokens.t b/server_tokens.t --- a/server_tokens.t +++ b/server_tokens.t @@ -105,7 +105,7 @@ like(http_get_server('/on/200'), $re, 't like(http_get_server('/on/404'), $re, 'tokens on 404'); like(http_body('/on/404'), $re, 'tokens on 404 body'); -$re = qr/$re \Q($1)\E/ if $t->{_configure_args} =~ /--build=(\S+)/; +$re = qr/$re \(.*\)/ if $t->has_module('--build='); like(http_get_server('/b/200'), $re, 'tokens build 200'); like(http_get_server('/b/404'), $re, 'tokens build 404'); From mdounin at mdounin.ru Mon Apr 17 03:31:29 2023 From: mdounin at mdounin.ru (=?utf-8?q?Maxim_Dounin?=) Date: Mon, 17 Apr 2023 06:31:29 +0300 Subject: [PATCH 05 of 11] Tests: added has_feature() test for SSL libraries In-Reply-To: References: Message-ID: # HG changeset patch # User Maxim Dounin # Date 1681702255 -10800 # Mon Apr 17 06:30:55 2023 +0300 # Node ID a8e22a3212da945e9060d4233905eb6de1399d34 # Parent 605cab711606724e5879e8a81d5d21797e5ddcfb Tests: added has_feature() test for SSL libraries. This makes it possible to further simplify various SSL tests. 
It also avoids direct testing of the $t->{_configure_args} internal field, and implements proper comparison of version numbers. diff --git a/grpc_pass.t b/grpc_pass.t --- a/grpc_pass.t +++ b/grpc_pass.t @@ -107,8 +107,7 @@ like(http_get('/basic'), qr/200 OK/, 'no like(http_get('/grpc'), qr/200 OK/, 'grpc scheme'); SKIP: { -$t->{_configure_args} =~ /OpenSSL ([\d\.]+)/; -skip 'OpenSSL too old', 1 unless defined $1 and $1 ge '1.0.2'; +skip 'OpenSSL too old', 1 unless $t->has_feature('openssl:1.0.2'); like(http_get('/grpcs'), qr/200 OK/, 'grpcs scheme'); diff --git a/grpc_ssl.t b/grpc_ssl.t --- a/grpc_ssl.t +++ b/grpc_ssl.t @@ -24,12 +24,9 @@ select STDERR; $| = 1; select STDOUT; $| = 1; my $t = Test::Nginx->new()->has(qw/http rewrite http_v2 grpc/) - ->has(qw/upstream_keepalive http_ssl/)->has_daemon('openssl'); - -$t->{_configure_args} =~ /OpenSSL ([\d\.]+)/; -plan(skip_all => 'OpenSSL too old') unless defined $1 and $1 ge '1.0.2'; - -$t->write_file_expand('nginx.conf', <<'EOF')->plan(38); + ->has(qw/upstream_keepalive http_ssl openssl:1.0.2/) + ->has_daemon('openssl') + ->write_file_expand('nginx.conf', <<'EOF')->plan(38); %%TEST_GLOBALS%% diff --git a/h2_ssl.t b/h2_ssl.t --- a/h2_ssl.t +++ b/h2_ssl.t @@ -87,10 +87,12 @@ plan(skip_all => 'no ALPN negotiation') ############################################################################### SKIP: { -$t->{_configure_args} =~ /LibreSSL ([\d\.]+)/; -skip 'LibreSSL too old', 1 if defined $1 and $1 lt '3.4.0'; -$t->{_configure_args} =~ /OpenSSL ([\d\.]+)/; -skip 'OpenSSL too old', 1 if defined $1 and $1 lt '1.1.0'; +skip 'LibreSSL too old', 1 + if $t->has_module('LibreSSL') + and not $t->has_feature('libressl:3.4.0'); +skip 'OpenSSL too old', 1 + if $t->has_module('OpenSSL') + and not $t->has_feature('openssl:1.1.0'); TODO: { local $TODO = 'not yet' unless $t->has_version('1.21.4'); diff --git a/lib/Test/Nginx.pm b/lib/Test/Nginx.pm --- a/lib/Test/Nginx.pm +++ b/lib/Test/Nginx.pm @@ -266,6 +266,28 @@ sub has_feature($) { return 0; } + if ($feature =~ /^(openssl|libressl):([0-9.]+)/) { + my $library = $1; + my $need = $2; + + $self->{_configure_args} = `$NGINX -V 2>&1` + if !defined $self->{_configure_args}; + + return 0 unless + $self->{_configure_args} =~ /with $library ([0-9.]+)/i; + + my @v = split(/\./, $1); + my ($n, $v); + + for $n (split(/\./, $need)) { + $v = shift @v || 0; + return 0 if $n > $v; + return 1 if $v > $n; + } + + return 1; + } + return 0; } diff --git a/mail_ssl.t b/mail_ssl.t --- a/mail_ssl.t +++ b/mail_ssl.t @@ -175,10 +175,12 @@ like(Net::SSLeay::dump_peer_certificate( ok(get_ssl_socket(8148, ['imap']), 'alpn'); SKIP: { -$t->{_configure_args} =~ /LibreSSL ([\d\.]+)/; -skip 'LibreSSL too old', 1 if defined $1 and $1 lt '3.4.0'; -$t->{_configure_args} =~ /OpenSSL ([\d\.]+)/; -skip 'OpenSSL too old', 1 if defined $1 and $1 lt '1.1.0'; +skip 'LibreSSL too old', 1 + if $t->has_module('LibreSSL') + and not $t->has_feature('libressl:3.4.0'); +skip 'OpenSSL too old', 1 + if $t->has_module('OpenSSL') + and not $t->has_feature('openssl:1.1.0'); TODO: { local $TODO = 'not yet' unless $t->has_version('1.21.4'); diff --git a/mail_ssl_conf_command.t b/mail_ssl_conf_command.t --- a/mail_ssl_conf_command.t +++ b/mail_ssl_conf_command.t @@ -32,11 +32,9 @@ eval { }; plan(skip_all => 'Net::SSLeay not installed') if $@; -my $t = Test::Nginx->new()->has(qw/mail mail_ssl imap/) +my $t = Test::Nginx->new()->has(qw/mail mail_ssl imap openssl:1.0.2/) ->has_daemon('openssl'); -$t->{_configure_args} =~ /OpenSSL ([\d\.]+)/; -plan(skip_all => 'OpenSSL 
too old') unless defined $1 and $1 ge '1.0.2'; plan(skip_all => 'no ssl_conf_command') if $t->has_module('BoringSSL'); $t->write_file_expand('nginx.conf', <<'EOF'); diff --git a/proxy_ssl_conf_command.t b/proxy_ssl_conf_command.t --- a/proxy_ssl_conf_command.t +++ b/proxy_ssl_conf_command.t @@ -22,11 +22,10 @@ use Test::Nginx; select STDERR; $| = 1; select STDOUT; $| = 1; -my $t = Test::Nginx->new()->has(qw/http http_ssl proxy uwsgi http_v2 grpc/) +my $t = Test::Nginx->new() + ->has(qw/http http_ssl proxy uwsgi http_v2 grpc openssl:1.0.2/) ->has_daemon('openssl'); -$t->{_configure_args} =~ /OpenSSL ([\d\.]+)/; -plan(skip_all => 'OpenSSL too old') unless defined $1 and $1 ge '1.0.2'; plan(skip_all => 'no ssl_conf_command') if $t->has_module('BoringSSL'); $t->write_file_expand('nginx.conf', <<'EOF'); diff --git a/ssl_certificate.t b/ssl_certificate.t --- a/ssl_certificate.t +++ b/ssl_certificate.t @@ -39,12 +39,9 @@ eval { }; plan(skip_all => 'Net::SSLeay with OpenSSL SNI support required') if $@; -my $t = Test::Nginx->new()->has(qw/http http_ssl geo/) +my $t = Test::Nginx->new()->has(qw/http http_ssl geo openssl:1.0.2/) ->has_daemon('openssl'); -$t->{_configure_args} =~ /OpenSSL ([\d\.]+)/; -plan(skip_all => 'OpenSSL too old') unless defined $1 and $1 ge '1.0.2'; - $t->write_file_expand('nginx.conf', <<'EOF'); %%TEST_GLOBALS%% diff --git a/ssl_certificate_perl.t b/ssl_certificate_perl.t --- a/ssl_certificate_perl.t +++ b/ssl_certificate_perl.t @@ -37,10 +37,9 @@ eval { }; plan(skip_all => 'Net::SSLeay with OpenSSL SNI support required') if $@; -my $t = Test::Nginx->new()->has(qw/http http_ssl perl/)->has_daemon('openssl'); - -$t->{_configure_args} =~ /OpenSSL ([\d\.]+)/; -plan(skip_all => 'OpenSSL too old') unless defined $1 and $1 ge '1.0.2'; +my $t = Test::Nginx->new() + ->has(qw/http http_ssl perl openssl:1.0.2/) + ->has_daemon('openssl'); $t->write_file_expand('nginx.conf', <<'EOF'); diff --git a/ssl_conf_command.t b/ssl_conf_command.t --- a/ssl_conf_command.t +++ b/ssl_conf_command.t @@ -30,11 +30,9 @@ eval { }; plan(skip_all => 'Net::SSLeay not installed') if $@; -my $t = Test::Nginx->new()->has(qw/http http_ssl/) +my $t = Test::Nginx->new()->has(qw/http http_ssl openssl:1.0.2/) ->has_daemon('openssl'); -$t->{_configure_args} =~ /OpenSSL ([\d\.]+)/; -plan(skip_all => 'OpenSSL too old') unless defined $1 and $1 ge '1.0.2'; plan(skip_all => 'no ssl_conf_command') if $t->has_module('BoringSSL'); $t->write_file_expand('nginx.conf', <<'EOF'); diff --git a/ssl_curve.t b/ssl_curve.t --- a/ssl_curve.t +++ b/ssl_curve.t @@ -22,12 +22,10 @@ use Test::Nginx; select STDERR; $| = 1; select STDOUT; $| = 1; -my $t = Test::Nginx->new()->has(qw/http http_ssl rewrite socket_ssl/) +my $t = Test::Nginx->new() + ->has(qw/http http_ssl rewrite socket_ssl openssl:3.0.0/) ->has_daemon('openssl'); -$t->{_configure_args} =~ /OpenSSL (\d+)/; -plan(skip_all => 'OpenSSL too old') unless defined $1 and $1 >= 3; - $t->write_file_expand('nginx.conf', <<'EOF'); %%TEST_GLOBALS%% diff --git a/stream_proxy_ssl_conf_command.t b/stream_proxy_ssl_conf_command.t --- a/stream_proxy_ssl_conf_command.t +++ b/stream_proxy_ssl_conf_command.t @@ -22,11 +22,10 @@ use Test::Nginx; select STDERR; $| = 1; select STDOUT; $| = 1; -my $t = Test::Nginx->new()->has(qw/stream stream_ssl http http_ssl/) +my $t = Test::Nginx->new() + ->has(qw/stream stream_ssl http http_ssl openssl:1.0.2/) ->has_daemon('openssl'); -$t->{_configure_args} =~ /OpenSSL ([\d\.]+)/; -plan(skip_all => 'OpenSSL too old') unless defined $1 and $1 ge '1.0.2'; 
plan(skip_all => 'no ssl_conf_command') if $t->has_module('BoringSSL'); $t->write_file_expand('nginx.conf', <<'EOF'); diff --git a/stream_ssl_alpn.t b/stream_ssl_alpn.t --- a/stream_ssl_alpn.t +++ b/stream_ssl_alpn.t @@ -81,10 +81,12 @@ is(get_ssl('wrong', 'second'), 'X second is(get_ssl(), 'X X', 'no alpn'); SKIP: { -$t->{_configure_args} =~ /LibreSSL ([\d\.]+)/; -skip 'LibreSSL too old', 2 if defined $1 and $1 lt '3.4.0'; -$t->{_configure_args} =~ /OpenSSL ([\d\.]+)/; -skip 'OpenSSL too old', 2 if defined $1 and $1 lt '1.1.0'; +skip 'LibreSSL too old', 2 + if $t->has_module('LibreSSL') + and not $t->has_feature('libressl:3.4.0'); +skip 'OpenSSL too old', 2 + if $t->has_module('OpenSSL') + and not $t->has_feature('openssl:1.1.0'); ok(!get_ssl('wrong'), 'alpn mismatch'); diff --git a/stream_ssl_certificate.t b/stream_ssl_certificate.t --- a/stream_ssl_certificate.t +++ b/stream_ssl_certificate.t @@ -37,13 +37,10 @@ eval { }; plan(skip_all => 'Net::SSLeay with OpenSSL SNI support required') if $@; -my $t = Test::Nginx->new()->has(qw/stream stream_ssl stream_geo stream_return/) - ->has_daemon('openssl'); - -$t->{_configure_args} =~ /OpenSSL ([\d\.]+)/; -plan(skip_all => 'OpenSSL too old') unless defined $1 and $1 ge '1.0.2'; - -$t->write_file_expand('nginx.conf', <<'EOF'); +my $t = Test::Nginx->new() + ->has(qw/stream stream_ssl stream_geo stream_return openssl:1.0.2/) + ->has_daemon('openssl') + ->write_file_expand('nginx.conf', <<'EOF'); %%TEST_GLOBALS%% diff --git a/stream_ssl_conf_command.t b/stream_ssl_conf_command.t --- a/stream_ssl_conf_command.t +++ b/stream_ssl_conf_command.t @@ -30,11 +30,10 @@ eval { }; plan(skip_all => 'Net::SSLeay not installed') if $@; -my $t = Test::Nginx->new()->has(qw/stream stream_ssl stream_return/) +my $t = Test::Nginx->new() + ->has(qw/stream stream_ssl stream_return openssl:1.0.2/) ->has_daemon('openssl'); -$t->{_configure_args} =~ /OpenSSL ([\d\.]+)/; -plan(skip_all => 'OpenSSL too old') unless defined $1 and $1 ge '1.0.2'; plan(skip_all => 'no ssl_conf_command') if $t->has_module('BoringSSL'); $t->write_file_expand('nginx.conf', <<'EOF'); From mdounin at mdounin.ru Mon Apr 17 03:31:33 2023 From: mdounin at mdounin.ru (=?utf-8?q?Maxim_Dounin?=) Date: Mon, 17 Apr 2023 06:31:33 +0300 Subject: [PATCH 09 of 11] Tests: simplified stream SSL tests with IO::Socket::SSL In-Reply-To: References: Message-ID: <90913cb36b512c45cd9a.1681702293@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1681702262 -10800 # Mon Apr 17 06:31:02 2023 +0300 # Node ID 90913cb36b512c45cd9a171cbb4320b12ff24b48 # Parent 0103d7cd7b5c46af63642dbb02481563662fb3d9 Tests: simplified stream SSL tests with IO::Socket::SSL. The stream SSL tests which previously used IO::Socket::SSL were converted to use infrastructure in Test::Nginx::Stream where appropriate. diff --git a/stream_ssl_alpn.t b/stream_ssl_alpn.t --- a/stream_ssl_alpn.t +++ b/stream_ssl_alpn.t @@ -100,25 +100,12 @@ like($t->read_file('test.log'), qr/500$/ sub get_ssl { my (@alpn) = @_; - my $s = stream('127.0.0.1:' . port(8080)); - eval { - local $SIG{ALRM} = sub { die "timeout\n" }; - local $SIG{PIPE} = sub { die "sigpipe\n" }; - alarm(8); - IO::Socket::SSL->start_SSL($s->{_socket}, - SSL_alpn_protocols => [ @alpn ], - SSL_verify_mode => IO::Socket::SSL::SSL_VERIFY_NONE(), - SSL_error_trap => sub { die $_[1] } - ); - alarm(0); - }; - alarm(0); - - if ($@) { - log_in("died: $@"); - return undef; - } + my $s = stream( + PeerAddr => '127.0.0.1:' . 
port(8080), + SSL => 1, + SSL_alpn_protocols => [ @alpn ] + ); return $s->read(); } diff --git a/stream_ssl_preread.t b/stream_ssl_preread.t --- a/stream_ssl_preread.t +++ b/stream_ssl_preread.t @@ -142,7 +142,7 @@ is(get_ssl('bar', 8081), $p2, 'sni 2 aga is(get_ssl('', 8081), $p3, 'no sni'); is(get_ssl('foo', 8082), $p3, 'preread off'); -is(get_ssl('foo', 8083), undef, 'preread buffer full'); +is(get_ssl('foo', 8083), '', 'preread buffer full'); is(stream('127.0.0.1:' . port(8080))->io('x' x 1000), "127.0.0.1:$p3", 'not a handshake'); @@ -202,25 +202,12 @@ sub get_oldver { sub get_ssl { my ($host, $port) = @_; - my $s = stream('127.0.0.1:' . port($port)); - eval { - local $SIG{ALRM} = sub { die "timeout\n" }; - local $SIG{PIPE} = sub { die "sigpipe\n" }; - alarm(8); - IO::Socket::SSL->start_SSL($s->{_socket}, - SSL_hostname => $host, - SSL_verify_mode => IO::Socket::SSL::SSL_VERIFY_NONE(), - SSL_error_trap => sub { die $_[1] } - ); - alarm(0); - }; - alarm(0); - - if ($@) { - log_in("died: $@"); - return undef; - } + my $s = stream( + PeerAddr => '127.0.0.1:' . port($port), + SSL => 1, + SSL_hostname => $host + ); return $s->read(); } diff --git a/stream_ssl_preread_alpn.t b/stream_ssl_preread_alpn.t --- a/stream_ssl_preread_alpn.t +++ b/stream_ssl_preread_alpn.t @@ -114,25 +114,12 @@ get_ssl(8081, ''); sub get_ssl { my ($port, @alpn) = @_; - my $s = stream('127.0.0.1:' . port($port)); - eval { - local $SIG{ALRM} = sub { die "timeout\n" }; - local $SIG{PIPE} = sub { die "sigpipe\n" }; - alarm(8); - IO::Socket::SSL->start_SSL($s->{_socket}, - SSL_alpn_protocols => [ @alpn ], - SSL_verify_mode => IO::Socket::SSL::SSL_VERIFY_NONE(), - SSL_error_trap => sub { die $_[1] } - ); - alarm(0); - }; - alarm(0); - - if ($@) { - log_in("died: $@"); - return undef; - } + my $s = stream( + PeerAddr => '127.0.0.1:' . port($port), + SSL => 1, + SSL_alpn_protocols => [ @alpn ] + ); return $s->read(); } From mdounin at mdounin.ru Mon Apr 17 03:31:30 2023 From: mdounin at mdounin.ru (=?utf-8?q?Maxim_Dounin?=) Date: Mon, 17 Apr 2023 06:31:30 +0300 Subject: [PATCH 06 of 11] Tests: reworked mail SSL tests to use IO::Socket::SSL In-Reply-To: References: Message-ID: <20d603cd3cbeab891271.1681702290@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1681702257 -10800 # Mon Apr 17 06:30:57 2023 +0300 # Node ID 20d603cd3cbeab89127108fe9cb6dffd0e9469e8 # Parent a8e22a3212da945e9060d4233905eb6de1399d34 Tests: reworked mail SSL tests to use IO::Socket::SSL. Relevant infrastructure is provided in Test::Nginx::IMAP (and also POP3 and SMTP for completeness). This also ensures that SSL handshake and various read operations are guarded with timeouts. diff --git a/lib/Test/Nginx/IMAP.pm b/lib/Test/Nginx/IMAP.pm --- a/lib/Test/Nginx/IMAP.pm +++ b/lib/Test/Nginx/IMAP.pm @@ -20,17 +20,40 @@ sub new { my $self = {}; bless $self, shift @_; - $self->{_socket} = IO::Socket::INET->new( - Proto => "tcp", - PeerAddr => "127.0.0.1:" . port(8143), - @_ - ) - or die "Can't connect to nginx: $!\n"; + my $port = {@_}->{'SSL'} ? 8993 : 8143; + + eval { + local $SIG{ALRM} = sub { die "timeout\n" }; + local $SIG{PIPE} = sub { die "sigpipe\n" }; + alarm(8); + + $self->{_socket} = IO::Socket::INET->new( + Proto => "tcp", + PeerAddr => "127.0.0.1:" . port($port), + @_ + ) + or die "Can't connect to nginx: $!\n"; - if ({@_}->{'SSL'}) { - require IO::Socket::SSL; - IO::Socket::SSL->start_SSL($self->{_socket}, @_) - or die $IO::Socket::SSL::SSL_ERROR . 
"\n"; + if ({@_}->{'SSL'}) { + require IO::Socket::SSL; + IO::Socket::SSL->start_SSL( + $self->{_socket}, + SSL_verify_mode => + IO::Socket::SSL::SSL_VERIFY_NONE(), + @_ + ) + or die $IO::Socket::SSL::SSL_ERROR . "\n"; + + my $s = $self->{_socket}; + log_in("ssl cipher: " . $s->get_cipher()); + log_in("ssl cert: " . $s->peer_certificate('issuer')); + } + + alarm(0); + }; + alarm(0); + if ($@) { + log_in("died: $@"); } $self->{_socket}->autoflush(1); @@ -39,6 +62,11 @@ sub new { return $self; } +sub DESTROY { + my $self = shift; + $self->{_socket}->close(); +} + sub eof { my $self = shift; return $self->{_socket}->eof(); @@ -109,6 +137,11 @@ sub can_read { IO::Select->new($self->{_socket})->can_read($timo || 3); } +sub socket { + my ($self) = @_; + $self->{_socket}; +} + ############################################################################### sub imap_test_daemon { diff --git a/lib/Test/Nginx/POP3.pm b/lib/Test/Nginx/POP3.pm --- a/lib/Test/Nginx/POP3.pm +++ b/lib/Test/Nginx/POP3.pm @@ -20,17 +20,40 @@ sub new { my $self = {}; bless $self, shift @_; - $self->{_socket} = IO::Socket::INET->new( - Proto => "tcp", - PeerAddr => "127.0.0.1:" . port(8110), - @_ - ) - or die "Can't connect to nginx: $!\n"; + my $port = {@_}->{'SSL'} ? 8995 : 8110; + + eval { + local $SIG{ALRM} = sub { die "timeout\n" }; + local $SIG{PIPE} = sub { die "sigpipe\n" }; + alarm(8); + + $self->{_socket} = IO::Socket::INET->new( + Proto => "tcp", + PeerAddr => "127.0.0.1:" . port($port), + @_ + ) + or die "Can't connect to nginx: $!\n"; - if ({@_}->{'SSL'}) { - require IO::Socket::SSL; - IO::Socket::SSL->start_SSL($self->{_socket}, @_) - or die $IO::Socket::SSL::SSL_ERROR . "\n"; + if ({@_}->{'SSL'}) { + require IO::Socket::SSL; + IO::Socket::SSL->start_SSL( + $self->{_socket}, + SSL_verify_mode => + IO::Socket::SSL::SSL_VERIFY_NONE(), + @_ + ) + or die $IO::Socket::SSL::SSL_ERROR . "\n"; + + my $s = $self->{_socket}; + log_in("ssl cipher: " . $s->get_cipher()); + log_in("ssl cert: " . $s->peer_certificate('issuer')); + } + + alarm(0); + }; + alarm(0); + if ($@) { + log_in("died: $@"); } $self->{_socket}->autoflush(1); @@ -39,6 +62,11 @@ sub new { return $self; } +sub DESTROY { + my $self = shift; + $self->{_socket}->close(); +} + sub eof { my $self = shift; return $self->{_socket}->eof(); @@ -109,6 +137,11 @@ sub can_read { IO::Select->new($self->{_socket})->can_read($timo || 3); } +sub socket { + my ($self) = @_; + $self->{_socket}; +} + ############################################################################### sub pop3_test_daemon { diff --git a/lib/Test/Nginx/SMTP.pm b/lib/Test/Nginx/SMTP.pm --- a/lib/Test/Nginx/SMTP.pm +++ b/lib/Test/Nginx/SMTP.pm @@ -20,17 +20,40 @@ sub new { my $self = {}; bless $self, shift @_; - $self->{_socket} = IO::Socket::INET->new( - Proto => "tcp", - PeerAddr => "127.0.0.1:" . port(8025), - @_ - ) - or die "Can't connect to nginx: $!\n"; + my $port = {@_}->{'SSL'} ? 8465 : 8025; + + eval { + local $SIG{ALRM} = sub { die "timeout\n" }; + local $SIG{PIPE} = sub { die "sigpipe\n" }; + alarm(8); + + $self->{_socket} = IO::Socket::INET->new( + Proto => "tcp", + PeerAddr => "127.0.0.1:" . port($port), + @_ + ) + or die "Can't connect to nginx: $!\n"; - if ({@_}->{'SSL'}) { - require IO::Socket::SSL; - IO::Socket::SSL->start_SSL($self->{_socket}, @_) - or die $IO::Socket::SSL::SSL_ERROR . 
"\n"; + if ({@_}->{'SSL'}) { + require IO::Socket::SSL; + IO::Socket::SSL->start_SSL( + $self->{_socket}, + SSL_verify_mode => + IO::Socket::SSL::SSL_VERIFY_NONE(), + @_ + ) + or die $IO::Socket::SSL::SSL_ERROR . "\n"; + + my $s = $self->{_socket}; + log_in("ssl cipher: " . $s->get_cipher()); + log_in("ssl cert: " . $s->peer_certificate('issuer')); + } + + alarm(0); + }; + alarm(0); + if ($@) { + log_in("died: $@"); } $self->{_socket}->autoflush(1); @@ -39,6 +62,11 @@ sub new { return $self; } +sub DESTROY { + my $self = shift; + $self->{_socket}->close(); +} + sub eof { my $self = shift; return $self->{_socket}->eof(); @@ -115,6 +143,11 @@ sub can_read { IO::Select->new($self->{_socket})->can_read($timo || 3); } +sub socket { + my ($self) = @_; + $self->{_socket}; +} + ############################################################################### sub smtp_test_daemon { diff --git a/mail_ssl.t b/mail_ssl.t --- a/mail_ssl.t +++ b/mail_ssl.t @@ -27,19 +27,8 @@ select STDOUT; $| = 1; local $SIG{PIPE} = 'IGNORE'; -eval { - require Net::SSLeay; - Net::SSLeay::load_error_strings(); - Net::SSLeay::SSLeay_add_ssl_algorithms(); - Net::SSLeay::randomize(); -}; -plan(skip_all => 'Net::SSLeay not installed') if $@; - -eval { exists &Net::SSLeay::P_alpn_selected or die; }; -plan(skip_all => 'Net::SSLeay with OpenSSL ALPN support required') if $@; - -my $t = Test::Nginx->new()->has(qw/mail mail_ssl imap pop3 smtp/) - ->has_daemon('openssl')->plan(18); +my $t = Test::Nginx->new()->has(qw/mail mail_ssl imap pop3 smtp socket_ssl/) + ->has_daemon('openssl')->plan(19); $t->write_file_expand('nginx.conf', <<'EOF'); @@ -143,7 +132,6 @@ foreach my $name ('localhost', 'inherits or die "Can't create certificate for $name: $!\n"; } -my $ctx = Net::SSLeay::CTX_new() or die("Failed to create SSL_CTX $!"); $t->write_file('password', 'localhost'); open OLDERR, ">&", \*STDERR; close STDERR; @@ -164,15 +152,24 @@ my ($s, $ssl); # ssl_certificate inheritance -($s, $ssl) = get_ssl_socket(8145); -like(Net::SSLeay::dump_peer_certificate($ssl), qr/CN=localhost/, 'CN'); +$s = Test::Nginx::IMAP->new(PeerAddr => '127.0.0.1:' . port(8145), SSL => 1); +$s->ok('greeting ssl'); + +like($s->socket()->dump_peer_certificate(), qr/CN=localhost/, 'CN'); -($s, $ssl) = get_ssl_socket(8148); -like(Net::SSLeay::dump_peer_certificate($ssl), qr/CN=inherits/, 'CN inner'); +$s = Test::Nginx::IMAP->new(PeerAddr => '127.0.0.1:' . port(8148), SSL => 1); +$s->read(); + +like($s->socket()->dump_peer_certificate(), qr/CN=inherits/, 'CN inner'); # alpn -ok(get_ssl_socket(8148, ['imap']), 'alpn'); +$s = Test::Nginx::IMAP->new( + PeerAddr => '127.0.0.1:' . port(8148), + SSL => 1, + SSL_alpn_protocols => [ 'imap' ] +); +$s->ok('alpn'); SKIP: { skip 'LibreSSL too old', 1 @@ -184,8 +181,15 @@ skip 'OpenSSL too old', 1 TODO: { local $TODO = 'not yet' unless $t->has_version('1.21.4'); +local $TODO = 'no ALPN support in IO::Socket::SSL' + unless $t->has_feature('socket_ssl_alpn'); -ok(!get_ssl_socket(8148, ['unknown']), 'alpn rejected'); +$s = Test::Nginx::IMAP->new( + PeerAddr => '127.0.0.1:' . port(8148), + SSL => 1, + SSL_alpn_protocols => [ 'unknown' ] +); +ok(!$s->read(), 'alpn rejected'); } @@ -270,16 +274,3 @@ ok(!get_ssl_socket(8148, ['unknown']), ' $s->ok('smtp starttls only'); ############################################################################### - -sub get_ssl_socket { - my ($port, $alpn) = @_; - - my $s = IO::Socket::INET->new('127.0.0.1:' . 
port($port)); - my $ssl = Net::SSLeay::new($ctx) or die("Failed to create SSL $!"); - Net::SSLeay::set_alpn_protos($ssl, $alpn) if defined $alpn; - Net::SSLeay::set_fd($ssl, fileno($s)); - Net::SSLeay::connect($ssl) == 1 or return; - return ($s, $ssl); -} - -############################################################################### diff --git a/mail_ssl_conf_command.t b/mail_ssl_conf_command.t --- a/mail_ssl_conf_command.t +++ b/mail_ssl_conf_command.t @@ -16,6 +16,7 @@ BEGIN { use FindBin; chdir($FindBin::Bin use lib 'lib'; use Test::Nginx; +use Test::Nginx::IMAP; ############################################################################### @@ -24,15 +25,8 @@ select STDOUT; $| = 1; local $SIG{PIPE} = 'IGNORE'; -eval { - require Net::SSLeay; - Net::SSLeay::load_error_strings(); - Net::SSLeay::SSLeay_add_ssl_algorithms(); - Net::SSLeay::randomize(); -}; -plan(skip_all => 'Net::SSLeay not installed') if $@; - -my $t = Test::Nginx->new()->has(qw/mail mail_ssl imap openssl:1.0.2/) +my $t = Test::Nginx->new() + ->has(qw/mail mail_ssl imap openssl:1.0.2 socket_ssl_reused/) ->has_daemon('openssl'); plan(skip_all => 'no ssl_conf_command') if $t->has_module('BoringSSL'); @@ -50,7 +44,7 @@ mail { auth_http http://127.0.0.1:8080; # unused server { - listen 127.0.0.1:8443 ssl; + listen 127.0.0.1:8993 ssl; protocol imap; ssl_protocols TLSv1.2; @@ -93,32 +87,28 @@ foreach my $name ('localhost', 'override ############################################################################### -my $ctx = Net::SSLeay::CTX_new() or die("Failed to create SSL_CTX $!"); +my $s; -my ($s, $ssl) = get_ssl_socket(); -like(Net::SSLeay::dump_peer_certificate($ssl), qr/CN=override/, 'Certificate'); +$s = Test::Nginx::IMAP->new( + SSL => 1, + SSL_session_cache_size => 100 +); +$s->read(); + +like($s->socket()->dump_peer_certificate(), qr/CN=override/, 'Certificate'); -my $ses = Net::SSLeay::get_session($ssl); -($s, $ssl) = get_ssl_socket(ses => $ses); -ok(Net::SSLeay::session_reused($ssl), 'SessionTicket'); +$s = Test::Nginx::IMAP->new( + SSL => 1, + SSL_reuse_ctx => $s->socket() +); +ok($s->socket()->get_session_reused(), 'SessionTicket'); -($s, $ssl) = get_ssl_socket(ciphers => - 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384'); -is(Net::SSLeay::get_cipher($ssl), +$s = Test::Nginx::IMAP->new( + SSL => 1, + SSL_cipher_list => + 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384' +); +is($s->socket()->get_cipher(), 'ECDHE-RSA-AES128-GCM-SHA256', 'ServerPreference'); ############################################################################### - -sub get_ssl_socket { - my (%extra) = @_; - - my $s = IO::Socket::INET->new('127.0.0.1:' . 
port(8443)); - my $ssl = Net::SSLeay::new($ctx) or die("Failed to create SSL $!"); - Net::SSLeay::set_session($ssl, $extra{ses}) if $extra{ses}; - Net::SSLeay::set_cipher_list($ssl, $extra{ciphers}) if $extra{ciphers}; - Net::SSLeay::set_fd($ssl, fileno($s)); - Net::SSLeay::connect($ssl) or die("ssl connect"); - return ($s, $ssl); -} - -############################################################################### diff --git a/mail_ssl_session_reuse.t b/mail_ssl_session_reuse.t --- a/mail_ssl_session_reuse.t +++ b/mail_ssl_session_reuse.t @@ -17,6 +17,7 @@ BEGIN { use FindBin; chdir($FindBin::Bin use lib 'lib'; use Test::Nginx; +use Test::Nginx::IMAP; ############################################################################### @@ -25,15 +26,7 @@ select STDOUT; $| = 1; local $SIG{PIPE} = 'IGNORE'; -eval { - require Net::SSLeay; - Net::SSLeay::load_error_strings(); - Net::SSLeay::SSLeay_add_ssl_algorithms(); - Net::SSLeay::randomize(); -}; -plan(skip_all => 'Net::SSLeay not installed') if $@; - -my $t = Test::Nginx->new()->has(qw/mail mail_ssl imap/) +my $t = Test::Nginx->new()->has(qw/mail mail_ssl imap socket_ssl_sslversion/) ->has_daemon('openssl')->plan(7); $t->write_file_expand('nginx.conf', <<'EOF'); @@ -125,8 +118,6 @@ foreach my $name ('localhost') { or die "Can't create certificate for $name: $!\n"; } -my $ctx = Net::SSLeay::CTX_new() or die("Failed to create SSL_CTX $!"); - $t->run(); ############################################################################### @@ -142,6 +133,10 @@ my $ctx = Net::SSLeay::CTX_new() or die( # - only cache off TODO: { +local $TODO = 'no TLSv1.3 sessions, old Net::SSLeay' + if $Net::SSLeay::VERSION < 1.88 && test_tls13(); +local $TODO = 'no TLSv1.3 sessions, old IO::Socket::SSL' + if $IO::Socket::SSL::VERSION < 2.061 && test_tls13(); local $TODO = 'no TLSv1.3 sessions in LibreSSL' if $t->has_module('LibreSSL') && test_tls13(); @@ -165,28 +160,27 @@ is(test_reuse(8999), 0, 'cache off not r ############################################################################### sub test_tls13 { - my ($s, $ssl) = get_ssl_socket(8993); - return (Net::SSLeay::version($ssl) > 0x303); + my $s = Test::Nginx::IMAP->new(SSL => 1); + return ($s->socket()->get_sslversion_int() > 0x303); } sub test_reuse { my ($port) = @_; - my ($s, $ssl) = get_ssl_socket($port); - Net::SSLeay::read($ssl); - my $ses = Net::SSLeay::get_session($ssl); - ($s, $ssl) = get_ssl_socket($port, $ses); - return Net::SSLeay::session_reused($ssl); -} + + my $s = Test::Nginx::IMAP->new( + PeerAddr => '127.0.0.1:' . port($port), + SSL => 1, + SSL_session_cache_size => 100 + ); + $s->read(); -sub get_ssl_socket { - my ($port, $ses) = @_; + $s = Test::Nginx::IMAP->new( + PeerAddr => '127.0.0.1:' . port($port), + SSL => 1, + SSL_reuse_ctx => $s->socket() + ); - my $s = IO::Socket::INET->new('127.0.0.1:' . 
port($port)); - my $ssl = Net::SSLeay::new($ctx) or die("Failed to create SSL $!"); - Net::SSLeay::set_session($ssl, $ses) if defined $ses; - Net::SSLeay::set_fd($ssl, fileno($s)); - Net::SSLeay::connect($ssl) == 1 or return; - return ($s, $ssl); + return $s->socket()->get_session_reused(); } ############################################################################### From mdounin at mdounin.ru Mon Apr 17 03:31:32 2023 From: mdounin at mdounin.ru (=?utf-8?q?Maxim_Dounin?=) Date: Mon, 17 Apr 2023 06:31:32 +0300 Subject: [PATCH 08 of 11] Tests: reworked stream SSL tests to use IO::Socket::SSL In-Reply-To: References: Message-ID: <0103d7cd7b5c46af6364.1681702292@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1681702261 -10800 # Mon Apr 17 06:31:01 2023 +0300 # Node ID 0103d7cd7b5c46af63642dbb02481563662fb3d9 # Parent 072be0b91d77eb9c9ab15c20d4df04efac51106a Tests: reworked stream SSL tests to use IO::Socket::SSL. Relevant infrastructure is provided in Test::Nginx::Stream. This also ensures that SSL handshake and various read operations are guarded with timeouts. The stream_ssl_verify_client.t test uses IO::Socket::SSL::_get_ssl_object() to access the Net::SSLeay object directly, as it seems to be the only way to obtain CA list with IO::Socket::SSL. While not exactly correct, this seems to be good enough for tests. diff --git a/lib/Test/Nginx/Stream.pm b/lib/Test/Nginx/Stream.pm --- a/lib/Test/Nginx/Stream.pm +++ b/lib/Test/Nginx/Stream.pm @@ -38,17 +38,38 @@ sub new { unshift(@_, "PeerAddr") if @_ == 1; - $self->{_socket} = IO::Socket::INET->new( - Proto => "tcp", - PeerAddr => '127.0.0.1', - @_ - ) - or die "Can't connect to nginx: $!\n"; + eval { + local $SIG{ALRM} = sub { die "timeout\n" }; + local $SIG{PIPE} = sub { die "sigpipe\n" }; + alarm(8); + + $self->{_socket} = IO::Socket::INET->new( + Proto => "tcp", + PeerAddr => '127.0.0.1', + @_ + ) + or die "Can't connect to nginx: $!\n"; - if ({@_}->{'SSL'}) { - require IO::Socket::SSL; - IO::Socket::SSL->start_SSL($self->{_socket}, @_) - or die $IO::Socket::SSL::SSL_ERROR . "\n"; + if ({@_}->{'SSL'}) { + require IO::Socket::SSL; + IO::Socket::SSL->start_SSL( + $self->{_socket}, + SSL_verify_mode => + IO::Socket::SSL::SSL_VERIFY_NONE(), + @_ + ) + or die $IO::Socket::SSL::SSL_ERROR . "\n"; + + my $s = $self->{_socket}; + log_in("ssl cipher: " . $s->get_cipher()); + log_in("ssl cert: " . 
$s->peer_certificate('issuer')); + } + + alarm(0); + }; + alarm(0); + if ($@) { + log_in("died: $@"); } $self->{_socket}->autoflush(1); @@ -56,6 +77,11 @@ sub new { return $self; } +sub DESTROY { + my $self = shift; + $self->{_socket}->close(); +} + sub write { my ($self, $message, %extra) = @_; my $s = $self->{_socket}; @@ -135,6 +161,11 @@ sub sockport { return $self->{_socket}->sockport(); } +sub socket { + my ($self) = @_; + $self->{_socket}; +} + ############################################################################### 1; diff --git a/stream_ssl.t b/stream_ssl.t --- a/stream_ssl.t +++ b/stream_ssl.t @@ -19,23 +19,17 @@ BEGIN { use FindBin; chdir($FindBin::Bin use lib 'lib'; use Test::Nginx; +use Test::Nginx::Stream qw/ stream /; ############################################################################### select STDERR; $| = 1; select STDOUT; $| = 1; -eval { - require Net::SSLeay; - Net::SSLeay::load_error_strings(); - Net::SSLeay::SSLeay_add_ssl_algorithms(); - Net::SSLeay::randomize(); -}; -plan(skip_all => 'Net::SSLeay not installed') if $@; - plan(skip_all => 'win32') if $^O eq 'MSWin32'; -my $t = Test::Nginx->new()->has(qw/stream stream_ssl/)->has_daemon('openssl'); +my $t = Test::Nginx->new()->has(qw/stream stream_ssl socket_ssl/) + ->has_daemon('openssl'); $t->plan(5)->write_file_expand('nginx.conf', <<'EOF'); @@ -110,8 +104,6 @@ foreach my $name ('localhost', 'inherits or die "Can't create certificate for $name: $!\n"; } -my $ctx = Net::SSLeay::CTX_new() or die("Failed to create SSL_CTX $!"); - $t->write_file('password', 'localhost'); $t->write_file('password_many', "wrong$CRLF" . "localhost$CRLF"); $t->write_file('password_stream', 'inherits'); @@ -132,38 +124,30 @@ kill 'INT', $p if $@; ############################################################################### -my ($s, $ssl); - -($s, $ssl) = get_ssl_socket(8443); -Net::SSLeay::write($ssl, "GET / HTTP/1.0$CRLF$CRLF"); -like(Net::SSLeay::read($ssl), qr/200 OK/, 'ssl'); - -($s, $ssl) = get_ssl_socket(8444); -Net::SSLeay::write($ssl, "GET / HTTP/1.0$CRLF$CRLF"); -like(Net::SSLeay::read($ssl), qr/200 OK/, 'ssl password many'); - -($s, $ssl) = get_ssl_socket(8445); -Net::SSLeay::write($ssl, "GET / HTTP/1.0$CRLF$CRLF"); -like(Net::SSLeay::read($ssl), qr/200 OK/, 'ssl password fifo'); +like(get(8443), qr/200 OK/, 'ssl'); +like(get(8444), qr/200 OK/, 'ssl password many'); +like(get(8445), qr/200 OK/, 'ssl password fifo'); # ssl_certificate inheritance -($s, $ssl) = get_ssl_socket(8443); -like(Net::SSLeay::dump_peer_certificate($ssl), qr/CN=localhost/, 'CN'); - -($s, $ssl) = get_ssl_socket(8446); -like(Net::SSLeay::dump_peer_certificate($ssl), qr/CN=inherits/, 'CN inner'); +like(cert(8443), qr/CN=localhost/, 'CN'); +like(cert(8446), qr/CN=inherits/, 'CN inner'); ############################################################################### -sub get_ssl_socket { - my ($port) = @_; +sub get { + my $s = get_socket(@_); + return $s->io("GET / HTTP/1.0$CRLF$CRLF"); +} - my $s = IO::Socket::INET->new('127.0.0.1:' . port($port)); - my $ssl = Net::SSLeay::new($ctx) or die("Failed to create SSL $!"); - Net::SSLeay::set_fd($ssl, fileno($s)); - Net::SSLeay::connect($ssl) or die("ssl connect"); - return ($s, $ssl); +sub cert { + my $s = get_socket(@_); + return $s->socket()->dump_peer_certificate(); +} + +sub get_socket { + my ($port) = @_; + return stream(PeerAddr => '127.0.0.1:' . 
port($port), SSL => 1); } ############################################################################### diff --git a/stream_ssl_certificate.t b/stream_ssl_certificate.t --- a/stream_ssl_certificate.t +++ b/stream_ssl_certificate.t @@ -16,29 +16,16 @@ BEGIN { use FindBin; chdir($FindBin::Bin use lib 'lib'; use Test::Nginx; +use Test::Nginx::Stream qw/ stream /; ############################################################################### select STDERR; $| = 1; select STDOUT; $| = 1; -eval { - require Net::SSLeay; - Net::SSLeay::load_error_strings(); - Net::SSLeay::SSLeay_add_ssl_algorithms(); - Net::SSLeay::randomize(); -}; -plan(skip_all => 'Net::SSLeay not installed') if $@; - -eval { - my $ctx = Net::SSLeay::CTX_new() or die; - my $ssl = Net::SSLeay::new($ctx) or die; - Net::SSLeay::set_tlsext_host_name($ssl, 'example.org') == 1 or die; -}; -plan(skip_all => 'Net::SSLeay with OpenSSL SNI support required') if $@; - my $t = Test::Nginx->new() ->has(qw/stream stream_ssl stream_geo stream_return openssl:1.0.2/) + ->has(qw/socket_ssl_sni/) ->has_daemon('openssl') ->write_file_expand('nginx.conf', <<'EOF'); @@ -69,7 +56,7 @@ stream { server { listen 127.0.0.1:8080 ssl; - return $ssl_server_name:$ssl_session_reused; + return $ssl_server_name:$ssl_session_reused:$ssl_protocol; ssl_certificate $one.crt; ssl_certificate_key $one.key; @@ -154,59 +141,63 @@ like(get('password', 8083), qr/password/ # session reuse -my ($s, $ssl) = get('default', 8080); -my $ses = Net::SSLeay::get_session($ssl); +my $s = session('default', 8080); -like(get('default', 8080, $ses), qr/default:r/, 'session reused'); +TODO: { +local $TODO = 'no TLSv1.3 sessions, old Net::SSLeay' + if $Net::SSLeay::VERSION < 1.88 && test_tls13(); +local $TODO = 'no TLSv1.3 sessions, old IO::Socket::SSL' + if $IO::Socket::SSL::VERSION < 2.061 && test_tls13(); + +like(get('default', 8080, $s), qr/default:r/, 'session reused'); TODO: { # ticket key name mismatch prevents session resumption local $TODO = 'not yet' unless $t->has_version('1.23.2'); -like(get('default', 8081, $ses), qr/default:r/, 'session id context match'); +like(get('default', 8081, $s), qr/default:r/, 'session id context match'); } +} -like(get('default', 8082, $ses), qr/default:\./, 'session id context distinct'); +like(get('default', 8082, $s), qr/default:\./, 'session id context distinct'); # errors -Net::SSLeay::ERR_clear_error(); -get_ssl_socket('nx', 8084); -ok(Net::SSLeay::ERR_peek_error(), 'no certificate'); +ok(!get('nx', 8084), 'no certificate'); ############################################################################### sub get { - my ($host, $port, $ctx) = @_; - my ($s, $ssl) = get_ssl_socket($host, $port, $ctx) or return; - - local $SIG{PIPE} = 'IGNORE'; - - my $r = Net::SSLeay::read($ssl); - Net::SSLeay::shutdown($ssl); - $s->close(); - return $r unless wantarray(); - return ($s, $ssl); + my $s = get_socket(@_) || return; + return $s->read(); } sub cert { - my ($host, $port, $ctx) = @_; - my ($s, $ssl) = get_ssl_socket($host, $port, $ctx) or return; - Net::SSLeay::dump_peer_certificate($ssl); + my $s = get_socket(@_) || return; + return $s->socket()->dump_peer_certificate(); +} + +sub session { + my $s = get_socket(@_); + $s->read(); + return $s->socket(); } -sub get_ssl_socket { - my ($host, $port, $ses) = @_; +sub get_socket { + my ($host, $port, $ctx) = @_; + return stream( + PeerAddr => '127.0.0.1:' . 
port($port), + SSL => 1, + SSL_hostname => $host, + SSL_session_cache_size => 100, + SSL_session_key => 1, + SSL_reuse_ctx => $ctx + ); +} - my $s = IO::Socket::INET->new('127.0.0.1:' . port($port)); - my $ctx = Net::SSLeay::CTX_new() or die("Failed to create SSL_CTX $!"); - my $ssl = Net::SSLeay::new($ctx) or die("Failed to create SSL $!"); - Net::SSLeay::set_tlsext_host_name($ssl, $host); - Net::SSLeay::set_session($ssl, $ses) if defined $ses; - Net::SSLeay::set_fd($ssl, fileno($s)); - Net::SSLeay::connect($ssl) or die("ssl connect"); - return ($s, $ssl); +sub test_tls13 { + return get('default', 8080) =~ /TLSv1.3/; } ############################################################################### diff --git a/stream_ssl_conf_command.t b/stream_ssl_conf_command.t --- a/stream_ssl_conf_command.t +++ b/stream_ssl_conf_command.t @@ -16,22 +16,16 @@ BEGIN { use FindBin; chdir($FindBin::Bin use lib 'lib'; use Test::Nginx; +use Test::Nginx::Stream qw/ stream /; ############################################################################### select STDERR; $| = 1; select STDOUT; $| = 1; -eval { - require Net::SSLeay; - Net::SSLeay::load_error_strings(); - Net::SSLeay::SSLeay_add_ssl_algorithms(); - Net::SSLeay::randomize(); -}; -plan(skip_all => 'Net::SSLeay not installed') if $@; - my $t = Test::Nginx->new() ->has(qw/stream stream_ssl stream_return openssl:1.0.2/) + ->has(qw/socket_ssl_reused/) ->has_daemon('openssl'); plan(skip_all => 'no ssl_conf_command') if $t->has_module('BoringSSL'); @@ -92,32 +86,31 @@ foreach my $name ('localhost', 'override ############################################################################### -my $ctx = Net::SSLeay::CTX_new() or die("Failed to create SSL_CTX $!"); +my $s; -my ($s, $ssl) = get_ssl_socket(); -like(Net::SSLeay::dump_peer_certificate($ssl), qr/CN=override/, 'Certificate'); +$s = stream( + PeerAddr => '127.0.0.1:' . port(8443), + SSL => 1, + SSL_session_cache_size => 100 +); +$s->read(); + +like($s->socket()->dump_peer_certificate(), qr/CN=override/, 'Certificate'); -my $ses = Net::SSLeay::get_session($ssl); -($s, $ssl) = get_ssl_socket(ses => $ses); -ok(Net::SSLeay::session_reused($ssl), 'SessionTicket'); +$s = stream( + PeerAddr => '127.0.0.1:' . port(8443), + SSL => 1, + SSL_reuse_ctx => $s->socket() +); +ok($s->socket()->get_session_reused(), 'SessionTicket'); -($s, $ssl) = get_ssl_socket(ciphers => - 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384'); -is(Net::SSLeay::get_cipher($ssl), +$s = stream( + PeerAddr => '127.0.0.1:' . port(8443), + SSL => 1, + SSL_cipher_list => + 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384' +); +is($s->socket()->get_cipher(), 'ECDHE-RSA-AES128-GCM-SHA256', 'ServerPreference'); ############################################################################### - -sub get_ssl_socket { - my (%extra) = @_; - - my $s = IO::Socket::INET->new('127.0.0.1:' . 
port(8443)); - my $ssl = Net::SSLeay::new($ctx) or die("Failed to create SSL $!"); - Net::SSLeay::set_session($ssl, $extra{ses}) if $extra{ses}; - Net::SSLeay::set_cipher_list($ssl, $extra{ciphers}) if $extra{ciphers}; - Net::SSLeay::set_fd($ssl, fileno($s)); - Net::SSLeay::connect($ssl) or die("ssl connect"); - return ($s, $ssl); -} - -############################################################################### diff --git a/stream_ssl_session_reuse.t b/stream_ssl_session_reuse.t --- a/stream_ssl_session_reuse.t +++ b/stream_ssl_session_reuse.t @@ -19,23 +19,17 @@ BEGIN { use FindBin; chdir($FindBin::Bin use lib 'lib'; use Test::Nginx; +use Test::Nginx::Stream qw/ stream /; ############################################################################### select STDERR; $| = 1; select STDOUT; $| = 1; -eval { - require Net::SSLeay; - Net::SSLeay::load_error_strings(); - Net::SSLeay::SSLeay_add_ssl_algorithms(); - Net::SSLeay::randomize(); -}; -plan(skip_all => 'Net::SSLeay not installed') if $@; +my $t = Test::Nginx->new()->has(qw/stream stream_ssl socket_ssl_sslversion/) + ->has_daemon('openssl')->plan(7); -my $t = Test::Nginx->new()->has(qw/stream stream_ssl/)->has_daemon('openssl'); - -$t->plan(7)->write_file_expand('nginx.conf', <<'EOF'); +$t->write_file_expand('nginx.conf', <<'EOF'); %%TEST_GLOBALS%% @@ -124,8 +118,6 @@ foreach my $name ('localhost') { or die "Can't create certificate for $name: $!\n"; } -my $ctx = Net::SSLeay::CTX_new() or die("Failed to create SSL_CTX $!"); - $t->run_daemon(\&http_daemon); $t->run(); @@ -145,6 +137,10 @@ my $ctx = Net::SSLeay::CTX_new() or die( # - only cache off TODO: { +local $TODO = 'no TLSv1.3 sessions, old Net::SSLeay' + if $Net::SSLeay::VERSION < 1.88 && test_tls13(); +local $TODO = 'no TLSv1.3 sessions, old IO::Socket::SSL' + if $IO::Socket::SSL::VERSION < 2.061 && test_tls13(); local $TODO = 'no TLSv1.3 sessions in LibreSSL' if $t->has_module('LibreSSL') && test_tls13(); @@ -168,29 +164,30 @@ is(test_reuse(8449), 0, 'cache off not r ############################################################################### sub test_tls13 { - my ($s, $ssl) = get_ssl_socket(8443); - return (Net::SSLeay::version($ssl) > 0x303); + my $s = stream( + PeerAddr => '127.0.0.1:' . port(8443), + SSL => 1 + ); + return ($s->socket()->get_sslversion_int() > 0x303); } sub test_reuse { my ($port) = @_; - my ($s, $ssl) = get_ssl_socket($port); - Net::SSLeay::write($ssl, "GET / HTTP/1.0$CRLF$CRLF"); - Net::SSLeay::read($ssl); - my $ses = Net::SSLeay::get_session($ssl); - ($s, $ssl) = get_ssl_socket($port, $ses); - return Net::SSLeay::session_reused($ssl); -} + + my $s = stream( + PeerAddr => '127.0.0.1:' . port($port), + SSL => 1, + SSL_session_cache_size => 100 + ); + $s->io("GET / HTTP/1.0$CRLF$CRLF"); -sub get_ssl_socket { - my ($port, $ses) = @_; + $s = stream( + PeerAddr => '127.0.0.1:' . port($port), + SSL => 1, + SSL_reuse_ctx => $s->socket() + ); - my $s = IO::Socket::INET->new('127.0.0.1:' . 
port($port)); - my $ssl = Net::SSLeay::new($ctx) or die("Failed to create SSL $!"); - Net::SSLeay::set_session($ssl, $ses) if defined $ses; - Net::SSLeay::set_fd($ssl, fileno($s)); - Net::SSLeay::connect($ssl) or die("ssl connect"); - return ($s, $ssl); + return $s->socket()->get_session_reused(); } ############################################################################### diff --git a/stream_ssl_variables.t b/stream_ssl_variables.t --- a/stream_ssl_variables.t +++ b/stream_ssl_variables.t @@ -23,22 +23,8 @@ use Test::Nginx::Stream qw/ stream /; select STDERR; $| = 1; select STDOUT; $| = 1; -eval { - require Net::SSLeay; - Net::SSLeay::load_error_strings(); - Net::SSLeay::SSLeay_add_ssl_algorithms(); - Net::SSLeay::randomize(); -}; -plan(skip_all => 'Net::SSLeay not installed') if $@; - -eval { - my $ctx = Net::SSLeay::CTX_new() or die; - my $ssl = Net::SSLeay::new($ctx) or die; - Net::SSLeay::set_tlsext_host_name($ssl, 'example.org') == 1 or die; -}; -plan(skip_all => 'Net::SSLeay with OpenSSL SNI support required') if $@; - -my $t = Test::Nginx->new()->has(qw/stream stream_ssl stream_return/) +my $t = Test::Nginx->new() + ->has(qw/stream stream_ssl stream_return socket_ssl_sni/) ->has_daemon('openssl'); $t->write_file_expand('nginx.conf', <<'EOF'); @@ -59,12 +45,12 @@ stream { server { listen 127.0.0.1:8080; - listen 127.0.0.1:8081 ssl; + listen 127.0.0.1:8443 ssl; return $ssl_session_reused:$ssl_session_id:$ssl_cipher:$ssl_protocol; } server { - listen 127.0.0.1:8082 ssl; + listen 127.0.0.1:8444 ssl; return $ssl_server_name; } } @@ -93,21 +79,32 @@ foreach my $name ('localhost') { ############################################################################### -my ($s, $ssl); +my $s; is(stream('127.0.0.1:' . port(8080))->read(), ':::', 'no ssl'); -($s, $ssl) = get_ssl_socket(port(8081)); -like(Net::SSLeay::read($ssl), qr/^\.:(\w{64})?:[\w-]+:(TLS|SSL)v(\d|\.)+$/, +$s = stream( + PeerAddr => '127.0.0.1:' . port(8443), + SSL => 1, + SSL_session_cache_size => 100 +); +like($s->read(), qr/^\.:(\w{64})?:[\w-]+:(TLS|SSL)v(\d|\.)+$/, 'ssl variables'); TODO: { +local $TODO = 'no TLSv1.3 sessions, old Net::SSLeay' + if $Net::SSLeay::VERSION < 1.88 && test_tls13(); +local $TODO = 'no TLSv1.3 sessions, old IO::Socket::SSL' + if $IO::Socket::SSL::VERSION < 2.061 && test_tls13(); local $TODO = 'no TLSv1.3 sessions in LibreSSL' if $t->has_module('LibreSSL') && test_tls13(); -my $ses = Net::SSLeay::get_session($ssl); -($s, $ssl) = get_ssl_socket(port(8081), $ses); -like(Net::SSLeay::read($ssl), qr/^r:(\w{64})?:[\w-]+:(TLS|SSL)v(\d|\.)+$/, +$s = stream( + PeerAddr => '127.0.0.1:' . port(8443), + SSL => 1, + SSL_reuse_ctx => $s->socket() +); +like($s->read(), qr/^r:(\w{64})?:[\w-]+:(TLS|SSL)v(\d|\.)+$/, 'ssl variables - session reused'); } @@ -115,36 +112,37 @@ like(Net::SSLeay::read($ssl), qr/^r:(\w{ SKIP: { skip 'no sni', 3 unless $t->has_module('sni'); -($s, $ssl) = get_ssl_socket(port(8082), undef, 'example.com'); -is(Net::SSLeay::ssl_read_all($ssl), 'example.com', 'ssl server name'); +$s = stream( + PeerAddr => '127.0.0.1:' . port(8444), + SSL => 1, + SSL_session_cache_size => 100, + SSL_hostname => 'example.com' +); +is($s->read(), 'example.com', 'ssl server name'); -my $ses = Net::SSLeay::get_session($ssl); -($s, $ssl) = get_ssl_socket(port(8082), $ses, 'example.com'); -is(Net::SSLeay::ssl_read_all($ssl), 'example.com', 'ssl server name - reused'); +$s = stream( + PeerAddr => '127.0.0.1:' . 
port(8444), + SSL => 1, + SSL_reuse_ctx => $s->socket(), + SSL_hostname => 'example.com' +); +is($s->read(), 'example.com', 'ssl server name - reused'); -($s, $ssl) = get_ssl_socket(port(8082)); -is(Net::SSLeay::ssl_read_all($ssl), '', 'ssl server name empty'); +$s = stream( + PeerAddr => '127.0.0.1:' . port(8444), + SSL => 1 +); +is($s->read(), '', 'ssl server name empty'); } +undef $s; + ############################################################################### sub test_tls13 { - ($s, $ssl) = get_ssl_socket(port(8081)); - Net::SSLeay::read($ssl) =~ /TLSv1.3/; -} - -sub get_ssl_socket { - my ($port, $ses, $name) = @_; - - my $s = IO::Socket::INET->new('127.0.0.1:' . $port); - my $ctx = Net::SSLeay::CTX_new() or die("Failed to create SSL_CTX $!"); - my $ssl = Net::SSLeay::new($ctx) or die("Failed to create SSL $!"); - Net::SSLeay::set_tlsext_host_name($ssl, $name) if defined $name; - Net::SSLeay::set_session($ssl, $ses) if defined $ses; - Net::SSLeay::set_fd($ssl, fileno($s)); - Net::SSLeay::connect($ssl) or die("ssl connect"); - return ($s, $ssl); + $s = stream(PeerAddr => '127.0.0.1:' . port(8443), SSL => 1); + $s->read() =~ /TLSv1.3/; } ############################################################################### diff --git a/stream_ssl_verify_client.t b/stream_ssl_verify_client.t --- a/stream_ssl_verify_client.t +++ b/stream_ssl_verify_client.t @@ -24,15 +24,7 @@ use Test::Nginx::Stream qw/ stream /; select STDERR; $| = 1; select STDOUT; $| = 1; -eval { - require Net::SSLeay; - Net::SSLeay::load_error_strings(); - Net::SSLeay::SSLeay_add_ssl_algorithms(); - Net::SSLeay::randomize(); -}; -plan(skip_all => 'Net::SSLeay not installed') if $@; - -my $t = Test::Nginx->new()->has(qw/stream stream_ssl stream_return/) +my $t = Test::Nginx->new()->has(qw/stream stream_ssl stream_return socket_ssl/) ->has_daemon('openssl'); $t->write_file_expand('nginx.conf', <<'EOF'); @@ -154,18 +146,23 @@ sub test_tls13 { sub get { my ($port, $cert) = @_; - my $s = IO::Socket::INET->new('127.0.0.1:' . port($port)); - my $ctx = Net::SSLeay::CTX_new() or die("Failed to create SSL_CTX $!"); - Net::SSLeay::set_cert_and_key($ctx, "$d/$cert.crt", "$d/$cert.key") - or die if $cert; - my $ssl = Net::SSLeay::new($ctx) or die("Failed to create SSL $!"); - Net::SSLeay::set_fd($ssl, fileno($s)); - Net::SSLeay::connect($ssl) or die("ssl connect"); + my $s = stream( + PeerAddr => '127.0.0.1:' . port($port), + SSL => 1, + $cert ? ( + SSL_cert_file => "$d/$cert.crt", + SSL_key_file => "$d/$cert.key" + ) : () + ); - my $buf = Net::SSLeay::read($ssl); - log_in($buf); - return $buf unless wantarray(); + return $s->read() unless wantarray(); + # Note: this uses IO::Socket::SSL::_get_ssl_object() internal method. + # While not exactly correct, it looks like there is no other way to + # obtain CA list with IO::Socket::SSL, and this seems to be good + # enough for tests. + + my $ssl = $s->socket()->_get_ssl_object(); my $list = Net::SSLeay::get_client_CA_list($ssl); my @names; for my $i (0 .. 
Net::SSLeay::sk_X509_NAME_num($list) - 1) { From mdounin at mdounin.ru Mon Apr 17 03:31:31 2023 From: mdounin at mdounin.ru (=?utf-8?q?Maxim_Dounin?=) Date: Mon, 17 Apr 2023 06:31:31 +0300 Subject: [PATCH 07 of 11] Tests: simplified mail_imap_ssl.t In-Reply-To: References: Message-ID: <072be0b91d77eb9c9ab1.1681702291@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1681702259 -10800 # Mon Apr 17 06:30:59 2023 +0300 # Node ID 072be0b91d77eb9c9ab15c20d4df04efac51106a # Parent 20d603cd3cbeab89127108fe9cb6dffd0e9469e8 Tests: simplified mail_imap_ssl.t. The test now uses improved IO::Socket::SSL infrastructure in Test::Nginx::IMAP. While here, fixed incorrect port being used for the "trusted cert" test. diff --git a/mail_imap_ssl.t b/mail_imap_ssl.t --- a/mail_imap_ssl.t +++ b/mail_imap_ssl.t @@ -50,12 +50,12 @@ mail { ssl_certificate 1.example.com.crt; server { - listen 127.0.0.1:8142; + listen 127.0.0.1:8143; protocol imap; } server { - listen 127.0.0.1:8143 ssl; + listen 127.0.0.1:8993 ssl; protocol imap; ssl_verify_client on; @@ -63,7 +63,7 @@ mail { } server { - listen 127.0.0.1:8145 ssl; + listen 127.0.0.1:8994 ssl; protocol imap; ssl_verify_client optional; @@ -71,7 +71,7 @@ mail { } server { - listen 127.0.0.1:8146 ssl; + listen 127.0.0.1:8995 ssl; protocol imap; ssl_verify_client optional; @@ -80,7 +80,7 @@ mail { } server { - listen 127.0.0.1:8147 ssl; + listen 127.0.0.1:8996 ssl; protocol imap; ssl_verify_client optional_no_ca; @@ -140,46 +140,41 @@ foreach my $name ('1.example.com', '2.ex ############################################################################### my $cred = sub { encode_base64("\0test\@example.com\0$_[0]", '') }; -my %ssl = ( - SSL => 1, - SSL_verify_mode => IO::Socket::SSL::SSL_VERIFY_NONE(), - SSL_error_trap => sub { die $_[1] }, -); # no ssl connection -my $s = Test::Nginx::IMAP->new(PeerAddr => '127.0.0.1:' . port(8142)); +my $s = Test::Nginx::IMAP->new(); $s->ok('plain connection'); $s->send('1 AUTHENTICATE PLAIN ' . $cred->("s1")); # no cert -$s = Test::Nginx::IMAP->new(PeerAddr => '127.0.0.1:' . port(8143), %ssl); +$s = Test::Nginx::IMAP->new(SSL => 1); $s->check(qr/BYE No required SSL certificate/, 'no cert'); # no cert with ssl_verify_client optional -$s = Test::Nginx::IMAP->new(PeerAddr => '127.0.0.1:' . port(8145), %ssl); +$s = Test::Nginx::IMAP->new(PeerAddr => '127.0.0.1:' . port(8994), SSL => 1); $s->ok('no optional cert'); $s->send('1 AUTHENTICATE PLAIN ' . $cred->("s2")); # wrong cert with ssl_verify_client optional $s = Test::Nginx::IMAP->new( - PeerAddr => '127.0.0.1:' . port(8145), + PeerAddr => '127.0.0.1:' . port(8995), + SSL => 1, SSL_cert_file => "$d/1.example.com.crt", - SSL_key_file => "$d/1.example.com.key", - %ssl, + SSL_key_file => "$d/1.example.com.key" ); $s->check(qr/BYE SSL certificate error/, 'bad optional cert'); # wrong cert with ssl_verify_client optional_no_ca $s = Test::Nginx::IMAP->new( - PeerAddr => '127.0.0.1:' . port(8147), + PeerAddr => '127.0.0.1:' . port(8996), + SSL => 1, SSL_cert_file => "$d/1.example.com.crt", - SSL_key_file => "$d/1.example.com.key", - %ssl, + SSL_key_file => "$d/1.example.com.key" ); $s->ok('bad optional_no_ca cert'); $s->send('1 AUTHENTICATE PLAIN ' . $cred->("s3")); @@ -187,10 +182,10 @@ my $s = Test::Nginx::IMAP->new(PeerAddr # matching cert with ssl_verify_client optional $s = Test::Nginx::IMAP->new( - PeerAddr => '127.0.0.1:' . port(8145), + PeerAddr => '127.0.0.1:' . 
port(8995), + SSL => 1, SSL_cert_file => "$d/2.example.com.crt", - SSL_key_file => "$d/2.example.com.key", - %ssl, + SSL_key_file => "$d/2.example.com.key" ); $s->ok('good cert'); $s->send('1 AUTHENTICATE PLAIN ' . $cred->("s4")); @@ -198,10 +193,10 @@ my $s = Test::Nginx::IMAP->new(PeerAddr # trusted cert with ssl_verify_client optional $s = Test::Nginx::IMAP->new( - PeerAddr => '127.0.0.1:' . port(8146), + PeerAddr => '127.0.0.1:' . port(8995), + SSL => 1, SSL_cert_file => "$d/3.example.com.crt", - SSL_key_file => "$d/3.example.com.key", - %ssl, + SSL_key_file => "$d/3.example.com.key" ); $s->ok('trusted cert'); $s->send('1 AUTHENTICATE PLAIN ' . $cred->("s5")); @@ -211,9 +206,9 @@ my $s = Test::Nginx::IMAP->new(PeerAddr my ($cipher, $sslversion); -$s = get_ssl_socket(8143); -$cipher = $s->get_cipher(); -$sslversion = $s->get_sslversion(); +$s = Test::Nginx::IMAP->new(SSL => 1); +$cipher = $s->socket()->get_cipher(); +$sslversion = $s->socket()->get_sslversion(); $sslversion =~ s/_/./; undef $s; @@ -242,31 +237,3 @@ like($f, qr|^$cipher:$sslversion$|m, 'lo } ############################################################################### - -sub get_ssl_socket { - my ($port) = @_; - my $s; - - eval { - local $SIG{ALRM} = sub { die "timeout\n" }; - local $SIG{PIPE} = sub { die "sigpipe\n" }; - alarm(8); - $s = IO::Socket::SSL->new( - Proto => 'tcp', - PeerAddr => '127.0.0.1:' . port($port), - SSL_verify_mode => IO::Socket::SSL::SSL_VERIFY_NONE(), - SSL_error_trap => sub { die $_[1] } - ); - alarm(0); - }; - alarm(0); - - if ($@) { - log_in("died: $@"); - return undef; - } - - return $s; -} - -############################################################################### From mdounin at mdounin.ru Mon Apr 17 03:31:34 2023 From: mdounin at mdounin.ru (=?utf-8?q?Maxim_Dounin?=) Date: Mon, 17 Apr 2023 06:31:34 +0300 Subject: [PATCH 10 of 11] Tests: reworked http SSL tests to use IO::Socket::SSL In-Reply-To: References: Message-ID: <2aaba5bbc0366bffe1f4.1681702294@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1681702264 -10800 # Mon Apr 17 06:31:04 2023 +0300 # Node ID 2aaba5bbc0366bffe1f468105b1185cd48efbc93 # Parent 90913cb36b512c45cd9a171cbb4320b12ff24b48 Tests: reworked http SSL tests to use IO::Socket::SSL. Relevant infrastructure is provided in Test::Nginx http() functions. This also ensures that SSL handshake and various read and write operations are guarded with timeouts. The ssl_sni_reneg.t test uses IO::Socket::SSL::_get_ssl_object() to access the Net::SSLeay object directly and trigger renegotation. While not exactly correct, this seems to be good enough for tests. Similarly, IO::Socket::SSL::_get_ssl_object() is used in ssl_stapling.t, since SSL_ocsp_staple_callback is called with the socket instead of the Net::SSLeay object. Similarly, IO::Socket::SSL::_get_ssl_object() is used in ssl_verify_client.t, since there seems to be no way to obtain CA list with IO::Socket::SSL. Notable change to http() request interface is that http_end() now closes the socket. This is to make sure that SSL connections are properly closed and SSL sessions are not removed from the IO::Socket::SSL session cache. This affected access_log.t, which was modified accordingly. 
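[Editorial note, for illustration only -- not part of the patch.  A minimal
sketch of how a test script could drive SSL requests through the reworked
http()/http_end() interface described above.  The calls shown (http_get()
with SSL => 1, start => 1, SSL_session_cache_size, SSL_reuse_ctx,
dump_peer_certificate(), get_session_reused(), http_end()) are taken from
the hunks below; the assumption is a server block listening for SSL on
port(8443) and serving "/".]

    use Test::Nginx qw/ :DEFAULT http_end /;

    # One-shot request over SSL: http() connects, wraps the socket with
    # IO::Socket::SSL, sends the request, reads the reply and closes.
    my $r = http_get('/', SSL => 1);

    # Keep the socket open to inspect the connection, then finish the
    # request; http_end() now also closes the socket, so the SSL session
    # remains in the IO::Socket::SSL session cache held by $s.
    my $s = http_get('/', start => 1, SSL => 1,
        SSL_session_cache_size => 100);
    my $cert = $s->dump_peer_certificate();
    http_end($s);

    # Resume the cached session on a new connection.
    my $s2 = http_get('/', start => 1, SSL => 1, SSL_reuse_ctx => $s);
    my $reused = $s2->get_session_reused();
    http_end($s2);

[The session-reuse pattern in the sketch mirrors what ssl_conf_command.t
and ssl_certificate.t do after this change.]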
diff --git a/access_log.t b/access_log.t --- a/access_log.t +++ b/access_log.t @@ -161,11 +161,11 @@ http_get('/varlog?logname=0'); http_get('/varlog?logname=filename'); my $s = http('', start => 1); -http_get('/addr', socket => $s); my $addr = $s->sockhost(); my $port = $s->sockport(); my $saddr = $s->peerhost(); my $sport = $s->peerport(); +http_get('/addr', socket => $s); http_get('/binary'); diff --git a/lib/Test/Nginx.pm b/lib/Test/Nginx.pm --- a/lib/Test/Nginx.pm +++ b/lib/Test/Nginx.pm @@ -838,13 +838,15 @@ sub http($;%) { my $s = http_start($request, %extra); return $s if $extra{start} or !defined $s; - return http_end($s); + return http_end($s, %extra); } sub http_start($;%) { my ($request, %extra) = @_; my $s; + my $port = $extra{SSL} ? 8443 : 8080; + eval { local $SIG{ALRM} = sub { die "timeout\n" }; local $SIG{PIPE} = sub { die "sigpipe\n" }; @@ -852,10 +854,25 @@ sub http_start($;%) { $s = $extra{socket} || IO::Socket::INET->new( Proto => 'tcp', - PeerAddr => '127.0.0.1:' . port(8080) + PeerAddr => '127.0.0.1:' . port($port), + %extra ) or die "Can't connect to nginx: $!\n"; + if ($extra{SSL}) { + require IO::Socket::SSL; + IO::Socket::SSL->start_SSL( + $s, + SSL_verify_mode => + IO::Socket::SSL::SSL_VERIFY_NONE(), + %extra + ) + or die $IO::Socket::SSL::SSL_ERROR . "\n"; + + log_in("ssl cipher: " . $s->get_cipher()); + log_in("ssl cert: " . $s->peer_certificate('issuer')); + } + log_out($request); $s->print($request); @@ -879,7 +896,7 @@ sub http_start($;%) { } sub http_end($;%) { - my ($s) = @_; + my ($s, %extra) = @_; my $reply; eval { @@ -890,6 +907,8 @@ sub http_end($;%) { local $/; $reply = $s->getline(); + $s->close(); + alarm(0); }; alarm(0); diff --git a/ssl_certificate.t b/ssl_certificate.t --- a/ssl_certificate.t +++ b/ssl_certificate.t @@ -17,29 +17,15 @@ use Socket qw/ CRLF /; BEGIN { use FindBin; chdir($FindBin::Bin); } use lib 'lib'; -use Test::Nginx; +use Test::Nginx qw/ :DEFAULT http_end /; ############################################################################### select STDERR; $| = 1; select STDOUT; $| = 1; -eval { - require Net::SSLeay; - Net::SSLeay::load_error_strings(); - Net::SSLeay::SSLeay_add_ssl_algorithms(); - Net::SSLeay::randomize(); -}; -plan(skip_all => 'Net::SSLeay not installed') if $@; - -eval { - my $ctx = Net::SSLeay::CTX_new() or die; - my $ssl = Net::SSLeay::new($ctx) or die; - Net::SSLeay::set_tlsext_host_name($ssl, 'example.org') == 1 or die; -}; -plan(skip_all => 'Net::SSLeay with OpenSSL SNI support required') if $@; - -my $t = Test::Nginx->new()->has(qw/http http_ssl geo openssl:1.0.2/) +my $t = Test::Nginx->new() + ->has(qw/http http_ssl geo openssl:1.0.2 socket_ssl_sni/) ->has_daemon('openssl'); $t->write_file_expand('nginx.conf', <<'EOF'); @@ -67,6 +53,7 @@ http { } add_header X-SSL $ssl_server_name:$ssl_session_reused; + add_header X-SSL-Protocol $ssl_protocol; ssl_session_cache shared:SSL:1m; ssl_session_tickets on; @@ -177,60 +164,63 @@ like(get('password', 8083), qr/password/ # session reuse -my ($s, $ssl) = get('default', 8080); -my $ses = Net::SSLeay::get_session($ssl); - -like(get('default', 8080, $ses), qr/default:r/, 'session reused'); +my $s = session('default', 8080); TODO: { -# ticket key name mismatch prevents session resumption +local $TODO = 'no TLSv1.3 sessions, old Net::SSLeay' + if $Net::SSLeay::VERSION < 1.88 && test_tls13(); +local $TODO = 'no TLSv1.3 sessions, old IO::Socket::SSL' + if $IO::Socket::SSL::VERSION < 2.061 && test_tls13(); + +like(get('default', 8080, $s), qr/default:r/, 'session reused'); + 
+TODO: { +# automatic ticket ticket key name mismatch prevents session resumption local $TODO = 'not yet' unless $t->has_version('1.23.2'); -like(get('default', 8081, $ses), qr/default:r/, 'session id context match'); +like(get('default', 8081, $s), qr/default:r/, 'session id context match'); } +} -like(get('default', 8082, $ses), qr/default:\./, 'session id context distinct'); +like(get('default', 8082, $s), qr/default:\./, 'session id context distinct'); # errors -Net::SSLeay::ERR_clear_error(); -get_ssl_socket('nx', 8084); -ok(Net::SSLeay::ERR_peek_error(), 'no certificate'); +ok(!get('nx', 8084), 'no certificate'); ############################################################################### sub get { - my ($host, $port, $ctx) = @_; - my ($s, $ssl) = get_ssl_socket($host, $port, $ctx) or return; - - local $SIG{PIPE} = 'IGNORE'; - - Net::SSLeay::write($ssl, 'GET / HTTP/1.0' . CRLF . CRLF); - my $r = Net::SSLeay::read($ssl); - Net::SSLeay::shutdown($ssl); - $s->close(); - return $r unless wantarray(); - return ($s, $ssl); + my $s = get_socket(@_) || return; + return http_end($s); } sub cert { - my ($host, $port, $ctx) = @_; - my ($s, $ssl) = get_ssl_socket($host, $port, $ctx) or return; - Net::SSLeay::dump_peer_certificate($ssl); + my $s = get_socket(@_) || return; + return $s->dump_peer_certificate(); +} + +sub session { + my $s = get_socket(@_) || return; + http_end($s); + return $s; } -sub get_ssl_socket { - my ($host, $port, $ses) = @_; +sub get_socket { + my ($host, $port, $ctx) = @_; + return http_get( + '/', start => 1, PeerAddr => '127.0.0.1:' . port($port), + SSL => 1, + SSL_hostname => $host, + SSL_session_cache_size => 100, + SSL_session_key => 1, + SSL_reuse_ctx => $ctx + ); +} - my $s = IO::Socket::INET->new('127.0.0.1:' . port($port)); - my $ctx = Net::SSLeay::CTX_new() or die("Failed to create SSL_CTX $!"); - my $ssl = Net::SSLeay::new($ctx) or die("Failed to create SSL $!"); - Net::SSLeay::set_tlsext_host_name($ssl, $host); - Net::SSLeay::set_session($ssl, $ses) if defined $ses; - Net::SSLeay::set_fd($ssl, fileno($s)); - Net::SSLeay::connect($ssl); - return ($s, $ssl); +sub test_tls13 { + return get('default', 8080) =~ /TLSv1.3/; } ############################################################################### diff --git a/ssl_certificate_perl.t b/ssl_certificate_perl.t --- a/ssl_certificate_perl.t +++ b/ssl_certificate_perl.t @@ -22,23 +22,8 @@ use Test::Nginx; select STDERR; $| = 1; select STDOUT; $| = 1; -eval { - require Net::SSLeay; - Net::SSLeay::load_error_strings(); - Net::SSLeay::SSLeay_add_ssl_algorithms(); - Net::SSLeay::randomize(); -}; -plan(skip_all => 'Net::SSLeay not installed') if $@; - -eval { - my $ctx = Net::SSLeay::CTX_new() or die; - my $ssl = Net::SSLeay::new($ctx) or die; - Net::SSLeay::set_tlsext_host_name($ssl, 'example.org') == 1 or die; -}; -plan(skip_all => 'Net::SSLeay with OpenSSL SNI support required') if $@; - my $t = Test::Nginx->new() - ->has(qw/http http_ssl perl openssl:1.0.2/) + ->has(qw/http http_ssl perl openssl:1.0.2 socket_ssl_sni/) ->has_daemon('openssl'); $t->write_file_expand('nginx.conf', <<'EOF'); @@ -66,7 +51,7 @@ http { '; server { - listen 127.0.0.1:8080 ssl; + listen 127.0.0.1:8443 ssl; server_name localhost; ssl_certificate data:$pem; @@ -98,27 +83,19 @@ foreach my $name ('one', 'two') { ############################################################################### -like(cert('one', 8080), qr/CN=one/, 'certificate'); -like(cert('two', 8080), qr/CN=two/, 'certificate 2'); +like(cert('one'), qr/CN=one/, 'certificate'); 
+like(cert('two'), qr/CN=two/, 'certificate 2'); ############################################################################### sub cert { - my ($host, $port) = @_; - my ($s, $ssl) = get_ssl_socket($host, $port) or return; - Net::SSLeay::dump_peer_certificate($ssl); + my $s = get_socket(@_) || return; + return $s->dump_peer_certificate(); } -sub get_ssl_socket { - my ($host, $port) = @_; - - my $s = IO::Socket::INET->new('127.0.0.1:' . port($port)); - my $ctx = Net::SSLeay::CTX_new() or die("Failed to create SSL_CTX $!"); - my $ssl = Net::SSLeay::new($ctx) or die("Failed to create SSL $!"); - Net::SSLeay::set_tlsext_host_name($ssl, $host); - Net::SSLeay::set_fd($ssl, fileno($s)); - Net::SSLeay::connect($ssl) or die("ssl connect"); - return ($s, $ssl); +sub get_socket { + my $host = shift; + return http_get('/', start => 1, SSL => 1, SSL_hostname => $host); } ############################################################################### diff --git a/ssl_certificates.t b/ssl_certificates.t --- a/ssl_certificates.t +++ b/ssl_certificates.t @@ -22,16 +22,8 @@ use Test::Nginx; select STDERR; $| = 1; select STDOUT; $| = 1; -eval { - require Net::SSLeay; - Net::SSLeay::load_error_strings(); - Net::SSLeay::SSLeay_add_ssl_algorithms(); - Net::SSLeay::randomize(); - Net::SSLeay::SSLeay(); -}; -plan(skip_all => 'Net::SSLeay not installed or too old') if $@; - -my $t = Test::Nginx->new()->has(qw/http http_ssl/)->has_daemon('openssl'); +my $t = Test::Nginx->new()->has(qw/http http_ssl socket_ssl/) + ->has_daemon('openssl'); plan(skip_all => 'no multiple certificates') if $t->has_module('BoringSSL'); @@ -51,8 +43,10 @@ http { ssl_certificate rsa.crt; ssl_ciphers DEFAULT:ECCdraft; + add_header X-SSL-Protocol $ssl_protocol; + server { - listen 127.0.0.1:8080 ssl; + listen 127.0.0.1:8443 ssl; server_name localhost; ssl_certificate_key ec.key; @@ -91,65 +85,54 @@ foreach my $name ('ec', 'rsa') { or die "Can't create certificate for $name: $!\n"; } +$t->write_file('index.html', ''); + $t->run()->plan(2); ############################################################################### -like(get_cert('RSA'), qr/CN=rsa/, 'ssl cert RSA'); -like(get_cert('ECDSA'), qr/CN=ec/, 'ssl cert ECDSA'); +TODO: { +local $TODO = 'broken TLSv1.3 sigalgs in LibreSSL' + if $t->has_module('LibreSSL') && test_tls13(); + +like(cert('RSA'), qr/CN=rsa/, 'ssl cert RSA'); + +} + +like(cert('ECDSA'), qr/CN=ec/, 'ssl cert ECDSA'); ############################################################################### -sub get_version { - my ($s, $ssl) = get_ssl_socket(); - return Net::SSLeay::version($ssl); +sub test_tls13 { + return http_get('/', SSL => 1) =~ /TLSv1.3/; } -sub get_cert { - my ($type) = @_; - $type = 'PSS' if $type eq 'RSA' && get_version() > 0x0303; - my ($s, $ssl) = get_ssl_socket($type); - my $cipher = Net::SSLeay::get_cipher($ssl); - Test::Nginx::log_core('||', "cipher: $cipher"); - return Net::SSLeay::dump_peer_certificate($ssl); +sub cert { + my $s = get_socket(@_) || return; + return $s->dump_peer_certificate(); } -sub get_ssl_socket { +sub get_socket { my ($type) = @_; - my $s; - - eval { - local $SIG{ALRM} = sub { die "timeout\n" }; - local $SIG{PIPE} = sub { die "sigpipe\n" }; - alarm(8); - $s = IO::Socket::INET->new('127.0.0.1:' . 
port(8080)); - alarm(0); - }; - alarm(0); - - if ($@) { - log_in("died: $@"); - return undef; - } - my $ctx = Net::SSLeay::CTX_new() or die("Failed to create SSL_CTX $!"); - - if (defined $type) { + my $ctx_cb = sub { + my $ctx = shift; + return unless defined $type; my $ssleay = Net::SSLeay::SSLeay(); - if ($ssleay < 0x1000200f || $ssleay == 0x20000000) { - Net::SSLeay::CTX_set_cipher_list($ctx, $type) - or die("Failed to set cipher list"); - } else { - # SSL_CTRL_SET_SIGALGS_LIST - Net::SSLeay::CTX_ctrl($ctx, 98, 0, $type . '+SHA256') - or die("Failed to set sigalgs"); - } - } + return if ($ssleay < 0x1000200f || $ssleay == 0x20000000); + my $sigalgs = 'RSA+SHA256:PSS+SHA256'; + $sigalgs = $type . '+SHA256' unless $type eq 'RSA'; + # SSL_CTRL_SET_SIGALGS_LIST + Net::SSLeay::CTX_ctrl($ctx, 98, 0, $sigalgs) + or die("Failed to set sigalgs"); + }; - my $ssl = Net::SSLeay::new($ctx) or die("Failed to create SSL $!"); - Net::SSLeay::set_fd($ssl, fileno($s)); - Net::SSLeay::connect($ssl) or die("ssl connect"); - return ($s, $ssl); + return http_get( + '/', start => 1, + SSL => 1, + SSL_cipher_list => $type, + SSL_create_ctx_callback => $ctx_cb + ); } ############################################################################### diff --git a/ssl_conf_command.t b/ssl_conf_command.t --- a/ssl_conf_command.t +++ b/ssl_conf_command.t @@ -15,22 +15,15 @@ use Test::More; BEGIN { use FindBin; chdir($FindBin::Bin); } use lib 'lib'; -use Test::Nginx; +use Test::Nginx qw/ :DEFAULT http_end /; ############################################################################### select STDERR; $| = 1; select STDOUT; $| = 1; -eval { - require Net::SSLeay; - Net::SSLeay::load_error_strings(); - Net::SSLeay::SSLeay_add_ssl_algorithms(); - Net::SSLeay::randomize(); -}; -plan(skip_all => 'Net::SSLeay not installed') if $@; - -my $t = Test::Nginx->new()->has(qw/http http_ssl openssl:1.0.2/) +my $t = Test::Nginx->new() + ->has(qw/http http_ssl openssl:1.0.2 socket_ssl_reused/) ->has_daemon('openssl'); plan(skip_all => 'no ssl_conf_command') if $t->has_module('BoringSSL'); @@ -91,32 +84,32 @@ foreach my $name ('localhost', 'override ############################################################################### -my $ctx = Net::SSLeay::CTX_new() or die("Failed to create SSL_CTX $!"); +my $s; -my ($s, $ssl) = get_ssl_socket(); -like(Net::SSLeay::dump_peer_certificate($ssl), qr/CN=override/, 'Certificate'); +$s = http_get( + '/', start => 1, + SSL => 1, + SSL_session_cache_size => 100 +); + +like($s->dump_peer_certificate(), qr/CN=override/, 'Certificate'); +http_end($s); -my $ses = Net::SSLeay::get_session($ssl); -($s, $ssl) = get_ssl_socket(ses => $ses); -ok(Net::SSLeay::session_reused($ssl), 'SessionTicket'); +$s = http_get( + '/', start => 1, + SSL => 1, + SSL_reuse_ctx => $s +); + +ok($s->get_session_reused(), 'SessionTicket'); -($s, $ssl) = get_ssl_socket(ciphers => - 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384'); -is(Net::SSLeay::get_cipher($ssl), - 'ECDHE-RSA-AES128-GCM-SHA256', 'ServerPreference'); +$s = http_get( + '/', start => 1, + SSL => 1, + SSL_cipher_list => + 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384' +); + +is($s->get_cipher(), 'ECDHE-RSA-AES128-GCM-SHA256', 'ServerPreference'); ############################################################################### - -sub get_ssl_socket { - my (%extra) = @_; - - my $s = IO::Socket::INET->new('127.0.0.1:' . 
port(8443)); - my $ssl = Net::SSLeay::new($ctx) or die("Failed to create SSL $!"); - Net::SSLeay::set_session($ssl, $extra{ses}) if $extra{ses}; - Net::SSLeay::set_cipher_list($ssl, $extra{ciphers}) if $extra{ciphers}; - Net::SSLeay::set_fd($ssl, fileno($s)); - Net::SSLeay::connect($ssl) or die("ssl connect"); - return ($s, $ssl); -} - -############################################################################### diff --git a/ssl_ocsp.t b/ssl_ocsp.t --- a/ssl_ocsp.t +++ b/ssl_ocsp.t @@ -17,31 +17,15 @@ use MIME::Base64 qw/ decode_base64 /; BEGIN { use FindBin; chdir($FindBin::Bin); } use lib 'lib'; -use Test::Nginx; +use Test::Nginx qw/ :DEFAULT http_end /; ############################################################################### select STDERR; $| = 1; select STDOUT; $| = 1; -eval { - require Net::SSLeay; - Net::SSLeay::load_error_strings(); - Net::SSLeay::SSLeay_add_ssl_algorithms(); - Net::SSLeay::randomize(); - Net::SSLeay::SSLeay(); - defined &Net::SSLeay::set_tlsext_status_type or die; -}; -plan(skip_all => 'Net::SSLeay not installed or too old') if $@; - -eval { - my $ctx = Net::SSLeay::CTX_new() or die; - my $ssl = Net::SSLeay::new($ctx) or die; - Net::SSLeay::set_tlsext_host_name($ssl, 'example.org') == 1 or die; -}; -plan(skip_all => 'Net::SSLeay with OpenSSL SNI support required') if $@; - -my $t = Test::Nginx->new()->has(qw/http http_ssl sni/)->has_daemon('openssl'); +my $t = Test::Nginx->new()->has(qw/http http_ssl sni socket_ssl_sni/) + ->has_daemon('openssl'); plan(skip_all => 'no OCSP support in BoringSSL') if $t->has_module('BoringSSL'); @@ -70,6 +54,7 @@ http { ssl_session_tickets off; add_header X-Verify x${ssl_client_verify}:${ssl_session_reused}x always; + add_header X-SSL-Protocol $ssl_protocol always; server { listen 127.0.0.1:8443 ssl; @@ -283,8 +268,6 @@ foreach my $name ('rsa') { $t->waitforsocket("127.0.0.1:" . port(8081)); $t->waitforsocket("127.0.0.1:" . 
port(8082)); -my $version = get_version(); - ############################################################################### like(get('end'), qr/200 OK.*SUCCESS/s, 'ocsp leaf'); @@ -366,14 +349,17 @@ system("openssl ocsp -index $d/certindex like(get('ec-end'), qr/200 OK.*SUCCESS/s, 'ocsp ecdsa'); -my ($s, $ssl) = get('ec-end'); -my $ses = Net::SSLeay::get_session($ssl); +my $s = session('ec-end'); TODO: { +local $TODO = 'no TLSv1.3 sessions, old Net::SSLeay' + if $Net::SSLeay::VERSION < 1.88 && test_tls13(); +local $TODO = 'no TLSv1.3 sessions, old IO::Socket::SSL' + if $IO::Socket::SSL::VERSION < 2.061 && test_tls13(); local $TODO = 'no TLSv1.3 sessions in LibreSSL' - if $t->has_module('LibreSSL') and $version > 0x303; + if $t->has_module('LibreSSL') && test_tls13(); -like(get('ec-end', ses => $ses), +like(get('ec-end', ses => $s), qr/200 OK.*SUCCESS:r/s, 'session reused'); } @@ -398,10 +384,14 @@ system("openssl ocsp -index $d/certindex # reusing session with revoked certificate TODO: { +local $TODO = 'no TLSv1.3 sessions, old Net::SSLeay' + if $Net::SSLeay::VERSION < 1.88 && test_tls13(); +local $TODO = 'no TLSv1.3 sessions, old IO::Socket::SSL' + if $IO::Socket::SSL::VERSION < 2.061 && test_tls13(); local $TODO = 'no TLSv1.3 sessions in LibreSSL' - if $t->has_module('LibreSSL') and $version > 0x303; + if $t->has_module('LibreSSL') && test_tls13(); -like(get('ec-end', ses => $ses), +like(get('ec-end', ses => $s), qr/400 Bad.*FAILED:certificate revoked:r/s, 'session reused - revoked'); } @@ -417,57 +407,38 @@ like(`grep -F '[crit]' ${\($t->testdir() ############################################################################### sub get { - my ($cert, %extra) = @_; - my ($s, $ssl) = get_ssl_socket($cert, %extra); - my $cipher = Net::SSLeay::get_cipher($ssl); - Test::Nginx::log_core('||', "cipher: $cipher"); - my $host = $extra{sni} ? $extra{sni} : 'localhost'; - local $SIG{PIPE} = 'IGNORE'; - log_out("GET /serial HTTP/1.0\nHost: $host\n\n"); - Net::SSLeay::write($ssl, "GET /serial HTTP/1.0\nHost: $host\n\n"); - my $r = Net::SSLeay::read($ssl); - log_in($r); - $s->close(); - return $r unless wantarray(); - return ($s, $ssl); + my $s = get_socket(@_) || return; + return http_end($s); } -sub get_ssl_socket { +sub session { + my $s = get_socket(@_) || return; + http_end($s); + return $s; +} + +sub get_socket { my ($cert, %extra) = @_; my $ses = $extra{ses}; - my $sni = $extra{sni}; + my $sni = $extra{sni} || 'localhost'; my $port = $extra{port} || 8443; - my $s; - - eval { - local $SIG{ALRM} = sub { die "timeout\n" }; - local $SIG{PIPE} = sub { die "sigpipe\n" }; - alarm(8); - $s = IO::Socket::INET->new('127.0.0.1:' . port($port)); - alarm(0); - }; - alarm(0); - if ($@) { - log_in("died: $@"); - return undef; - } - - my $ctx = Net::SSLeay::CTX_new() or die("Failed to create SSL_CTX $!"); - - Net::SSLeay::set_cert_and_key($ctx, "$d/$cert.crt", "$d/$cert.key") - or die if $cert; - my $ssl = Net::SSLeay::new($ctx) or die("Failed to create SSL $!"); - Net::SSLeay::set_session($ssl, $ses) if defined $ses; - Net::SSLeay::set_tlsext_host_name($ssl, $sni) if $sni; - Net::SSLeay::set_fd($ssl, fileno($s)); - Net::SSLeay::connect($ssl) or die("ssl connect"); - return ($s, $ssl); + return http( + "GET /serial HTTP/1.0\nHost: $sni\n\n", + start => 1, PeerAddr => '127.0.0.1:' . port($port), + SSL => 1, + SSL_hostname => $sni, + SSL_session_cache_size => 100, + SSL_reuse_ctx => $ses, + $cert ? 
( + SSL_cert_file => "$d/$cert.crt", + SSL_key_file => "$d/$cert.key" + ) : () + ); } -sub get_version { - my ($s, $ssl) = get_ssl_socket(); - return Net::SSLeay::version($ssl); +sub test_tls13 { + return http_get('/', SSL => 1) =~ /TLSv1.3/; } ############################################################################### diff --git a/ssl_session_ticket_key.t b/ssl_session_ticket_key.t --- a/ssl_session_ticket_key.t +++ b/ssl_session_ticket_key.t @@ -15,23 +15,19 @@ use Test::More; BEGIN { use FindBin; chdir($FindBin::Bin); } use lib 'lib'; -use Test::Nginx; +use Test::Nginx qw/ :DEFAULT http_end /; ############################################################################### select STDERR; $| = 1; select STDOUT; $| = 1; -eval { - require Net::SSLeay; die if $Net::SSLeay::VERSION < 1.86; - Net::SSLeay::load_error_strings(); - Net::SSLeay::SSLeay_add_ssl_algorithms(); - Net::SSLeay::randomize(); -}; +eval { require Net::SSLeay; die if $Net::SSLeay::VERSION < 1.86; }; plan(skip_all => 'Net::SSLeay version => 1.86 required') if $@; -my $t = Test::Nginx->new()->has(qw/http http_ssl/)->has_daemon('openssl') - ->plan(2)->write_file_expand('nginx.conf', <<'EOF'); +my $t = Test::Nginx->new()->has(qw/http http_ssl socket_ssl/) + ->has_daemon('openssl')->plan(2) + ->write_file_expand('nginx.conf', <<'EOF'); %%TEST_GLOBALS%% @@ -47,8 +43,10 @@ http { ssl_certificate_key localhost.key; ssl_certificate localhost.crt; + add_header X-SSL-Protocol $ssl_protocol; + server { - listen 127.0.0.1:8080 ssl; + listen 127.0.0.1:8443 ssl; server_name localhost; ssl_session_cache shared:SSL:1m; @@ -76,6 +74,8 @@ foreach my $name ('localhost') { or die "Can't create certificate for $name: $!\n"; } +$t->write_file('index.html', ''); + $t->run(); ############################################################################### @@ -105,8 +105,7 @@ cmp_ok(get_ticket_key_name(), 'ne', $key ############################################################################### sub get_ticket_key_name { - my $ses = get_ssl_session(); - my $asn = Net::SSLeay::i2d_SSL_SESSION($ses); + my $asn = get_ssl_session(); my $any = qr/[\x00-\xff]/; next: # tag(10) | len{2} | OCTETSTRING(4) | len{2} | ticket(key_name|..) @@ -119,29 +118,25 @@ next: } sub get_ssl_session { - my ($s, $ssl) = get_ssl_socket(); + my $cache = IO::Socket::SSL::Session_Cache->new(100); - Net::SSLeay::write($ssl, < 1, + SSL => 1, + SSL_session_cache => $cache, + SSL_session_key => 1 + ); -EOF - Net::SSLeay::read($ssl); - Net::SSLeay::get_session($ssl); + return unless $s; + http_end($s); + + my $sess = $cache->get_session(1); + return '' unless defined $sess; + return Net::SSLeay::i2d_SSL_SESSION($sess); } sub test_tls13 { - my ($s, $ssl) = get_ssl_socket(); - return (Net::SSLeay::version($ssl) > 0x303); -} - -sub get_ssl_socket { - my $s = IO::Socket::INET->new('127.0.0.1:' . 
port(8080)); - my $ctx = Net::SSLeay::CTX_new() or die("Failed to create SSL_CTX $!"); - my $ssl = Net::SSLeay::new($ctx) or die("Failed to create SSL $!"); - Net::SSLeay::set_fd($ssl, fileno($s)); - Net::SSLeay::connect($ssl) or die("ssl connect"); - return ($s, $ssl); + return http_get('/', SSL => 1) =~ /TLSv1.3/; } ############################################################################### diff --git a/ssl_sni_reneg.t b/ssl_sni_reneg.t --- a/ssl_sni_reneg.t +++ b/ssl_sni_reneg.t @@ -17,29 +17,15 @@ use Socket qw/ CRLF /; BEGIN { use FindBin; chdir($FindBin::Bin); } use lib 'lib'; -use Test::Nginx; +use Test::Nginx qw/ :DEFAULT http_end /; ############################################################################### select STDERR; $| = 1; select STDOUT; $| = 1; -eval { - require Net::SSLeay; - Net::SSLeay::load_error_strings(); - Net::SSLeay::SSLeay_add_ssl_algorithms(); - Net::SSLeay::randomize(); -}; -plan(skip_all => 'Net::SSLeay not installed') if $@; - -eval { - my $ctx = Net::SSLeay::CTX_new() or die; - my $ssl = Net::SSLeay::new($ctx) or die; - Net::SSLeay::set_tlsext_host_name($ssl, 'example.org') == 1 or die; -}; -plan(skip_all => 'Net::SSLeay with OpenSSL SNI support required') if $@; - -my $t = Test::Nginx->new()->has(qw/http http_ssl/)->has_daemon('openssl'); +my $t = Test::Nginx->new()->has(qw/http http_ssl socket_ssl_sni/) + ->has_daemon('openssl')->plan(8); $t->write_file_expand('nginx.conf', <<'EOF'); @@ -58,15 +44,15 @@ http { ssl_protocols TLSv1.2; server { - listen 127.0.0.1:8080 ssl; - listen 127.0.0.1:8081 ssl; + listen 127.0.0.1:8443 ssl; + listen 127.0.0.1:8444 ssl; server_name localhost; location / { } } server { - listen 127.0.0.1:8081 ssl; + listen 127.0.0.1:8444 ssl; server_name localhost2; location / { } @@ -94,11 +80,12 @@ foreach my $name ('localhost') { } $t->run(); -$t->plan(8); ############################################################################### -my ($s, $ssl) = get_ssl_socket(8080); +my ($s, $ssl); + +$s = http('', start => 1, SSL => 1); ok($s, 'connection'); SKIP: { @@ -106,20 +93,26 @@ skip 'connection failed', 3 unless $s; local $SIG{PIPE} = 'IGNORE'; -Net::SSLeay::write($ssl, 'GET / HTTP/1.0' . CRLF); +$s->print('GET / HTTP/1.0' . CRLF); +# Note: this uses IO::Socket::SSL::_get_ssl_object() internal method. +# While not exactly correct, it looks like there is no other way to +# trigger renegotiation with IO::Socket::SSL, and this seems to be +# good enough for tests. + +$ssl = $s->_get_ssl_object(); ok(Net::SSLeay::renegotiate($ssl), 'renegotiation'); ok(Net::SSLeay::set_tlsext_host_name($ssl, 'localhost'), 'SNI'); -Net::SSLeay::write($ssl, 'Host: localhost' . CRLF . CRLF); +$s->print('Host: localhost' . CRLF . CRLF); -ok(!Net::SSLeay::read($ssl), 'response'); +ok(!http_end($s), 'response'); } # virtual servers -($s, $ssl) = get_ssl_socket(8081); +$s = http('', start => 1, PeerAddr => '127.0.0.1:' . port(8444), SSL => 1); ok($s, 'connection 2'); SKIP: { @@ -127,44 +120,21 @@ skip 'connection failed', 3 unless $s; local $SIG{PIPE} = 'IGNORE'; -Net::SSLeay::write($ssl, 'GET / HTTP/1.0' . CRLF); +$s->print('GET / HTTP/1.0' . CRLF); +# Note: this uses IO::Socket::SSL::_get_ssl_object() internal method. +# While not exactly correct, it looks like there is no other way to +# trigger renegotiation with IO::Socket::SSL, and this seems to be +# good enough for tests. 
+ +$ssl = $s->_get_ssl_object(); ok(Net::SSLeay::renegotiate($ssl), 'renegotiation'); ok(Net::SSLeay::set_tlsext_host_name($ssl, 'localhost'), 'SNI'); -Net::SSLeay::write($ssl, 'Host: localhost' . CRLF . CRLF); +$s->print('Host: localhost' . CRLF . CRLF); -ok(!Net::SSLeay::read($ssl), 'virtual servers'); +ok(!http_end($s), 'virtual servers'); } ############################################################################### - -sub get_ssl_socket { - my ($port) = @_; - my $s; - - eval { - local $SIG{ALRM} = sub { die "timeout\n" }; - local $SIG{PIPE} = sub { die "sigpipe\n" }; - alarm(8); - $s = IO::Socket::INET->new('127.0.0.1:' . port($port)); - alarm(0); - }; - alarm(0); - - if ($@) { - log_in("died: $@"); - return undef; - } - - my $ctx = Net::SSLeay::CTX_new() or die("Failed to create SSL_CTX $!"); - my $ssl = Net::SSLeay::new($ctx) or die("Failed to create SSL $!"); - Net::SSLeay::set_fd($ssl, fileno($s)); - Net::SSLeay::set_tlsext_host_name($ssl, 'localhost'); - Net::SSLeay::connect($ssl) or die("ssl connect"); - - return ($s, $ssl); -} - -############################################################################### diff --git a/ssl_stapling.t b/ssl_stapling.t --- a/ssl_stapling.t +++ b/ssl_stapling.t @@ -24,17 +24,13 @@ use Test::Nginx; select STDERR; $| = 1; select STDOUT; $| = 1; -eval { - require Net::SSLeay; - Net::SSLeay::load_error_strings(); - Net::SSLeay::SSLeay_add_ssl_algorithms(); - Net::SSLeay::randomize(); - Net::SSLeay::SSLeay(); - defined &Net::SSLeay::set_tlsext_status_type or die; -}; -plan(skip_all => 'Net::SSLeay not installed or too old') if $@; +my $t = Test::Nginx->new()->has(qw/http http_ssl socket_ssl/) + ->has_daemon('openssl'); -my $t = Test::Nginx->new()->has(qw/http http_ssl/)->has_daemon('openssl'); +eval { defined &Net::SSLeay::set_tlsext_status_type or die; }; +plan(skip_all => 'Net::SSLeay too old') if $@; +eval { defined &IO::Socket::SSL::SSL_OCSP_TRY_STAPLE or die; }; +plan(skip_all => 'IO::Socket::SSL too old') if $@; plan(skip_all => 'no OCSP stapling') if $t->has_module('BoringSSL'); @@ -246,8 +242,6 @@ system("openssl ocsp -index $d/certindex ############################################################################### -my $version = get_version(); - staple(8443, 'RSA'); staple(8443, 'ECDSA'); staple(8444, 'RSA'); @@ -262,7 +256,7 @@ ok(!staple(8443, 'RSA'), 'staple revoked TODO: { local $TODO = 'broken TLSv1.3 sigalgs in LibreSSL' - if $t->has_module('LibreSSL') && $version > 0x303; + if $t->has_module('LibreSSL') && test_tls13(); ok(staple(8443, 'ECDSA'), 'staple success'); @@ -272,7 +266,7 @@ ok(!staple(8444, 'RSA'), 'responder revo TODO: { local $TODO = 'broken TLSv1.3 sigalgs in LibreSSL' - if $t->has_module('LibreSSL') && $version > 0x303; + if $t->has_module('LibreSSL') && test_tls13(); ok(staple(8444, 'ECDSA'), 'responder success'); @@ -289,7 +283,7 @@ ok(!staple(8449, 'ECDSA'), 'ocsp error') TODO: { local $TODO = 'broken TLSv1.3 sigalgs in LibreSSL' - if $t->has_module('LibreSSL') && $version > 0x303; + if $t->has_module('LibreSSL') && test_tls13(); like(`grep -F '[crit]' ${\($t->testdir())}/error.log`, qr/^$/s, 'no crit'); @@ -302,9 +296,16 @@ sub staple { my (@resp); my $staple_cb = sub { - my ($ssl, $resp) = @_; + my ($s, $resp) = @_; push @resp, !!$resp; return 1 unless $resp; + + # Contrary to the documentation, IO::Socket::SSL calls the + # SSL_ocsp_staple_callback with the socket, and not the + # Net::SSLeay object. 
+ + my $ssl = $s->_get_ssl_object(); + my $cert = Net::SSLeay::get_peer_certificate($ssl); my $certid = eval { Net::SSLeay::OCSP_cert2ids($ssl, $cert) } or do { die "no OCSP_CERTID for certificate: $@"; }; @@ -313,69 +314,34 @@ sub staple { push @resp, $res[0][2]->{'statusType'}; }; - my $s; - - eval { - local $SIG{ALRM} = sub { die "timeout\n" }; - local $SIG{PIPE} = sub { die "sigpipe\n" }; - alarm(8); - $s = IO::Socket::INET->new('127.0.0.1:' . port($port)); - alarm(0); + my $ctx_cb = sub { + my $ctx = shift; + return unless defined $ciphers; + my $ssleay = Net::SSLeay::SSLeay(); + return if ($ssleay < 0x1000200f || $ssleay == 0x20000000); + my $sigalgs = 'RSA+SHA256:PSS+SHA256'; + $sigalgs = $ciphers . '+SHA256' unless $ciphers eq 'RSA'; + # SSL_CTRL_SET_SIGALGS_LIST + Net::SSLeay::CTX_ctrl($ctx, 98, 0, $sigalgs) + or die("Failed to set sigalgs"); }; - alarm(0); - - if ($@) { - log_in("died: $@"); - return undef; - } - - my $ctx = Net::SSLeay::CTX_new() or die("Failed to create SSL_CTX $!"); - my $ssleay = Net::SSLeay::SSLeay(); - if ($ssleay < 0x1000200f || $ssleay == 0x20000000) { - Net::SSLeay::CTX_set_cipher_list($ctx, $ciphers) - or die("Failed to set cipher list"); - } else { - # SSL_CTRL_SET_SIGALGS_LIST - $ciphers = 'PSS' if $ciphers eq 'RSA' && $version > 0x0303; - Net::SSLeay::CTX_ctrl($ctx, 98, 0, $ciphers . '+SHA256') - or die("Failed to set sigalgs"); - } + my $s = http_get( + '/', start => 1, PeerAddr => '127.0.0.1:' . port($port), + SSL => 1, + SSL_cipher_list => $ciphers, + SSL_create_ctx_callback => $ctx_cb, + SSL_ocsp_staple_callback => $staple_cb, + SSL_ocsp_mode => IO::Socket::SSL::SSL_OCSP_TRY_STAPLE(), + SSL_ca_file => $ca + ); - Net::SSLeay::CTX_load_verify_locations($ctx, $ca || '', ''); - Net::SSLeay::CTX_set_tlsext_status_cb($ctx, $staple_cb); - my $ssl = Net::SSLeay::new($ctx) or die("Failed to create SSL $!"); - Net::SSLeay::set_tlsext_status_type($ssl, - Net::SSLeay::TLSEXT_STATUSTYPE_ocsp()); - Net::SSLeay::set_fd($ssl, fileno($s)); - Net::SSLeay::connect($ssl) or die("ssl connect"); - + return $s unless $s; return join ' ', @resp; } -sub get_version { - my $s; - - eval { - local $SIG{ALRM} = sub { die "timeout\n" }; - local $SIG{PIPE} = sub { die "sigpipe\n" }; - alarm(8); - $s = IO::Socket::INET->new('127.0.0.1:' . 
port(8443)); - alarm(0); - }; - alarm(0); - - if ($@) { - log_in("died: $@"); - return undef; - } - - my $ctx = Net::SSLeay::CTX_new() or die("Failed to create SSL_CTX $!"); - my $ssl = Net::SSLeay::new($ctx) or die("Failed to create SSL $!"); - Net::SSLeay::set_fd($ssl, fileno($s)); - Net::SSLeay::connect($ssl) or die("ssl connect"); - - Net::SSLeay::version($ssl); +sub test_tls13 { + return http_get('/', start => 1, SSL => 1) =~ /TLSv1.3/; } ############################################################################### diff --git a/ssl_verify_client.t b/ssl_verify_client.t --- a/ssl_verify_client.t +++ b/ssl_verify_client.t @@ -17,29 +17,14 @@ use Socket qw/ CRLF /; BEGIN { use FindBin; chdir($FindBin::Bin); } use lib 'lib'; -use Test::Nginx; +use Test::Nginx qw/ :DEFAULT http_end /; ############################################################################### select STDERR; $| = 1; select STDOUT; $| = 1; -eval { - require Net::SSLeay; - Net::SSLeay::load_error_strings(); - Net::SSLeay::SSLeay_add_ssl_algorithms(); - Net::SSLeay::randomize(); -}; -plan(skip_all => 'Net::SSLeay not installed') if $@; - -eval { - my $ctx = Net::SSLeay::CTX_new() or die; - my $ssl = Net::SSLeay::new($ctx) or die; - Net::SSLeay::set_tlsext_host_name($ssl, 'example.org') == 1 or die; -}; -plan(skip_all => 'Net::SSLeay with OpenSSL SNI support required') if $@; - -my $t = Test::Nginx->new()->has(qw/http http_ssl sni/) +my $t = Test::Nginx->new()->has(qw/http http_ssl sni socket_ssl_sni/) ->has_daemon('openssl')->plan(13); $t->write_file_expand('nginx.conf', <<'EOF'); @@ -72,7 +57,7 @@ http { } server { - listen 127.0.0.1:8081 ssl; + listen 127.0.0.1:8443 ssl; server_name on; ssl_certificate_key 1.example.com.key; @@ -83,7 +68,7 @@ http { } server { - listen 127.0.0.1:8081 ssl; + listen 127.0.0.1:8443 ssl; server_name optional; ssl_certificate_key 1.example.com.key; @@ -95,7 +80,7 @@ http { } server { - listen 127.0.0.1:8081 ssl; + listen 127.0.0.1:8443 ssl; server_name off; ssl_certificate_key 1.example.com.key; @@ -107,7 +92,7 @@ http { } server { - listen 127.0.0.1:8081 ssl; + listen 127.0.0.1:8443 ssl; server_name optional.no.ca; ssl_certificate_key 1.example.com.key; @@ -118,7 +103,7 @@ http { } server { - listen 127.0.0.1:8081 ssl; + listen 127.0.0.1:8443 ssl; server_name no.context; ssl_verify_client on; @@ -191,25 +176,28 @@ sub test_tls13 { sub get { my ($sni, $cert, $host) = @_; - local $SIG{PIPE} = 'IGNORE'; - $host = $sni if !defined $host; - my $s = IO::Socket::INET->new('127.0.0.1:' . port(8081)); - my $ctx = Net::SSLeay::CTX_new() or die("Failed to create SSL_CTX $!"); - Net::SSLeay::set_cert_and_key($ctx, "$d/$cert.crt", "$d/$cert.key") - or die if $cert; - my $ssl = Net::SSLeay::new($ctx) or die("Failed to create SSL $!"); - Net::SSLeay::set_tlsext_host_name($ssl, $sni) == 1 or die; - Net::SSLeay::set_fd($ssl, fileno($s)); - Net::SSLeay::connect($ssl) or die("ssl connect"); + my $s = http( + "GET /t HTTP/1.0" . CRLF . + "Host: $host" . CRLF . CRLF, + start => 1, + SSL => 1, + SSL_hostname => $sni, + $cert ? ( + SSL_cert_file => "$d/$cert.crt", + SSL_key_file => "$d/$cert.key" + ) : () + ); - Net::SSLeay::write($ssl, 'GET /t HTTP/1.0' . CRLF); - Net::SSLeay::write($ssl, "Host: $host" . CRLF . CRLF); - my $buf = Net::SSLeay::read($ssl); - log_in($buf); - return $buf unless wantarray(); + return http_end($s) unless wantarray(); + # Note: this uses IO::Socket::SSL::_get_ssl_object() internal method. 
+ # While not exactly correct, it looks like there is no other way to + # obtain CA list with IO::Socket::SSL, and this seems to be good + # enough for tests. + + my $ssl = $s->_get_ssl_object(); my $list = Net::SSLeay::get_client_CA_list($ssl); my @names; for my $i (0 .. Net::SSLeay::sk_X509_NAME_num($list) - 1) { From mdounin at mdounin.ru Mon Apr 17 03:31:35 2023 From: mdounin at mdounin.ru (=?utf-8?q?Maxim_Dounin?=) Date: Mon, 17 Apr 2023 06:31:35 +0300 Subject: [PATCH 11 of 11] Tests: simplified http SSL tests with IO::Socket::SSL In-Reply-To: References: Message-ID: # HG changeset patch # User Maxim Dounin # Date 1681702266 -10800 # Mon Apr 17 06:31:06 2023 +0300 # Node ID cfedbdf904c75695664c6fb6826e310ff6f510de # Parent 2aaba5bbc0366bffe1f468105b1185cd48efbc93 Tests: simplified http SSL tests with IO::Socket::SSL. The http SSL tests which previously used IO::Socket::SSL were converted to use improved IO::Socket::SSL infrastructure in Test::Nginx. diff --git a/ssl.t b/ssl.t --- a/ssl.t +++ b/ssl.t @@ -14,6 +14,7 @@ use strict; use Test::More; use Socket qw/ CRLF /; +use IO::Select; BEGIN { use FindBin; chdir($FindBin::Bin); } @@ -278,11 +279,9 @@ sub test_tls13 { } sub get { - my ($uri, $port, $ctx) = @_; - my $s = get_ssl_socket($port, $ctx) or return; - my $r = http_get($uri, socket => $s); - $s->close(); - return $r; + my ($uri, $port, $ctx, %extra) = @_; + my $s = get_ssl_socket($port, $ctx, %extra) or return; + return http_get($uri, socket => $s); } sub get_body { @@ -297,16 +296,16 @@ sub get_body { http($chs . CRLF . $body x $len . CRLF, socket => $s, start => 1) for 1 .. $n; my $r = http("0" . CRLF . CRLF, socket => $s); - $s->close(); return $r; } sub cert { my ($uri, $port) = @_; - my $s = get_ssl_socket($port, undef, + return get( + $uri, $port, undef, SSL_cert_file => "$d/subject.crt", - SSL_key_file => "$d/subject.key") or return; - http_get($uri, socket => $s); + SSL_key_file => "$d/subject.key" + ); } sub get_ssl_context { @@ -318,45 +317,32 @@ sub get_ssl_context { sub get_ssl_socket { my ($port, $ctx, %extra) = @_; - my $s; - - eval { - local $SIG{ALRM} = sub { die "timeout\n" }; - local $SIG{PIPE} = sub { die "sigpipe\n" }; - alarm(8); - $s = IO::Socket::SSL->new( - Proto => 'tcp', - PeerAddr => '127.0.0.1', - PeerPort => port($port), - SSL_verify_mode => IO::Socket::SSL::SSL_VERIFY_NONE(), - SSL_reuse_ctx => $ctx, - SSL_error_trap => sub { die $_[1] }, - %extra - ); - alarm(0); - }; - alarm(0); - - if ($@) { - log_in("died: $@"); - return undef; - } - - return $s; + return http( + '', PeerAddr => '127.0.0.1:' . port($port), start => 1, + SSL => 1, + SSL_reuse_ctx => $ctx, + %extra + ); } sub get_ssl_shutdown { my ($port) = @_; - my $s = IO::Socket::INET->new('127.0.0.1:' . port($port)); - my $ctx = Net::SSLeay::CTX_new() or die("Failed to create SSL_CTX $!"); - my $ssl = Net::SSLeay::new($ctx) or die("Failed to create SSL $!"); - Net::SSLeay::set_fd($ssl, fileno($s)); - Net::SSLeay::connect($ssl) or die("ssl connect"); - Net::SSLeay::write($ssl, 'GET /' . CRLF . 'extra'); - Net::SSLeay::read($ssl); - Net::SSLeay::set_shutdown($ssl, 1); - Net::SSLeay::shutdown($ssl); + my $s = http( + 'GET /' . CRLF . 'extra', + PeerAddr => '127.0.0.1:' . 
port($port), start => 1, + SSL => 1 + ); + + $s->blocking(0); + while (IO::Select->new($s)->can_read(8)) { + my $n = $s->sysread(my $buf, 16384); + next if !defined $n && $!{EWOULDBLOCK}; + last; + } + $s->blocking(1); + + return $s->stop_SSL(); } ############################################################################### diff --git a/ssl_certificate_chain.t b/ssl_certificate_chain.t --- a/ssl_certificate_chain.t +++ b/ssl_certificate_chain.t @@ -133,41 +133,27 @@ system("openssl ca -batch -config $d/ca. ############################################################################### -is(get_ssl_socket(port(8080)), undef, 'incomplete chain'); -ok(get_ssl_socket(port(8081)), 'intermediate'); -ok(get_ssl_socket(port(8082)), 'intermediate server'); +ok(!get_ssl_socket(8080), 'incomplete chain'); +ok(get_ssl_socket(8081), 'intermediate'); +ok(get_ssl_socket(8082), 'intermediate server'); ############################################################################### sub get_ssl_socket { my ($port) = @_; - my ($s, $verify); + my ($verify); - eval { - local $SIG{ALRM} = sub { die "timeout\n" }; - local $SIG{PIPE} = sub { die "sigpipe\n" }; - alarm(8); - $s = IO::Socket::SSL->new( - Proto => 'tcp', - PeerAddr => '127.0.0.1', - PeerPort => $port, - SSL_verify_mode => IO::Socket::SSL::SSL_VERIFY_PEER(), - SSL_ca_file => "$d/root.crt", - SSL_verify_callback => sub { - my ($ok) = @_; - $verify = $ok; - return $ok; - }, - SSL_error_trap => sub { die $_[1] } - ); - alarm(0); - }; - alarm(0); - - if ($@) { - log_in("died: $@"); - return undef; - } + http( + '', PeerAddr => '127.0.0.1:' . port($port), start => 1, + SSL => 1, + SSL_verify_mode => IO::Socket::SSL::SSL_VERIFY_PEER(), + SSL_ca_file => "$d/root.crt", + SSL_verify_callback => sub { + my ($ok) = @_; + $verify = $ok; + return $ok; + } + ); return $verify; } diff --git a/ssl_client_escaped_cert.t b/ssl_client_escaped_cert.t --- a/ssl_client_escaped_cert.t +++ b/ssl_client_escaped_cert.t @@ -91,31 +91,12 @@ is($escaped, $cert, 'ssl_client_escaped_ sub cert { my ($uri) = @_; - my $s; - - eval { - local $SIG{ALRM} = sub { die "timeout\n" }; - local $SIG{PIPE} = sub { die "sigpipe\n" }; - alarm(8); - $s = IO::Socket::SSL->new( - Proto => 'tcp', - PeerAddr => '127.0.0.1', - PeerPort => port(8443), - SSL_verify_mode => IO::Socket::SSL::SSL_VERIFY_NONE(), - SSL_cert_file => "$d/localhost.crt", - SSL_key_file => "$d/localhost.key", - SSL_error_trap => sub { die $_[1] }, - ); - alarm(0); - }; - alarm(0); - - if ($@) { - log_in("died: $@"); - return undef; - } - - http_get($uri, socket => $s); + return http_get( + $uri, + SSL => 1, + SSL_cert_file => "$d/localhost.crt", + SSL_key_file => "$d/localhost.key" + ); } ############################################################################### diff --git a/ssl_crl.t b/ssl_crl.t --- a/ssl_crl.t +++ b/ssl_crl.t @@ -162,37 +162,12 @@ like(get(8082, 'end'), qr/FAILED/, 'crl sub get { my ($port, $cert) = @_; - my $s = get_ssl_socket($port, $cert) or return; - http_get('/t', socket => $s); -} - -sub get_ssl_socket { - my ($port, $cert) = @_; - my ($s); - - eval { - local $SIG{ALRM} = sub { die "timeout\n" }; - local $SIG{PIPE} = sub { die "sigpipe\n" }; - alarm(8); - $s = IO::Socket::SSL->new( - Proto => 'tcp', - PeerAddr => '127.0.0.1', - PeerPort => port($port), - SSL_verify_mode => IO::Socket::SSL::SSL_VERIFY_NONE(), - SSL_cert_file => "$d/$cert.crt", - SSL_key_file => "$d/$cert.key", - SSL_error_trap => sub { die $_[1] } - ); - alarm(0); - }; - alarm(0); - - if ($@) { - log_in("died: $@"); - return undef; - 
} - - return $s; + http_get( + '/t', PeerAddr => '127.0.0.1:' . port($port), + SSL => 1, + SSL_cert_file => "$d/$cert.crt", + SSL_key_file => "$d/$cert.key" + ); } ############################################################################### diff --git a/ssl_curve.t b/ssl_curve.t --- a/ssl_curve.t +++ b/ssl_curve.t @@ -75,43 +75,6 @@ foreach my $name ('localhost') { ############################################################################### -like(get('/curve'), qr/^prime256v1 /m, 'ssl curve'); +like(http_get('/curve', SSL => 1), qr/^prime256v1 /m, 'ssl curve'); ############################################################################### - -sub get { - my ($uri, $port, $ctx) = @_; - my $s = get_ssl_socket($port) or return; - my $r = http_get($uri, socket => $s); - $s->close(); - return $r; -} - -sub get_ssl_socket { - my ($port, $ctx) = @_; - my $s; - - eval { - local $SIG{ALRM} = sub { die "timeout\n" }; - local $SIG{PIPE} = sub { die "sigpipe\n" }; - alarm(8); - $s = IO::Socket::SSL->new( - Proto => 'tcp', - PeerAddr => '127.0.0.1', - PeerPort => port(8443), - SSL_verify_mode => IO::Socket::SSL::SSL_VERIFY_NONE(), - SSL_error_trap => sub { die $_[1] }, - ); - alarm(0); - }; - alarm(0); - - if ($@) { - log_in("died: $@"); - return undef; - } - - return $s; -} - -############################################################################### diff --git a/ssl_password_file.t b/ssl_password_file.t --- a/ssl_password_file.t +++ b/ssl_password_file.t @@ -49,7 +49,7 @@ http { ssl_password_file password_http; server { - listen 127.0.0.1:8081 ssl; + listen 127.0.0.1:8443 ssl; listen 127.0.0.1:8080; server_name localhost; @@ -132,33 +132,6 @@ is($@, '', 'ssl_password_file works'); # simple tests to ensure that nothing broke with ssl_password_file directive like(http_get('/'), qr/200 OK.*http/ms, 'http'); -like(http_get('/', socket => get_ssl_socket()), qr/200 OK.*https/ms, 'https'); +like(http_get('/', SSL => 1), qr/200 OK.*https/ms, 'https'); ############################################################################### - -sub get_ssl_socket { - my $s; - - eval { - local $SIG{ALRM} = sub { die "timeout\n" }; - local $SIG{PIPE} = sub { die "sigpipe\n" }; - alarm(8); - $s = IO::Socket::SSL->new( - Proto => 'tcp', - PeerAddr => '127.0.0.1:' . 
port(8081), - SSL_verify_mode => IO::Socket::SSL::SSL_VERIFY_NONE(), - SSL_error_trap => sub { die $_[1] } - ); - alarm(0); - }; - alarm(0); - - if ($@) { - log_in("died: $@"); - return undef; - } - - return $s; -} - -############################################################################### diff --git a/ssl_proxy_protocol.t b/ssl_proxy_protocol.t --- a/ssl_proxy_protocol.t +++ b/ssl_proxy_protocol.t @@ -148,24 +148,7 @@ sub pp_get { my $s = http($proxy, start => 1); - eval { - local $SIG{ALRM} = sub { die "timeout\n" }; - local $SIG{PIPE} = sub { die "sigpipe\n" }; - alarm(8); - IO::Socket::SSL->start_SSL($s, - SSL_verify_mode => IO::Socket::SSL::SSL_VERIFY_NONE(), - SSL_error_trap => sub { die $_[1] } - ); - alarm(0); - }; - alarm(0); - - if ($@) { - log_in("died: $@"); - return undef; - } - - return http(< $s); + return http(< $s, SSL => 1); GET $url HTTP/1.0 Host: localhost diff --git a/ssl_reject_handshake.t b/ssl_reject_handshake.t --- a/ssl_reject_handshake.t +++ b/ssl_reject_handshake.t @@ -136,44 +136,14 @@ like(get('virtual2', 8082), qr/unrecogni sub get { my ($host, $port) = @_; - my $s = get_ssl_socket($host, $port) or return $@; - $host = 'localhost' if !defined $host; - my $r = http(< $s); -GET / HTTP/1.0 -Host: $host - -EOF - - $s->close(); + my $r = http( + "GET / HTTP/1.0\nHost: " . ($host || 'localhost') . "\n\n", + PeerAddr => '127.0.0.1:' . port($port), + SSL => 1, + SSL_hostname => $host + ) + or return "$@"; return $r; } -sub get_ssl_socket { - my ($host, $port) = @_; - my $s; - - eval { - local $SIG{ALRM} = sub { die "timeout\n" }; - local $SIG{PIPE} = sub { die "sigpipe\n" }; - alarm(8); - $s = IO::Socket::SSL->new( - Proto => 'tcp', - PeerAddr => '127.0.0.1', - PeerPort => port($port), - SSL_hostname => $host, - SSL_verify_mode => IO::Socket::SSL::SSL_VERIFY_NONE(), - SSL_error_trap => sub { die $_[1] }, - ); - alarm(0); - }; - alarm(0); - - if ($@) { - log_in("died: $@"); - return undef; - } - - return $s; -} - ############################################################################### diff --git a/ssl_session_reuse.t b/ssl_session_reuse.t --- a/ssl_session_reuse.t +++ b/ssl_session_reuse.t @@ -16,7 +16,7 @@ use Test::More; BEGIN { use FindBin; chdir($FindBin::Bin); } use lib 'lib'; -use Test::Nginx; +use Test::Nginx qw/ :DEFAULT http_end /; ############################################################################### @@ -192,58 +192,26 @@ like(`grep -F '[crit]' ${\($t->testdir() ############################################################################### sub test_tls13 { - return get('/protocol', 8443) =~ /TLSv1.3/; + return http_get('/protocol', SSL => 1) =~ /TLSv1.3/; } sub test_reuse { my ($port) = @_; - my $ctx = get_ssl_context(); - get('/', $port, $ctx); - return (get('/', $port, $ctx) =~ qr/^body r$/m) ? 1 : 0; -} -sub get { - my ($uri, $port, $ctx) = @_; - my $s = get_ssl_socket($port, $ctx) or return; - my $r = http_get($uri, socket => $s); - $s->close(); - return $r; -} - -sub get_ssl_context { - return IO::Socket::SSL::SSL_Context->new( - SSL_verify_mode => IO::Socket::SSL::SSL_VERIFY_NONE(), + my $s = http_get( + '/', PeerAddr => '127.0.0.1:' . 
port($port), start => 1, + SSL => 1, SSL_session_cache_size => 100 ); -} - -sub get_ssl_socket { - my ($port, $ctx, %extra) = @_; - my $s; + http_end($s); - eval { - local $SIG{ALRM} = sub { die "timeout\n" }; - local $SIG{PIPE} = sub { die "sigpipe\n" }; - alarm(8); - $s = IO::Socket::SSL->new( - Proto => 'tcp', - PeerAddr => '127.0.0.1', - PeerPort => port($port), - SSL_verify_mode => IO::Socket::SSL::SSL_VERIFY_NONE(), - SSL_reuse_ctx => $ctx, - SSL_error_trap => sub { die $_[1] }, - %extra - ); - alarm(0); - }; - alarm(0); + my $r = http_get( + '/', PeerAddr => '127.0.0.1:' . port($port), + SSL => 1, + SSL_reuse_ctx => $s + ); - if ($@) { - log_in("died: $@"); - return undef; - } - - return $s; + return ($r =~ qr/^body r$/m) ? 1 : 0; } ############################################################################### diff --git a/ssl_sni.t b/ssl_sni.t --- a/ssl_sni.t +++ b/ssl_sni.t @@ -37,42 +37,34 @@ http { %%TEST_GLOBALS_HTTP%% server { - listen 127.0.0.1:8080 ssl; + listen 127.0.0.1:8443 ssl; server_name localhost; ssl_certificate_key localhost.key; ssl_certificate localhost.crt; location / { - return 200 $server_name; + return 200 $server_name:$ssl_server_name; } location /protocol { return 200 $ssl_protocol; } + + location /name { + return 200 $ssl_session_reused:$ssl_server_name; + } } server { - listen 127.0.0.1:8080; + listen 127.0.0.1:8443; server_name example.com; ssl_certificate_key example.com.key; ssl_certificate example.com.crt; location / { - return 200 $server_name; - } - } - - server { - listen 127.0.0.1:8081 ssl; - server_name localhost; - - ssl_certificate_key localhost.key; - ssl_certificate localhost.crt; - - location / { - return 200 $ssl_session_reused:$ssl_server_name; + return 200 $server_name:$ssl_server_name; } } } @@ -104,19 +96,19 @@ foreach my $name ('localhost', 'example. 
like(get_cert_cn(), qr!/CN=localhost!, 'default cert'); like(get_cert_cn('example.com'), qr!/CN=example.com!, 'sni cert'); -like(https_get_host('example.com'), qr!example.com!, +like(get_host('example.com'), qr!example.com:example.com!, 'host exists, sni exists, and host is equal sni'); -like(https_get_host('example.com', 'example.org'), qr!example.com!, +like(get_host('example.com', 'example.org'), qr!example.com:example.org!, 'host exists, sni not found'); TODO: { local $TODO = 'sni restrictions'; -like(https_get_host('example.com', 'localhost'), qr!400 Bad Request!, +like(get_host('example.com', 'localhost'), qr!400 Bad Request!, 'host exists, sni exists, and host is not equal sni'); -like(https_get_host('example.org', 'example.com'), qr!400 Bad Request!, +like(get_host('example.org', 'example.com'), qr!400 Bad Request!, 'host not found, sni exists'); } @@ -127,7 +119,7 @@ my $ctx = new IO::Socket::SSL::SSL_Conte SSL_verify_mode => IO::Socket::SSL::SSL_VERIFY_NONE(), SSL_session_cache_size => 100); -like(get('/', 'localhost', 8081, $ctx), qr/^\.:localhost$/m, 'ssl server name'); +like(get('/name', 'localhost', $ctx), qr/^\.:localhost$/m, 'ssl server name'); TODO: { local $TODO = 'no TLSv1.3 sessions, old Net::SSLeay' @@ -137,7 +129,7 @@ local $TODO = 'no TLSv1.3 sessions, old local $TODO = 'no TLSv1.3 sessions in LibreSSL' if $t->has_module('LibreSSL') && test_tls13(); -like(get('/', 'localhost', 8081, $ctx), qr/^r:localhost$/m, +like(get('/name', 'localhost', $ctx), qr/^r:localhost$/m, 'ssl server name - reused'); } @@ -148,58 +140,29 @@ sub test_tls13 { get('/protocol', 'localhost') =~ /TLSv1.3/; } -sub get_ssl_socket { - my ($host, $port, $ctx) = @_; - my $s; - - eval { - local $SIG{ALRM} = sub { die "timeout\n" }; - local $SIG{PIPE} = sub { die "sigpipe\n" }; - alarm(8); - $s = IO::Socket::SSL->new( - Proto => 'tcp', - PeerAddr => '127.0.0.1:' . port($port || 8080), - SSL_hostname => $host, - SSL_reuse_ctx => $ctx, - SSL_verify_mode => IO::Socket::SSL::SSL_VERIFY_NONE(), - SSL_error_trap => sub { die $_[1] } - ); - alarm(0); - }; - alarm(0); - - if ($@) { - log_in("died: $@"); - return undef; - } - - return $s; -} - sub get_cert_cn { my ($host) = @_; - my $s = get_ssl_socket($host); - + my $s = http('', start => 1, SSL => 1, SSL_hostname => $host); return $s->dump_peer_certificate(); } -sub https_get_host { +sub get_host { my ($host, $sni) = @_; - my $s = get_ssl_socket($sni ? 
$sni : $host); - - return http(< $s); -GET / HTTP/1.0 -Host: $host - -EOF + return http( + "GET / HTTP/1.0\nHost: $host\n\n", + SSL => 1, + SSL_hostname => $sni || $host + ); } sub get { - my ($uri, $host, $port, $ctx) = @_; - my $s = get_ssl_socket($host, $port, $ctx) or return; - my $r = http_get($uri, socket => $s); - $s->close(); - return $r; + my ($uri, $host, $ctx) = @_; + return http_get( + $uri, + SSL => 1, + SSL_hostname => $host, + SSL_reuse_ctx => $ctx + ); } ############################################################################### diff --git a/ssl_sni_sessions.t b/ssl_sni_sessions.t --- a/ssl_sni_sessions.t +++ b/ssl_sni_sessions.t @@ -110,15 +110,14 @@ foreach my $name ('localhost') { $t->run(); -plan(skip_all => 'no TLS 1.3 sessions') - if get('default', port(8443), get_ssl_context()) =~ /TLSv1.3/ - && ($Net::SSLeay::VERSION < 1.88 || $IO::Socket::SSL::VERSION < 2.061); -plan(skip_all => 'no TLS 1.3 sessions in LibreSSL') - if get('default', port(8443), get_ssl_context()) =~ /TLSv1.3/ - && $t->has_module('LibreSSL'); +plan(skip_all => 'no TLSv1.3 sessions, old Net::SSLeay') + if $Net::SSLeay::VERSION < 1.88 && test_tls13(); +plan(skip_all => 'no TLSv1.3 sessions, old IO::Socket::SSL') + if $IO::Socket::SSL::VERSION < 2.061 && test_tls13(); +plan(skip_all => 'no TLSv1.3 sessions in LibreSSL') + if $t->has_module('LibreSSL') && test_tls13(); plan(skip_all => 'no TLS 1.3 session cache in BoringSSL') - if get('default', port(8443), get_ssl_context()) =~ /TLSv1.3/ - && $t->has_module('BoringSSL'); + if $t->has_module('BoringSSL') && test_tls13(); $t->plan(6); @@ -128,8 +127,8 @@ plan(skip_all => 'no TLS 1.3 session cac my $ctx = get_ssl_context(); -like(get('default', port(8443), $ctx), qr!default:\.!, 'default server'); -like(get('default', port(8443), $ctx), qr!default:r!, 'default server reused'); +like(get('default', 8443, $ctx), qr!default:\.!, 'default server'); +like(get('default', 8443, $ctx), qr!default:r!, 'default server reused'); # check that sessions are still properly saved and restored # when using an SNI-based virtual server with different session cache; @@ -143,16 +142,16 @@ like(get('default', port(8443), $ctx), q $ctx = get_ssl_context(); -like(get('nocache', port(8443), $ctx), qr!nocache:\.!, 'without cache'); -like(get('nocache', port(8443), $ctx), qr!nocache:r!, 'without cache reused'); +like(get('nocache', 8443, $ctx), qr!nocache:\.!, 'without cache'); +like(get('nocache', 8443, $ctx), qr!nocache:r!, 'without cache reused'); # make sure tickets can be used if an SNI-based virtual server # uses a different set of session ticket keys explicitly set $ctx = get_ssl_context(); -like(get('tickets', port(8444), $ctx), qr!tickets:\.!, 'tickets'); -like(get('tickets', port(8444), $ctx), qr!tickets:r!, 'tickets reused'); +like(get('tickets', 8444, $ctx), qr!tickets:\.!, 'tickets'); +like(get('tickets', 8444, $ctx), qr!tickets:r!, 'tickets reused'); ############################################################################### @@ -163,46 +162,19 @@ sub get_ssl_context { ); } -sub get_ssl_socket { +sub get { my ($host, $port, $ctx) = @_; - my $s; - - eval { - local $SIG{ALRM} = sub { die "timeout\n" }; - local $SIG{PIPE} = sub { die "sigpipe\n" }; - alarm(8); - $s = IO::Socket::SSL->new( - Proto => 'tcp', - PeerAddr => '127.0.0.1', - PeerPort => $port, - SSL_hostname => $host, - SSL_reuse_ctx => $ctx, - SSL_error_trap => sub { die $_[1] } - ); - alarm(0); - }; - alarm(0); - - if ($@) { - log_in("died: $@"); - return undef; - } - - return $s; + return http( + "GET 
/ HTTP/1.0\nHost: $host\n\n", + PeerAddr => '127.0.0.1:' . port($port), + SSL => 1, + SSL_hostname => $host, + SSL_reuse_ctx => $ctx + ); } -sub get { - my ($host, $port, $ctx) = @_; - - my $s = get_ssl_socket($host, $port, $ctx) or return; - my $r = http(< $s); -GET / HTTP/1.0 -Host: $host - -EOF - - $s->close(); - return $r; +sub test_tls13 { + return get('default', 8443) =~ /TLSv1.3/; } ############################################################################### diff --git a/ssl_verify_depth.t b/ssl_verify_depth.t --- a/ssl_verify_depth.t +++ b/ssl_verify_depth.t @@ -172,37 +172,13 @@ like(get(8082, 'end'), qr/SUCCESS/, 've sub get { my ($port, $cert) = @_; - my $s = get_ssl_socket($port, $cert) or return; - http_get("/t?$cert", socket => $s); -} - -sub get_ssl_socket { - my ($port, $cert) = @_; - my ($s); - - eval { - local $SIG{ALRM} = sub { die "timeout\n" }; - local $SIG{PIPE} = sub { die "sigpipe\n" }; - alarm(8); - $s = IO::Socket::SSL->new( - Proto => 'tcp', - PeerAddr => '127.0.0.1', - PeerPort => port($port), - SSL_verify_mode => IO::Socket::SSL::SSL_VERIFY_NONE(), - SSL_cert_file => "$d/$cert.crt", - SSL_key_file => "$d/$cert.key", - SSL_error_trap => sub { die $_[1] } - ); - alarm(0); - }; - alarm(0); - - if ($@) { - log_in("died: $@"); - return undef; - } - - return $s; + http_get( + "/t?$cert", + PeerAddr => '127.0.0.1:' . port($port), + SSL => 1, + SSL_cert_file => "$d/$cert.crt", + SSL_key_file => "$d/$cert.key" + ); } ############################################################################### From mdounin at mdounin.ru Mon Apr 17 03:47:16 2023 From: mdounin at mdounin.ru (=?utf-8?q?Maxim_Dounin?=) Date: Mon, 17 Apr 2023 06:47:16 +0300 Subject: [PATCH] Fixed segfault if regex studies list allocation fails Message-ID: <910ee4cb25e07423a40f.1681703236@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1681703207 -10800 # Mon Apr 17 06:46:47 2023 +0300 # Node ID 910ee4cb25e07423a40fa6951d62f74029e7db2d # Parent 5f1d05a21287ba0290dd3a17ad501595b442a194 Fixed segfault if regex studies list allocation fails. The rcf->studies list is unconditionally accessed by ngx_regex_cleanup(), and this used to cause NULL pointer dereference if allocation failed. Fix is to set cleanup handler only when allocation succeeds. diff --git a/src/core/ngx_regex.c b/src/core/ngx_regex.c --- a/src/core/ngx_regex.c +++ b/src/core/ngx_regex.c @@ -732,14 +732,14 @@ ngx_regex_create_conf(ngx_cycle_t *cycle return NULL; } - cln->handler = ngx_regex_cleanup; - cln->data = rcf; - rcf->studies = ngx_list_create(cycle->pool, 8, sizeof(ngx_regex_elt_t)); if (rcf->studies == NULL) { return NULL; } + cln->handler = ngx_regex_cleanup; + cln->data = rcf; + ngx_regex_studies = rcf->studies; return rcf; From mdounin at mdounin.ru Mon Apr 17 03:53:15 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 17 Apr 2023 06:53:15 +0300 Subject: [PATCH] Added stream modules realip and ssl_preread to win32 builds In-Reply-To: References: Message-ID: Hello! On Wed, Apr 12, 2023 at 05:44:29PM +0400, Sergey Kandaurov wrote: > # HG changeset patch > # User Sergey Kandaurov > # Date 1681306935 -14400 > # Wed Apr 12 17:42:15 2023 +0400 > # Node ID bdfbd7ed2433d1a68d466f353983829b17f6df1f > # Parent 5f1d05a21287ba0290dd3a17ad501595b442a194 > Added stream modules realip and ssl_preread to win32 builds. 
> > diff --git a/misc/GNUmakefile b/misc/GNUmakefile > --- a/misc/GNUmakefile > +++ b/misc/GNUmakefile > @@ -75,6 +75,8 @@ win32: > --with-http_slice_module \ > --with-mail \ > --with-stream \ > + --with-stream_realip_module \ > + --with-stream_ssl_preread_module \ > --with-openssl=$(OBJS)/lib/$(OPENSSL) \ > --with-openssl-opt="no-asm no-tests -D_WIN32_WINNT=0x0501" \ > --with-http_ssl_module \ There seems to be whitespace damage: spaces instead of tabs after options being added. Otherwise looks good. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Mon Apr 17 05:21:19 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 17 Apr 2023 08:21:19 +0300 Subject: [PATCH 1 of 2] SSL: support for TLSv1.3 certificate compression (RFC 8879) In-Reply-To: <06458cd5733cd2ffaa4e.1681304149@enoparse.local> References: <06458cd5733cd2ffaa4e.1681304149@enoparse.local> Message-ID: Hello! On Wed, Apr 12, 2023 at 04:55:49PM +0400, Sergey Kandaurov wrote: > # HG changeset patch > # User Sergey Kandaurov > # Date 1681304029 -14400 > # Wed Apr 12 16:53:49 2023 +0400 > # Node ID 06458cd5733cd2ffaa4e2d26d357524a0934a7eb > # Parent 5f1d05a21287ba0290dd3a17ad501595b442a194 > SSL: support for TLSv1.3 certificate compression (RFC 8879). > > Certificates are precompressed using the "ssl_certificate_compression" > directive, disabled by default. A negotiated certificate-compression > algorithm depends on the OpenSSL library builtin support. While not exactly relevant to the patch, looking into OpenSSL's master branch I don't see any obvious limits on the certificate expansion, except the fact that uncompressed length is limited to a 24-bit value. Is it indeed an easy way to allocate 16 MB per connection? (When I see "OpenSSL" and "compression" used together, I tend to look for a resource usage audit, a security audit, and the "no compression" option.) Also, it might make sense to add a note to the commit log that this functionality is expected to appear in OpenSSL 3.2. > > diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c > --- a/src/event/ngx_event_openssl.c > +++ b/src/event/ngx_event_openssl.c > @@ -847,6 +847,29 @@ ngx_ssl_password_callback(char *buf, int > > > ngx_int_t > +ngx_ssl_certificate_compression(ngx_conf_t *cf, ngx_ssl_t *ssl, > + ngx_uint_t enable) > +{ > + if (!enable) { > + return NGX_OK; > + } > + > +#ifdef TLSEXT_comp_cert_none > + > + if (SSL_CTX_compress_certs(ssl->ctx, 0)) { > + return NGX_OK; > + } > + > +#endif > + > + ngx_log_error(NGX_LOG_WARN, ssl->log, 0, > + "\"ssl_certificate_compression\" ignored, not supported"); Please note that this option, contrary to the name, does not enable certificate compression, but rather pre-compresses server certificates. Certificate compression is enabled by default for both client and server connections, and both sending and receiving certificates, unless disabled by the SSL_OP_NO_TX_CERTIFICATE_COMPRESSION / SSL_OP_NO_RX_CERTIFICATE_COMPRESSION options. (Further, client-side seems to compress client certificates on each connection, which looks suboptimal for proxying to SSL upstream servers with client certificates.) It might worth looking for a better name, or expanding the directive to actually disable compression unless it is enabled. [...] 
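For reference, a rough standalone sketch of the two behaviours discussed above, precompressing server certificates versus opting out of certificate compression, using only the OpenSSL 3.2 calls mentioned in this thread. The wrapper function and its parameters are illustrative assumptions, not part of the proposed patch:

#include <openssl/ssl.h>

/*
 * Sketch only: precompress the server certificates of a context and,
 * independently, refuse compressed certificates from peers.  Only the
 * OpenSSL 3.2 calls are real; the wrapper and its flags are hypothetical.
 */
static int
configure_cert_compression(SSL_CTX *ctx, int precompress, int allow_rx)
{
#ifdef TLSEXT_comp_cert_none

    if (precompress && SSL_CTX_compress_certs(ctx, 0) == 0) {
        return -1;  /* no certificate compression algorithm available */
    }

    if (!allow_rx) {
        /* e.g. for upstream connections sending client certificates */
        SSL_CTX_set_options(ctx, SSL_OP_NO_RX_CERTIFICATE_COMPRESSION);
    }

#else
    (void) precompress;
    (void) allow_rx;
#endif

    return 0;
}

SSL_CTX_compress_certs() returns 0 when no compression algorithm is built in, so a configuration-time warning along the lines of the one in the patch would still be needed with either naming.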
-- Maxim Dounin http://mdounin.ru/ From pluknet at nginx.com Mon Apr 17 10:59:10 2023 From: pluknet at nginx.com (Sergey Kandaurov) Date: Mon, 17 Apr 2023 14:59:10 +0400 Subject: [PATCH] Added stream modules realip and ssl_preread to win32 builds In-Reply-To: References: Message-ID: > On 17 Apr 2023, at 07:53, Maxim Dounin wrote: > > Hello! > > On Wed, Apr 12, 2023 at 05:44:29PM +0400, Sergey Kandaurov wrote: > >> # HG changeset patch >> # User Sergey Kandaurov >> # Date 1681306935 -14400 >> # Wed Apr 12 17:42:15 2023 +0400 >> # Node ID bdfbd7ed2433d1a68d466f353983829b17f6df1f >> # Parent 5f1d05a21287ba0290dd3a17ad501595b442a194 >> Added stream modules realip and ssl_preread to win32 builds. >> >> diff --git a/misc/GNUmakefile b/misc/GNUmakefile >> --- a/misc/GNUmakefile >> +++ b/misc/GNUmakefile >> @@ -75,6 +75,8 @@ win32: >> --with-http_slice_module \ >> --with-mail \ >> --with-stream \ >> + --with-stream_realip_module \ >> + --with-stream_ssl_preread_module \ >> --with-openssl=$(OBJS)/lib/$(OPENSSL) \ >> --with-openssl-opt="no-asm no-tests -D_WIN32_WINNT=0x0501" \ >> --with-http_ssl_module \ > > There seems to be whitespace damage: spaces instead of tabs after > options being added. > > Otherwise looks good. Indeed, thanks for catching. -- Sergey Kandaurov From pluknet at nginx.com Mon Apr 17 11:01:03 2023 From: pluknet at nginx.com (Sergey Kandaurov) Date: Mon, 17 Apr 2023 11:01:03 +0000 Subject: [nginx] Version bump. Message-ID: details: https://hg.nginx.org/nginx/rev/ea658355015b branches: changeset: 8160:ea658355015b user: Sergey Kandaurov date: Mon Apr 17 14:06:43 2023 +0400 description: Version bump. diffstat: src/core/nginx.h | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines): diff -r 5f1d05a21287 -r ea658355015b src/core/nginx.h --- a/src/core/nginx.h Tue Mar 28 18:01:54 2023 +0300 +++ b/src/core/nginx.h Mon Apr 17 14:06:43 2023 +0400 @@ -9,8 +9,8 @@ #define _NGINX_H_INCLUDED_ -#define nginx_version 1023004 -#define NGINX_VERSION "1.23.4" +#define nginx_version 1025000 +#define NGINX_VERSION "1.25.0" #define NGINX_VER "nginx/" NGINX_VERSION #ifdef NGX_BUILD From pluknet at nginx.com Mon Apr 17 11:01:06 2023 From: pluknet at nginx.com (Sergey Kandaurov) Date: Mon, 17 Apr 2023 11:01:06 +0000 Subject: [nginx] Year 2023. Message-ID: details: https://hg.nginx.org/nginx/rev/e70cd097490a branches: changeset: 8161:e70cd097490a user: Sergey Kandaurov date: Mon Apr 17 14:07:59 2023 +0400 description: Year 2023. diffstat: docs/text/LICENSE | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (11 lines): diff -r ea658355015b -r e70cd097490a docs/text/LICENSE --- a/docs/text/LICENSE Mon Apr 17 14:06:43 2023 +0400 +++ b/docs/text/LICENSE Mon Apr 17 14:07:59 2023 +0400 @@ -1,6 +1,6 @@ /* * Copyright (C) 2002-2021 Igor Sysoev - * Copyright (C) 2011-2022 Nginx, Inc. + * Copyright (C) 2011-2023 Nginx, Inc. * All rights reserved. * * Redistribution and use in source and binary forms, with or without From pluknet at nginx.com Mon Apr 17 11:01:09 2023 From: pluknet at nginx.com (Sergey Kandaurov) Date: Mon, 17 Apr 2023 11:01:09 +0000 Subject: [nginx] Added stream modules realip and ssl_preread to win32 builds. Message-ID: details: https://hg.nginx.org/nginx/rev/252a7acd35ce branches: changeset: 8162:252a7acd35ce user: Sergey Kandaurov date: Mon Apr 17 14:08:00 2023 +0400 description: Added stream modules realip and ssl_preread to win32 builds. 
diffstat: misc/GNUmakefile | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diffs (12 lines): diff -r e70cd097490a -r 252a7acd35ce misc/GNUmakefile --- a/misc/GNUmakefile Mon Apr 17 14:07:59 2023 +0400 +++ b/misc/GNUmakefile Mon Apr 17 14:08:00 2023 +0400 @@ -75,6 +75,8 @@ win32: --with-http_slice_module \ --with-mail \ --with-stream \ + --with-stream_realip_module \ + --with-stream_ssl_preread_module \ --with-openssl=$(OBJS)/lib/$(OPENSSL) \ --with-openssl-opt="no-asm no-tests -D_WIN32_WINNT=0x0501" \ --with-http_ssl_module \ From pluknet at nginx.com Mon Apr 17 12:54:37 2023 From: pluknet at nginx.com (Sergey Kandaurov) Date: Mon, 17 Apr 2023 16:54:37 +0400 Subject: [PATCH] Fixed segfault if regex studies list allocation fails In-Reply-To: <910ee4cb25e07423a40f.1681703236@vm-bsd.mdounin.ru> References: <910ee4cb25e07423a40f.1681703236@vm-bsd.mdounin.ru> Message-ID: > On 17 Apr 2023, at 07:47, Maxim Dounin wrote: > > # HG changeset patch > # User Maxim Dounin > # Date 1681703207 -10800 > # Mon Apr 17 06:46:47 2023 +0300 > # Node ID 910ee4cb25e07423a40fa6951d62f74029e7db2d > # Parent 5f1d05a21287ba0290dd3a17ad501595b442a194 > Fixed segfault if regex studies list allocation fails. > > The rcf->studies list is unconditionally accessed by ngx_regex_cleanup(), > and this used to cause NULL pointer dereference if allocation > failed. Fix is to set cleanup handler only when allocation succeeds. > > diff --git a/src/core/ngx_regex.c b/src/core/ngx_regex.c > --- a/src/core/ngx_regex.c > +++ b/src/core/ngx_regex.c > @@ -732,14 +732,14 @@ ngx_regex_create_conf(ngx_cycle_t *cycle > return NULL; > } > > - cln->handler = ngx_regex_cleanup; > - cln->data = rcf; > - > rcf->studies = ngx_list_create(cycle->pool, 8, sizeof(ngx_regex_elt_t)); > if (rcf->studies == NULL) { > return NULL; > } > > + cln->handler = ngx_regex_cleanup; > + cln->data = rcf; > + > ngx_regex_studies = rcf->studies; > > return rcf; Looks good. On a related note, 2ca57257252d where it was seemingly introduced, has a "Core:" log summary prefix. -- Sergey Kandaurov From vadim.fedorenko at cdnnow.ru Mon Apr 17 23:07:07 2023 From: vadim.fedorenko at cdnnow.ru (Vadim Fedorenko) Date: Tue, 18 Apr 2023 02:07:07 +0300 Subject: [PATCH 1 of 4] Core: use explicit_bzero if possible In-Reply-To: References: Message-ID: <0a1c8cb5c05141f3ea31.1681772827@repo.dev.cdnnow.net> # HG changeset patch # User Vadim Fedorenko # Date 1681771172 -10800 # Tue Apr 18 01:39:32 2023 +0300 # Node ID 0a1c8cb5c05141f3ea3135d9f01688f7693fc7df # Parent 252a7acd35ceff4fca7a8c60a9aa6d4d22b688bf Core: use explicit_bzero if possible. GCC 11+ expanded the scope of dead store elimination optimization and memory barrier trick doesn't work anymore. But there is new function exists in glibc to explicitly clear the buffer - explicit_bzero(). Let's use it instead. --- auto/unix | 10 ++++++++++ src/core/ngx_string.c | 4 ++++ src/core/ngx_string.h | 5 ++++- 3 files changed, 18 insertions(+), 1 deletion(-) diff -r 252a7acd35ce -r 0a1c8cb5c051 auto/unix --- a/auto/unix Mon Apr 17 14:08:00 2023 +0400 +++ b/auto/unix Tue Apr 18 01:39:32 2023 +0300 @@ -1002,3 +1002,13 @@ if (getaddrinfo("localhost", NULL, NULL, &res) != 0) return 1; freeaddrinfo(res)' . auto/feature + + +ngx_feature="explicit_bzero()" +ngx_feature_name="NGX_HAVE_EXPLICIT_BZERO" +ngx_feature_run=no +ngx_feature_incs='#include ' +ngx_feature_path= +ngx_feature_libs= +ngx_feature_test="char p[16]; explicit_bzero(p, sizeof(p));" +. 
auto/feature diff -r 252a7acd35ce -r 0a1c8cb5c051 src/core/ngx_string.c --- a/src/core/ngx_string.c Mon Apr 17 14:08:00 2023 +0400 +++ b/src/core/ngx_string.c Tue Apr 18 01:39:32 2023 +0300 @@ -2080,6 +2080,8 @@ } +#if !(NGX_HAVE_EXPLICIT_BZERO) + void ngx_explicit_memzero(void *buf, size_t n) { @@ -2087,6 +2089,8 @@ ngx_memory_barrier(); } +#endif + #if (NGX_MEMCPY_LIMIT) diff -r 252a7acd35ce -r 0a1c8cb5c051 src/core/ngx_string.h --- a/src/core/ngx_string.h Mon Apr 17 14:08:00 2023 +0400 +++ b/src/core/ngx_string.h Tue Apr 18 01:39:32 2023 +0300 @@ -87,8 +87,11 @@ */ #define ngx_memzero(buf, n) (void) memset(buf, 0, n) #define ngx_memset(buf, c, n) (void) memset(buf, c, n) - +#if (NGX_HAVE_EXPLICIT_BZERO) +#define ngx_explicit_memzero(buf, n) explicit_bzero(buf, n) +#else void ngx_explicit_memzero(void *buf, size_t n); +#endif #if (NGX_MEMCPY_LIMIT) From vadim.fedorenko at cdnnow.ru Mon Apr 17 23:07:08 2023 From: vadim.fedorenko at cdnnow.ru (Vadim Fedorenko) Date: Tue, 18 Apr 2023 02:07:08 +0300 Subject: [PATCH 2 of 4] md5: use explicit memzero to avoid optimizations In-Reply-To: References: Message-ID: <8f8773a3076bdbd91fc7.1681772828@repo.dev.cdnnow.net> # HG changeset patch # User Vadim Fedorenko # Date 1681771200 -10800 # Tue Apr 18 01:40:00 2023 +0300 # Node ID 8f8773a3076bdbd91fc7a4e96d7a068f7ff29b09 # Parent 0a1c8cb5c05141f3ea3135d9f01688f7693fc7df md5: use explicit memzero to avoid optimizations. GCC11 is optimizing memzero functions in md5 implementation. Use ngx_explicit_memzero() instead. --- src/core/ngx_md5.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff -r 0a1c8cb5c051 -r 8f8773a3076b src/core/ngx_md5.c --- a/src/core/ngx_md5.c Tue Apr 18 01:39:32 2023 +0300 +++ b/src/core/ngx_md5.c Tue Apr 18 01:40:00 2023 +0300 @@ -70,13 +70,13 @@ free = 64 - used; if (free < 8) { - ngx_memzero(&ctx->buffer[used], free); + ngx_explicit_memzero(&ctx->buffer[used], free); (void) ngx_md5_body(ctx, ctx->buffer, 64); used = 0; free = 64; } - ngx_memzero(&ctx->buffer[used], free - 8); + ngx_explicit_memzero(&ctx->buffer[used], free - 8); ctx->bytes <<= 3; ctx->buffer[56] = (u_char) ctx->bytes; @@ -107,7 +107,7 @@ result[14] = (u_char) (ctx->d >> 16); result[15] = (u_char) (ctx->d >> 24); - ngx_memzero(ctx, sizeof(*ctx)); + ngx_explicit_memzero(ctx, sizeof(*ctx)); } From vadim.fedorenko at cdnnow.ru Mon Apr 17 23:07:10 2023 From: vadim.fedorenko at cdnnow.ru (Vadim Fedorenko) Date: Tue, 18 Apr 2023 02:07:10 +0300 Subject: [PATCH 4 of 4] inet: use explicit memzero to avoid optimizations In-Reply-To: References: Message-ID: <460c71c36b00fdd510cb.1681772830@repo.dev.cdnnow.net> # HG changeset patch # User Vadim Fedorenko # Date 1681771255 -10800 # Tue Apr 18 01:40:55 2023 +0300 # Node ID 460c71c36b00fdd510cb511a5714face68280dac # Parent 5663d8ff4399e7e76369c024db59c40178290213 inet: use explicit memzero to avoid optimizations. GCC11+ removes memzero call in ngx_inet6_addr(). Use ngx_explicit_memzero() to avoid optimization. 
--- src/core/ngx_inet.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff -r 5663d8ff4399 -r 460c71c36b00 src/core/ngx_inet.c --- a/src/core/ngx_inet.c Tue Apr 18 01:40:20 2023 +0300 +++ b/src/core/ngx_inet.c Tue Apr 18 01:40:55 2023 +0300 @@ -163,7 +163,7 @@ while (s >= zero) { *d-- = *s--; } - ngx_memzero(zero, n); + ngx_explicit_memzero(zero, n); return NGX_OK; } From vadim.fedorenko at cdnnow.ru Mon Apr 17 23:07:06 2023 From: vadim.fedorenko at cdnnow.ru (Vadim Fedorenko) Date: Tue, 18 Apr 2023 02:07:06 +0300 Subject: [PATCH 0 of 4] Avoid dead store elimination in GCC 11+ Message-ID: GCC version 11 and newer use more aggressive way to eliminate dead stores which ends up removing ngx_memzero() calls in several places. Such optimization affects calculations of md5 and sha1 implemented internally in nginx. The effect could be easily observed by adding a random data to buffer array in md5_init() or sha1_init() functions. With this simple modifications the result of the hash computation will be different each time even though the provided data to hash is not changed. Changing the code to use current implementation of ngx_explicit_memzero() doesn't help because of link-time optimizations enabled in RHEL 9 and derivatives. Glibc 2.34 found in RHEL 9 provides explicit_bzero() function which should be used to avoid such optimization. ngx_explicit_memzero() is changed to use explicit_bzero() if possible. From vadim.fedorenko at cdnnow.ru Mon Apr 17 23:07:09 2023 From: vadim.fedorenko at cdnnow.ru (Vadim Fedorenko) Date: Tue, 18 Apr 2023 02:07:09 +0300 Subject: [PATCH 3 of 4] sha1: use explicit memzero to avoid optimizations In-Reply-To: References: Message-ID: <5663d8ff4399e7e76369.1681772829@repo.dev.cdnnow.net> # HG changeset patch # User Vadim Fedorenko # Date 1681771220 -10800 # Tue Apr 18 01:40:20 2023 +0300 # Node ID 5663d8ff4399e7e76369c024db59c40178290213 # Parent 8f8773a3076bdbd91fc7a4e96d7a068f7ff29b09 sha1: use explicit memzero to avoid optimizations. GCC11 is optimizing memzero functions in sha1 implementation. Use ngx_explicit_memzero() instead. --- src/core/ngx_sha1.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff -r 8f8773a3076b -r 5663d8ff4399 src/core/ngx_sha1.c --- a/src/core/ngx_sha1.c Tue Apr 18 01:40:00 2023 +0300 +++ b/src/core/ngx_sha1.c Tue Apr 18 01:40:20 2023 +0300 @@ -72,13 +72,13 @@ free = 64 - used; if (free < 8) { - ngx_memzero(&ctx->buffer[used], free); + ngx_explicit_memzero(&ctx->buffer[used], free); (void) ngx_sha1_body(ctx, ctx->buffer, 64); used = 0; free = 64; } - ngx_memzero(&ctx->buffer[used], free - 8); + ngx_explicit_memzero(&ctx->buffer[used], free - 8); ctx->bytes <<= 3; ctx->buffer[56] = (u_char) (ctx->bytes >> 56); @@ -113,7 +113,7 @@ result[18] = (u_char) (ctx->e >> 8); result[19] = (u_char) ctx->e; - ngx_memzero(ctx, sizeof(*ctx)); + ngx_explicit_memzero(ctx, sizeof(*ctx)); } From mdounin at mdounin.ru Tue Apr 18 01:54:29 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 18 Apr 2023 04:54:29 +0300 Subject: [PATCH 0 of 4] Avoid dead store elimination in GCC 11+ In-Reply-To: References: Message-ID: Hello! On Tue, Apr 18, 2023 at 02:07:06AM +0300, Vadim Fedorenko via nginx-devel wrote: > GCC version 11 and newer use more aggressive way to eliminate dead stores > which ends up removing ngx_memzero() calls in several places. Such optimization > affects calculations of md5 and sha1 implemented internally in nginx. 
The > effect could be easily observed by adding a random data to buffer array in > md5_init() or sha1_init() functions. With this simple modifications the result > of the hash computation will be different each time even though the provided > data to hash is not changed. If calculations of md5 and sha1 are affected, this means that the stores in question are not dead, and they shouldn't be eliminated in the first place. From your description this looks like a bug in the compiler in question. Alternatively, this can be a bug in nginx code which makes the compiler think that it can eliminate these ngx_memzero() calls - for example, GCC is known to do such things if it sees an undefined behaviour in the code. You may want to elaborate more on how to reproduce this, and, if possible, how to build a minimal test case which demonstrates the problem. > Changing the code to use current implementation > of ngx_explicit_memzero() doesn't help because of link-time optimizations > enabled in RHEL 9 and derivatives. Glibc 2.34 found in RHEL 9 provides > explicit_bzero() function which should be used to avoid such optimization. > ngx_explicit_memzero() is changed to use explicit_bzero() if possible. The ngx_explicit_memzero() function is to be used when zeroed data are indeed not used afterwards, for example, to make sure passwords are actually eliminated from memory. It shouldn't be used instead of a real ngx_memzero() call - doing so might hide the problem, which is either in the compiler or in nginx, but won't fix it. As for using explicit_bzero() for it, we've looked into various OS-specific solutions, though there are too many variants out there, so it was decided that having our own simple implementation is a better way to go. If it doesn't work in the particular setup, it might make sense to adjust our implementation - but given the above, it might be the same issue which causes the original problem. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue Apr 18 03:31:17 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 18 Apr 2023 06:31:17 +0300 Subject: [PATCH] Fixed segfault if regex studies list allocation fails In-Reply-To: References: <910ee4cb25e07423a40f.1681703236@vm-bsd.mdounin.ru> Message-ID: Hello! On Mon, Apr 17, 2023 at 04:54:37PM +0400, Sergey Kandaurov wrote: > > > On 17 Apr 2023, at 07:47, Maxim Dounin wrote: > > > > # HG changeset patch > > # User Maxim Dounin > > # Date 1681703207 -10800 > > # Mon Apr 17 06:46:47 2023 +0300 > > # Node ID 910ee4cb25e07423a40fa6951d62f74029e7db2d > > # Parent 5f1d05a21287ba0290dd3a17ad501595b442a194 > > Fixed segfault if regex studies list allocation fails. > > > > The rcf->studies list is unconditionally accessed by ngx_regex_cleanup(), > > and this used to cause NULL pointer dereference if allocation > > failed. Fix is to set cleanup handler only when allocation succeeds. > > > > diff --git a/src/core/ngx_regex.c b/src/core/ngx_regex.c > > --- a/src/core/ngx_regex.c > > +++ b/src/core/ngx_regex.c > > @@ -732,14 +732,14 @@ ngx_regex_create_conf(ngx_cycle_t *cycle > > return NULL; > > } > > > > - cln->handler = ngx_regex_cleanup; > > - cln->data = rcf; > > - > > rcf->studies = ngx_list_create(cycle->pool, 8, sizeof(ngx_regex_elt_t)); > > if (rcf->studies == NULL) { > > return NULL; > > } > > > > + cln->handler = ngx_regex_cleanup; > > + cln->data = rcf; > > + > > ngx_regex_studies = rcf->studies; > > > > return rcf; > > Looks good. 
> > On a related note, 2ca57257252d where it was seemingly > introduced, has a "Core:" log summary prefix. Thanks for the review, pushed to http://mdounin.ru/hg/nginx. -- Maxim Dounin http://mdounin.ru/ From vadim.fedorenko at cdnnow.ru Tue Apr 18 09:50:01 2023 From: vadim.fedorenko at cdnnow.ru (Vadim Fedorenko) Date: Tue, 18 Apr 2023 10:50:01 +0100 Subject: [PATCH 0 of 4] Avoid dead store elimination in GCC 11+ In-Reply-To: References: Message-ID: On 18.04.2023 02:54, Maxim Dounin wrote: > Hello! > > On Tue, Apr 18, 2023 at 02:07:06AM +0300, Vadim Fedorenko via nginx-devel wrote: > >> GCC version 11 and newer use more aggressive way to eliminate dead stores >> which ends up removing ngx_memzero() calls in several places. Such optimization >> affects calculations of md5 and sha1 implemented internally in nginx. The >> effect could be easily observed by adding a random data to buffer array in >> md5_init() or sha1_init() functions. With this simple modifications the result >> of the hash computation will be different each time even though the provided >> data to hash is not changed. > > If calculations of md5 and sha1 are affected, this means that the > stores in question are not dead, and they shouldn't be eliminated > in the first place. From your description this looks like a bug > in the compiler in question. Yeah, these ngx_memzero()s must not be dead, but according to the standart they are. In md5_final() the function is called this way: ngx_memzero(&ctx->buffer[used], free - 8); That means that a new variable of type 'char *' is created with the life time scoped to the call to ngx_memzero(). As the result of of the function is ignored explicitly, no other parameters are passed by pointer, and the variable is not accessed anywhere else, the whole call can be optimized out. > Alternatively, this can be a bug in nginx code which makes the > compiler think that it can eliminate these ngx_memzero() calls - for > example, GCC is known to do such things if it sees an undefined > behaviour in the code. There is no undefined behavior unfortunately, everything in this place is well defined. > You may want to elaborate more on how to reproduce this, and, if > possible, how to build a minimal test case which demonstrates the > problem. Sure, let's elaborate a bit. To reproduce the bug you can simply apply the diff: diff --git a/src/core/ngx_md5.c b/src/core/ngx_md5.c index c25d0025d..67cc06438 100644 --- a/src/core/ngx_md5.c +++ b/src/core/ngx_md5.c @@ -24,6 +24,7 @@ ngx_md5_init(ngx_md5_t *ctx) ctx->d = 0x10325476; ctx->bytes = 0; + getrandom(ctx->buffer, 64, 0); } This code will emulate the garbage for the stack-allocated 'ngx_md5_t md5;' in ngx_http_file_cache_create_key when nginx is running under the load. Then you can use simple configuration: upstream test_001_origin { server 127.0.0.1:8000; } proxy_cache_path /var/cache/nginx/test-001 keys_zone=test_001:10m max_size=5g inactive=24h levels=1:2 use_temp_path=off; server { listen 127.0.0.1:8000; location = /plain { return 200; } } server { listen 127.0.0.1:80; location /oplain { proxy_cache test_001; proxy_cache_key /oplain; proxy_pass http://test_001_origin/plain/; } } Every time you call 'curl http://127.0.0.1/oplain' a new cache file will be created, but the md5sum of the file will be the same, meaining that the key stored in the file is absolutely the same. >> Changing the code to use current implementation >> of ngx_explicit_memzero() doesn't help because of link-time optimizations >> enabled in RHEL 9 and derivatives. 
Glibc 2.34 found in RHEL 9 provides >> explicit_bzero() function which should be used to avoid such optimization. >> ngx_explicit_memzero() is changed to use explicit_bzero() if possible. > > The ngx_explicit_memzero() function is to be used when zeroed data > are indeed not used afterwards, for example, to make sure > passwords are actually eliminated from memory. It shouldn't be > used instead of a real ngx_memzero() call - doing so might hide > the problem, which is either in the compiler or in nginx, but > won't fix it. In this case the nginx code should be fixed to avoid partial memory fillings, but such change will come with performance penalty, especially on the CPUs without proper `REP MOVSB/MOVSD/MOVSQ` implementation. Controlled usage of explicit zeroing is much better is this case. > As for using explicit_bzero() for it, we've looked into various > OS-specific solutions, though there are too many variants out > there, so it was decided that having our own simple implementation > is a better way to go. If it doesn't work in the particular > setup, it might make sense to adjust our implementation - but > given the above, it might be the same issue which causes the > original problem. Unfortunately, the memory barrier trick is not working anymore for linker-time optimizations. Linker has better information about whether the stored information is used again or not. And it will remove memset in such implementation, and it will definitely affected security-related code you mentioned above. explicit_bzero() function is available in well-loved *BSD systems now and is a proper way to do cleaning of the artifacts, doesn't matter which implementation is used in the specific system. From alx.manpages at gmail.com Tue Apr 18 14:16:20 2023 From: alx.manpages at gmail.com (Alejandro Colomar) Date: Tue, 18 Apr 2023 16:16:20 +0200 Subject: [PATCH 0 of 4] Avoid dead store elimination in GCC 11+ In-Reply-To: References: Message-ID: <96779525-a259-b260-7a7e-f48f4e47c650@gmail.com> Hello Vladim, On 4/18/23 11:50, Vadim Fedorenko via nginx-devel wrote: > On 18.04.2023 02:54, Maxim Dounin wrote: >> Hello! >> >> On Tue, Apr 18, 2023 at 02:07:06AM +0300, Vadim Fedorenko via nginx-devel wrote: >> >>> GCC version 11 and newer use more aggressive way to eliminate dead stores >>> which ends up removing ngx_memzero() calls in several places. Such optimization >>> affects calculations of md5 and sha1 implemented internally in nginx. The >>> effect could be easily observed by adding a random data to buffer array in >>> md5_init() or sha1_init() functions. With this simple modifications the result >>> of the hash computation will be different each time even though the provided >>> data to hash is not changed. >> >> If calculations of md5 and sha1 are affected, this means that the >> stores in question are not dead, and they shouldn't be eliminated >> in the first place. From your description this looks like a bug >> in the compiler in question. > > Yeah, these ngx_memzero()s must not be dead, but according to the standart they > are. In md5_final() the function is called this way: > ngx_memzero(&ctx->buffer[used], free - 8); > That means that a new variable of type 'char *' is created with the life time Correction: no variable is being created, but rather an object. > scoped to the call to ngx_memzero(). As the result of of the function is ignored > explicitly, no other parameters are passed by pointer, and the variable is not > accessed anywhere else, the whole call can be optimized out. 
> >> Alternatively, this can be a bug in nginx code which makes the >> compiler think that it can eliminate these ngx_memzero() calls - for >> example, GCC is known to do such things if it sees an undefined >> behaviour in the code. > > There is no undefined behavior unfortunately, everything in this place is well > defined. No. UB is being invoked somewhere, or this couldn't possibly happen. If this object being created by ngx_memzero() only lives during this short time, then why does it affect any other code? It seems to be some aliasing error, since some other code seems to be reading that "unused" memory. Maybe the UB is far from this code, and fixing it may not be trivial, but Maxim is right, in that using explicit_bzero(3) or equivalent is just hiding the bug instead of fixing it. Since GCC 11 is relatively old and tested, I'm inclined to think the bug is in nginx. Maybe if nginx specified -fno-strict-aliasing, this UB issue could be resolved; you could try. Cheers, Alex > >> You may want to elaborate more on how to reproduce this, and, if >> possible, how to build a minimal test case which demonstrates the >> problem. > > Sure, let's elaborate a bit. To reproduce the bug you can simply apply the diff: > > diff --git a/src/core/ngx_md5.c b/src/core/ngx_md5.c > index c25d0025d..67cc06438 100644 > --- a/src/core/ngx_md5.c > +++ b/src/core/ngx_md5.c > @@ -24,6 +24,7 @@ ngx_md5_init(ngx_md5_t *ctx) > ctx->d = 0x10325476; > > ctx->bytes = 0; > + getrandom(ctx->buffer, 64, 0); > } > > > This code will emulate the garbage for the stack-allocated 'ngx_md5_t md5;' in > ngx_http_file_cache_create_key when nginx is running under the load. Then you > can use simple configuration: > > upstream test_001_origin { > server 127.0.0.1:8000; > } > > proxy_cache_path /var/cache/nginx/test-001 keys_zone=test_001:10m max_size=5g > inactive=24h levels=1:2 use_temp_path=off; > > server { > listen 127.0.0.1:8000; > > location = /plain { > return 200; > } > > } > > server { > listen 127.0.0.1:80; > > location /oplain { > proxy_cache test_001; > proxy_cache_key /oplain; > proxy_pass http://test_001_origin/plain/; > } > } > > > Every time you call 'curl http://127.0.0.1/oplain' a new cache file will be > created, but the md5sum of the file will be the same, meaining that the key > stored in the file is absolutely the same. > >>> Changing the code to use current implementation >>> of ngx_explicit_memzero() doesn't help because of link-time optimizations >>> enabled in RHEL 9 and derivatives. Glibc 2.34 found in RHEL 9 provides >>> explicit_bzero() function which should be used to avoid such optimization. >>> ngx_explicit_memzero() is changed to use explicit_bzero() if possible. >> >> The ngx_explicit_memzero() function is to be used when zeroed data >> are indeed not used afterwards, for example, to make sure >> passwords are actually eliminated from memory. It shouldn't be >> used instead of a real ngx_memzero() call - doing so might hide >> the problem, which is either in the compiler or in nginx, but >> won't fix it. > > In this case the nginx code should be fixed to avoid partial memory fillings, > but such change will come with performance penalty, especially on the CPUs > without proper `REP MOVSB/MOVSD/MOVSQ` implementation. Controlled usage of > explicit zeroing is much better is this case. 
> >> As for using explicit_bzero() for it, we've looked into various >> OS-specific solutions, though there are too many variants out >> there, so it was decided that having our own simple implementation >> is a better way to go. If it doesn't work in the particular >> setup, it might make sense to adjust our implementation - but >> given the above, it might be the same issue which causes the >> original problem. > > Unfortunately, the memory barrier trick is not working anymore for linker-time > optimizations. Linker has better information about whether the stored > information is used again or not. And it will remove memset in such > implementation, and it will definitely affected security-related code you > mentioned above. explicit_bzero() function is available in well-loved *BSD > systems now and is a proper way to do cleaning of the artifacts, doesn't matter > which implementation is used in the specific system. > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx-devel -- GPG key fingerprint: A9348594CE31283A826FBDD8D57633D441E25BB5 -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From pluknet at nginx.com Tue Apr 18 16:11:42 2023 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 18 Apr 2023 16:11:42 +0000 Subject: [nginx] Fixed segfault if regex studies list allocation fails. Message-ID: details: https://hg.nginx.org/nginx/rev/77d5c662f3d9 branches: changeset: 8163:77d5c662f3d9 user: Maxim Dounin date: Tue Apr 18 06:28:46 2023 +0300 description: Fixed segfault if regex studies list allocation fails. The rcf->studies list is unconditionally accessed by ngx_regex_cleanup(), and this used to cause NULL pointer dereference if allocation failed. Fix is to set cleanup handler only when allocation succeeds. diffstat: src/core/ngx_regex.c | 6 +++--- 1 files changed, 3 insertions(+), 3 deletions(-) diffs (21 lines): diff -r 252a7acd35ce -r 77d5c662f3d9 src/core/ngx_regex.c --- a/src/core/ngx_regex.c Mon Apr 17 14:08:00 2023 +0400 +++ b/src/core/ngx_regex.c Tue Apr 18 06:28:46 2023 +0300 @@ -732,14 +732,14 @@ ngx_regex_create_conf(ngx_cycle_t *cycle return NULL; } - cln->handler = ngx_regex_cleanup; - cln->data = rcf; - rcf->studies = ngx_list_create(cycle->pool, 8, sizeof(ngx_regex_elt_t)); if (rcf->studies == NULL) { return NULL; } + cln->handler = ngx_regex_cleanup; + cln->data = rcf; + ngx_regex_studies = rcf->studies; return rcf; From pgnet.dev at gmail.com Tue Apr 18 17:26:44 2023 From: pgnet.dev at gmail.com (PGNet Dev) Date: Tue, 18 Apr 2023 13:26:44 -0400 Subject: nginx 1.24 + njs build errors [-Werror=dangling-pointer=] after switch from GCC 12 (Fedora 37) -> GCC13 (Fedora 38) Message-ID: I'm building nginx mainline v1.24 on Fedora. on F37, with gcc 12, gcc --version gcc (GCC) 12.2.1 20221121 (Red Hat 12.2.1-4) Copyright (C) 2022 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. build's good. Upgrading to today's new/latest F38, with gcc 13, gcc --version gcc (GCC) 13.0.1 20230401 (Red Hat 13.0.1-0) Copyright (C) 2023 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
builds for target F38+ fail @ dangling-pointer errors, ... src/njs_iterator.c: In function 'njs_object_iterate': src/njs_iterator.c:358:25: error: storing the address of local variable 'string_obj' in '*args.value' [-Werror=dangling-pointer=] 358 | args->value = &string_obj; | ~~~~~~~~~~~~^~~~~~~~~~~~~ ... cc1: all warnings being treated as errors adding -Wno-dangling-pointer to build flags worksaround it, with successful build. for ref, FAILED build log: https://download.copr.fedorainfracloud.org/results/pgfed/nginx-mainline/fedora-38-x86_64/05802768-nginx/build.log.gz OK build log: https://download.copr.fedorainfracloud.org/results/pgfed/nginx-mainline/fedora-38-x86_64/05802814-nginx/build.log.gz I'm checking to see whether the error flag was added to GCC 13 upstream, or just to Redhat/Fedora flags ... From mdounin at mdounin.ru Tue Apr 18 19:14:24 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 18 Apr 2023 22:14:24 +0300 Subject: [PATCH 0 of 4] Avoid dead store elimination in GCC 11+ In-Reply-To: References: Message-ID: Hello! On Tue, Apr 18, 2023 at 10:50:01AM +0100, Vadim Fedorenko wrote: > On 18.04.2023 02:54, Maxim Dounin wrote: > > Hello! > > > > On Tue, Apr 18, 2023 at 02:07:06AM +0300, Vadim Fedorenko via nginx-devel wrote: > > > >> GCC version 11 and newer use more aggressive way to eliminate dead stores > >> which ends up removing ngx_memzero() calls in several places. Such optimization > >> affects calculations of md5 and sha1 implemented internally in nginx. The > >> effect could be easily observed by adding a random data to buffer array in > >> md5_init() or sha1_init() functions. With this simple modifications the result > >> of the hash computation will be different each time even though the provided > >> data to hash is not changed. > > > > If calculations of md5 and sha1 are affected, this means that the > > stores in question are not dead, and they shouldn't be eliminated > > in the first place. From your description this looks like a bug > > in the compiler in question. > > Yeah, these ngx_memzero()s must not be dead, but according to the standart they > are. In md5_final() the function is called this way: > ngx_memzero(&ctx->buffer[used], free - 8); > That means that a new variable of type 'char *' is created with the life time > scoped to the call to ngx_memzero(). As the result of of the function is ignored > explicitly, no other parameters are passed by pointer, and the variable is not > accessed anywhere else, the whole call can be optimized out. The pointer is passed to the function, and the function modifies the memory being pointed to by the pointer. While the pointer is not used anywhere else and can be optimized out, the memory it points to is used elsewhere, and this modification cannot be optimized out, so it is incorrect to remove the call according to my understanding of the C standard. If you still think it's correct and based on the C standard, please provide relevant references (and quotes) which explain why these calls can be optimized out. > > Alternatively, this can be a bug in nginx code which makes the > > compiler think that it can eliminate these ngx_memzero() calls - for > > example, GCC is known to do such things if it sees an undefined > > behaviour in the code. > > There is no undefined behavior unfortunately, everything in this place is well > defined. Well, I don't think so. 
There is a function call, and it cannot be eliminated by the compiler unless the compiler thinks that the results of the function call do not affect the program execution as externally observed. Clearly the program execution is affected (as per your claim). This leaves us the two possible alternatives: - There is a bug in the compiler, and it incorrectly thinks that the function call do not affect the program execution. - There is a bug in the code, and it triggers undefined behaviour, so the compiler might not actually know what happens in the code (because it not required to do anything meaningful in case of undefined behaviour, and simply assume it should never happen). Just in case, the actual undefined behaviour might occur in the ngx_md5_body() function due to strict-aliasing rules being broken by the optimized GET() macro on platforms without strict alignment requirements if the original data buffer as provided to ngx_md5_update() cannot be aliased by uint32_t. See this commit in the original repository of the md5 code nginx uses: https://cvsweb.openwall.com/cgi/cvsweb.cgi/Owl/packages/popa3d/popa3d/md5/md5.c.diff?r1=1.14;r2=1.15 But nginx only uses ngx_md5_update() with text buffers, so strict-aliasing rules aren't broken. > > You may want to elaborate more on how to reproduce this, and, if > > possible, how to build a minimal test case which demonstrates the > > problem. > > Sure, let's elaborate a bit. To reproduce the bug you can simply apply the diff: > > diff --git a/src/core/ngx_md5.c b/src/core/ngx_md5.c > index c25d0025d..67cc06438 100644 > --- a/src/core/ngx_md5.c > +++ b/src/core/ngx_md5.c > @@ -24,6 +24,7 @@ ngx_md5_init(ngx_md5_t *ctx) > ctx->d = 0x10325476; > > ctx->bytes = 0; > + getrandom(ctx->buffer, 64, 0); > } > Note that this won't compile, it also needs "#include ". > This code will emulate the garbage for the stack-allocated 'ngx_md5_t md5;' in > ngx_http_file_cache_create_key when nginx is running under the load. Then you > can use simple configuration: > > upstream test_001_origin { > server 127.0.0.1:8000; > } > > proxy_cache_path /var/cache/nginx/test-001 keys_zone=test_001:10m max_size=5g > inactive=24h levels=1:2 use_temp_path=off; > > server { > listen 127.0.0.1:8000; > > location = /plain { > return 200; > } > > } > > server { > listen 127.0.0.1:80; > > location /oplain { > proxy_cache test_001; > proxy_cache_key /oplain; > proxy_pass http://test_001_origin/plain/; > } > } > > > Every time you call 'curl http://127.0.0.1/oplain' a new cache file will be > created, but the md5sum of the file will be the same, meaining that the key > stored in the file is absolutely the same. Note that the exact configuration will make "GET /plain/" requests to upstream server, resulting in 404 and nothing cached. I've fixed this to actually match "location = /plain" and added "proxy_cache_valid 200 1h;" to ensure caching will actually work. Still, I wasn't able to reproduce the issue you are seeing on FreeBSD 12.4 with gcc12, neither with default compilation flags as used by nginx, nor with --with-cc-opt="-flto -O2" and --with-ld-opt="-flto -O2". On RHEL 9 (Red Hat Enterprise Linux release 9.1 (Plow) from redhat/ubi9 image in Docker) with gcc11 (gcc version 11.3.1 20220421 (Red Hat 11.3.1-2) (GCC), as installed with "yum install gcc") I wasn't able to reproduce this as well (also tested both with default compilation flags as provided by nginx, and cc/ld options "-flto -O2"). You may want to provide more details on how to reproduce this. 
Some exact steps you've actually tested might the way to go. > >> Changing the code to use current implementation > >> of ngx_explicit_memzero() doesn't help because of link-time optimizations > >> enabled in RHEL 9 and derivatives. Glibc 2.34 found in RHEL 9 provides > >> explicit_bzero() function which should be used to avoid such optimization. > >> ngx_explicit_memzero() is changed to use explicit_bzero() if possible. > > > > The ngx_explicit_memzero() function is to be used when zeroed data > > are indeed not used afterwards, for example, to make sure > > passwords are actually eliminated from memory. It shouldn't be > > used instead of a real ngx_memzero() call - doing so might hide > > the problem, which is either in the compiler or in nginx, but > > won't fix it. > > In this case the nginx code should be fixed to avoid partial memory fillings, > but such change will come with performance penalty, especially on the CPUs > without proper `REP MOVSB/MOVSD/MOVSQ` implementation. Controlled usage of > explicit zeroing is much better is this case. You may want to elaborate on what "nginx code should be fixed to avoid partial memory fillings" means and why it should be fixed/avoided. > > As for using explicit_bzero() for it, we've looked into various > > OS-specific solutions, though there are too many variants out > > there, so it was decided that having our own simple implementation > > is a better way to go. If it doesn't work in the particular > > setup, it might make sense to adjust our implementation - but > > given the above, it might be the same issue which causes the > > original problem. > > Unfortunately, the memory barrier trick is not working anymore for linker-time > optimizations. Linker has better information about whether the stored > information is used again or not. And it will remove memset in such > implementation, and it will definitely affected security-related code you > mentioned above. Without link-time optimization, just a separate compilation unit with a function is more than enough. The ngx_memory_barrier() is additionally used as in many practical cases it introduces a compiler barrier, and this also defeats link-time optimization. This might not be the case for GCC though, as with GCC we currently use __sync_synchronize() for ngx_memory_barrier(). Adding an explicit compiler barrier (asm with the "memory" clobber should work for most compilers, but not all) might be the way to go if it's indeed the case. It does seem to work with GCC with link-time optimizations enabled though, as least in the RHEL 9 build with gcc11 "-flto -O2". I'm seeing this in the disassemble of ngx_http_auth_basic_handler(): 0x00000000004709c6 <+918>: rep stos %rax,%es:(%rdi) 0x00000000004709c9 <+921>: lock orq $0x0,(%rsp) So it looks like ngx_explicit_memzero() is inlined and optimized to use "rep stos" instead of memset() call, but not eliminated. > explicit_bzero() function is available in well-loved *BSD > systems now and is a proper way to do cleaning of the artifacts, doesn't matter > which implementation is used in the specific system. If I recall correctly, when I last checked there were something like 5 different interfaces out there, including explicit_bzero(), explicit_memset(), memset_s(), and SecureZeroMemory(). With memset_s() being required by C11 standard, but with absolutely brain-damaged interface. (It looks like now there is also memset_explicit(), which is going to become a standard in C23.) 
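[Editorial note: for illustration, a minimal sketch of the "separate compilation unit plus compiler barrier" approach described above could look like the code below. The function name is hypothetical and this is not the actual ngx_explicit_memzero() source; the asm statement is GCC/Clang-specific.]

    #include <string.h>

    void *
    explicit_memzero_sketch(void *buf, size_t n)
    {
        (void) memset(buf, 0, n);

        /*
         * Empty asm with a "memory" clobber: the compiler has to assume
         * the zeroed memory may still be observed after this point, so
         * the memset() above cannot be treated as a dead store; the asm
         * stays opaque even when link-time optimization is enabled.
         */
        __asm__ __volatile__ ("" ::: "memory");

        return buf;
    }
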
As such, the decision was to use our own function which does the trick in most practical cases. And if for some reason it doesn't, this isn't a big issue: that's a mitigation technique at most. -- Maxim Dounin http://mdounin.ru/ From xeioex at nginx.com Wed Apr 19 01:23:03 2023 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 18 Apr 2023 18:23:03 -0700 Subject: nginx 1.24 + njs build errors [-Werror=dangling-pointer=] after switch from GCC 12 (Fedora 37) -> GCC13 (Fedora 38) In-Reply-To: References: Message-ID: <25f8356b-5248-204c-168d-80798eb6cc7b@nginx.com> On 4/18/23 10:26 AM, PGNet Dev wrote: > I'm building nginx mainline v1.24 on Fedora. > > on F37, with gcc 12, > >     gcc --version >         gcc (GCC) 12.2.1 20221121 (Red Hat 12.2.1-4) >         Copyright (C) 2022 Free Software Foundation, Inc. >         This is free software; see the source for copying conditions. > There is NO >         warranty; not even for MERCHANTABILITY or FITNESS FOR A > PARTICULAR PURPOSE. > > > build's good. > > Upgrading to today's new/latest F38, with gcc 13, > >     gcc --version >         gcc (GCC) 13.0.1 20230401 (Red Hat 13.0.1-0) >         Copyright (C) 2023 Free Software Foundation, Inc. >         This is free software; see the source for copying conditions. > There is NO >         warranty; not even for MERCHANTABILITY or FITNESS FOR A > PARTICULAR PURPOSE. > > builds for target F38+ fail @ dangling-pointer errors, > >     ... >     src/njs_iterator.c: In function 'njs_object_iterate': >     src/njs_iterator.c:358:25: error: storing the address of local > variable 'string_obj' in '*args.value' [-Werror=dangling-pointer=] >       358 |             args->value = &string_obj; >           |             ~~~~~~~~~~~~^~~~~~~~~~~~~ >     ... >     cc1: all warnings being treated as errors > > adding > >     -Wno-dangling-pointer > > to build flags worksaround it, with successful build. > > for ref, > > FAILED build log: > >     https://download.copr.fedorainfracloud.org/results/pgfed/nginx-mainline/fedora-38-x86_64/05802768-nginx/build.log.gz > > OK build log: > >     https://download.copr.fedorainfracloud.org/results/pgfed/nginx-mainline/fedora-38-x86_64/05802814-nginx/build.log.gz > > I'm checking to see whether the error flag was added to GCC 13 upstream, > or just to Redhat/Fedora flags ... GCC 13 is not released yet, right? Can you reproduce the issue with -Wdangling-pointer option and GCC 12? > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx-devel From pgnet.dev at gmail.com Wed Apr 19 01:48:38 2023 From: pgnet.dev at gmail.com (PGNet Dev) Date: Tue, 18 Apr 2023 21:48:38 -0400 Subject: nginx 1.24 + njs build errors [-Werror=dangling-pointer=] after switch from GCC 12 (Fedora 37) -> GCC13 (Fedora 38) In-Reply-To: <25f8356b-5248-204c-168d-80798eb6cc7b@nginx.com> References: <25f8356b-5248-204c-168d-80798eb6cc7b@nginx.com> Message-ID: > GCC 13 is not released yet, right? "Real Soon Now (tm)" GCC 13.0.1 Status Report (2023-04-17) https://gcc.gnu.org/pipermail/gcc/2023-April/241140.html It's in the Fedora 38 release, which dropped today: Fedora 38 Released With GNOME 44 Desktop, GCC 13, Many New Features https://www.phoronix.com/news/Fedora-38-Released gcc -v Using built-in specs. 
COLLECT_GCC=gcc COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/13/lto-wrapper OFFLOAD_TARGET_NAMES=nvptx-none OFFLOAD_TARGET_DEFAULT=1 Target: x86_64-redhat-linux Configured with: ../configure --enable-bootstrap --enable-languages=c,c++,fortran,objc,obj-c++,ada,go,d,m2,lto --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-shared --enable-threads=posix --enable-checking=release --enable-multilib --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-gcc-major-version-only --enable-libstdcxx-backtrace --with-libstdcxx-zoneinfo=/usr/share/zoneinfo --with-linker-hash-style=gnu --enable-plugin --enable-initfini-array --with-isl=/builddir/build/BUILD/gcc-13.0.1-20230401/obj-x86_64-redhat-linux/isl-install --enable-offload-targets=nvptx-none --without-cuda-driver --enable-offload-defaulted --enable-gnu-indirect-function --enable-cet --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux --with-build-config=bootstrap-lto --enable-link-serialization=1 Thread model: posix Supported LTO compression algorithms: zlib zstd gcc version 13.0.1 20230401 (Red Hat 13.0.1-0) (GCC) This looks likes the origin https://gcc.gnu.org/gcc-13/changes.html --> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106393 > Can you reproduce the issue with -Wdangling-pointer option and GCC 12? as of ~ an hour ago, the last of my boxes finished updates -- with all GCC 13. i can try to set something up on COPR build sys to check ... From xeioex at nginx.com Wed Apr 19 07:25:41 2023 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Wed, 19 Apr 2023 07:25:41 +0000 Subject: [njs] Change: native methods are provided with retval argument. Message-ID: details: https://hg.nginx.org/njs/rev/0c95481158e4 branches: changeset: 2088:0c95481158e4 user: Dmitry Volyntsev date: Wed Apr 19 00:20:37 2023 -0700 description: Change: native methods are provided with retval argument. Previously, native methods were expected to return their retval using vm->retval. This caused problem in the part (1aa137411b09, 293fe42c5e1c) because vm->retval can be overwritten unexpectedly as a side-effect of operations like ToString(), ToNumber(). The fix is to never used a global retval. Instead methods are provided with a retval argument to store their retval value. As a part of the change, retval and exception values are split. The normal value is returned in the retval argument. The exception value is thrown by njs_vm_throw() or njs_vm_error(). The exception value can be acquired using njs_vm_exception_get(). 
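[Editorial note: as a quick illustration of the new calling convention, here is a sketch of an external method written against the signatures visible in the diff below. It is not part of the commit itself; the function name and include path are assumptions. The result is written into the caller-provided retval slot, and exceptions are raised explicitly instead of being stored in vm->retval.]

    #include <njs.h>    /* assumed public njs header; exact path depends on the build */

    static njs_int_t
    njs_example_identity(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs,
        njs_index_t unused, njs_value_t *retval)
    {
        if (nargs < 2) {
            /* the exception is thrown, not placed in vm->retval */
            njs_vm_error(vm, "argument required");
            return NJS_ERROR;
        }

        /* the normal return value goes into the caller-provided slot */
        njs_value_assign(retval, njs_arg(args, nargs, 1));

        return NJS_OK;
    }
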
diffstat: external/njs_crypto_module.c | 32 +- external/njs_fs_module.c | 289 +++++++++++++++++---------------- external/njs_query_string_module.c | 32 +- external/njs_webcrypto_module.c | 140 ++++++++------- external/njs_xml_module.c | 62 +++--- external/njs_zlib_module.c | 12 +- nginx/ngx_http_js_module.c | 67 ++++--- nginx/ngx_js.c | 26 ++- nginx/ngx_js.h | 4 +- nginx/ngx_js_fetch.c | 133 ++++++++------- nginx/ngx_js_fetch.h | 2 +- nginx/ngx_stream_js_module.c | 36 ++- src/njs.h | 32 +-- src/njs_array.c | 235 +++++++++++++-------------- src/njs_array.h | 2 +- src/njs_array_buffer.c | 20 +- src/njs_async.c | 60 +++--- src/njs_async.h | 4 +- src/njs_boolean.c | 15 +- src/njs_buffer.c | 216 +++++++++++++------------ src/njs_buffer.h | 2 +- src/njs_builtin.c | 10 +- src/njs_date.c | 46 ++-- src/njs_encoding.c | 41 ++-- src/njs_error.c | 70 +++++-- src/njs_error.h | 29 +- src/njs_function.c | 70 +++----- src/njs_function.h | 10 +- src/njs_generator.c | 2 +- src/njs_iterator.c | 57 +++--- src/njs_iterator.h | 10 +- src/njs_json.c | 43 +++- src/njs_math.c | 16 +- src/njs_module.c | 4 +- src/njs_module.h | 2 +- src/njs_number.c | 100 +++++----- src/njs_number.h | 8 +- src/njs_object.c | 155 ++++++++--------- src/njs_object.h | 2 +- src/njs_object_prop.c | 3 +- src/njs_parser.c | 23 +- src/njs_promise.c | 229 ++++++++++++++------------ src/njs_promise.h | 4 +- src/njs_regexp.c | 82 ++++---- src/njs_regexp.h | 5 +- src/njs_shell.c | 47 ++-- src/njs_string.c | 315 ++++++++++++++++++------------------ src/njs_string.h | 10 +- src/njs_symbol.c | 30 +- src/njs_timer.c | 18 +- src/njs_timer.h | 6 +- src/njs_typed_array.c | 159 +++++++++--------- src/njs_typed_array.h | 2 +- src/njs_value.c | 5 +- src/njs_vm.c | 107 ++++++------ src/njs_vm.h | 5 +- src/njs_vmcode.c | 267 +++++++++++++------------------ src/njs_vmcode.h | 2 +- src/test/njs_benchmark.c | 41 ++-- src/test/njs_externals_test.c | 33 +- src/test/njs_unit_test.c | 102 +++++++---- 61 files changed, 1825 insertions(+), 1766 deletions(-) diffs (truncated from 12324 to 1000 lines): diff -r 5665eebfd00c -r 0c95481158e4 external/njs_crypto_module.c --- a/external/njs_crypto_module.c Wed Apr 12 18:26:42 2023 -0700 +++ b/external/njs_crypto_module.c Wed Apr 19 00:20:37 2023 -0700 @@ -62,15 +62,15 @@ static njs_crypto_enc_t *njs_crypto_enco static njs_int_t njs_buffer_digest(njs_vm_t *vm, njs_value_t *value, const njs_str_t *src); static njs_int_t njs_crypto_create_hash(njs_vm_t *vm, njs_value_t *args, - njs_uint_t nargs, njs_index_t unused); + njs_uint_t nargs, njs_index_t unused, njs_value_t *retval); static njs_int_t njs_hash_prototype_update(njs_vm_t *vm, njs_value_t *args, - njs_uint_t nargs, njs_index_t hmac); + njs_uint_t nargs, njs_index_t hmac, njs_value_t *retval); static njs_int_t njs_hash_prototype_digest(njs_vm_t *vm, njs_value_t *args, - njs_uint_t nargs, njs_index_t hmac); + njs_uint_t nargs, njs_index_t hmac, njs_value_t *retval); static njs_int_t njs_hash_prototype_copy(njs_vm_t *vm, njs_value_t *args, - njs_uint_t nargs, njs_index_t hmac); + njs_uint_t nargs, njs_index_t hmac, njs_value_t *retval); static njs_int_t njs_crypto_create_hmac(njs_vm_t *vm, njs_value_t *args, - njs_uint_t nargs, njs_index_t unused); + njs_uint_t nargs, njs_index_t unused, njs_value_t *retval); static njs_int_t njs_crypto_init(njs_vm_t *vm); @@ -288,7 +288,7 @@ njs_module_t njs_crypto_module = { static njs_int_t njs_crypto_create_hash(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, - njs_index_t unused) + njs_index_t unused, njs_value_t *retval) { 
njs_digest_t *dgst; njs_hash_alg_t *alg; @@ -308,14 +308,14 @@ njs_crypto_create_hash(njs_vm_t *vm, njs alg->init(&dgst->u); - return njs_vm_external_create(vm, &vm->retval, njs_crypto_hash_proto_id, + return njs_vm_external_create(vm, retval, njs_crypto_hash_proto_id, dgst, 0); } static njs_int_t njs_hash_prototype_update(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, - njs_index_t hmac) + njs_index_t hmac, njs_value_t *retval) { njs_str_t data; njs_int_t ret; @@ -362,7 +362,7 @@ njs_hash_prototype_update(njs_vm_t *vm, switch (value->type) { case NJS_STRING: - encoding = njs_buffer_encoding(vm, njs_arg(args, nargs, 2)); + encoding = njs_buffer_encoding(vm, njs_arg(args, nargs, 2), 1); if (njs_slow_path(encoding == NULL)) { return NJS_ERROR; } @@ -402,7 +402,7 @@ njs_hash_prototype_update(njs_vm_t *vm, ctx->alg->update(&ctx->u, data.start, data.length); } - vm->retval = *this; + njs_value_assign(retval, this); return NJS_OK; } @@ -410,7 +410,7 @@ njs_hash_prototype_update(njs_vm_t *vm, static njs_int_t njs_hash_prototype_digest(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, - njs_index_t hmac) + njs_index_t hmac, njs_value_t *retval) { njs_str_t str; njs_hmac_t *ctx; @@ -473,7 +473,7 @@ njs_hash_prototype_digest(njs_vm_t *vm, str.start = digest; str.length = alg->size; - return enc->encode(vm, &vm->retval, &str); + return enc->encode(vm, retval, &str); exception: @@ -484,7 +484,7 @@ exception: static njs_int_t njs_hash_prototype_copy(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, - njs_index_t unused) + njs_index_t unused, njs_value_t *retval) { njs_digest_t *dgst, *copy; @@ -507,14 +507,14 @@ njs_hash_prototype_copy(njs_vm_t *vm, nj memcpy(copy, dgst, sizeof(njs_digest_t)); - return njs_vm_external_create(vm, njs_vm_retval(vm), + return njs_vm_external_create(vm, retval, njs_crypto_hash_proto_id, copy, 0); } static njs_int_t njs_crypto_create_hmac(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, - njs_index_t unused) + njs_index_t unused, njs_value_t *retval) { njs_str_t key; njs_uint_t i; @@ -590,7 +590,7 @@ njs_crypto_create_hmac(njs_vm_t *vm, njs alg->init(&ctx->u); alg->update(&ctx->u, key_buf, 64); - return njs_vm_external_create(vm, &vm->retval, njs_crypto_hmac_proto_id, + return njs_vm_external_create(vm, retval, njs_crypto_hmac_proto_id, ctx, 0); } diff -r 5665eebfd00c -r 0c95481158e4 external/njs_fs_module.c --- a/external/njs_fs_module.c Wed Apr 12 18:26:42 2023 -0700 +++ b/external/njs_fs_module.c Wed Apr 19 00:20:37 2023 -0700 @@ -143,35 +143,35 @@ typedef njs_int_t (*njs_file_tree_walk_c static njs_int_t njs_fs_access(njs_vm_t *vm, njs_value_t *args, - njs_uint_t nargs, njs_index_t calltype); + njs_uint_t nargs, njs_index_t calltype, njs_value_t *retval); static njs_int_t njs_fs_mkdir(njs_vm_t *vm, njs_value_t *args, - njs_uint_t nargs, njs_index_t calltype); + njs_uint_t nargs, njs_index_t calltype, njs_value_t *retval); static njs_int_t njs_fs_open(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, - njs_index_t calltype); + njs_index_t calltype, njs_value_t *retval); static njs_int_t njs_fs_close(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, - njs_index_t calltype); + njs_index_t calltype, njs_value_t *retval); static njs_int_t njs_fs_read(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, - njs_index_t calltype); + njs_index_t calltype, njs_value_t *retval); static njs_int_t njs_fs_read_file(njs_vm_t *vm, njs_value_t *args, - njs_uint_t nargs, njs_index_t calltype); + njs_uint_t nargs, njs_index_t calltype, njs_value_t *retval); static njs_int_t 
njs_fs_readdir(njs_vm_t *vm, njs_value_t *args, - njs_uint_t nargs, njs_index_t calltype); + njs_uint_t nargs, njs_index_t calltype, njs_value_t *retval); static njs_int_t njs_fs_realpath(njs_vm_t *vm, njs_value_t *args, - njs_uint_t nargs, njs_index_t calltype); + njs_uint_t nargs, njs_index_t calltype, njs_value_t *retval); static njs_int_t njs_fs_rename(njs_vm_t *vm, njs_value_t *args, - njs_uint_t nargs, njs_index_t calltype); + njs_uint_t nargs, njs_index_t calltype, njs_value_t *retval); static njs_int_t njs_fs_rmdir(njs_vm_t *vm, njs_value_t *args, - njs_uint_t nargs, njs_index_t calltype); + njs_uint_t nargs, njs_index_t calltype, njs_value_t *retval); static njs_int_t njs_fs_stat(njs_vm_t *vm, njs_value_t *args, - njs_uint_t nargs, njs_index_t calltype); + njs_uint_t nargs, njs_index_t calltype, njs_value_t *retval); static njs_int_t njs_fs_symlink(njs_vm_t *vm, njs_value_t *args, - njs_uint_t nargs, njs_index_t calltype); + njs_uint_t nargs, njs_index_t calltype, njs_value_t *retval); static njs_int_t njs_fs_unlink(njs_vm_t *vm, njs_value_t *args, - njs_uint_t nargs, njs_index_t calltype); + njs_uint_t nargs, njs_index_t calltype, njs_value_t *retval); static njs_int_t njs_fs_write(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, - njs_index_t calltype); + njs_index_t calltype, njs_value_t *retval); static njs_int_t njs_fs_write_file(njs_vm_t *vm, njs_value_t *args, - njs_uint_t nargs, njs_index_t calltype); + njs_uint_t nargs, njs_index_t calltype, njs_value_t *retval); static njs_int_t njs_fs_constants(njs_vm_t *vm, njs_object_prop_t *prop, njs_value_t *value, njs_value_t *unused, njs_value_t *retval); @@ -179,21 +179,21 @@ static njs_int_t njs_fs_promises(njs_vm_ njs_value_t *value, njs_value_t *unused, njs_value_t *retval); static njs_int_t njs_fs_dirent_constructor(njs_vm_t *vm, njs_value_t *args, - njs_uint_t nargs, njs_index_t unused); + njs_uint_t nargs, njs_index_t unused, njs_value_t *retval); static njs_int_t njs_fs_dirent_test(njs_vm_t *vm, njs_value_t *args, - njs_uint_t nargs, njs_index_t testtype); + njs_uint_t nargs, njs_index_t testtype, njs_value_t *retval); static njs_int_t njs_fs_stats_test(njs_vm_t *vm, njs_value_t *args, - njs_uint_t nargs, njs_index_t testtype); + njs_uint_t nargs, njs_index_t testtype, njs_value_t *retval); static njs_int_t njs_fs_stats_prop(njs_vm_t *vm, njs_object_prop_t *prop, njs_value_t *value, njs_value_t *setval, njs_value_t *retval); static njs_int_t njs_fs_stats_create(njs_vm_t *vm, struct stat *st, njs_value_t *retval); static njs_int_t njs_fs_filehandle_close(njs_vm_t *vm, njs_value_t *args, - njs_uint_t nargs, njs_index_t unused); + njs_uint_t nargs, njs_index_t unused, njs_value_t *retval); static njs_int_t njs_fs_filehandle_value_of(njs_vm_t *vm, njs_value_t *args, - njs_uint_t nargs, njs_index_t unused); + njs_uint_t nargs, njs_index_t unused, njs_value_t *retval); static njs_int_t njs_fs_filehandle_create(njs_vm_t *vm, int fd, njs_bool_t shadow, njs_value_t *retval); @@ -205,9 +205,10 @@ static njs_int_t njs_fs_bytes_written_cr static njs_int_t njs_fs_fd_read(njs_vm_t *vm, int fd, njs_str_t *data); static njs_int_t njs_fs_error(njs_vm_t *vm, const char *syscall, - const char *desc, const char *path, int errn, njs_value_t *retval); + const char *desc, const char *path, int errn, njs_value_t *result); static njs_int_t njs_fs_result(njs_vm_t *vm, njs_value_t *result, - njs_index_t calltype, const njs_value_t* callback, njs_uint_t nargs); + njs_index_t calltype, const njs_value_t* callback, njs_uint_t nargs, + njs_value_t 
*retval); static njs_int_t njs_file_tree_walk(const char *path, njs_file_tree_walk_cb_t cb, int fd_limit, njs_ftw_flags_t flags); @@ -1172,12 +1173,12 @@ njs_module_t njs_fs_module = { static njs_int_t njs_fs_access(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, - njs_index_t calltype) + njs_index_t calltype, njs_value_t *retval) { int md; njs_int_t ret; const char *path; - njs_value_t retval, *callback, *mode; + njs_value_t result, *callback, *mode; char path_buf[NJS_MAX_PATH + 1]; path = njs_fs_path(vm, path_buf, njs_arg(args, nargs, 1), "path"); @@ -1214,15 +1215,15 @@ njs_fs_access(njs_vm_t *vm, njs_value_t return NJS_ERROR; } - njs_set_undefined(&retval); + njs_set_undefined(&result); ret = access(path, md); if (njs_slow_path(ret != 0)) { - ret = njs_fs_error(vm, "access", strerror(errno), path, errno, &retval); + ret = njs_fs_error(vm, "access", strerror(errno), path, errno, &result); } if (ret == NJS_OK) { - return njs_fs_result(vm, &retval, calltype, callback, 1); + return njs_fs_result(vm, &result, calltype, callback, 1, retval); } return NJS_ERROR; @@ -1231,13 +1232,13 @@ njs_fs_access(njs_vm_t *vm, njs_value_t static njs_int_t njs_fs_open(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, - njs_index_t calltype) + njs_index_t calltype, njs_value_t *retval) { int fd, flags; mode_t md; njs_int_t ret; const char *path; - njs_value_t retval, *value; + njs_value_t result, *value; char path_buf[NJS_MAX_PATH + 1]; path = njs_fs_path(vm, path_buf, njs_arg(args, nargs, 1), "path"); @@ -1267,23 +1268,23 @@ njs_fs_open(njs_vm_t *vm, njs_value_t *a fd = open(path, flags, md); if (njs_slow_path(fd < 0)) { - ret = njs_fs_error(vm, "open", strerror(errno), path, errno, &retval); + ret = njs_fs_error(vm, "open", strerror(errno), path, errno, &result); goto done; } - ret = njs_fs_filehandle_create(vm, fd, calltype == NJS_FS_DIRECT, &retval); + ret = njs_fs_filehandle_create(vm, fd, calltype == NJS_FS_DIRECT, &result); if (njs_slow_path(ret != NJS_OK)) { goto done; } if (calltype == NJS_FS_DIRECT) { - njs_value_number_set(&retval, fd); + njs_value_number_set(&result, fd); } done: if (ret == NJS_OK) { - return njs_fs_result(vm, &retval, calltype, NULL, 2); + return njs_fs_result(vm, &result, calltype, NULL, 2, retval); } if (fd != -1) { @@ -1296,11 +1297,11 @@ done: static njs_int_t njs_fs_close(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, - njs_index_t calltype) + njs_index_t calltype, njs_value_t *retval) { int64_t fd; njs_int_t ret; - njs_value_t retval, *fh; + njs_value_t result, *fh; fh = njs_arg(args, nargs, 1); @@ -1309,15 +1310,15 @@ njs_fs_close(njs_vm_t *vm, njs_value_t * return ret; } - njs_set_undefined(&retval); + njs_set_undefined(&result); ret = close((int) fd); if (njs_slow_path(ret != 0)) { - ret = njs_fs_error(vm, "close", strerror(errno), NULL, errno, &retval); + ret = njs_fs_error(vm, "close", strerror(errno), NULL, errno, &result); } if (ret == NJS_OK) { - return njs_fs_result(vm, &retval, calltype, NULL, 1); + return njs_fs_result(vm, &result, calltype, NULL, 1, retval); } return NJS_ERROR; @@ -1326,12 +1327,12 @@ njs_fs_close(njs_vm_t *vm, njs_value_t * static njs_int_t njs_fs_mkdir(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, - njs_index_t calltype) + njs_index_t calltype, njs_value_t *retval) { char *path; mode_t md; njs_int_t ret; - njs_value_t mode, recursive, retval, *callback, *options; + njs_value_t mode, recursive, result, *callback, *options; char path_buf[NJS_MAX_PATH + 1]; path = (char *) njs_fs_path(vm, path_buf, njs_arg(args, nargs, 1), "path"); 
@@ -1390,10 +1391,10 @@ njs_fs_mkdir(njs_vm_t *vm, njs_value_t * return NJS_ERROR; } - ret = njs_fs_make_path(vm, path, md, njs_is_true(&recursive), &retval); + ret = njs_fs_make_path(vm, path, md, njs_is_true(&recursive), &result); if (ret == NJS_OK) { - return njs_fs_result(vm, &retval, calltype, callback, 1); + return njs_fs_result(vm, &result, calltype, callback, 1, retval); } return NJS_ERROR; @@ -1402,14 +1403,14 @@ njs_fs_mkdir(njs_vm_t *vm, njs_value_t * static njs_int_t njs_fs_read(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, - njs_index_t calltype) + njs_index_t calltype, njs_value_t *retval) { int64_t fd, length, pos, offset; ssize_t n; njs_int_t ret; njs_str_t data; njs_uint_t fd_offset; - njs_value_t retval, *buffer, *value; + njs_value_t result, *buffer, *value; njs_typed_array_t *array; njs_array_buffer_t *array_buffer; @@ -1487,24 +1488,24 @@ njs_fs_read(njs_vm_t *vm, njs_value_t *a } if (njs_slow_path(n == -1)) { - ret = njs_fs_error(vm, "read", strerror(errno), NULL, errno, &retval); + ret = njs_fs_error(vm, "read", strerror(errno), NULL, errno, &result); goto done; } if (calltype == NJS_FS_PROMISE) { - ret = njs_fs_bytes_read_create(vm, n, buffer, &retval); + ret = njs_fs_bytes_read_create(vm, n, buffer, &result); if (njs_slow_path(ret != NJS_OK)) { goto done; } } else { - njs_value_number_set(&retval, n); + njs_value_number_set(&result, n); } done: if (ret == NJS_OK) { - return njs_fs_result(vm, &retval, calltype, NULL, 1); + return njs_fs_result(vm, &result, calltype, NULL, 1, retval); } return NJS_ERROR; @@ -1513,13 +1514,13 @@ done: static njs_int_t njs_fs_read_file(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, - njs_index_t calltype) + njs_index_t calltype, njs_value_t *retval) { int fd, flags; njs_str_t data; njs_int_t ret; const char *path; - njs_value_t flag, encode, retval, *callback, *options; + njs_value_t flag, encode, result, *callback, *options; struct stat sb; const njs_buffer_encoding_t *encoding; char path_buf[NJS_MAX_PATH + 1]; @@ -1584,7 +1585,7 @@ njs_fs_read_file(njs_vm_t *vm, njs_value encoding = NULL; if (njs_is_defined(&encode)) { - encoding = njs_buffer_encoding(vm, &encode); + encoding = njs_buffer_encoding(vm, &encode, 1); if (njs_slow_path(encoding == NULL)) { return NJS_ERROR; } @@ -1592,18 +1593,18 @@ njs_fs_read_file(njs_vm_t *vm, njs_value fd = open(path, flags); if (njs_slow_path(fd < 0)) { - ret = njs_fs_error(vm, "open", strerror(errno), path, errno, &retval); + ret = njs_fs_error(vm, "open", strerror(errno), path, errno, &result); goto done; } ret = fstat(fd, &sb); if (njs_slow_path(ret == -1)) { - ret = njs_fs_error(vm, "stat", strerror(errno), path, errno, &retval); + ret = njs_fs_error(vm, "stat", strerror(errno), path, errno, &result); goto done; } if (njs_slow_path(!S_ISREG(sb.st_mode))) { - ret = njs_fs_error(vm, "stat", "File is not regular", path, 0, &retval); + ret = njs_fs_error(vm, "stat", "File is not regular", path, 0, &result); goto done; } @@ -1614,17 +1615,17 @@ njs_fs_read_file(njs_vm_t *vm, njs_value if (njs_slow_path(ret != NJS_OK)) { if (ret == NJS_DECLINED) { ret = njs_fs_error(vm, "read", strerror(errno), path, errno, - &retval); + &result); } goto done; } if (encoding == NULL) { - ret = njs_buffer_set(vm, &retval, data.start, data.length); + ret = njs_buffer_set(vm, &result, data.start, data.length); } else { - ret = encoding->encode(vm, &retval, &data); + ret = encoding->encode(vm, &result, &data); njs_mp_free(vm->mem_pool, data.start); } @@ -1635,7 +1636,7 @@ done: } if (ret == NJS_OK) { - return 
njs_fs_result(vm, &retval, calltype, callback, 2); + return njs_fs_result(vm, &result, calltype, callback, 2, retval); } return NJS_ERROR; @@ -1644,13 +1645,13 @@ done: static njs_int_t njs_fs_readdir(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, - njs_index_t calltype) + njs_index_t calltype, njs_value_t *retval) { DIR *dir; njs_str_t s; njs_int_t ret; const char *path; - njs_value_t encode, types, ename, etype, retval, + njs_value_t encode, types, ename, etype, result, *callback, *options, *value; njs_array_t *results; struct dirent *entry; @@ -1712,7 +1713,7 @@ njs_fs_readdir(njs_vm_t *vm, njs_value_t encoding = NULL; if (!njs_is_string(&encode) || !njs_string_eq(&encode, &string_buffer)) { - encoding = njs_buffer_encoding(vm, &encode); + encoding = njs_buffer_encoding(vm, &encode, 1); if (njs_slow_path(encoding == NULL)) { return NJS_ERROR; } @@ -1723,12 +1724,12 @@ njs_fs_readdir(njs_vm_t *vm, njs_value_t return NJS_ERROR; } - njs_set_array(&retval, results); + njs_set_array(&result, results); dir = opendir(path); if (njs_slow_path(dir == NULL)) { ret = njs_fs_error(vm, "opendir", strerror(errno), path, errno, - &retval); + &result); goto done; } @@ -1740,7 +1741,7 @@ njs_fs_readdir(njs_vm_t *vm, njs_value_t if (njs_slow_path(entry == NULL)) { if (errno != 0) { ret = njs_fs_error(vm, "readdir", strerror(errno), path, errno, - &retval); + &result); } goto done; @@ -1791,7 +1792,7 @@ done: } if (ret == NJS_OK) { - return njs_fs_result(vm, &retval, calltype, callback, 2); + return njs_fs_result(vm, &result, calltype, callback, 2, retval); } return NJS_ERROR; @@ -1800,12 +1801,12 @@ done: static njs_int_t njs_fs_realpath(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, - njs_index_t calltype) + njs_index_t calltype, njs_value_t *retval) { njs_int_t ret; njs_str_t s; const char *path; - njs_value_t encode, retval, *callback, *options; + njs_value_t encode, result, *callback, *options; const njs_buffer_encoding_t *encoding; char path_buf[NJS_MAX_PATH + 1], dst_buf[NJS_MAX_PATH + 1]; @@ -1857,7 +1858,7 @@ njs_fs_realpath(njs_vm_t *vm, njs_value_ encoding = NULL; if (!njs_is_string(&encode) || !njs_string_eq(&encode, &string_buffer)) { - encoding = njs_buffer_encoding(vm, &encode); + encoding = njs_buffer_encoding(vm, &encode, 1); if (njs_slow_path(encoding == NULL)) { return NJS_ERROR; } @@ -1866,23 +1867,23 @@ njs_fs_realpath(njs_vm_t *vm, njs_value_ s.start = (u_char *) realpath(path, dst_buf); if (njs_slow_path(s.start == NULL)) { ret = njs_fs_error(vm, "realpath", strerror(errno), path, errno, - &retval); + &result); goto done; } s.length = njs_strlen(s.start); if (encoding == NULL) { - ret = njs_buffer_new(vm, &retval, s.start, s.length); + ret = njs_buffer_new(vm, &result, s.start, s.length); } else { - ret = encoding->encode(vm, &retval, &s); + ret = encoding->encode(vm, &result, &s); } done: if (ret == NJS_OK) { - return njs_fs_result(vm, &retval, calltype, callback, 2); + return njs_fs_result(vm, &result, calltype, callback, 2, retval); } return NJS_ERROR; @@ -1891,11 +1892,11 @@ done: static njs_int_t njs_fs_rename(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, - njs_index_t calltype) + njs_index_t calltype, njs_value_t *retval) { njs_int_t ret; const char *path, *newpath; - njs_value_t retval, *callback; + njs_value_t result, *callback; char path_buf[NJS_MAX_PATH + 1], newpath_buf[NJS_MAX_PATH + 1]; callback = NULL; @@ -1918,15 +1919,15 @@ njs_fs_rename(njs_vm_t *vm, njs_value_t return NJS_ERROR; } - njs_set_undefined(&retval); + njs_set_undefined(&result); ret = 
rename(path, newpath); if (njs_slow_path(ret != 0)) { - ret = njs_fs_error(vm, "rename", strerror(errno), NULL, errno, &retval); + ret = njs_fs_error(vm, "rename", strerror(errno), NULL, errno, &result); } if (ret == NJS_OK) { - return njs_fs_result(vm, &retval, calltype, callback, 1); + return njs_fs_result(vm, &result, calltype, callback, 1, retval); } return NJS_ERROR; @@ -1935,11 +1936,11 @@ njs_fs_rename(njs_vm_t *vm, njs_value_t static njs_int_t njs_fs_rmdir(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, - njs_index_t calltype) + njs_index_t calltype, njs_value_t *retval) { njs_int_t ret; const char *path; - njs_value_t recursive, retval, *callback, *options; + njs_value_t recursive, result, *callback, *options; char path_buf[NJS_MAX_PATH + 1]; path = njs_fs_path(vm, path_buf, njs_arg(args, nargs, 1), "path"); @@ -1982,10 +1983,10 @@ njs_fs_rmdir(njs_vm_t *vm, njs_value_t * } } - ret = njs_fs_rmtree(vm, path, njs_is_true(&recursive), &retval); + ret = njs_fs_rmtree(vm, path, njs_is_true(&recursive), &result); if (ret == NJS_OK) { - return njs_fs_result(vm, &retval, calltype, callback, 1); + return njs_fs_result(vm, &result, calltype, callback, 1, retval); } return NJS_ERROR; @@ -1994,7 +1995,7 @@ njs_fs_rmdir(njs_vm_t *vm, njs_value_t * static njs_int_t njs_fs_stat(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, - njs_index_t magic) + njs_index_t magic, njs_value_t *retval) { int64_t fd; njs_int_t ret; @@ -2002,7 +2003,7 @@ njs_fs_stat(njs_vm_t *vm, njs_value_t *a njs_bool_t throw; struct stat sb; const char *path; - njs_value_t retval, *callback, *options; + njs_value_t result, *callback, *options; njs_fs_calltype_t calltype; char path_buf[NJS_MAX_PATH + 1]; @@ -2059,24 +2060,24 @@ njs_fs_stat(njs_vm_t *vm, njs_value_t *a } ret = njs_value_property(vm, options, njs_value_arg(&string_bigint), - &retval); + &result); if (njs_slow_path(ret == NJS_ERROR)) { return ret; } - if (njs_bool(&retval)) { + if (njs_bool(&result)) { njs_type_error(vm, "\"bigint\" is not supported"); return NJS_ERROR; } if (calltype == NJS_FS_DIRECT) { ret = njs_value_property(vm, options, njs_value_arg(&string_throw), - &retval); + &result); if (njs_slow_path(ret == NJS_ERROR)) { return ret; } - throw = njs_bool(&retval); + throw = njs_bool(&result); } } @@ -2099,33 +2100,33 @@ njs_fs_stat(njs_vm_t *vm, njs_value_t *a if (errno != ENOENT || throw) { ret = njs_fs_error(vm, ((magic >> 2) == NJS_FS_STAT) ? 
"stat" : "lstat", - strerror(errno), path, errno, &retval); + strerror(errno), path, errno, &result); if (njs_slow_path(ret != NJS_OK)) { return NJS_ERROR; } } else { - njs_set_undefined(&retval); + njs_set_undefined(&result); } - return njs_fs_result(vm, &retval, calltype, callback, 2); + return njs_fs_result(vm, &result, calltype, callback, 2, retval); } - ret = njs_fs_stats_create(vm, &sb, &retval); + ret = njs_fs_stats_create(vm, &sb, &result); if (njs_slow_path(ret != NJS_OK)) { return NJS_ERROR; } - return njs_fs_result(vm, &retval, calltype, callback, 2); + return njs_fs_result(vm, &result, calltype, callback, 2, retval); } static njs_int_t njs_fs_symlink(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, - njs_index_t calltype) + njs_index_t calltype, njs_value_t *retval) { njs_int_t ret; const char *target, *path; - njs_value_t retval, *callback, *type; + njs_value_t result, *callback, *type; char target_buf[NJS_MAX_PATH + 1], path_buf[NJS_MAX_PATH + 1]; target = njs_fs_path(vm, target_buf, njs_arg(args, nargs, 1), "target"); @@ -2158,16 +2159,16 @@ njs_fs_symlink(njs_vm_t *vm, njs_value_t return NJS_ERROR; } - njs_set_undefined(&retval); + njs_set_undefined(&result); ret = symlink(target, path); if (njs_slow_path(ret != 0)) { ret = njs_fs_error(vm, "symlink", strerror(errno), path, errno, - &retval); + &result); } if (ret == NJS_OK) { - return njs_fs_result(vm, &retval, calltype, callback, 1); + return njs_fs_result(vm, &result, calltype, callback, 1, retval); } return NJS_ERROR; @@ -2176,11 +2177,11 @@ njs_fs_symlink(njs_vm_t *vm, njs_value_t static njs_int_t njs_fs_unlink(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, - njs_index_t calltype) + njs_index_t calltype, njs_value_t *retval) { njs_int_t ret; const char *path; - njs_value_t retval, *callback; + njs_value_t result, *callback; char path_buf[NJS_MAX_PATH + 1]; path = njs_fs_path(vm, path_buf, njs_arg(args, nargs, 1), "path"); @@ -2198,15 +2199,15 @@ njs_fs_unlink(njs_vm_t *vm, njs_value_t } } - njs_set_undefined(&retval); + njs_set_undefined(&result); ret = unlink(path); if (njs_slow_path(ret != 0)) { - ret = njs_fs_error(vm, "unlink", strerror(errno), path, errno, &retval); + ret = njs_fs_error(vm, "unlink", strerror(errno), path, errno, &result); } if (ret == NJS_OK) { - return njs_fs_result(vm, &retval, calltype, callback, 1); + return njs_fs_result(vm, &result, calltype, callback, 1, retval); } return NJS_ERROR; @@ -2215,14 +2216,14 @@ njs_fs_unlink(njs_vm_t *vm, njs_value_t static njs_int_t njs_fs_write(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, - njs_index_t calltype) + njs_index_t calltype, njs_value_t *retval) { int64_t fd, length, pos, offset; ssize_t n; njs_int_t ret; njs_str_t data; njs_uint_t fd_offset; - njs_value_t retval, *buffer, *value; + njs_value_t result, *buffer, *value; const njs_buffer_encoding_t *encoding; fd_offset = !!(calltype == NJS_FS_DIRECT); @@ -2252,17 +2253,18 @@ njs_fs_write(njs_vm_t *vm, njs_value_t * } } - encoding = njs_buffer_encoding(vm, njs_arg(args, nargs, fd_offset + 3)); + encoding = njs_buffer_encoding(vm, njs_arg(args, nargs, fd_offset + 3), + 1); if (njs_slow_path(encoding == NULL)) { return NJS_ERROR; } - ret = njs_buffer_decode_string(vm, buffer, &retval, encoding); + ret = njs_buffer_decode_string(vm, buffer, &result, encoding); if (njs_slow_path(ret != NJS_OK)) { return NJS_ERROR; } - njs_string_get(&retval, &data); + njs_string_get(&result, &data); goto process; } @@ -2328,30 +2330,30 @@ process: } if (njs_slow_path(n == -1)) { - ret = njs_fs_error(vm, 
"write", strerror(errno), NULL, errno, &retval); + ret = njs_fs_error(vm, "write", strerror(errno), NULL, errno, &result); goto done; } if (njs_slow_path((size_t) n != data.length)) { ret = njs_fs_error(vm, "write", "failed to write all the data", NULL, - 0, &retval); + 0, &result); goto done; } if (calltype == NJS_FS_PROMISE) { - ret = njs_fs_bytes_written_create(vm, n, buffer, &retval); + ret = njs_fs_bytes_written_create(vm, n, buffer, &result); if (njs_slow_path(ret != NJS_OK)) { goto done; } } else { - njs_value_number_set(&retval, n); + njs_value_number_set(&result, n); } done: if (ret == NJS_OK) { - return njs_fs_result(vm, &retval, calltype, NULL, 1); + return njs_fs_result(vm, &result, calltype, NULL, 1, retval); } return NJS_ERROR; @@ -2360,7 +2362,7 @@ done: static njs_int_t njs_fs_write_file(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, - njs_index_t magic) + njs_index_t magic, njs_value_t *retval) { int fd, flags; u_char *p, *end; @@ -2369,7 +2371,7 @@ njs_fs_write_file(njs_vm_t *vm, njs_valu njs_str_t content; njs_int_t ret; const char *path; - njs_value_t flag, mode, encode, retval, *data, *callback, + njs_value_t flag, mode, encode, result, *data, *callback, *options; njs_typed_array_t *array; njs_fs_calltype_t calltype; @@ -2455,22 +2457,22 @@ njs_fs_write_file(njs_vm_t *vm, njs_valu case NJS_STRING: default: - encoding = njs_buffer_encoding(vm, &encode); + encoding = njs_buffer_encoding(vm, &encode, 1); if (njs_slow_path(encoding == NULL)) { return NJS_ERROR; } - ret = njs_value_to_string(vm, &retval, data); + ret = njs_value_to_string(vm, &result, data); if (njs_slow_path(ret != NJS_OK)) { return NJS_ERROR; } - ret = njs_buffer_decode_string(vm, &retval, &retval, encoding); + ret = njs_buffer_decode_string(vm, &result, &result, encoding); if (njs_slow_path(ret != NJS_OK)) { return NJS_ERROR; } - njs_string_get(&retval, &content); + njs_string_get(&result, &content); break; } @@ -2488,7 +2490,7 @@ njs_fs_write_file(njs_vm_t *vm, njs_valu fd = open(path, flags, md); if (njs_slow_path(fd < 0)) { - ret = njs_fs_error(vm, "open", strerror(errno), path, errno, &retval); + ret = njs_fs_error(vm, "open", strerror(errno), path, errno, &result); goto done; } @@ -2503,7 +2505,7 @@ njs_fs_write_file(njs_vm_t *vm, njs_valu } ret = njs_fs_error(vm, "write", strerror(errno), path, errno, - &retval); + &result); goto done; } @@ -2511,7 +2513,7 @@ njs_fs_write_file(njs_vm_t *vm, njs_valu } ret = NJS_OK; - njs_set_undefined(&retval); + njs_set_undefined(&result); done: @@ -2520,7 +2522,7 @@ done: } if (ret == NJS_OK) { - return njs_fs_result(vm, &retval, calltype, callback, 1); + return njs_fs_result(vm, &result, calltype, callback, 1, retval); } return NJS_ERROR; @@ -3070,7 +3072,7 @@ njs_fs_error(njs_vm_t *vm, const char *s static njs_int_t ngx_fs_promise_trampoline(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, - njs_index_t unused) + njs_index_t unused, njs_value_t *retval) { njs_value_t value; @@ -3085,15 +3087,20 @@ static const njs_value_t promise_trampo static njs_int_t njs_fs_result(njs_vm_t *vm, njs_value_t *result, njs_index_t calltype, - const njs_value_t *callback, njs_uint_t nargs) + const njs_value_t *callback, njs_uint_t nargs, njs_value_t *retval) { njs_int_t ret; njs_value_t promise, callbacks[2], arguments[2]; switch (calltype) { case NJS_FS_DIRECT: - vm->retval = *result; - return njs_is_error(result) ? 
NJS_ERROR : NJS_OK; + if (njs_is_error(result)) { + njs_vm_throw(vm, result); + return NJS_ERROR; + } + + njs_value_assign(retval, result); + return NJS_OK; case NJS_FS_PROMISE: ret = njs_vm_promise_create(vm, &promise, &callbacks[0]); @@ -3110,7 +3117,7 @@ njs_fs_result(njs_vm_t *vm, njs_value_t return ret; } - vm->retval = promise; + njs_value_assign(retval, &promise); return NJS_OK; @@ -3130,7 +3137,7 @@ njs_fs_result(njs_vm_t *vm, njs_value_t return ret; } - njs_set_undefined(&vm->retval); + njs_set_undefined(retval); return NJS_OK; @@ -3217,7 +3224,7 @@ njs_fs_dirent_create(njs_vm_t *vm, njs_v static njs_int_t njs_fs_dirent_constructor(njs_vm_t *vm, njs_value_t *args, - njs_uint_t nargs, njs_index_t unused) From vadim.fedorenko at cdnnow.ru Wed Apr 19 08:55:33 2023 From: vadim.fedorenko at cdnnow.ru (Vadim Fedorenko) Date: Wed, 19 Apr 2023 09:55:33 +0100 Subject: [PATCH 0 of 4] Avoid dead store elimination in GCC 11+ In-Reply-To: References: Message-ID: <0c6a9e03-28a8-0f0a-89ac-9fd36ed57d40@cdnnow.ru> Hi! On 18.04.2023 20:14, Maxim Dounin wrote: > Hello! > > On Tue, Apr 18, 2023 at 10:50:01AM +0100, Vadim Fedorenko wrote: > >> On 18.04.2023 02:54, Maxim Dounin wrote: >>> Hello! >>> >>> On Tue, Apr 18, 2023 at 02:07:06AM +0300, Vadim Fedorenko via nginx-devel wrote: >>> >>>> GCC version 11 and newer use more aggressive way to eliminate dead stores >>>> which ends up removing ngx_memzero() calls in several places. Such optimization >>>> affects calculations of md5 and sha1 implemented internally in nginx. The >>>> effect could be easily observed by adding a random data to buffer array in >>>> md5_init() or sha1_init() functions. With this simple modifications the result >>>> of the hash computation will be different each time even though the provided >>>> data to hash is not changed. >>> >>> If calculations of md5 and sha1 are affected, this means that the >>> stores in question are not dead, and they shouldn't be eliminated >>> in the first place. From your description this looks like a bug >>> in the compiler in question. >> >> Yeah, these ngx_memzero()s must not be dead, but according to the standart they >> are. In md5_final() the function is called this way: >> ngx_memzero(&ctx->buffer[used], free - 8); >> That means that a new variable of type 'char *' is created with the life time >> scoped to the call to ngx_memzero(). As the result of of the function is ignored >> explicitly, no other parameters are passed by pointer, and the variable is not >> accessed anywhere else, the whole call can be optimized out. > > The pointer is passed to the function, and the function modifies > the memory being pointed to by the pointer. While the pointer is > not used anywhere else and can be optimized out, the memory it > points to is used elsewhere, and this modification cannot be > optimized out, so it is incorrect to remove the call according to > my understanding of the C standard. > > If you still think it's correct and based on the C standard, > please provide relevant references (and quotes) which explain why > these calls can be optimized out. > >>> Alternatively, this can be a bug in nginx code which makes the >>> compiler think that it can eliminate these ngx_memzero() calls - for >>> example, GCC is known to do such things if it sees an undefined >>> behaviour in the code. >> >> There is no undefined behavior unfortunately, everything in this place is well >> defined. > > Well, I don't think so. 
There is a function call, and it cannot > be eliminated by the compiler unless the compiler thinks that the > results of the function call do not affect the program execution > as externally observed. Clearly the program execution is affected > (as per your claim). This leaves us the two possible > alternatives: > > - There is a bug in the compiler, and it incorrectly thinks that > the function call do not affect the program execution. > > - There is a bug in the code, and it triggers undefined behaviour, > so the compiler might not actually know what happens in the code > (because it not required to do anything meaningful in case of > undefined behaviour, and simply assume it should never happen). > > Just in case, the actual undefined behaviour might occur in the > ngx_md5_body() function due to strict-aliasing rules being broken > by the optimized GET() macro on platforms without strict alignment > requirements if the original data buffer as provided to > ngx_md5_update() cannot be aliased by uint32_t. See this commit in > the original repository of the md5 code nginx uses: > > https://cvsweb.openwall.com/cgi/cvsweb.cgi/Owl/packages/popa3d/popa3d/md5/md5.c.diff?r1=1.14;r2=1.15 > > But nginx only uses ngx_md5_update() with text buffers, so > strict-aliasing rules aren't broken. > >>> You may want to elaborate more on how to reproduce this, and, if >>> possible, how to build a minimal test case which demonstrates the >>> problem. >> >> Sure, let's elaborate a bit. To reproduce the bug you can simply apply the diff: >> >> diff --git a/src/core/ngx_md5.c b/src/core/ngx_md5.c >> index c25d0025d..67cc06438 100644 >> --- a/src/core/ngx_md5.c >> +++ b/src/core/ngx_md5.c >> @@ -24,6 +24,7 @@ ngx_md5_init(ngx_md5_t *ctx) >> ctx->d = 0x10325476; >> >> ctx->bytes = 0; >> + getrandom(ctx->buffer, 64, 0); >> } >> > > Note that this won't compile, it also needs "#include ". > >> This code will emulate the garbage for the stack-allocated 'ngx_md5_t md5;' in >> ngx_http_file_cache_create_key when nginx is running under the load. Then you >> can use simple configuration: >> >> upstream test_001_origin { >> server 127.0.0.1:8000; >> } >> >> proxy_cache_path /var/cache/nginx/test-001 keys_zone=test_001:10m max_size=5g >> inactive=24h levels=1:2 use_temp_path=off; >> >> server { >> listen 127.0.0.1:8000; >> >> location = /plain { >> return 200; >> } >> >> } >> >> server { >> listen 127.0.0.1:80; >> >> location /oplain { >> proxy_cache test_001; >> proxy_cache_key /oplain; >> proxy_pass http://test_001_origin/plain/; >> } >> } >> >> >> Every time you call 'curl http://127.0.0.1/oplain' a new cache file will be >> created, but the md5sum of the file will be the same, meaining that the key >> stored in the file is absolutely the same. > > Note that the exact configuration will make "GET /plain/" requests > to upstream server, resulting in 404 and nothing cached. I've > fixed this to actually match "location = /plain" and added > "proxy_cache_valid 200 1h;" to ensure caching will actually work. > > Still, I wasn't able to reproduce the issue you are seeing on > FreeBSD 12.4 with gcc12, neither with default compilation flags as > used by nginx, nor with --with-cc-opt="-flto -O2" and > --with-ld-opt="-flto -O2". 
> > On RHEL 9 (Red Hat Enterprise Linux release 9.1 (Plow) from > redhat/ubi9 image in Docker) with gcc11 (gcc version 11.3.1 > 20220421 (Red Hat 11.3.1-2) (GCC), as installed with "yum install > gcc") I wasn't able to reproduce this as well (also tested both > with default compilation flags as provided by nginx, and cc/ld > options "-flto -O2"). > > You may want to provide more details on how to reproduce this. > Some exact steps you've actually tested might the way to go. > >>>> Changing the code to use current implementation >>>> of ngx_explicit_memzero() doesn't help because of link-time optimizations >>>> enabled in RHEL 9 and derivatives. Glibc 2.34 found in RHEL 9 provides >>>> explicit_bzero() function which should be used to avoid such optimization. >>>> ngx_explicit_memzero() is changed to use explicit_bzero() if possible. >>> >>> The ngx_explicit_memzero() function is to be used when zeroed data >>> are indeed not used afterwards, for example, to make sure >>> passwords are actually eliminated from memory. It shouldn't be >>> used instead of a real ngx_memzero() call - doing so might hide >>> the problem, which is either in the compiler or in nginx, but >>> won't fix it. >> >> In this case the nginx code should be fixed to avoid partial memory fillings, >> but such change will come with performance penalty, especially on the CPUs >> without proper `REP MOVSB/MOVSD/MOVSQ` implementation. Controlled usage of >> explicit zeroing is much better is this case. > > You may want to elaborate on what "nginx code should be fixed to > avoid partial memory fillings" means and why it should be > fixed/avoided. > >>> As for using explicit_bzero() for it, we've looked into various >>> OS-specific solutions, though there are too many variants out >>> there, so it was decided that having our own simple implementation >>> is a better way to go. If it doesn't work in the particular >>> setup, it might make sense to adjust our implementation - but >>> given the above, it might be the same issue which causes the >>> original problem. >> >> Unfortunately, the memory barrier trick is not working anymore for linker-time >> optimizations. Linker has better information about whether the stored >> information is used again or not. And it will remove memset in such >> implementation, and it will definitely affected security-related code you >> mentioned above. > > Without link-time optimization, just a separate compilation unit > with a function is more than enough. > > The ngx_memory_barrier() is additionally used as in many practical > cases it introduces a compiler barrier, and this also defeats > link-time optimization. This might not be the case for GCC > though, as with GCC we currently use __sync_synchronize() for > ngx_memory_barrier(). Adding an explicit compiler barrier (asm > with the "memory" clobber should work for most compilers, but not > all) might be the way to go if it's indeed the case. > > It does seem to work with GCC with link-time optimizations enabled > though, as least in the RHEL 9 build with gcc11 "-flto -O2". I'm > seeing this in the disassemble of ngx_http_auth_basic_handler(): > > 0x00000000004709c6 <+918>: rep stos %rax,%es:(%rdi) > 0x00000000004709c9 <+921>: lock orq $0x0,(%rsp) > > So it looks like ngx_explicit_memzero() is inlined and optimized > to use "rep stos" instead of memset() call, but not eliminated. 
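As a minimal standalone sketch of that barrier approach (GCC/Clang-style inline asm is assumed here; this is illustrative only and not the actual ngx_explicit_memzero() implementation), a plain memset() followed by an empty asm statement that takes the buffer as an input operand and declares a "memory" clobber forces the compiler, including under link-time optimization, to assume the zeroed bytes may still be read:

    #include <stddef.h>
    #include <string.h>

    /* illustrative sketch, not the nginx implementation */
    static void
    explicit_memzero_sketch(void *buf, size_t n)
    {
        memset(buf, 0, n);

        /* compiler barrier: GCC/Clang inline asm, "memory" clobber */
        __asm__ __volatile__ ("" : : "r" (buf) : "memory");
    }

Passing the buffer address as an input operand, in addition to the
"memory" clobber, is what tells the compiler that the asm statement may
read the freshly zeroed bytes, so the preceding store cannot be treated
as dead.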
> >> explicit_bzero() function is available in well-loved *BSD >> systems now and is a proper way to do cleaning of the artifacts, doesn't matter >> which implementation is used in the specific system. > > If I recall correctly, when I last checked there were something > like 5 different interfaces out there, including explicit_bzero(), > explicit_memset(), memset_s(), and SecureZeroMemory(). With > memset_s() being required by C11 standard, but with absolutely > brain-damaged interface. (It looks like now there is also > memset_explicit(), which is going to become a standard in C23.) > > As such, the decision was to use our own function which does the > trick in most practical cases. And if for some reason it doesn't, > this isn't a big issue: that's a mitigation technique at most. > Looks like I found the root cause of the issue in the code added on top of nginx implementation of md5, which is using “type-punning” for optimization reasons, but ended with UB when NGX_HAVE_LITTLE_ENDIAN && NGX_HAVE_NONALIGNED are defined. Sorry for the noise and many thanks to Alejandro for help, hint with -fstrict-aliasing reminded me this very old change in the code. All best, Vadim From lynch.meng at hotmail.com Thu Apr 20 14:24:53 2023 From: lynch.meng at hotmail.com (meng lynch) Date: Thu, 20 Apr 2023 14:24:53 +0000 Subject: Remove unused codes in ngx_http_upstream_connect Message-ID: <162AC9EB-8C91-498F-BBAF-C20038A63E28@hotmail.com> Hello guys, Should the code from line 1517 to 1519 be removed? Because u->state is reallocated in line 1521. 1509 static void 1510 ngx_http_upstream_connect(ngx_http_request_t *r, ngx_http_upstream_t *u) 1511 { 1512 ngx_int_t rc; 1513 ngx_connection_t *c; 1514 1515 r->connection->log->action = "connecting to upstream"; 1516 - 1517 if (u->state && u->state->response_time == (ngx_msec_t) -1) { - 1518 u->state->response_time = ngx_current_msec - u->start_time; - 1519 } 1520 1521 u->state = ngx_array_push(r->upstream_states); 1522 if (u->state == NULL) { 1523 ngx_http_upstream_finalize_request(r, u, 1524 NGX_HTTP_INTERNAL_SERVER_ERROR); 1525 return; 1526 } From mdounin at mdounin.ru Thu Apr 20 14:58:27 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 20 Apr 2023 17:58:27 +0300 Subject: Remove unused codes in ngx_http_upstream_connect In-Reply-To: <162AC9EB-8C91-498F-BBAF-C20038A63E28@hotmail.com> References: <162AC9EB-8C91-498F-BBAF-C20038A63E28@hotmail.com> Message-ID: Hello! On Thu, Apr 20, 2023 at 02:24:53PM +0000, meng lynch wrote: > Hello guys, > > Should the code from line 1517 to 1519 be removed? Because u->state is reallocated in line 1521. > > 1509 static void > 1510 ngx_http_upstream_connect(ngx_http_request_t *r, ngx_http_upstream_t *u) > 1511 { > 1512 ngx_int_t rc; > 1513 ngx_connection_t *c; > 1514 > 1515 r->connection->log->action = "connecting to upstream"; > 1516 > - 1517 if (u->state && u->state->response_time == (ngx_msec_t) -1) { > - 1518 u->state->response_time = ngx_current_msec - u->start_time; > - 1519 } > 1520 > 1521 u->state = ngx_array_push(r->upstream_states); > 1522 if (u->state == NULL) { > 1523 ngx_http_upstream_finalize_request(r, u, > 1524 NGX_HTTP_INTERNAL_SERVER_ERROR); > 1525 return; > 1526 } > In line 1521, the new state is allocated - the one which will be used for the connection started with this ngx_http_upstream_connect() call. The code in lines 1517..1519 finalizes the previous state, the one created by the previous connection (if any). So no, this code shouldn't be removed, it is actually used. 
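As a standalone illustration of this bookkeeping, here is a minimal
sketch with simplified types and timing (not the real
ngx_http_upstream_state_t handling): each connect attempt first closes
out the previous state entry and then pushes a fresh one, so every try
ends up with its own record.

    #include <stdio.h>

    #define MAX_TRIES  8

    typedef struct {
        long  response_time;    /* -1 while the try is still in progress */
    } upstream_state_t;

    static upstream_state_t  states[MAX_TRIES];
    static int               nstates;

    static upstream_state_t *
    upstream_connect(long now, long start_time)
    {
        upstream_state_t  *state;

        /* finalize the previous try, if any (cf. lines 1517..1519 above) */
        if (nstates > 0 && states[nstates - 1].response_time == -1) {
            states[nstates - 1].response_time = now - start_time;
        }

        /* allocate the state for the new try (cf. line 1521 above) */
        state = &states[nstates++];
        state->response_time = -1;

        return state;
    }

    int
    main(void)
    {
        (void) upstream_connect(100, 100);   /* first try starts at t=100 */
        (void) upstream_connect(250, 100);   /* retry; first try took 150 */

        printf("tries: %d, first try: %ldms\n", nstates,
               states[0].response_time);

        return 0;
    }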
-- Maxim Dounin http://mdounin.ru/ From a.hahn at f5.com Thu Apr 20 16:46:43 2023 From: a.hahn at f5.com (Ava Hahn) Date: Thu, 20 Apr 2023 16:46:43 +0000 Subject: Non blocking delay in header filters Message-ID: Hello All, I am currently implementing a response header filter that triggers one or more subrequests conditionally based on the status of the parent response. I want to introduce some delay between the checking of the response and the triggering of the subrequest, but I do not want to block the worker thread. So far I have allocated an event within the main request's pool, added a timer with my desired delay, and attached a callback that does two things. 1. triggers a subrequest as desired 2. proceeds to call the next response header filter In the meantime, after posting the event my handler returns NGX_OK. This is not working at all. Shortly after my filter returns NGX_OK the response is finalized, and the pool is deallocated. When the timer wakes up a segfault occurs in the worker process (in ngx_event_del_timer). Even if I allocate the event in a separate pool that outlives the request it is still not defined what happens when I try to trigger a subrequest on a request that has been finalized. My question is how can I make the finalization of the request contingent on the associated/posted event being handled first? OR, is there another facility for implementing non blocking delay that I can use? Thanks, Ava -------------- next part -------------- An HTML attachment was scrubbed... URL: From lynch.meng at hotmail.com Fri Apr 21 02:51:56 2023 From: lynch.meng at hotmail.com (meng lynch) Date: Fri, 21 Apr 2023 02:51:56 +0000 Subject: Remove unused codes in ngx_http_upstream_connect In-Reply-To: References: <162AC9EB-8C91-498F-BBAF-C20038A63E28@hotmail.com> Message-ID: Thanks Another question, can I remove line 666 to 675 in ngx_http_upstream_init_request? Because the state will be created in ngx_http_upstream_connect. 546 static void 547 ngx_http_upstream_init_request(ngx_http_request_t *r) 548 { 549 ngx_str_t *host; 550 ngx_uint_t i; 551 ngx_resolver_ctx_t *ctx, temp; 552 ngx_http_cleanup_t *cln; 553 ngx_http_upstream_t *u; 554 ngx_http_core_loc_conf_t *clcf; 555 ngx_http_upstream_srv_conf_t *uscf, **uscfp; 556 ngx_http_upstream_main_conf_t *umcf; 657 if (r->upstream_states == NULL) { 658 659 r->upstream_states = ngx_array_create(r->pool, 1, 660 sizeof(ngx_http_upstream_state_t)); 661 if (r->upstream_states == NULL) { 662 ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); 663 return; 664 } 665 - 666 } else { - 667 - 668 u->state = ngx_array_push(r->upstream_states); - 669 if (u->state == NULL) { - 670 ngx_http_upstream_finalize_request(r, u, - 671 NGX_HTTP_INTERNAL_SERVER_ERROR); - 672 return; - 673 } -674 -675 ngx_memzero(u->state, sizeof(ngx_http_upstream_state_t)); 676 } On 2023/4/20, 10:58 PM, "nginx-devel on behalf of Maxim Dounin" on behalf of mdounin at mdounin.ru > wrote: Hello! On Thu, Apr 20, 2023 at 02:24:53PM +0000, meng lynch wrote: > Hello guys, > > Should the code from line 1517 to 1519 be removed? Because u->state is reallocated in line 1521. 
> > 1509 static void > 1510 ngx_http_upstream_connect(ngx_http_request_t *r, ngx_http_upstream_t *u) > 1511 { > 1512 ngx_int_t rc; > 1513 ngx_connection_t *c; > 1514 > 1515 r->connection->log->action = "connecting to upstream"; > 1516 > - 1517 if (u->state && u->state->response_time == (ngx_msec_t) -1) { > - 1518 u->state->response_time = ngx_current_msec - u->start_time; > - 1519 } > 1520 > 1521 u->state = ngx_array_push(r->upstream_states); > 1522 if (u->state == NULL) { > 1523 ngx_http_upstream_finalize_request(r, u, > 1524 NGX_HTTP_INTERNAL_SERVER_ERROR); > 1525 return; > 1526 } > In line 1521, the new state is allocated - the one which will be used for the connection started with this ngx_http_upstream_connect() call. The code in lines 1517..1519 finalizes the previous state, the one created by the previous connection (if any). So no, this code shouldn't be removed, it is actually used. -- Maxim Dounin http://mdounin.ru/ _______________________________________________ nginx-devel mailing list nginx-devel at nginx.org https://mailman.nginx.org/mailman/listinfo/nginx-devel From serg.brester at sebres.de Fri Apr 21 09:05:18 2023 From: serg.brester at sebres.de (Dipl. Ing. Sergey Brester) Date: Fri, 21 Apr 2023 11:05:18 +0200 Subject: Non blocking delay in header filters In-Reply-To: References: Message-ID: Well, it is impossible if you'd use some memory blocks allocated by nginx within main request. The memory allocated inside the request is released on request end. An example how one can implement non-blocking delay can you see in https://github.com/openresty/echo-nginx-module#echo_sleep [2]. But again, ensure you have not stored references to some main request structures (request related memory range). If you'd need some of them (e. g. headers, args, etc), duplicate them and release in event handler if timer becomes set or after processing your sub-requests. However if I were you, I'd rather implement it on a backend side (not in nginx), e. g. using background sub-requests either with post_action (despite nonofficial undocumented) or with mirror [3], configured in a corresponding location. Especially if one expects some transaction safety (e. g. for save operation in corner cases like nginx restart/reload/shutdown/etc during the delay between main response and all the sub-requests) as a pipeline similar procedure. So one could register each step (your delayed request) in some queue, for instance storing the request chain in a database or file, to get it safe against shutdown. Although also without the transaction safety your approach to implement it completely in nginx is questionable for many reasons (particularly if the delay is not something artificial, but rather a real timing event). Regards, Serg. 20.04.2023 18:46, Ava Hahn wrote via nginx-devel: > Hello All, > > I am currently implementing a response header filter that triggers one or more subrequests conditionally based on the status of the parent response. > > I want to introduce some delay between the checking of the response and the triggering of the subrequest, but I do not want to block the worker thread. > > So far I have allocated an event within the main request's pool, added a timer with my desired delay, and attached a callback that does two things. > > * triggers a subrequest as desired > * proceeds to call the next response header filter > > In the meantime, after posting the event my handler returns NGX_OK. > > This is not working at all. 
Shortly after my filter returns NGX_OK the response is finalized, and the pool is deallocated. When the timer wakes up a segfault occurs in the worker process (in ngx_event_del_timer). Even if I allocate the event in a separate pool that outlives the request it is still not defined what happens when I try to trigger a subrequest on a request that has been finalized. > > My question is how can I make the finalization of the request contingent on the associated/posted event being handled first? > > OR, is there another facility for implementing non blocking delay that I can use? > > Thanks, > Ava > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx-devel [1] Links: ------ [1] https://mailman.nginx.org/mailman/listinfo/nginx-devel [2] https://github.com/openresty/echo-nginx-module#echo_sleep [3] http://nginx.org/en/docs/http/ngx_http_mirror_module.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Apr 21 17:38:43 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 21 Apr 2023 20:38:43 +0300 Subject: Remove unused codes in ngx_http_upstream_connect In-Reply-To: References: <162AC9EB-8C91-498F-BBAF-C20038A63E28@hotmail.com> Message-ID: Hello! On Fri, Apr 21, 2023 at 02:51:56AM +0000, meng lynch wrote: > Another question, can I remove line 666 to 675 in > ngx_http_upstream_init_request? Because the state will be > created in ngx_http_upstream_connect. An empty upstream state is used to separate different upstream module invocations, see ngx_http_upstream_addr_variable() and the description of the $upstream_addr variable (https://nginx.org/r/$upstream_addr). -- Maxim Dounin http://mdounin.ru/ From ssdrliu at gmail.com Sun Apr 23 09:27:02 2023 From: ssdrliu at gmail.com (=?UTF-8?B?54mn56ulRGFtaWFu?=) Date: Sun, 23 Apr 2023 17:27:02 +0800 Subject: Fwd: Removed unused ngx_http_headers_out struct In-Reply-To: References: Message-ID: # HG changeset patch # User Liu Yan # Date 1682238162 -28800 # Sun Apr 23 16:22:42 2023 +0800 # Node ID 6ae32d27ce0ae2e0be9f5999aa4ebf27a34e12a5 # Parent 77d5c662f3d9d9b90425128109d3369c30ef5f07 Removed unused code, maybe forgotten in 3b763d36e055 diff -r 77d5c662f3d9 -r 6ae32d27ce0a src/http/ngx_http_header_filter_module.c --- a/src/http/ngx_http_header_filter_module.c Tue Apr 18 06:28:46 2023 +0300 +++ b/src/http/ngx_http_header_filter_module.c Sun Apr 23 16:22:42 2023 +0800 @@ -132,27 +132,6 @@ }; -ngx_http_header_out_t ngx_http_headers_out[] = { - { ngx_string("Server"), offsetof(ngx_http_headers_out_t, server) }, - { ngx_string("Date"), offsetof(ngx_http_headers_out_t, date) }, - { ngx_string("Content-Length"), - offsetof(ngx_http_headers_out_t, content_length) }, - { ngx_string("Content-Encoding"), - offsetof(ngx_http_headers_out_t, content_encoding) }, - { ngx_string("Location"), offsetof(ngx_http_headers_out_t, location) }, - { ngx_string("Last-Modified"), - offsetof(ngx_http_headers_out_t, last_modified) }, - { ngx_string("Accept-Ranges"), - offsetof(ngx_http_headers_out_t, accept_ranges) }, - { ngx_string("Expires"), offsetof(ngx_http_headers_out_t, expires) }, - { ngx_string("Cache-Control"), - offsetof(ngx_http_headers_out_t, cache_control) }, - { ngx_string("ETag"), offsetof(ngx_http_headers_out_t, etag) }, - - { ngx_null_string, 0 } -}; - - static ngx_int_t ngx_http_header_filter(ngx_http_request_t *r) { diff -r 77d5c662f3d9 -r 6ae32d27ce0a src/http/ngx_http_request.h --- 
a/src/http/ngx_http_request.h Tue Apr 18 06:28:46 2023 +0300 +++ b/src/http/ngx_http_request.h Sun Apr 23 16:22:42 2023 +0800 @@ -611,7 +611,6 @@ extern ngx_http_header_t ngx_http_headers_in[]; -extern ngx_http_header_out_t ngx_http_headers_out[]; #define ngx_http_set_log_request(log, r) \ -------------- next part -------------- An HTML attachment was scrubbed... URL: From pluknet at nginx.com Mon Apr 24 12:15:21 2023 From: pluknet at nginx.com (Sergey Kandaurov) Date: Mon, 24 Apr 2023 16:15:21 +0400 Subject: [PATCH 1 of 3] QUIC: changed path validation timeout In-Reply-To: References: Message-ID: <121AA5DA-1F59-4544-A3CF-089CAC9B3CA8@nginx.com> > On 28 Mar 2023, at 18:51, Roman Arutyunyan wrote: > > # HG changeset patch > # User Roman Arutyunyan > # Date 1679925333 -14400 > # Mon Mar 27 17:55:33 2023 +0400 > # Branch quic > # Node ID f76e83412133085a6c82fce2c3e15b2c34a6e959 > # Parent 5fd628b89bb7fb5c95afa1dc914385f7ab79f6a3 > QUIC: changed path validation timeout. > > Path validation packets containing PATH_CHALLENGE frames are sent separately > from regular frame queue, because of the need to use a decicated path and > pad the packets. The packets are also resent separately from the regular > probe/lost detection mechanism. A path validation packet is resent 3 times, > each time after PTO expiration. Assuming constant PTO, the overall maximum > waiting time is 3 * PTO. According to RFC 9000, 8.2.4. Failed Path Validation, > the following value is recommended as a validation timeout: > > A value of three times the larger of the current PTO > or the PTO for the new path (using kInitialRtt, as > defined in [QUIC-RECOVERY]) is RECOMMENDED. > > The change adds PTO of the new path to the equation as the lower bound. > Also, max_ack_delay is now always accounted for, unlike previously, when > it was only used when there are packets in flight. As mentioned before, > PACH_CHALLENGE is not considered in-flight by nginx since it's processed > separately, but technically it is. I don't like an idea to make a separate function to calculate time for path validation retransmits. It looks like an existing function could be reused. I tend to think checking for inflight packets in ngx_quic_pto() isn't correct at the first place. The condition comes from the GetPtoTimeAndSpace example in 9002, A.8: : GetPtoTimeAndSpace(): : duration = (smoothed_rtt + max(4 * rttvar, kGranularity)) : * (2 ^ pto_count) : // Anti-deadlock PTO starts from the current time : if (no ack-eliciting packets in flight): : assert(!PeerCompletedAddressValidation()) : if (has handshake keys): : return (now() + duration), Handshake : else: : return (now() + duration), Initial : <..> : return pto_timeout, pto_space But PeerCompletedAddressValidation is always true for the server. The above anti-deadlock measure seems to only make sense for a client when it has no new data to send, but forced to send something to rise an anti-amplification limit for the server. This thought is supported by commentaries in places of GetPtoTimeAndSpace use. Removing the condition from ngx_quic_pto() makes possible to unify the function to use it for both regular PTO and path validation. Next is to make retransmits similar to a new connection establishment. Per RFC 9000, 8.2.1: : An endpoint SHOULD NOT probe a new path with packets containing a : PATH_CHALLENGE frame more frequently than it would send an Initial packet. I think we can improve path validation to use a separate backoff, path->tries can be used to base a backoff upon it. 
Since PATH_CHALLENGE are resent separately from the regular probe/lost detection mechanism, this needs to be moved out from ngx_quic_pto(). This makes the following series based on your patch. We could set an overall maximum waiting time of 3 * PTO and test it in pv handler in addition to the check for NGX_QUIC_PATH_RETRIES. # HG changeset patch # User Sergey Kandaurov # Date 1682332923 -14400 # Mon Apr 24 14:42:03 2023 +0400 # Branch quic # Node ID f49aba6e3fb54843d3e3bd5df26dbb45f5d3d687 # Parent d6861ecf8a9cf4e98d9ed6f4435054d106b29f48 QUIC: removed check for in-flight packets in computing PTO. The check is needed for clients in order to unblock a server due to anti-amplification limits, and it seems to make no sense for servers. See RFC 9002, A.6 and A.8 for a further explanation. This makes max_ack_delay to now always account, notably including PATH_CHALLENGE timers as noted in the last paragraph of 9000, 9.4, unlike when it was only used when there are packets in flight. While here, fixed nearby style. diff --git a/src/event/quic/ngx_event_quic_ack.c b/src/event/quic/ngx_event_quic_ack.c --- a/src/event/quic/ngx_event_quic_ack.c +++ b/src/event/quic/ngx_event_quic_ack.c @@ -782,15 +782,11 @@ ngx_quic_pto(ngx_connection_t *c, ngx_qu qc = ngx_quic_get_connection(c); /* RFC 9002, Appendix A.8. Setting the Loss Detection Timer */ + duration = qc->avg_rtt; - duration += ngx_max(4 * qc->rttvar, NGX_QUIC_TIME_GRANULARITY); duration <<= qc->pto_count; - if (qc->congestion.in_flight == 0) { /* no in-flight packets */ - return duration; - } - if (ctx->level == ssl_encryption_application && c->ssl->handshaked) { duration += qc->ctp.max_ack_delay << qc->pto_count; } # HG changeset patch # User Sergey Kandaurov # Date 1682338151 -14400 # Mon Apr 24 16:09:11 2023 +0400 # Branch quic # Node ID 808fe808e276496a9b026690c141201720744ab3 # Parent f49aba6e3fb54843d3e3bd5df26dbb45f5d3d687 QUIC: separated path validation retransmit backoff. Path validation packets containing PATH_CHALLENGE frames are sent separately from regular frame queue, because of the need to use a decicated path and pad the packets. The packets are sent periodically, separately from the regular probe/lost detection mechanism. A path validation packet is resent up to 3 times, each time after PTO expiration, with increasing per-path PTO backoff. 
diff --git a/src/event/quic/ngx_event_quic_ack.c b/src/event/quic/ngx_event_quic_ack.c --- a/src/event/quic/ngx_event_quic_ack.c +++ b/src/event/quic/ngx_event_quic_ack.c @@ -736,7 +736,8 @@ ngx_quic_set_lost_timer(ngx_connection_t q = ngx_queue_last(&ctx->sent); f = ngx_queue_data(q, ngx_quic_frame_t, queue); - w = (ngx_msec_int_t) (f->last + ngx_quic_pto(c, ctx) - now); + w = (ngx_msec_int_t) (f->last + (ngx_quic_pto(c, ctx) << qc->pto_count) + - now); if (w < 0) { w = 0; @@ -785,10 +786,9 @@ ngx_quic_pto(ngx_connection_t *c, ngx_qu duration = qc->avg_rtt; duration += ngx_max(4 * qc->rttvar, NGX_QUIC_TIME_GRANULARITY); - duration <<= qc->pto_count; if (ctx->level == ssl_encryption_application && c->ssl->handshaked) { - duration += qc->ctp.max_ack_delay << qc->pto_count; + duration += qc->ctp.max_ack_delay; } return duration; @@ -846,7 +846,9 @@ ngx_quic_pto_handler(ngx_event_t *ev) continue; } - if ((ngx_msec_int_t) (f->last + ngx_quic_pto(c, ctx) - now) > 0) { + if ((ngx_msec_int_t) (f->last + (ngx_quic_pto(c, ctx) << qc->pto_count) + - now) > 0) + { continue; } diff --git a/src/event/quic/ngx_event_quic_migration.c b/src/event/quic/ngx_event_quic_migration.c --- a/src/event/quic/ngx_event_quic_migration.c +++ b/src/event/quic/ngx_event_quic_migration.c @@ -496,6 +496,7 @@ ngx_quic_validate_path(ngx_connection_t "quic initiated validation of path seq:%uL", path->seqnum); path->validating = 1; + path->tries = 0; if (RAND_bytes(path->challenge1, 8) != 1) { return NGX_ERROR; @@ -513,7 +514,6 @@ ngx_quic_validate_path(ngx_connection_t pto = ngx_quic_pto(c, ctx); path->expires = ngx_current_msec + pto; - path->tries = NGX_QUIC_PATH_RETRIES; if (!qc->path_validation.timer_set) { ngx_add_timer(&qc->path_validation, pto); @@ -578,7 +578,6 @@ ngx_quic_path_validation_handler(ngx_eve qc = ngx_quic_get_connection(c); ctx = ngx_quic_get_send_ctx(qc, ssl_encryption_application); - pto = ngx_quic_pto(c, ctx); next = -1; now = ngx_current_msec; @@ -605,7 +604,9 @@ ngx_quic_path_validation_handler(ngx_eve continue; } - if (--path->tries) { + if (++path->tries < NGX_QUIC_PATH_RETRIES) { + pto = ngx_quic_pto(c, ctx) << path->tries; + path->expires = ngx_current_msec + pto; if (next == -1 || pto < next) { # HG changeset patch # User Sergey Kandaurov # Date 1682338293 -14400 # Mon Apr 24 16:11:33 2023 +0400 # Branch quic # Node ID 760ee5baed4d1370a92f5d3a2b82d4a28ac8bae5 # Parent 808fe808e276496a9b026690c141201720744ab3 QUIC: lower bound path validation PTO. According to RFC 9000, 8.2.4. Failed Path Validation, the following value is recommended as a validation timeout: A value of three times the larger of the current PTO or the PTO for the new path (using kInitialRtt, as defined in [QUIC-RECOVERY]) is RECOMMENDED. The change adds PTO of the new path to the equation as the lower bound. 
diff --git a/src/event/quic/ngx_event_quic_migration.c b/src/event/quic/ngx_event_quic_migration.c --- a/src/event/quic/ngx_event_quic_migration.c +++ b/src/event/quic/ngx_event_quic_migration.c @@ -511,7 +511,7 @@ ngx_quic_validate_path(ngx_connection_t } ctx = ngx_quic_get_send_ctx(qc, ssl_encryption_application); - pto = ngx_quic_pto(c, ctx); + pto = ngx_max(ngx_quic_pto(c, ctx), 1000); path->expires = ngx_current_msec + pto; @@ -605,7 +605,7 @@ ngx_quic_path_validation_handler(ngx_eve } if (++path->tries < NGX_QUIC_PATH_RETRIES) { - pto = ngx_quic_pto(c, ctx) << path->tries; + pto = ngx_max(ngx_quic_pto(c, ctx), 1000) << path->tries; path->expires = ngx_current_msec + pto; -- Sergey Kandaurov From pgnet.dev at gmail.com Wed Apr 26 17:25:42 2023 From: pgnet.dev at gmail.com (PGNet Dev) Date: Wed, 26 Apr 2023 13:25:42 -0400 Subject: nginx 1.24 + njs build errors [-Werror=dangling-pointer=] after switch from GCC 12 (Fedora 37) -> GCC13 (Fedora 38) In-Reply-To: References: <25f8356b-5248-204c-168d-80798eb6cc7b@nginx.com> Message-ID: >> GCC 13 is not released yet, right? > > "Real Soon Now (tm)" fyi, https://gcc.gnu.org/gcc-13 April 26, 2023 The GCC developers are pleased to announce the release of GCC 13.1. From xeioex at nginx.com Thu Apr 27 00:30:09 2023 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Thu, 27 Apr 2023 00:30:09 +0000 Subject: [njs] Tests: dropping all environment variables in a portable way. Message-ID: details: https://hg.nginx.org/njs/rev/9fbae1f025e2 branches: changeset: 2089:9fbae1f025e2 user: Dmitry Volyntsev date: Wed Apr 26 17:27:48 2023 -0700 description: Tests: dropping all environment variables in a portable way. This fixes njs_unit_test crash on macOS. The issue was introduced in 0.7.8. diffstat: src/test/njs_unit_test.c | 16 ++++++++++------ 1 files changed, 10 insertions(+), 6 deletions(-) diffs (35 lines): diff -r 0c95481158e4 -r 9fbae1f025e2 src/test/njs_unit_test.c --- a/src/test/njs_unit_test.c Wed Apr 19 00:20:37 2023 -0700 +++ b/src/test/njs_unit_test.c Wed Apr 26 17:27:48 2023 -0700 @@ -25223,6 +25223,14 @@ static njs_test_suite_t njs_suites[] = }; +static const char *restricted_environ[] = { + "TZ=UTC", + "DUP=bar", + "dup=foo", + NULL, +}; + + int njs_cdecl main(int argc, char **argv) { @@ -25239,14 +25247,10 @@ main(int argc, char **argv) return (ret == NJS_DONE) ? EXIT_SUCCESS: EXIT_FAILURE; } - environ = NULL; - - (void) putenv((char *) "TZ=UTC"); + environ = (char **) restricted_environ; + tzset(); - (void) putenv((char *) "DUP=bar"); - (void) putenv((char *) "dup=foo"); - njs_mm_denormals(1); njs_memzero(&stat, sizeof(njs_stat_t)); From xeioex at nginx.com Thu Apr 27 04:06:03 2023 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Thu, 27 Apr 2023 04:06:03 +0000 Subject: [njs] Types: added definitions for Hash.copy() method. Message-ID: details: https://hg.nginx.org/njs/rev/2efa017faaed branches: changeset: 2090:2efa017faaed user: Dmitry Volyntsev date: Wed Apr 26 19:38:13 2023 -0700 description: Types: added definitions for Hash.copy() method. 
diffstat: test/ts/test.ts | 1 + ts/njs_modules/crypto.d.ts | 6 ++++++ 2 files changed, 7 insertions(+), 0 deletions(-) diffs (27 lines): diff -r 9fbae1f025e2 -r 2efa017faaed test/ts/test.ts --- a/test/ts/test.ts Wed Apr 26 17:27:48 2023 -0700 +++ b/test/ts/test.ts Wed Apr 26 19:38:13 2023 -0700 @@ -188,6 +188,7 @@ function crypto_module(str: NjsByteStrin h = cr.createHash("sha1"); h = h.update(str).update(Buffer.from([0])); + h = h.copy(); b = h.digest(); s = cr.createHash("sha256").digest("hex"); diff -r 9fbae1f025e2 -r 2efa017faaed ts/njs_modules/crypto.d.ts --- a/ts/njs_modules/crypto.d.ts Wed Apr 26 17:27:48 2023 -0700 +++ b/ts/njs_modules/crypto.d.ts Wed Apr 26 19:38:13 2023 -0700 @@ -8,6 +8,12 @@ declare module "crypto" { export interface Hash { /** + * Returns a new Hash object that contains a deep copy of + * the internal state of the current Hash object. + */ + copy(): Hash; + + /** * Updates the hash content with the given `data` and returns self. */ update(data: NjsStringOrBuffer): Hash; From xeioex at nginx.com Thu Apr 27 04:06:05 2023 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Thu, 27 Apr 2023 04:06:05 +0000 Subject: [njs] Types: added forgotten ts types for XML modification API. Message-ID: details: https://hg.nginx.org/njs/rev/5f18ec3b9e53 branches: changeset: 2091:5f18ec3b9e53 user: Dmitry Volyntsev date: Wed Apr 26 19:38:21 2023 -0700 description: Types: added forgotten ts types for XML modification API. diffstat: test/ts/test.ts | 17 ++++++- ts/njs_modules/xml.d.ts | 118 +++++++++++++++++++++-------------------------- 2 files changed, 69 insertions(+), 66 deletions(-) diffs (181 lines): diff -r 2efa017faaed -r 5f18ec3b9e53 test/ts/test.ts --- a/test/ts/test.ts Wed Apr 26 19:38:13 2023 -0700 +++ b/test/ts/test.ts Wed Apr 26 19:38:21 2023 -0700 @@ -174,11 +174,24 @@ function xml_module(str: NjsByteString) children = node.$tags; selectedChildren = node.$tags$xxx; - node?.xxx?.yyy?.$attr$zzz; + node?.$tag$xxx?.$tag$yyy?.$attr$zzz; let buf:Buffer = xml.exclusiveC14n(node); - buf = xml.exclusiveC14n(doc, node.xxx, false); + buf = xml.exclusiveC14n(doc, node.$tag$xxx, false); buf = xml.exclusiveC14n(node, null, true, "aa bb"); + + node.setText("xxx"); + node.removeText(); + node.setText(null); + + node.addChild(node); + node.removeChildren('xx'); + + node.removeAttribute('xx'); + node.removeAllAttributes(); + node.setAttribute('xx', 'yy'); + node.setAttribute('xx', null); + node.$tags = [node, node]; } function crypto_module(str: NjsByteString) { diff -r 2efa017faaed -r 5f18ec3b9e53 ts/njs_modules/xml.d.ts --- a/ts/njs_modules/xml.d.ts Wed Apr 26 19:38:13 2023 -0700 +++ b/ts/njs_modules/xml.d.ts Wed Apr 26 19:38:21 2023 -0700 @@ -2,61 +2,6 @@ declare module "xml" { - type XMLTagName = - | `_${string}` - | `a${string}` - | `b${string}` - | `c${string}` - | `d${string}` - | `e${string}` - | `f${string}` - | `g${string}` - | `h${string}` - | `i${string}` - | `j${string}` - | `k${string}` - | `l${string}` - | `m${string}` - | `n${string}` - | `o${string}` - | `p${string}` - | `q${string}` - | `r${string}` - | `s${string}` - | `t${string}` - | `u${string}` - | `v${string}` - | `w${string}` - | `x${string}` - | `y${string}` - | `z${string}` - | `A${string}` - | `B${string}` - | `C${string}` - | `D${string}` - | `E${string}` - | `F${string}` - | `G${string}` - | `H${string}` - | `I${string}` - | `J${string}` - | `K${string}` - | `L${string}` - | `M${string}` - | `N${string}` - | `O${string}` - | `P${string}` - | `Q${string}` - | `R${string}` - | `S${string}` - | `T${string}` - 
| `U${string}` - | `V${string}` - | `W${string}` - | `X${string}` - | `Y${string}` - | `Z${string}`; - export interface XMLDoc { /** * The doc's root node. @@ -66,17 +11,68 @@ declare module "xml" { /** * The doc's root by its name or undefined. */ - readonly [rootTagName: XMLTagName]: XMLNode | undefined; + readonly [rootTagName: string]: XMLNode | undefined; } export interface XMLNode { /** - * node.$attr$xxx - the node's attribute value of "xxx". + * Adds a child node. Node is recursively copied before adding. + * @param node - XMLNode to be added. + * @since 0.7.11. + */ + addChild(node: XMLNode): void; + + /** + * node.$attr$xxx - value of the node's attribute "xxx". * @since 0.7.11 the property is writable. */ [key: `$attr$${string}`]: string | undefined; /** + * Removes attribute by name. + * @param name - name of the attribute to remove. + * @since 0.7.11. + */ + removeAttribute(name: string): void; + + /** + * Removes all the attribute of the node. + * @since 0.7.11. + */ + removeAllAttributes(): void; + + /** + * Removes all the children tags named tag_name. + * @param tag_name - name of the children's tags to remove. + * If tag_name is absent all children tags are removed. + * @since 0.7.11. + */ + removeChildren(tag_name?:string): void; + + /** + * Removes the text value of the node. + * @since 0.7.11. + */ + removeText(): void; + + /** + * Sets a value for the attribute. + * @param attr_name - name of the attribute to set. + * @param value - value of the attribute to set. When value is null + * the attribute is removed. + * @since 0.7.11. + */ + setAttribute(attr_name: string, value: string | null): void; + + /** + * Sets a text value for the node. + * @param text - a value to set as a text. If value is null the + * node's text is deleted. + * @since 0.7.11. + */ + setText(text:string | null): void; + + /** * node.$attrs - an XMLAttr wrapper object for all the attributes * of the node. */ @@ -118,13 +114,7 @@ declare module "xml" { /** * node.$tags - all the node's children tags. */ - readonly $tags: XMLNode[] | undefined; - - /** - * node.xxx is the same as node.$tag$xxx. - * @since 0.7.11 the property is writable. - */ - [key: XMLTagName]: XMLNode | undefined; + $tags: XMLNode[] | undefined; } export interface XMLAttr { From xeioex at nginx.com Thu Apr 27 04:06:07 2023 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Thu, 27 Apr 2023 04:06:07 +0000 Subject: [njs] Types: added ts types for "zlib" module. Message-ID: details: https://hg.nginx.org/njs/rev/677fc88d8d6d branches: changeset: 2092:677fc88d8d6d user: Dmitry Volyntsev date: Wed Apr 26 19:38:23 2023 -0700 description: Types: added ts types for "zlib" module. 
diffstat: test/ts/test.ts | 9 +++ ts/index.d.ts | 1 + ts/njs_modules/zlib.d.ts | 127 +++++++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 137 insertions(+), 0 deletions(-) diffs (165 lines): diff -r 5f18ec3b9e53 -r 677fc88d8d6d test/ts/test.ts --- a/test/ts/test.ts Wed Apr 26 19:38:21 2023 -0700 +++ b/test/ts/test.ts Wed Apr 26 19:38:23 2023 -0700 @@ -2,6 +2,7 @@ import fs from 'fs'; import qs from 'querystring'; import cr from 'crypto'; import xml from 'xml'; +import zlib from 'zlib'; async function http_module(r: NginxHTTPRequest) { var bs: NjsByteString; @@ -194,6 +195,14 @@ function xml_module(str: NjsByteString) node.$tags = [node, node]; } +function zlib_module(str: NjsByteString) { + zlib.deflateRawSync(str, {level: zlib.constants.Z_BEST_COMPRESSION, memLevel: 9}); + zlib.deflateSync(str, {strategy: zlib.constants.Z_RLE}); + + zlib.inflateRawSync(str, {windowBits: 14}); + zlib.inflateSync(str, {chunkSize: 2048}); +} + function crypto_module(str: NjsByteString) { var h; var b:Buffer; diff -r 5f18ec3b9e53 -r 677fc88d8d6d ts/index.d.ts --- a/ts/index.d.ts Wed Apr 26 19:38:21 2023 -0700 +++ b/ts/index.d.ts Wed Apr 26 19:38:23 2023 -0700 @@ -4,3 +4,4 @@ /// /// /// +/// diff -r 5f18ec3b9e53 -r 677fc88d8d6d ts/njs_modules/zlib.d.ts --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/ts/njs_modules/zlib.d.ts Wed Apr 26 19:38:23 2023 -0700 @@ -0,0 +1,127 @@ +/// + +declare module "zlib" { + interface NjsZlibOptions { + /** + * the buffer size for feeding data to and pulling data + * from the zlib routines, defaults to 1024. + */ + chunkSize?: number; + + /** + * The dictionary buffer. + */ + dictionary?: NjsStringOrBuffer; + + /** + * Compression level, from zlib.constants.Z_NO_COMPRESSION to + * zlib.constants.Z_BEST_COMPRESSION. Defaults to + * zlib.constants.Z_DEFAULT_COMPRESSION. + */ + level?: number; + + /** + * Specifies how much memory should be allocated for the internal compression state. + * 1 uses minimum memory but is slow and reduces compression ratio; + * 9 uses maximum memory for optimal speed. + * The default value is 8. + */ + memLevel?: number; + + /** + * The compression strategy, defaults to zlib.constants.Z_DEFAULT_STRATEGY. + */ + strategy?: number; + + /** + * The log2 of window size. + * -15 to -9 for raw data, from 9 to 15 for an ordinary stream. + */ + windowBits?: number; + } + + type NjsZlibConstants = { + /** + * No compression. + */ + Z_NO_COMPRESSION: number; + + /** + * Fastest, produces the least compression. + */ + Z_BEST_SPEED: number; + + /** + * Trade-off between speed and compression. + */ + Z_DEFAULT_COMPRESSION: number; + + /** + * Slowest, produces the most compression. + */ + Z_BEST_COMPRESSION: number; + + /** + * Filtered strategy: for the data produced by a filter or predictor. + */ + Z_FILTERED: number; + + /** + * Huffman-only strategy: only Huffman encoding, no string matching. + */ + Z_HUFFMAN_ONLY: number; + + /** + * Run Length Encoding strategy: limit match distances to one, + * better compression of PNG image data. + */ + Z_RLE: number; + + /** + * Fixed table strategy: prevents the use of dynamic Huffman codes, + * a simpler decoder for special applications. + */ + Z_FIXED: number; + + /** + * Default strategy, suitable for general purpose compression. + */ + Z_DEFAULT_STRATEGY: number; + }; + + interface Zlib { + /** + * Compresses data using deflate, and do not append a zlib header. + * + * @param data - The data to be compressed. 
+ */ + deflateRawSync(data: NjsStringOrBuffer, options?:NjsZlibOptions): Buffer; + + /** + * Compresses data using deflate. + * + * @param data - The data to be compressed. + */ + deflateSync(data: NjsStringOrBuffer, options?:NjsZlibOptions): Buffer; + + /** + * Decompresses a raw deflate stream. + * + * @param data - The data to be decompressed. + */ + inflateRawSync(data: NjsStringOrBuffer, options?:NjsZlibOptions): Buffer; + + /** + * Decompresses a deflate stream. + * + * @param data - The data to be decompressed. + */ + inflateSync(data: NjsStringOrBuffer, options?:NjsZlibOptions): Buffer; + + constants: NjsZlibConstants; + } + + const zlib: Zlib; + + export default zlib; +} From xeioex at nginx.com Thu Apr 27 04:06:09 2023 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Thu, 27 Apr 2023 04:06:09 +0000 Subject: [njs] Source files sorted into groups. Message-ID: details: https://hg.nginx.org/njs/rev/ebb1eb0d4e43 branches: changeset: 2093:ebb1eb0d4e43 user: Dmitry Volyntsev date: Wed Apr 26 19:38:27 2023 -0700 description: Source files sorted into groups. System, data structures, VM, parser, generator, standard objects. diffstat: auto/sources | 24 ++++++++++++------------ 1 files changed, 12 insertions(+), 12 deletions(-) diffs (47 lines): diff -r 677fc88d8d6d -r ebb1eb0d4e43 auto/sources --- a/auto/sources Wed Apr 26 19:38:23 2023 -0700 +++ b/auto/sources Wed Apr 26 19:38:27 2023 -0700 @@ -26,6 +26,17 @@ NJS_LIB_SRCS=" \ src/njs_value.c \ src/njs_vm.c \ src/njs_vmcode.c \ + src/njs_lexer.c \ + src/njs_lexer_keyword.c \ + src/njs_parser.c \ + src/njs_variable.c \ + src/njs_scope.c \ + src/njs_generator.c \ + src/njs_disassembler.c \ + src/njs_timer.c \ + src/njs_module.c \ + src/njs_event.c \ + src/njs_extern.c \ src/njs_boolean.c \ src/njs_number.c \ src/njs_symbol.c \ @@ -39,24 +50,13 @@ NJS_LIB_SRCS=" \ src/njs_date.c \ src/njs_error.c \ src/njs_math.c \ - src/njs_timer.c \ - src/njs_module.c \ - src/njs_event.c \ - src/njs_extern.c \ - src/njs_variable.c \ - src/njs_builtin.c \ - src/njs_lexer.c \ - src/njs_lexer_keyword.c \ - src/njs_parser.c \ - src/njs_generator.c \ - src/njs_disassembler.c \ src/njs_array_buffer.c \ src/njs_typed_array.c \ src/njs_promise.c \ src/njs_encoding.c \ src/njs_iterator.c \ - src/njs_scope.c \ src/njs_async.c \ + src/njs_builtin.c \ " NJS_LIB_TEST_SRCS=" \ From xeioex at nginx.com Thu Apr 27 04:27:29 2023 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Thu, 27 Apr 2023 04:27:29 +0000 Subject: [njs] Refactored njs_object_iterate() API. Message-ID: details: https://hg.nginx.org/njs/rev/a868f772ef16 branches: changeset: 2094:a868f772ef16 user: Dmitry Volyntsev date: Wed Apr 26 21:19:48 2023 -0700 description: Refactored njs_object_iterate() API. As a side-effect it fixes dangling-pointer compilation error found by GCC 13.1. 
diffstat: external/njs_webcrypto_module.c | 2 +- src/njs_array.c | 68 ++++++++++++++++++---------------------- src/njs_iterator.c | 22 +++++------- src/njs_iterator.h | 12 +++--- src/njs_promise.c | 4 +- 5 files changed, 49 insertions(+), 59 deletions(-) diffs (325 lines): diff -r ebb1eb0d4e43 -r a868f772ef16 external/njs_webcrypto_module.c --- a/external/njs_webcrypto_module.c Wed Apr 26 19:38:27 2023 -0700 +++ b/external/njs_webcrypto_module.c Wed Apr 26 21:19:48 2023 -0700 @@ -4192,7 +4192,7 @@ njs_key_usage(njs_vm_t *vm, njs_value_t *mask = 0; - args.value = value; + njs_value_assign(&args.value, value); args.from = 0; args.to = length; args.data = mask; diff -r ebb1eb0d4e43 -r a868f772ef16 src/njs_array.c --- a/src/njs_array.c Wed Apr 26 19:38:27 2023 -0700 +++ b/src/njs_array.c Wed Apr 26 21:19:48 2023 -0700 @@ -1862,10 +1862,10 @@ njs_array_iterator_call(njs_vm_t *vm, nj arguments[0] = *entry; njs_set_number(&arguments[1], n); - arguments[2] = *args->value; - - return njs_function_call(vm, args->function, args->argument, arguments, 3, - retval); + njs_value_assign(&arguments[2], &args->value); + + return njs_function_call(vm, args->function, njs_value_arg(&args->argument), + arguments, 3, retval); } @@ -1921,7 +1921,7 @@ njs_array_handler_includes(njs_vm_t *vm, entry = njs_value_arg(&njs_value_undefined); } - if (njs_values_same_zero(args->argument, entry)) { + if (njs_values_same_zero(njs_value_arg(&args->argument), entry)) { njs_set_true(retval); return NJS_DONE; @@ -1935,7 +1935,7 @@ static njs_int_t njs_array_handler_index_of(njs_vm_t *vm, njs_iterator_args_t *args, njs_value_t *entry, int64_t n, njs_value_t *retval) { - if (njs_values_strict_equal(args->argument, entry)) { + if (njs_values_strict_equal(njs_value_arg(&args->argument), entry)) { njs_set_number(retval, n); return NJS_DONE; @@ -2023,21 +2023,21 @@ njs_array_handler_reduce(njs_vm_t *vm, n njs_value_t arguments[5]; if (njs_is_valid(entry)) { - if (!njs_is_valid(args->argument)) { - *(args->argument) = *entry; + if (!njs_value_is_valid(njs_value_arg(&args->argument))) { + njs_value_assign(&args->argument, entry); return NJS_OK; } /* GC: array elt, array */ njs_set_undefined(&arguments[0]); - arguments[1] = *args->argument; + njs_value_assign(&arguments[1], &args->argument); arguments[2] = *entry; njs_set_number(&arguments[3], n); - arguments[4] = *args->value; + njs_value_assign(&arguments[4], &args->value); ret = njs_function_apply(vm, args->function, arguments, 5, - args->argument); + njs_value_arg(&args->argument)); if (njs_slow_path(ret != NJS_OK)) { return ret; } @@ -2120,18 +2120,17 @@ njs_array_prototype_iterator(njs_vm_t *v int64_t i, length; njs_int_t ret; njs_array_t *array; - njs_value_t accumulator; njs_iterator_args_t iargs; njs_iterator_handler_t handler; - iargs.value = njs_argument(args, 0); - - ret = njs_value_to_object(vm, iargs.value); + njs_value_assign(&iargs.value, njs_argument(args, 0)); + + ret = njs_value_to_object(vm, njs_value_arg(&iargs.value)); if (njs_slow_path(ret != NJS_OK)) { return ret; } - ret = njs_value_length(vm, iargs.value, &iargs.to); + ret = njs_value_length(vm, njs_value_arg(&iargs.value), &iargs.to); if (njs_slow_path(ret != NJS_OK)) { return ret; } @@ -2145,10 +2144,10 @@ njs_array_prototype_iterator(njs_vm_t *v } iargs.function = njs_function(njs_argument(args, 1)); - iargs.argument = njs_arg(args, nargs, 2); + njs_value_assign(&iargs.argument, njs_arg(args, nargs, 2)); } else { - iargs.argument = njs_arg(args, nargs, 1); + njs_value_assign(&iargs.argument, njs_arg(args, 
nargs, 1)); } switch (njs_array_type(magic)) { @@ -2206,13 +2205,10 @@ njs_array_prototype_iterator(njs_vm_t *v case NJS_ARRAY_REDUCE: handler = njs_array_handler_reduce; - njs_set_invalid(&accumulator); - - if (nargs > 2) { - accumulator = *iargs.argument; + if (nargs <= 2) { + njs_value_invalid_set(njs_value_arg(&iargs.argument)); } - iargs.argument = &accumulator; break; case NJS_ARRAY_FILTER: @@ -2277,12 +2273,12 @@ done: break; case NJS_ARRAY_REDUCE: - if (!njs_is_valid(&accumulator)) { + if (!njs_value_is_valid(njs_value_arg(&iargs.argument))) { njs_type_error(vm, "Reduce of empty object with no initial value"); return NJS_ERROR; } - njs_value_assign(retval, &accumulator); + njs_value_assign(retval, njs_value_arg(&iargs.argument)); break; case NJS_ARRAY_FILTER: @@ -2301,20 +2297,19 @@ njs_array_prototype_reverse_iterator(njs { int64_t from, length; njs_int_t ret; - njs_value_t accumulator; njs_iterator_args_t iargs; njs_iterator_handler_t handler; - iargs.value = njs_argument(args, 0); - - ret = njs_value_to_object(vm, iargs.value); + njs_value_assign(&iargs.value, njs_argument(args, 0)); + + ret = njs_value_to_object(vm, njs_value_arg(&iargs.value)); if (njs_slow_path(ret != NJS_OK)) { return ret; } - iargs.argument = njs_arg(args, nargs, 1); - - ret = njs_value_length(vm, iargs.value, &length); + njs_value_assign(&iargs.argument, njs_arg(args, nargs, 1)); + + ret = njs_value_length(vm, njs_value_arg(&iargs.value), &length); if (njs_slow_path(ret != NJS_OK)) { return ret; } @@ -2355,13 +2350,12 @@ njs_array_prototype_reverse_iterator(njs return NJS_ERROR; } - njs_set_invalid(&accumulator); iargs.function = njs_function(njs_argument(args, 1)); - iargs.argument = &accumulator; + njs_value_invalid_set(njs_value_arg(&iargs.argument)); if (nargs > 2) { - accumulator = *njs_argument(args, 2); + njs_value_assign(&iargs.argument, njs_argument(args, 2)); } else if (length == 0) { goto done; @@ -2392,12 +2386,12 @@ done: case NJS_ARRAY_REDUCE_RIGHT: default: - if (!njs_is_valid(&accumulator)) { + if (!njs_value_is_valid(njs_value_arg(&iargs.argument))) { njs_type_error(vm, "Reduce of empty object with no initial value"); return NJS_ERROR; } - njs_value_assign(retval, &accumulator); + njs_value_assign(retval, njs_value_arg(&iargs.argument)); break; } diff -r ebb1eb0d4e43 -r a868f772ef16 src/njs_iterator.c --- a/src/njs_iterator.c Wed Apr 26 19:38:27 2023 -0700 +++ b/src/njs_iterator.c Wed Apr 26 21:19:48 2023 -0700 @@ -298,12 +298,12 @@ njs_object_iterate(njs_vm_t *vm, njs_ite int64_t length, i, from, to; njs_int_t ret; njs_array_t *array, *keys; - njs_value_t *value, *entry, prop, character, string_obj; + njs_value_t *value, *entry, prop, character; const u_char *p, *end, *pos; njs_string_prop_t string_prop; njs_object_value_t *object; - value = args->value; + value = njs_value_arg(&args->value); from = args->from; to = args->to; @@ -354,9 +354,7 @@ njs_object_iterate(njs_vm_t *vm, njs_ite return NJS_ERROR; } - njs_set_object_value(&string_obj, object); - - args->value = &string_obj; + njs_set_object_value(njs_value_arg(&args->value), object); } else { value = njs_object_value(value); @@ -460,12 +458,12 @@ njs_object_iterate_reverse(njs_vm_t *vm, int64_t i, from, to, length; njs_int_t ret; njs_array_t *array, *keys; - njs_value_t *entry, *value, prop, character, string_obj; + njs_value_t *entry, *value, prop, character; const u_char *p, *end, *pos; njs_string_prop_t string_prop; njs_object_value_t *object; - value = args->value; + value = njs_value_arg(&args->value); from = args->from; to = 
args->to; @@ -518,9 +516,7 @@ njs_object_iterate_reverse(njs_vm_t *vm, return NJS_ERROR; } - njs_set_object_value(&string_obj, object); - - args->value = &string_obj; + njs_set_object_value(njs_value_arg(&args->value), object); } else { value = njs_object_value(value); @@ -640,13 +636,13 @@ njs_iterator_object_handler(njs_vm_t *vm njs_value_t prop, *entry; if (key != NULL) { - ret = njs_value_property(vm, args->value, key, &prop); + ret = njs_value_property(vm, njs_value_arg(&args->value), key, &prop); if (njs_slow_path(ret == NJS_ERROR)) { return ret; } } else { - ret = njs_value_property_i64(vm, args->value, i, &prop); + ret = njs_value_property_i64(vm, njs_value_arg(&args->value), i, &prop); if (njs_slow_path(ret == NJS_ERROR)) { return ret; } @@ -687,7 +683,7 @@ njs_iterator_to_array(njs_vm_t *vm, njs_ return NULL; } - args.value = iterator; + njs_value_assign(&args.value, iterator); args.to = length; ret = njs_object_iterate(vm, &args, njs_iterator_to_array_handler, retval); diff -r ebb1eb0d4e43 -r a868f772ef16 src/njs_iterator.h --- a/src/njs_iterator.h Wed Apr 26 19:38:27 2023 -0700 +++ b/src/njs_iterator.h Wed Apr 26 21:19:48 2023 -0700 @@ -9,14 +9,14 @@ typedef struct { - njs_function_t *function; - njs_value_t *argument; - njs_value_t *value; + njs_function_t *function; + njs_opaque_value_t argument; + njs_opaque_value_t value; - void *data; + void *data; - int64_t from; - int64_t to; + int64_t from; + int64_t to; } njs_iterator_args_t; diff -r ebb1eb0d4e43 -r a868f772ef16 src/njs_promise.c --- a/src/njs_promise.c Wed Apr 26 19:38:27 2023 -0700 +++ b/src/njs_promise.c Wed Apr 26 21:19:48 2023 -0700 @@ -1337,7 +1337,7 @@ njs_promise_perform_all(njs_vm_t *vm, nj (*pargs->remaining) = 1; - pargs->args.value = iterator; + njs_value_assign(&pargs->args.value, iterator); pargs->args.to = length; ret = njs_object_iterate(vm, &pargs->args, handler, retval); @@ -1785,7 +1785,7 @@ njs_promise_race(njs_vm_t *vm, njs_value pargs.function = njs_function(&resolve); pargs.constructor = promise_ctor; - pargs.args.value = iterator; + njs_value_assign(&pargs.args.value, iterator); pargs.args.to = length; ret = njs_object_iterate(vm, &pargs.args, njs_promise_perform_race_handler, From pluknet at nginx.com Thu Apr 27 17:53:30 2023 From: pluknet at nginx.com (=?iso-8859-1?q?Sergey_Kandaurov?=) Date: Thu, 27 Apr 2023 21:53:30 +0400 Subject: [PATCH] Variables: avoid possible buffer overrun with some "$sent_http_*" Message-ID: # HG changeset patch # User Sergey Kandaurov # Date 1682617947 -14400 # Thu Apr 27 21:52:27 2023 +0400 # Node ID c1ec385d885fba38f15d54263944eed2c74b5733 # Parent 77d5c662f3d9d9b90425128109d3369c30ef5f07 Variables: avoid possible buffer overrun with some "$sent_http_*". The existing logic to evaluate multi header "$sent_http_*" variables, such as $sent_http_cache_control, as previously introduced in 1.23.0, doesn't take into account that one or more elements can be cleared, yet still present in a linked list, pointed to by the next field. Such elements don't contribute to the resulting variable length, an attempt to append a separator for them ends up in out of bounds write. This is not possible with standard modules, though at least one third party module is known to override multi header values this way, so it makes sense to harden the logic. The fix restores a generic boundary check. 
diff --git a/src/http/ngx_http_variables.c b/src/http/ngx_http_variables.c --- a/src/http/ngx_http_variables.c +++ b/src/http/ngx_http_variables.c @@ -828,7 +828,7 @@ ngx_http_variable_headers_internal(ngx_h ngx_http_variable_value_t *v, uintptr_t data, u_char sep) { size_t len; - u_char *p; + u_char *p, *end; ngx_table_elt_t *h, *th; h = *(ngx_table_elt_t **) ((char *) r + data); @@ -870,6 +870,8 @@ ngx_http_variable_headers_internal(ngx_h v->len = len; v->data = p; + end = p + len; + for (th = h; th; th = th->next) { if (th->hash == 0) { @@ -878,7 +880,7 @@ ngx_http_variable_headers_internal(ngx_h p = ngx_copy(p, th->value.data, th->value.len); - if (th->next == NULL) { + if (p == end) { break; } From xeioex at nginx.com Fri Apr 28 00:30:04 2023 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Fri, 28 Apr 2023 00:30:04 +0000 Subject: [njs] WebCrypto: fixed retval of crypto.getRandomValues(). Message-ID: details: https://hg.nginx.org/njs/rev/4fa5ddc91108 branches: changeset: 2095:4fa5ddc91108 user: Dmitry Volyntsev date: Thu Apr 27 17:28:52 2023 -0700 description: WebCrypto: fixed retval of crypto.getRandomValues(). Previously, crypto.getRandomValues() did not return any value, but it has to return its buffer argument. diffstat: external/njs_webcrypto_module.c | 13 +++++++++---- src/test/njs_unit_test.c | 4 ++++ 2 files changed, 13 insertions(+), 4 deletions(-) diffs (44 lines): diff -r a868f772ef16 -r 4fa5ddc91108 external/njs_webcrypto_module.c --- a/external/njs_webcrypto_module.c Wed Apr 26 21:19:48 2023 -0700 +++ b/external/njs_webcrypto_module.c Thu Apr 27 17:28:52 2023 -0700 @@ -4034,10 +4034,13 @@ static njs_int_t njs_ext_get_random_values(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, njs_index_t unused, njs_value_t *retval) { - njs_int_t ret; - njs_str_t fill; - - ret = njs_vm_value_to_bytes(vm, &fill, njs_arg(args, nargs, 1)); + njs_int_t ret; + njs_str_t fill; + njs_value_t *buffer; + + buffer = njs_arg(args, nargs, 1); + + ret = njs_vm_value_to_bytes(vm, &fill, buffer); if (njs_slow_path(ret != NJS_OK)) { return NJS_ERROR; } @@ -4052,6 +4055,8 @@ njs_ext_get_random_values(njs_vm_t *vm, return NJS_ERROR; } + njs_value_assign(retval, buffer); + return NJS_OK; } diff -r a868f772ef16 -r 4fa5ddc91108 src/test/njs_unit_test.c --- a/src/test/njs_unit_test.c Wed Apr 26 21:19:48 2023 -0700 +++ b/src/test/njs_unit_test.c Thu Apr 27 17:28:52 2023 -0700 @@ -21923,6 +21923,10 @@ static njs_unit_test_t njs_webcrypto_te "let condition = bits1 > (mean - 10 * stddev) && bits1 < (mean + 10 * stddev);" "condition ? true : [buf, nbits, bits1, mean, stddev]"), njs_str("true") }, + + { njs_str("let buf = new Uint32Array(4);" + "buf === crypto.getRandomValues(buf)"), + njs_str("true") }, }; From mdounin at mdounin.ru Sat Apr 29 14:50:05 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 29 Apr 2023 17:50:05 +0300 Subject: [PATCH] Variables: avoid possible buffer overrun with some "$sent_http_*" In-Reply-To: References: Message-ID: Hello! On Thu, Apr 27, 2023 at 09:53:30PM +0400, Sergey Kandaurov wrote: > # HG changeset patch > # User Sergey Kandaurov > # Date 1682617947 -14400 > # Thu Apr 27 21:52:27 2023 +0400 > # Node ID c1ec385d885fba38f15d54263944eed2c74b5733 > # Parent 77d5c662f3d9d9b90425128109d3369c30ef5f07 > Variables: avoid possible buffer overrun with some "$sent_http_*". 
> > The existing logic to evaluate multi header "$sent_http_*" variables, > such as $sent_http_cache_control, as previously introduced in 1.23.0, > doesn't take into account that one or more elements can be cleared, > yet still present in a linked list, pointed to by the next field. > Such elements don't contribute to the resulting variable length, an > attempt to append a separator for them ends up in out of bounds write. > > This is not possible with standard modules, though at least one third > party module is known to override multi header values this way, so it > makes sense to harden the logic. > > The fix restores a generic boundary check. > > diff --git a/src/http/ngx_http_variables.c b/src/http/ngx_http_variables.c > --- a/src/http/ngx_http_variables.c > +++ b/src/http/ngx_http_variables.c > @@ -828,7 +828,7 @@ ngx_http_variable_headers_internal(ngx_h > ngx_http_variable_value_t *v, uintptr_t data, u_char sep) > { > size_t len; > - u_char *p; > + u_char *p, *end; > ngx_table_elt_t *h, *th; > > h = *(ngx_table_elt_t **) ((char *) r + data); > @@ -870,6 +870,8 @@ ngx_http_variable_headers_internal(ngx_h > v->len = len; > v->data = p; > > + end = p + len; > + > for (th = h; th; th = th->next) { > > if (th->hash == 0) { > @@ -878,7 +880,7 @@ ngx_http_variable_headers_internal(ngx_h > > p = ngx_copy(p, th->value.data, th->value.len); > > - if (th->next == NULL) { > + if (p == end) { > break; > } > Looks good. -- Maxim Dounin http://mdounin.ru/