From m15860198213 at 163.com  Fri Oct  1 12:33:33 2021
From: m15860198213 at 163.com (=?GBK?B?0e7D973c?=)
Date: Fri, 1 Oct 2021 20:33:33 +0800 (CST)
Subject: nginx-quic: download speed is very slow when network has added a delay of 1500ms by tc
Message-ID: <7827fadb.c6f.17c3bd87310.Coremail.m15860198213@163.com>

Hi,
when the network has an added delay of 1500ms from tc, e.g.

    tc qdisc add dev eno1 root netem delay 1500ms

and the nginx http conf is:

    server {
        listen 443 ssl http2;
        listen [::]:443 ssl http2;
        listen 443 http3 reuseport;
        listen [::]:443 http3 reuseport;
        server_name localhost;

        ssl_certificate cert.pem;
        ssl_certificate_key cert.key;
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 5m;
        ssl_protocols TLSv1.3;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        add_header Alt-Svc 'quic=":443"; h3-27=":443";h3-25=":443"; h3-T050=":443"; h3-Q050=":443";h3-Q049=":443";h3-Q048=":443"; h3-Q046=":443"; h3-Q043=":443"';  # Advertise that QUIC is available

        location / {
            root /usr/share/nginx;
            index index.html index.htm;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }

then when I download a 3GB file with the Firefox browser, the download speed is about 45 kb/s, but I have confirmed the protocol is http3.
I might be doing something wrong...
Please help me, thanks.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pluknet at nginx.com  Mon Oct  4 09:16:26 2021
From: pluknet at nginx.com (Sergey Kandaurov)
Date: Mon, 4 Oct 2021 12:16:26 +0300
Subject: nginx-quic: download speed is very slow when network has added a delay of 1500ms by tc
In-Reply-To: <7827fadb.c6f.17c3bd87310.Coremail.m15860198213@163.com>
References: <7827fadb.c6f.17c3bd87310.Coremail.m15860198213@163.com>
Message-ID: <9E9BC83D-6422-4B49-A5F0-69074DE9B577@nginx.com>

> On 1 Oct 2021, at 15:33, yang wrote:
> 
> Hi,
> when the network has an added delay of 1500ms from tc, e.g.
> tc qdisc add dev eno1 root netem delay 1500ms
> 
> [..]
> 
> when I download a 3GB file with the Firefox browser, the download speed is about 45 kb/s, but I have confirmed the protocol is http3.
> I might be doing something wrong...
> Please help me, thanks.

Thanks for sharing the results.
What is your link speed? Can you compare with http2?

-- 
Sergey Kandaurov

From tracey at archive.org  Mon Oct  4 22:41:47 2021
From: tracey at archive.org (Tracey Jaquith)
Date: Mon, 4 Oct 2021 15:41:47 -0700
Subject: [PATCH] Add optional "mp4_exact_start" nginx config off/on to show video between keyframes
In-Reply-To: <20210930134811.epttik4joflf2qj6@Romans-MacBook-Pro.local>
References: <20210628095320.px3ggmmoyjalyv5m@Romans-MacBook-Pro.local> <5F32216C-A041-454C-A73C-0E1C259E434C@archive.org> <20210930134811.epttik4joflf2qj6@Romans-MacBook-Pro.local>
Message-ID: <20241A9E-BDF1-42D8-9848-AF628717EFE3@archive.org>

Hi Roman,

OK, thanks! I've tested this on macosx & linux, so far with: Chrome, Safari, Firefox and iOS.
However, I'm seeing Firefox having alternate behavior in at least one video, where it plays video from the prior keyframe, without audio, until it hits the desired start time -- though it's not consistently doing this.
I suspect it's the edit list -- a nice solve for this.
I've had minor issues with edit lists in the past, for what that's worth.

I like the new name change, congrats. Very clear and easy to understand exactly what this new config flag will do.

And deep apologies --

> Another problem is track delay

I *should have* mentioned when I initially wrote in that I was aware of the slight A/V sync slip -- and that in practice, running for over 3 months now, it hasn't seemed to be any kind of issue.
Assuming:
* the average (US TV) video might be 29.97 fps
* and thus a timescale / duration of 30000 / 1001
* and that a typical max distance between keyframes (GOP size) w/ ffmpeg encoders and similar is 300 frames, or about 10s

Then:
* with a max of 10s between keyframes
* and 300 frames max getting "sped up" from duration 1001 => 1

we're looking at a maximum additional video duration of 1/100th of a second:

    (300 * 1001 / 30000) == 10.01
    (300 *    1 / 30000) ==  0.01

So the most the A/V sync could "drift" from those early added frames is 1/100th of a second, and the average would likely be 2-3x smaller than that.
In practice, it didn't seem noticeable -- but I am quite impressed by your desire to minimize/eliminate that.
(In practice, from the broadcasters at least in the US, 1/100th of a second A/V slip is not uncommon.)

If we really wanted to avoid that minor A/V "slip", another alternative could be to adjust the STTS of all the video frames going out the door, or the audio frames (and any other related supporting atom minor changes, if any were needed).

> On Sep 30, 2021, at 6:48 AM, Roman Arutyunyan wrote:
> 
> Hi Tracey,
> 
> On Mon, Sep 20, 2021 at 12:39:15PM -0700, Tracey Jaquith wrote:
>> Hi Roman,
>> 
>> I had an idea to consider for the feature name / config flag / function name, in case of interest.
>> 
>> What about "startfast"? (or even "faststart", I suppose)
>> 
>> That parallels nicely with the `qt-faststart` utility and the `ffmpeg -movflags faststart`
>> where the moov atom is moved to the front of the mp4.
>> 
>> Since the `mp4` module is already rewriting a smaller moov atom for the desired clip,
>> *and* the mp4 module will move the moov atom to the front
>> (in case the source mp4 file has the moov atom at the back),
>> it seems like "startfast" might convey the moov atom approach *and* the concept
>> that we're going to send early visually undesired frames out at ~30,000 fps :)
>> 
>> For your consideration, thanks,
>> -Tracey
> 
> Thanks for your suggestion.
> Currently we're considering the name
> "mp4_start_key_frame", which has the word "start" in it, which is the
> argument that enables the feature.
> 
> But there's something more important I want to talk about. While doing
> internal review of the patch, we were concerned with some potential
> problems the patch could introduce. Specifically, when the video track has
> B-frames, the PTS - DTS delay is stored in the "ctts" atom, which was not changed
> by the patch. This means that some frames from the hidden part of the video
> could show up in the visible part of it. I believe this could be handled

I haven't done too much with testing videos w/ B-frames, but the prior nginx mp4 module I was using and patching was doing CTTS changes.
I never had to patch that part of the code -- but I could look at what they were up to and/or point out their code, in case that's of interest?

-Tracey

> properly, but the solution would be much more sophisticated than just
> zeroing out the initial part of ctts. Another problem is track delay, which
> was obvious from the start. The hidden part of the video still takes
> some time to play, which ruins synchronization between tracks. This may or
> may not be noticeable in particular cases, but anyway the problem is still there.
> 
> I've reimplemented the feature by using mp4 edit lists. In a nutshell, all
> frames up to the latest key frame are included in the video. Then, the
> initial part of the video is hidden from presentation by cutting it with an
> edit list. Looks like this solution does not have the problems I mentioned
> above.
> 
> Can you try the new patch in your environment? We would really appreciate
> your feedback.
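[The edit-list approach described above can be sketched numerically. This is an illustrative Python model, not the nginx patch itself; the sample numbers, keyframe positions, and 30000 timescale below are hypothetical. The idea: cut the output at the keyframe at or before the requested start, then hide the unwanted leading frames from presentation via the elst media_time rather than speeding them up in stts.]

```python
# Toy model of hiding pre-start frames behind an mp4 edit list.
# Assumes a constant per-sample duration (e.g. 1001 ticks at timescale 30000)
# and 0-based sample numbers -- all hypothetical values for illustration.

def edit_list_for_start(start_sample, keyframes, sample_duration, total_samples):
    """Return (first_sent_sample, media_time, segment_duration).

    first_sent_sample: the keyframe at or before start_sample; the mp4 is
    cut here so the decoder can start on a keyframe immediately.
    media_time: offset into the sent media where presentation begins, so
    the player silently skips the unwanted leading frames.
    segment_duration: presented duration of the remaining samples.
    """
    first_sent = max((k for k in keyframes if k <= start_sample), default=0)
    media_time = (start_sample - first_sent) * sample_duration
    segment_duration = (total_samples - start_sample) * sample_duration
    return first_sent, media_time, segment_duration

# Request a start at sample 315, with keyframes every 300 samples:
first, media_time, seg = edit_list_for_start(315, [0, 300, 600], 1001, 900)
assert first == 300              # cut at the keyframe before the start
assert media_time == 15 * 1001   # 15 leading frames hidden from presentation
```

[Unlike the stts "speed up" trick, the hidden frames here contribute nothing to the presentation timeline, so the track delay and B-frame/ctts issues mentioned above do not arise.]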
> >>> On Jun 28, 2021, at 2:53 AM, Roman Arutyunyan > wrote: >>> >>> Hi Tracey, >>> >>> On Tue, Jun 15, 2021 at 03:49:48PM -0700, Tracey Jaquith wrote: >>>> # HG changeset patch >>>> # User Tracey Jaquith >> >>>> # Date 1623797180 0 >>>> # Tue Jun 15 22:46:20 2021 +0000 >>>> # Node ID 1879d49fe0cf739f48287b5a38a83d3a1adab939 >>>> # Parent 5f765427c17ac8cf753967387562201cf4f78dc4 >>>> Add optional "mp4_exact_start" nginx config off/on to show video between keyframes. >>> >>> I've been thinking about a better name for this, but came up with nothing so >>> far. I feel like this name does not give the right clue to the user. >>> Moreover, when this feature is on, the start is not quite "exact", but shifted >>> a few milliseconds into the past. >>> >>>> archive.org has been using mod_h264_streaming with a similar "exact start" patch from me since 2013. >>>> We just moved to nginx mp4 module and are using this patch. >>>> The technique is to find the video keyframe just before the desired "start" time, and send >>>> that down the wire so video playback can start immediately. >>>> Next calculate how many video samples are between the keyframe and desired "start" time >>>> and update the STTS atom where those samples move the duration from (typically) 1001 to 1. >>>> This way, initial unwanted video frames play at ~1/30,000s -- so visually the >>>> video & audio start playing immediately. >>>> >>>> You can see an example before/after here (nginx binary built with mp4 module + patch): >>>> >>>> https://pi.archive.org/0/items/CSPAN_20160425_022500_2011_White_House_Correspondents_Dinner.mp4?start=12&end=30 >>>> https://pi.archive.org/0/items/CSPAN_20160425_022500_2011_White_House_Correspondents_Dinner.mp4?start=12&end=30&exact=1 >>>> >>>> Tested on linux and macosx. 
>>>> >>>> (this is me: https://github.com/traceypooh ) >>> >>> We have a few rules about patches and commit messages like 67-character limit >>> for the first line etc: >>> >>> http://nginx.org/en/docs/contributing_changes.html > >>> >>>> diff -r 5f765427c17a -r 1879d49fe0cf src/http/modules/ngx_http_mp4_module.c >>>> --- a/src/http/modules/ngx_http_mp4_module.c Tue Jun 01 17:37:51 2021 +0300 >>>> +++ b/src/http/modules/ngx_http_mp4_module.c Tue Jun 15 22:46:20 2021 +0000 >>>> @@ -43,6 +43,7 @@ >>>> typedef struct { >>>> size_t buffer_size; >>>> size_t max_buffer_size; >>>> + ngx_flag_t exact_start; >>>> } ngx_http_mp4_conf_t; >>>> >>>> >>>> @@ -340,6 +341,13 @@ >>>> offsetof(ngx_http_mp4_conf_t, max_buffer_size), >>>> NULL }, >>>> >>>> + { ngx_string("mp4_exact_start"), >>>> + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, >>> >>> NGX_CONF_TAKE1 -> NGX_CONF_FLAG >>> >>>> + ngx_conf_set_flag_slot, >>>> + NGX_HTTP_LOC_CONF_OFFSET, >>>> + offsetof(ngx_http_mp4_conf_t, exact_start), >>>> + NULL }, >>>> + >>>> ngx_null_command >>>> }; >>>> >>>> @@ -2156,6 +2164,83 @@ >>>> >>>> >>>> static ngx_int_t >>>> +ngx_http_mp4_exact_start_video(ngx_http_mp4_file_t *mp4, ngx_http_mp4_trak_t *trak) >>>> +{ >>>> + uint32_t n, speedup_samples, current_count; >>>> + ngx_uint_t sample_keyframe, start_sample_exact; >>>> + ngx_mp4_stts_entry_t *entry, *entries_array; >>>> + ngx_buf_t *data; >>>> + >>>> + data = trak->out[NGX_HTTP_MP4_STTS_DATA].buf; >>>> + >>>> + // Find the keyframe just before the desired start time - so that we can emit an mp4 >>>> + // where the first frame is a keyframe. We'll "speed up" the first frames to 1000x >>>> + // normal speed (typically), so they won't be noticed. But this way, perceptively, >>>> + // playback of the _video_ track can start immediately >>>> + // (and not have to wait until the keyframe _after_ the desired starting time frame). 
>>>> + start_sample_exact = trak->start_sample; >>>> + for (n = 0; n < trak->sync_samples_entries; n++) { >>>> + // each element of array is the sample number of a keyframe >>>> + // sync samples starts from 1 -- so subtract 1 >>>> + sample_keyframe = ngx_mp4_get_32value(trak->stss_data_buf.pos + (n * 4)) - 1; >>> >>> This can be simplified by introducing entry/end variables like we usually do. >>> >>> Also, we don't access trak->stss_data_buf directly, but prefer >>> trak->out[NGX_HTTP_MP4_STSS_ATOM].buf. >>> >>> ngx_http_mp4_crop_stss_data() provides an example of iterating over stss atom. >>> >>>> + if (sample_keyframe <= trak->start_sample) { >>>> + start_sample_exact = sample_keyframe; >>>> + } >>>> + if (sample_keyframe >= trak->start_sample) { >>>> + break; >>>> + } >>>> + } >>>> + >>>> + if (start_sample_exact < trak->start_sample) { >>>> + // We're going to prepend an entry with duration=1 for the frames we want to "not see". >>>> + // MOST of the time (eg: constant video framerate), >>>> + // we're taking a single element entry array and making it two. >>>> + speedup_samples = trak->start_sample - start_sample_exact; >>>> + >>>> + ngx_log_debug3(NGX_LOG_DEBUG_HTTP, mp4->file.log, 0, >>>> + "exact trak start_sample move %l to %l (speed up %d samples)\n", >>>> + trak->start_sample, start_sample_exact, speedup_samples); >>>> + >>>> + entries_array = ngx_palloc(mp4->request->pool, >>>> + (1 + trak->time_to_sample_entries) * sizeof(ngx_mp4_stts_entry_t)); >>>> + if (entries_array == NULL) { >>>> + return NGX_ERROR; >>>> + } >>>> + entry = &(entries_array[1]); >>>> + ngx_memcpy(entry, (ngx_mp4_stts_entry_t *)data->pos, >>>> + trak->time_to_sample_entries * sizeof(ngx_mp4_stts_entry_t)); >>> >>> This reallocation can be avoided. Look at NGX_HTTP_MP4_STSC_START buffer >>> as an example of that. A new 1-element optional buffer NGX_HTTP_MP4_STTS_START >>> can be introduced right before the stts atom data. 
>>> >>>> + current_count = ngx_mp4_get_32value(entry->count); >>>> + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, mp4->file.log, 0, >>>> + "exact split in 2 video STTS entry from count:%d", current_count); >>>> + >>>> + if (current_count <= speedup_samples) { >>>> + return NGX_ERROR; >>>> + } >>>> + >>>> + ngx_mp4_set_32value(entry->count, current_count - speedup_samples); >>>> + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, mp4->file.log, 0, >>>> + "exact split new[1]: count:%d duration:%d", >>>> + ngx_mp4_get_32value(entry->count), >>>> + ngx_mp4_get_32value(entry->duration)); >>>> + entry--; >>>> + ngx_mp4_set_32value(entry->count, speedup_samples); >>>> + ngx_mp4_set_32value(entry->duration, 1); >>>> + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, mp4->file.log, 0, >>>> + "exact split new[0]: count:%d duration:1", >>>> + ngx_mp4_get_32value(entry->count)); >>>> + >>>> + data->pos = (u_char *) entry; >>>> + trak->time_to_sample_entries++; >>>> + trak->start_sample = start_sample_exact; >>>> + data->last = (u_char *) (entry + trak->time_to_sample_entries); >>>> + } >>>> + >>>> + return NGX_OK; >>>> +} >>>> + >>>> + >>>> +static ngx_int_t >>>> ngx_http_mp4_crop_stts_data(ngx_http_mp4_file_t *mp4, >>>> ngx_http_mp4_trak_t *trak, ngx_uint_t start) >>>> { >>>> @@ -2164,6 +2249,8 @@ >>>> ngx_buf_t *data; >>>> ngx_uint_t start_sample, entries, start_sec; >>>> ngx_mp4_stts_entry_t *entry, *end; >>>> + ngx_http_mp4_conf_t *conf; >>>> + >>> >>> No need for a new empty line here. 
>>> 
>>>> if (start) {
>>>> start_sec = mp4->start;
>>>> @@ -2238,6 +2325,10 @@
>>>> "start_sample:%ui, new count:%uD",
>>>> trak->start_sample, count - rest);
>>>> 
>>>> + conf = ngx_http_get_module_loc_conf(mp4->request, ngx_http_mp4_module);
>>>> + if (conf->exact_start) {
>>>> + ngx_http_mp4_exact_start_video(mp4, trak);
>>>> + }
>>>> } else {
>>>> ngx_mp4_set_32value(entry->count, rest);
>>>> data->last = (u_char *) (entry + 1);
>>>> @@ -3590,6 +3681,7 @@
>>>> 
>>>> conf->buffer_size = NGX_CONF_UNSET_SIZE;
>>>> conf->max_buffer_size = NGX_CONF_UNSET_SIZE;
>>>> + conf->exact_start = NGX_CONF_UNSET;
>>> 
>>> This is not enough, a merge is needed too.
>>> 
>>>> 
>>>> return conf;
>>>> }
>>>> _______________________________________________
>>>> nginx-devel mailing list
>>>> nginx-devel at nginx.org
>>>> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>>> 
>>> I've made a POC patch which incorporates the issues I've mentioned.
>>> I didn't test it properly and the directive name is still not perfect.
>>> 
>>> -- 
>>> Roman Arutyunyan
>>> _______________________________________________
>>> nginx-devel mailing list
>>> nginx-devel at nginx.org
>>> http://mailman.nginx.org/mailman/listinfo/nginx-devel

>> -Tracey
>> @tracey_pooh
>> TV Architect https://archive.org/tv
>> 
>> _______________________________________________
>> nginx-devel mailing list
>> nginx-devel at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx-devel
> 
> -- 
> Roman Arutyunyan
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel

-Tracey
@tracey_pooh
TV Architect https://archive.org/tv

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From m15860198213 at 163.com  Tue Oct  5 02:03:29 2021
From: m15860198213 at 163.com (yang)
Date: Tue, 5 Oct 2021 10:03:29 +0800 (CST)
Subject: nginx-quic: download speed is very slow when network has added a delay of 1500ms by tc
In-Reply-To: <9E9BC83D-6422-4B49-A5F0-69074DE9B577@nginx.com>
References: <7827fadb.c6f.17c3bd87310.Coremail.m15860198213@163.com> <9E9BC83D-6422-4B49-A5F0-69074DE9B577@nginx.com>
Message-ID: <5af8846c.6f0.17c4e310b27.Coremail.m15860198213@163.com>

Hi Sergey Kandaurov,

Thanks for your reply.
My network bandwidth is 100Mbps, and the download speed is about 1MB/s with http2 when the network has an added delay of 1500ms from tc.

At 2021-10-04 17:16:26, "Sergey Kandaurov" wrote:
> 
>> On 1 Oct 2021, at 15:33, yang wrote:
>> 
>> Hi,
>> when the network has an added delay of 1500ms from tc, e.g.
>> tc qdisc add dev eno1 root netem delay 1500ms
>> 
>> [..]
>> 
>> when I download a 3GB file with the Firefox browser, the download speed is about 45 kb/s, but I have confirmed the protocol is http3.
>> I might be doing something wrong...
>> Please help me, thanks.
> 
> Thanks for sharing the results.
> What is your link speed? Can you compare with http2?
> 
> -- 
> Sergey Kandaurov
> 
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From eran.kornblau at kaltura.com  Tue Oct  5 14:04:14 2021
From: eran.kornblau at kaltura.com (Eran Kornblau)
Date: Tue, 5 Oct 2021 14:04:14 +0000
Subject: Sending a notification to the main nginx thread
Message-ID: 

Hi all,

I'm planning a module in which I want to send a notification from a side thread to the main nginx thread.
I checked the implementation of the thread pool module, and saw that it uses ngx_notify for that.
But, checking how that function is implemented (checked epoll), I saw that it can't really be used for any other purpose...
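[The single-handler limitation being described can be sketched as follows. This is illustrative Python, not the nginx source: it only models the shape of the problem, namely that the notify mechanism stores exactly one callback, so a second subscriber silently replaces the first.]

```python
# Sketch: a notify mechanism with one global handler slot, as in the
# epoll implementation of ngx_notify. Registering a second handler
# clobbers the first, so its pending notifications are lost.

calls = []

class Notifier:
    def __init__(self):
        self.handler = None          # single global slot

    def notify(self, handler):
        self.handler = handler       # overwrites any previous handler

    def fire(self):                  # event loop delivers the notification
        if self.handler is not None:
            self.handler()

n = Notifier()
n.notify(lambda: calls.append("thread-pool"))
n.notify(lambda: calls.append("my-module"))  # clobbers the thread pool handler
n.fire()
assert calls == ["my-module"]       # the first subscriber never runs
```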
If I send my function to ngx_notify, it will overwrite the thread pool handler, and can lead to race conditions...
Googling for this problem, I saw this old patch --
https://mailman.nginx.org/pipermail/nginx-devel/2016-August/008679.html
which didn't get any replies...

Was wondering -- is there some alternative solution for what I'm trying to do?
Will you consider applying this/some other patch to address this issue?

Thank you

Eran
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From xeioex at nginx.com  Tue Oct  5 15:08:47 2021
From: xeioex at nginx.com (Dmitry Volyntsev)
Date: Tue, 05 Oct 2021 15:08:47 +0000
Subject: [njs] Modules: simplified reporting of failures in ngx.fetch() method.
Message-ID: 

details:   https://hg.nginx.org/njs/rev/64d8d8eeebda
branches:  
changeset: 1710:64d8d8eeebda
user:      Dmitry Volyntsev
date:      Tue Oct 05 13:01:11 2021 +0000
description:
Modules: simplified reporting of failures in ngx.fetch() method.

diffstat:

 nginx/ngx_js_fetch.c | 127 +++++++++++++++++++++-----------------------
 1 files changed, 54 insertions(+), 73 deletions(-)

diffs (237 lines):

diff -r 56e3f06da4f0 -r 64d8d8eeebda nginx/ngx_js_fetch.c
--- a/nginx/ngx_js_fetch.c	Wed Sep 29 16:13:36 2021 +0000
+++ b/nginx/ngx_js_fetch.c	Tue Oct 05 13:01:11 2021 +0000
@@ -91,17 +91,17 @@ struct ngx_js_http_s {
 
 static ngx_js_http_t *ngx_js_http_alloc(njs_vm_t *vm, ngx_pool_t *pool,
     ngx_log_t *log);
+static void njs_js_http_destructor(njs_external_ptr_t external,
+    njs_host_event_t host);
 static void ngx_js_resolve_handler(ngx_resolver_ctx_t *ctx);
-static njs_int_t ngx_js_fetch_result(njs_vm_t *vm, ngx_js_http_t *http,
-    njs_value_t *result, njs_int_t rc);
 static njs_int_t ngx_js_fetch_promissified_result(njs_vm_t *vm,
     njs_value_t *result, njs_int_t rc);
 static void ngx_js_http_fetch_done(ngx_js_http_t *http,
     njs_opaque_value_t *retval, njs_int_t rc);
 static njs_int_t ngx_js_http_promise_trampoline(njs_vm_t *vm,
     njs_value_t *args, njs_uint_t nargs, njs_index_t
unused); -static njs_int_t ngx_js_http_connect(ngx_js_http_t *http); -static njs_int_t ngx_js_http_next(ngx_js_http_t *http); +static void ngx_js_http_connect(ngx_js_http_t *http); +static void ngx_js_http_next(ngx_js_http_t *http); static void ngx_js_http_write_handler(ngx_event_t *wev); static void ngx_js_http_read_handler(ngx_event_t *rev); static ngx_int_t ngx_js_http_process_status_line(ngx_js_http_t *http); @@ -537,26 +537,39 @@ ngx_js_ext_fetch(njs_vm_t *vm, njs_value return NJS_ERROR; } - } else { - http->naddrs = 1; - ngx_memcpy(&http->addr, &u.addrs[0], sizeof(ngx_addr_t)); - http->addrs = &http->addr; - - ret = ngx_js_http_connect(http); + njs_vm_retval_set(vm, njs_value_arg(&http->promise)); + + return NJS_OK; } - return ngx_js_fetch_result(vm, http, njs_value_arg(&http->reply), ret); + http->naddrs = 1; + ngx_memcpy(&http->addr, &u.addrs[0], sizeof(ngx_addr_t)); + http->addrs = &http->addr; + + ngx_js_http_connect(http); + + njs_vm_retval_set(vm, njs_value_arg(&http->promise)); + + return NJS_OK; fail: - return ngx_js_fetch_result(vm, http, njs_vm_retval(vm), NJS_ERROR); + ngx_js_http_fetch_done(http, (njs_opaque_value_t *) njs_vm_retval(vm), + NJS_ERROR); + + njs_vm_retval_set(vm, njs_value_arg(&http->promise)); + + return NJS_OK; } static ngx_js_http_t * ngx_js_http_alloc(njs_vm_t *vm, ngx_pool_t *pool, ngx_log_t *log) { - ngx_js_http_t *http; + njs_int_t ret; + ngx_js_http_t *http; + njs_vm_event_t vm_event; + njs_function_t *callback; http = ngx_pcalloc(pool, sizeof(ngx_js_http_t)); if (http == NULL) { @@ -569,6 +582,24 @@ ngx_js_http_alloc(njs_vm_t *vm, ngx_pool http->timeout = 10000; + ret = njs_vm_promise_create(vm, njs_value_arg(&http->promise), + njs_value_arg(&http->promise_callbacks)); + if (ret != NJS_OK) { + goto failed; + } + + callback = njs_vm_function_alloc(vm, ngx_js_http_promise_trampoline); + if (callback == NULL) { + goto failed; + } + + vm_event = njs_vm_add_event(vm, callback, 1, http, njs_js_http_destructor); + if (vm_event == 
NULL) { + goto failed; + } + + http->vm_event = vm_event; + ngx_log_debug1(NGX_LOG_DEBUG_EVENT, log, 0, "js http alloc:%p", http); return http; @@ -655,7 +686,7 @@ ngx_js_resolve_handler(ngx_resolver_ctx_ ngx_resolve_name_done(ctx); http->ctx = NULL; - (void) ngx_js_http_connect(http); + ngx_js_http_connect(http); return; @@ -688,55 +719,6 @@ njs_js_http_destructor(njs_external_ptr_ static njs_int_t -ngx_js_fetch_result(njs_vm_t *vm, ngx_js_http_t *http, njs_value_t *result, - njs_int_t rc) -{ - njs_int_t ret; - njs_function_t *callback; - njs_vm_event_t vm_event; - njs_opaque_value_t arguments[2]; - - ret = njs_vm_promise_create(vm, njs_value_arg(&http->promise), - njs_value_arg(&http->promise_callbacks)); - if (ret != NJS_OK) { - goto error; - } - - callback = njs_vm_function_alloc(vm, ngx_js_http_promise_trampoline); - if (callback == NULL) { - goto error; - } - - vm_event = njs_vm_add_event(vm, callback, 1, http, njs_js_http_destructor); - if (vm_event == NULL) { - goto error; - } - - http->vm_event = vm_event; - - if (rc == NJS_ERROR) { - njs_value_assign(&arguments[0], &http->promise_callbacks[1]); - njs_value_assign(&arguments[1], result); - - ret = njs_vm_post_event(vm, vm_event, njs_value_arg(&arguments), 2); - if (ret == NJS_ERROR) { - goto error; - } - } - - njs_vm_retval_set(vm, njs_value_arg(&http->promise)); - - return NJS_OK; - -error: - - njs_vm_error(vm, "internal error"); - - return NJS_ERROR; -} - - -static njs_int_t ngx_js_fetch_promissified_result(njs_vm_t *vm, njs_value_t *result, njs_int_t rc) { @@ -821,7 +803,7 @@ ngx_js_http_promise_trampoline(njs_vm_t } -static njs_int_t +static void ngx_js_http_connect(ngx_js_http_t *http) { ngx_int_t rc; @@ -843,11 +825,12 @@ ngx_js_http_connect(ngx_js_http_t *http) if (rc == NGX_ERROR) { ngx_js_http_error(http, 0, "connect failed"); - return NJS_ERROR; + return; } if (rc == NGX_BUSY || rc == NGX_DECLINED) { - return ngx_js_http_next(http); + ngx_js_http_next(http); + return; } 
http->peer.connection->data = http; @@ -866,19 +849,17 @@ ngx_js_http_connect(ngx_js_http_t *http) if (rc == NGX_OK) { ngx_js_http_write_handler(http->peer.connection->write); } - - return NJS_OK; } -static njs_int_t +static void ngx_js_http_next(ngx_js_http_t *http) { ngx_log_debug0(NGX_LOG_DEBUG_EVENT, http->log, 0, "js http next"); if (++http->naddr >= http->naddrs) { ngx_js_http_error(http, 0, "connect failed"); - return NJS_ERROR; + return; } if (http->peer.connection != NULL) { @@ -888,7 +869,7 @@ ngx_js_http_next(ngx_js_http_t *http) http->buffer = NULL; - return ngx_js_http_connect(http); + ngx_js_http_connect(http); } @@ -936,7 +917,7 @@ ngx_js_http_write_handler(ngx_event_t *w n = ngx_send(c, b->pos, size); if (n == NGX_ERROR) { - (void) ngx_js_http_next(http); + ngx_js_http_next(http); return; } @@ -1022,7 +1003,7 @@ ngx_js_http_read_handler(ngx_event_t *re } if (n == NGX_ERROR) { - (void) ngx_js_http_next(http); + ngx_js_http_next(http); return; } From xeioex at nginx.com Tue Oct 5 15:08:49 2021 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 05 Oct 2021 15:08:49 +0000 Subject: [njs] Added support for HTTPS URLs to the Fetch API. Message-ID: details: https://hg.nginx.org/njs/rev/05a313868939 branches: changeset: 1711:05a313868939 user: Antoine Bonavita date: Wed Sep 01 20:43:56 2021 +0200 description: Added support for HTTPS URLs to the Fetch API. The fetch API now accepts an extra parameters: - verify: boolean (default true) to control server certificate verification. Verification process can be controlled with the following directives from the http js and stream js modules: - js_fetch_ciphers - js_fetch_protocols - js_fetch_verify_depth - js_fetch_trusted_certificate In collaboration with Dmitry Volyntsev. 
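[A hedged configuration sketch of the directives listed in the commit message above. The file names, certificate path, and the imported js function are hypothetical; only the `js_fetch_*` directive names come from the commit itself.]

```nginx
# Sketch only: fetch.js, fetch.proxy and the certificate path are
# hypothetical examples, not part of the commit.
http {
    js_import fetch.js;

    # TLS settings applied to outgoing ngx.fetch() requests
    js_fetch_protocols           TLSv1.2 TLSv1.3;
    js_fetch_ciphers             HIGH:!aNULL:!MD5;
    js_fetch_verify_depth        2;
    js_fetch_trusted_certificate /etc/ssl/certs/ca-bundle.crt;

    server {
        listen 8000;

        location / {
            js_content fetch.proxy;
        }
    }
}
```

[Inside the js handler, the new boolean option would then be passed per request, e.g. `ngx.fetch('https://example.com/', {verify: false})` to skip server certificate verification.]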
diffstat: nginx/ngx_http_js_module.c | 141 +++++++++++++++++++++++++- nginx/ngx_js.h | 3 + nginx/ngx_js_fetch.c | 236 ++++++++++++++++++++++++++++++++++++++++++- nginx/ngx_stream_js_module.c | 141 +++++++++++++++++++++++++- 4 files changed, 514 insertions(+), 7 deletions(-) diffs (758 lines): diff -r 64d8d8eeebda -r 05a313868939 nginx/ngx_http_js_module.c --- a/nginx/ngx_http_js_module.c Tue Oct 05 13:01:11 2021 +0000 +++ b/nginx/ngx_http_js_module.c Wed Sep 01 20:43:56 2021 +0200 @@ -27,6 +27,13 @@ typedef struct { ngx_str_t header_filter; ngx_str_t body_filter; ngx_uint_t buffer_type; +#if (NGX_HTTP_SSL) + ngx_ssl_t *ssl; + ngx_str_t ssl_ciphers; + ngx_uint_t ssl_protocols; + ngx_int_t ssl_verify_depth; + ngx_str_t ssl_trusted_certificate; +#endif } ngx_http_js_loc_conf_t; @@ -222,6 +229,22 @@ static void *ngx_http_js_create_loc_conf static char *ngx_http_js_merge_loc_conf(ngx_conf_t *cf, void *parent, void *child); +#if (NGX_HTTP_SSL) +static char * ngx_http_js_set_ssl(ngx_conf_t *cf, ngx_http_js_loc_conf_t *plcf); +#endif +static ngx_ssl_t *ngx_http_js_ssl(njs_vm_t *vm, ngx_http_request_t *r); + +#if (NGX_HTTP_SSL) + +static ngx_conf_bitmask_t ngx_http_js_ssl_protocols[] = { + { ngx_string("TLSv1"), NGX_SSL_TLSv1 }, + { ngx_string("TLSv1.1"), NGX_SSL_TLSv1_1 }, + { ngx_string("TLSv1.2"), NGX_SSL_TLSv1_2 }, + { ngx_string("TLSv1.3"), NGX_SSL_TLSv1_3 }, + { ngx_null_string, 0 } +}; + +#endif static ngx_command_t ngx_http_js_commands[] = { @@ -281,6 +304,38 @@ static ngx_command_t ngx_http_js_comman 0, NULL }, +#if (NGX_HTTP_SSL) + + { ngx_string("js_fetch_ciphers"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_str_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_js_loc_conf_t, ssl_ciphers), + NULL }, + + { ngx_string("js_fetch_protocols"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_1MORE, + ngx_conf_set_bitmask_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_js_loc_conf_t, 
ssl_protocols), + &ngx_http_js_ssl_protocols }, + + { ngx_string("js_fetch_verify_depth"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_num_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_js_loc_conf_t, ssl_verify_depth), + NULL }, + + { ngx_string("js_fetch_trusted_certificate"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_str_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_js_loc_conf_t, ssl_trusted_certificate), + NULL }, + +#endif + ngx_null_command }; @@ -657,11 +712,12 @@ static uintptr_t ngx_http_js_uptr[] = { (uintptr_t) ngx_http_js_resolver, (uintptr_t) ngx_http_js_resolver_timeout, (uintptr_t) ngx_http_js_handle_event, + (uintptr_t) ngx_http_js_ssl, }; static njs_vm_meta_t ngx_http_js_metas = { - .size = 5, + .size = 6, .values = ngx_http_js_uptr }; @@ -3951,8 +4007,15 @@ ngx_http_js_create_loc_conf(ngx_conf_t * * conf->header_filter = { 0, NULL }; * conf->body_filter = { 0, NULL }; * conf->buffer_type = NGX_JS_UNSET; + * conf->ssl_ciphers = { 0, NULL }; + * conf->ssl_protocols = 0; + * conf->ssl_trusted_certificate = { 0, NULL }; */ +#if (NGX_HTTP_SSL) + conf->ssl_verify_depth = NGX_CONF_UNSET; +#endif + return conf; } @@ -3969,5 +4032,81 @@ ngx_http_js_merge_loc_conf(ngx_conf_t *c ngx_conf_merge_uint_value(conf->buffer_type, prev->buffer_type, NGX_JS_STRING); +#if (NGX_HTTP_SSL) + ngx_conf_merge_str_value(conf->ssl_ciphers, prev->ssl_ciphers, "DEFAULT"); + + ngx_conf_merge_bitmask_value(conf->ssl_protocols, prev->ssl_protocols, + (NGX_CONF_BITMASK_SET|NGX_SSL_TLSv1 + |NGX_SSL_TLSv1_1|NGX_SSL_TLSv1_2)); + + ngx_conf_merge_value(conf->ssl_verify_depth, prev->ssl_verify_depth, 100); + + ngx_conf_merge_str_value(conf->ssl_trusted_certificate, + prev->ssl_trusted_certificate, ""); + + return ngx_http_js_set_ssl(cf, conf); +#else + return NGX_CONF_OK; +#endif +} + + +#if (NGX_HTTP_SSL) + +static char * +ngx_http_js_set_ssl(ngx_conf_t *cf, ngx_http_js_loc_conf_t 
*plcf) +{ + ngx_ssl_t *ssl; + ngx_pool_cleanup_t *cln; + + ssl = ngx_pcalloc(cf->pool, sizeof(ngx_ssl_t)); + if (ssl == NULL) { + return NGX_CONF_ERROR; + } + + plcf->ssl = ssl; + ssl->log = cf->log; + + if (ngx_ssl_create(ssl, plcf->ssl_protocols, NULL) != NGX_OK) { + return NGX_CONF_ERROR; + } + + cln = ngx_pool_cleanup_add(cf->pool, 0); + if (cln == NULL) { + ngx_ssl_cleanup_ctx(ssl); + return NGX_CONF_ERROR; + } + + cln->handler = ngx_ssl_cleanup_ctx; + cln->data = ssl; + + if (ngx_ssl_ciphers(NULL, ssl, &plcf->ssl_ciphers, 0) != NGX_OK) { + return NGX_CONF_ERROR; + } + + if (ngx_ssl_trusted_certificate(cf, ssl, &plcf->ssl_trusted_certificate, + plcf->ssl_verify_depth) + != NGX_OK) + { + return NGX_CONF_ERROR; + } + return NGX_CONF_OK; } + +#endif + + +static ngx_ssl_t * +ngx_http_js_ssl(njs_vm_t *vm, ngx_http_request_t *r) +{ +#if (NGX_HTTP_SSL) + ngx_http_js_loc_conf_t *plcf; + + plcf = ngx_http_get_module_loc_conf(r, ngx_http_js_module); + + return plcf->ssl; +#else + return NULL; +#endif +} diff -r 64d8d8eeebda -r 05a313868939 nginx/ngx_js.h --- a/nginx/ngx_js.h Tue Oct 05 13:01:11 2021 +0000 +++ b/nginx/ngx_js.h Wed Sep 01 20:43:56 2021 +0200 @@ -27,6 +27,7 @@ typedef ngx_resolver_t *(*ngx_external_r njs_external_ptr_t e); typedef ngx_msec_t (*ngx_external_resolver_timeout_pt)(njs_vm_t *vm, njs_external_ptr_t e); +typedef ngx_ssl_t *(*ngx_external_ssl_pt)(njs_vm_t *vm, njs_external_ptr_t e); #define ngx_external_connection(vm, e) \ @@ -39,6 +40,8 @@ typedef ngx_msec_t (*ngx_external_resolv ((ngx_external_resolver_timeout_pt) njs_vm_meta(vm, 3))(vm, e) #define ngx_external_event_handler(vm, e) \ ((ngx_js_event_handler_pt) njs_vm_meta(vm, 4)) +#define ngx_external_ssl(vm, e) \ + ((ngx_external_ssl_pt) njs_vm_meta(vm, 5))(vm, e) #define ngx_js_prop(vm, type, value, start, len) \ diff -r 64d8d8eeebda -r 05a313868939 nginx/ngx_js_fetch.c --- a/nginx/ngx_js_fetch.c Tue Oct 05 13:01:11 2021 +0000 +++ b/nginx/ngx_js_fetch.c Wed Sep 01 20:43:56 2021 +0200 @@ -2,6 
+2,7 @@ /* * Copyright (C) Dmitry Volyntsev * Copyright (C) hongzhidao + * Copyright (C) Antoine Bonavita * Copyright (C) NGINX, Inc. */ @@ -65,6 +66,12 @@ struct ngx_js_http_s { njs_str_t url; ngx_array_t headers; +#if (NGX_SSL) + ngx_str_t tls_name; + ngx_ssl_t *ssl; + njs_bool_t ssl_verify; +#endif + ngx_buf_t *buffer; ngx_buf_t *chunk; njs_chb_t chain; @@ -142,6 +149,12 @@ static njs_int_t ngx_response_js_ext_typ static njs_int_t ngx_response_js_ext_body(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, njs_index_t unused); +#if (NGX_SSL) +static void ngx_js_http_ssl_init_connection(ngx_js_http_t *http); +static void ngx_js_http_ssl_handshake_handler(ngx_connection_t *c); +static void ngx_js_http_ssl_handshake(ngx_js_http_t *http); +static njs_int_t ngx_js_http_ssl_name(ngx_js_http_t *http); +#endif static njs_external_t ngx_js_ext_http_response_headers[] = { @@ -344,6 +357,9 @@ ngx_js_ext_fetch(njs_vm_t *vm, njs_value static const njs_str_t buffer_size_key = njs_str("buffer_size"); static const njs_str_t body_size_key = njs_str("max_response_body_size"); static const njs_str_t method_key = njs_str("method"); +#if (NGX_SSL) + static const njs_str_t verify_key = njs_str("verify"); +#endif external = njs_vm_external(vm, NJS_PROTO_ID_ANY, njs_argument(args, 0)); if (external == NULL) { @@ -384,6 +400,17 @@ ngx_js_ext_fetch(njs_vm_t *vm, njs_value u.url.len -= 7; u.url.data += 7; +#if (NGX_SSL) + } else if (u.url.len > 8 + && ngx_strncasecmp(u.url.data, (u_char *) "https://", 8) == 0) + { + u.url.len -= 8; + u.url.data += 8; + u.default_port = 443; + http->ssl = ngx_external_ssl(vm, external); + http->ssl_verify = 1; +#endif + } else { njs_vm_error(vm, "unsupported URL prefix"); goto fail; @@ -432,6 +459,13 @@ ngx_js_ext_fetch(njs_vm_t *vm, njs_value { goto fail; } + +#if (NGX_SSL) + value = njs_vm_object_prop(vm, init, &verify_key, &lvalue); + if (value != NULL) { + http->ssl_verify = njs_value_bool(value); + } +#endif } njs_chb_init(&http->chain, 
njs_vm_memory_pool(vm)); @@ -501,6 +535,11 @@ ngx_js_ext_fetch(njs_vm_t *vm, njs_value njs_chb_append_literal(&http->chain, CRLF); } +#if (NGX_SSL) + http->tls_name.data = u.host.data; + http->tls_name.len = u.host.len; +#endif + if (body.length != 0) { njs_chb_sprintf(&http->chain, 32, "Content-Length: %uz" CRLF CRLF, body.length); @@ -697,6 +736,29 @@ failed: static void +ngx_js_http_close_connection(ngx_connection_t *c) +{ + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0, + "close js http connection: %d", c->fd); + +#if (NGX_SSL) + if (c->ssl) { + c->ssl->no_wait_shutdown = 1; + + if (ngx_ssl_shutdown(c) == NGX_AGAIN) { + c->ssl->handler = ngx_js_http_close_connection; + return; + } + } +#endif + + c->destroyed = 1; + + ngx_close_connection(c); +} + + +static void njs_js_http_destructor(njs_external_ptr_t external, njs_host_event_t host) { ngx_js_http_t *http; @@ -712,7 +774,7 @@ njs_js_http_destructor(njs_external_ptr_ } if (http->peer.connection != NULL) { - ngx_close_connection(http->peer.connection); + ngx_js_http_close_connection(http->peer.connection); http->peer.connection = NULL; } } @@ -773,7 +835,7 @@ ngx_js_http_fetch_done(ngx_js_http_t *ht "js fetch done http:%p rc:%i", http, (ngx_int_t) rc); if (http->peer.connection != NULL) { - ngx_close_connection(http->peer.connection); + ngx_js_http_close_connection(http->peer.connection); http->peer.connection = NULL; } @@ -846,12 +908,169 @@ ngx_js_http_connect(ngx_js_http_t *http) ngx_add_timer(http->peer.connection->write, http->timeout); } +#if (NGX_SSL) + if (http->ssl != NULL && http->peer.connection->ssl == NULL) { + ngx_js_http_ssl_init_connection(http); + return; + } +#endif + if (rc == NGX_OK) { ngx_js_http_write_handler(http->peer.connection->write); } } +#if (NGX_SSL) + +static void +ngx_js_http_ssl_init_connection(ngx_js_http_t *http) +{ + ngx_int_t rc; + ngx_connection_t *c; + + c = http->peer.connection; + + ngx_log_debug2(NGX_LOG_DEBUG_EVENT, http->log, 0, + "js http secure connect %ui/%ui", 
http->naddr, http->naddrs); + + if (ngx_ssl_create_connection(http->ssl, c, NGX_SSL_BUFFER|NGX_SSL_CLIENT) + != NGX_OK) + { + ngx_js_http_error(http, 0, "failed to create ssl connection"); + return; + } + + c->sendfile = 0; + + if (ngx_js_http_ssl_name(http) != NGX_OK) { + ngx_js_http_error(http, 0, "failed to create ssl connection"); + return; + } + + c->log->action = "SSL handshaking to fetch target"; + + rc = ngx_ssl_handshake(c); + + if (rc == NGX_AGAIN) { + c->data = http; + c->ssl->handler = ngx_js_http_ssl_handshake_handler; + return; + } + + ngx_js_http_ssl_handshake(http); +} + + +static void +ngx_js_http_ssl_handshake_handler(ngx_connection_t *c) +{ + ngx_js_http_t *http; + + http = c->data; + + http->peer.connection->write->handler = ngx_js_http_write_handler; + http->peer.connection->read->handler = ngx_js_http_read_handler; + + ngx_js_http_ssl_handshake(http); +} + + +static void +ngx_js_http_ssl_handshake(ngx_js_http_t *http) +{ + long rc; + ngx_connection_t *c; + + c = http->peer.connection; + + if (c->ssl->handshaked) { + if (http->ssl_verify) { + rc = SSL_get_verify_result(c->ssl->connection); + + if (rc != X509_V_OK) { + ngx_log_error(NGX_LOG_ERR, c->log, 0, + "js http fetch SSL certificate verify " + "error: (%l:%s)", rc, + X509_verify_cert_error_string(rc)); + goto failed; + } + + if (ngx_ssl_check_host(c, &http->tls_name) != NGX_OK) { + ngx_log_error(NGX_LOG_ERR, c->log, 0, + "js http SSL certificate does not match \"%V\"", + &http->tls_name); + goto failed; + } + } + + c->write->handler = ngx_js_http_write_handler; + c->read->handler = ngx_js_http_read_handler; + + http->process = ngx_js_http_process_status_line; + ngx_js_http_write_handler(c->write); + + return; + } + +failed: + + ngx_js_http_next(http); + } + + +static njs_int_t +ngx_js_http_ssl_name(ngx_js_http_t *http) +{ +#ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME + u_char *p; + + /* as per RFC 6066, literal IPv4 and IPv6 addresses are not permitted */ + ngx_str_t *name = &http->tls_name; + + if 
(name->len == 0 || *name->data == '[') { + goto done; + } + + if (ngx_inet_addr(name->data, name->len) != INADDR_NONE) { + goto done; + } + + /* + * SSL_set_tlsext_host_name() needs a null-terminated string, + * hence we explicitly null-terminate name here + */ + + p = ngx_pnalloc(http->pool, name->len + 1); + if (p == NULL) { + return NGX_ERROR; + } + + (void) ngx_cpystrn(p, name->data, name->len + 1); + + name->data = p; + + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, http->log, 0, + "js http SSL server name: \"%s\"", name->data); + + if (SSL_set_tlsext_host_name(http->peer.connection->ssl->connection, + (char *) name->data) + == 0) + { + ngx_ssl_error(NGX_LOG_ERR, http->log, 0, + "SSL_set_tlsext_host_name(\"%s\") failed", name->data); + return NGX_ERROR; + } + +#endif +done: + + return NJS_OK; +} + +#endif + + static void ngx_js_http_next(ngx_js_http_t *http) { @@ -863,7 +1082,7 @@ ngx_js_http_next(ngx_js_http_t *http) } if (http->peer.connection != NULL) { - ngx_close_connection(http->peer.connection); + ngx_js_http_close_connection(http->peer.connection); http->peer.connection = NULL; } @@ -891,6 +1110,13 @@ ngx_js_http_write_handler(ngx_event_t *w return; } +#if (NGX_SSL) + if (http->ssl != NULL && http->peer.connection->ssl == NULL) { + ngx_js_http_ssl_init_connection(http); + return; + } +#endif + b = http->buffer; if (b == NULL) { @@ -914,7 +1140,7 @@ ngx_js_http_write_handler(ngx_event_t *w size = b->last - b->pos; - n = ngx_send(c, b->pos, size); + n = c->send(c, b->pos, size); if (n == NGX_ERROR) { ngx_js_http_next(http); @@ -980,7 +1206,7 @@ ngx_js_http_read_handler(ngx_event_t *re b = http->buffer; size = b->end - b->last; - n = ngx_recv(c, b->last, size); + n = c->recv(c, b->last, size); if (n > 0) { b->last += n; diff -r 64d8d8eeebda -r 05a313868939 nginx/ngx_stream_js_module.c --- a/nginx/ngx_stream_js_module.c Tue Oct 05 13:01:11 2021 +0000 +++ b/nginx/ngx_stream_js_module.c Wed Sep 01 20:43:56 2021 +0200 @@ -34,6 +34,13 @@ typedef struct { ngx_str_t 
access; ngx_str_t preread; ngx_str_t filter; +#if (NGX_SSL) + ngx_ssl_t *ssl; + ngx_str_t ssl_ciphers; + ngx_uint_t ssl_protocols; + ngx_int_t ssl_verify_depth; + ngx_str_t ssl_trusted_certificate; +#endif } ngx_stream_js_srv_conf_t; @@ -135,6 +142,23 @@ static char *ngx_stream_js_merge_srv_con void *child); static ngx_int_t ngx_stream_js_init(ngx_conf_t *cf); +#if (NGX_SSL) +static char * ngx_stream_js_set_ssl(ngx_conf_t *cf, + ngx_stream_js_srv_conf_t *pscf); +#endif +static ngx_ssl_t *ngx_stream_js_ssl(njs_vm_t *vm, ngx_stream_session_t *s); + +#if (NGX_HTTP_SSL) + +static ngx_conf_bitmask_t ngx_stream_js_ssl_protocols[] = { + { ngx_string("TLSv1"), NGX_SSL_TLSv1 }, + { ngx_string("TLSv1.1"), NGX_SSL_TLSv1_1 }, + { ngx_string("TLSv1.2"), NGX_SSL_TLSv1_2 }, + { ngx_string("TLSv1.3"), NGX_SSL_TLSv1_3 }, + { ngx_null_string, 0 } +}; + +#endif static ngx_command_t ngx_stream_js_commands[] = { @@ -194,6 +218,38 @@ static ngx_command_t ngx_stream_js_comm offsetof(ngx_stream_js_srv_conf_t, filter), NULL }, +#if (NGX_SSL) + + { ngx_string("js_fetch_ciphers"), + NGX_STREAM_MAIN_CONF|NGX_STREAM_SRV_CONF|NGX_CONF_TAKE1, + ngx_conf_set_str_slot, + NGX_STREAM_SRV_CONF_OFFSET, + offsetof(ngx_stream_js_srv_conf_t, ssl_ciphers), + NULL }, + + { ngx_string("js_fetch_protocols"), + NGX_STREAM_MAIN_CONF|NGX_STREAM_SRV_CONF|NGX_CONF_1MORE, + ngx_conf_set_bitmask_slot, + NGX_STREAM_SRV_CONF_OFFSET, + offsetof(ngx_stream_js_srv_conf_t, ssl_protocols), + &ngx_stream_js_ssl_protocols }, + + { ngx_string("js_fetch_verify_depth"), + NGX_STREAM_MAIN_CONF|NGX_STREAM_SRV_CONF|NGX_CONF_TAKE1, + ngx_conf_set_num_slot, + NGX_STREAM_SRV_CONF_OFFSET, + offsetof(ngx_stream_js_srv_conf_t, ssl_verify_depth), + NULL }, + + { ngx_string("js_fetch_trusted_certificate"), + NGX_STREAM_MAIN_CONF|NGX_STREAM_SRV_CONF|NGX_CONF_TAKE1, + ngx_conf_set_str_slot, + NGX_STREAM_SRV_CONF_OFFSET, + offsetof(ngx_stream_js_srv_conf_t, ssl_trusted_certificate), + NULL }, + +#endif + ngx_null_command }; @@ -408,11 
+464,12 @@ static uintptr_t ngx_stream_js_uptr[] = (uintptr_t) ngx_stream_js_resolver, (uintptr_t) ngx_stream_js_resolver_timeout, (uintptr_t) ngx_stream_js_handle_event, + (uintptr_t) ngx_stream_js_ssl, }; static njs_vm_meta_t ngx_stream_js_metas = { - .size = 5, + .size = 6, .values = ngx_stream_js_uptr }; @@ -1891,8 +1948,14 @@ ngx_stream_js_create_srv_conf(ngx_conf_t * conf->access = { 0, NULL }; * conf->preread = { 0, NULL }; * conf->filter = { 0, NULL }; + * conf->ssl_ciphers = { 0, NULL }; + * conf->ssl_protocols = 0; + * conf->ssl_trusted_certificate = { 0, NULL }; */ +#if (NGX_SSL) + conf->ssl_verify_depth = NGX_CONF_UNSET; +#endif return conf; } @@ -1907,7 +1970,22 @@ ngx_stream_js_merge_srv_conf(ngx_conf_t ngx_conf_merge_str_value(conf->preread, prev->preread, ""); ngx_conf_merge_str_value(conf->filter, prev->filter, ""); +#if (NGX_HTTP_SSL) + ngx_conf_merge_str_value(conf->ssl_ciphers, prev->ssl_ciphers, "DEFAULT"); + + ngx_conf_merge_bitmask_value(conf->ssl_protocols, prev->ssl_protocols, + (NGX_CONF_BITMASK_SET|NGX_SSL_TLSv1 + |NGX_SSL_TLSv1_1|NGX_SSL_TLSv1_2)); + + ngx_conf_merge_value(conf->ssl_verify_depth, prev->ssl_verify_depth, 100); + + ngx_conf_merge_str_value(conf->ssl_trusted_certificate, + prev->ssl_trusted_certificate, ""); + + return ngx_stream_js_set_ssl(cf, conf); +#else return NGX_CONF_OK; +#endif } @@ -1938,3 +2016,64 @@ ngx_stream_js_init(ngx_conf_t *cf) return NGX_OK; } + + +#if (NGX_SSL) + +static char * +ngx_stream_js_set_ssl(ngx_conf_t *cf, ngx_stream_js_srv_conf_t *pscf) +{ + ngx_ssl_t *ssl; + ngx_pool_cleanup_t *cln; + + ssl = ngx_pcalloc(cf->pool, sizeof(ngx_ssl_t)); + if (ssl == NULL) { + return NGX_CONF_ERROR; + } + + pscf->ssl = ssl; + ssl->log = cf->log; + + if (ngx_ssl_create(ssl, pscf->ssl_protocols, NULL) != NGX_OK) { + return NGX_CONF_ERROR; + } + + cln = ngx_pool_cleanup_add(cf->pool, 0); + if (cln == NULL) { + ngx_ssl_cleanup_ctx(ssl); + return NGX_CONF_ERROR; + } + + cln->handler = ngx_ssl_cleanup_ctx; + cln->data = 
ssl; + + if (ngx_ssl_ciphers(NULL, ssl, &pscf->ssl_ciphers, 0) != NGX_OK) { + return NGX_CONF_ERROR; + } + + if (ngx_ssl_trusted_certificate(cf, ssl, &pscf->ssl_trusted_certificate, + pscf->ssl_verify_depth) + != NGX_OK) + { + return NGX_CONF_ERROR; + } + + return NGX_CONF_OK; +} + +#endif + + +static ngx_ssl_t * +ngx_stream_js_ssl(njs_vm_t *vm, ngx_stream_session_t *s) +{ +#if (NGX_SSL) + ngx_stream_js_srv_conf_t *pscf; + + pscf = ngx_stream_get_module_srv_conf(s, ngx_stream_js_module); + + return pscf->ssl; +#else + return NULL; +#endif +} From pluknet at nginx.com Wed Oct 6 12:58:32 2021 From: pluknet at nginx.com (Sergey Kandaurov) Date: Wed, 06 Oct 2021 12:58:32 +0000 Subject: [njs] Fixed timeouts with Fetch, SSL and select. Message-ID: details: https://hg.nginx.org/njs/rev/15a26b25a328 branches: changeset: 1712:15a26b25a328 user: Sergey Kandaurov date: Wed Oct 06 15:57:14 2021 +0300 description: Fixed timeouts with Fetch, SSL and select. Similar to the connection hang fixed in 058a67435e83 in nginx, it is possible that an established connection is ready for reading after the handshake. Further, events might be already disabled in case of level-triggered event methods. Fix is to post a read event if the c->read->ready flag is set. 
diffstat:

 nginx/ngx_js_fetch.c |  4 ++++
 1 files changed, 4 insertions(+), 0 deletions(-)

diffs (14 lines):

diff -r 05a313868939 -r 15a26b25a328 nginx/ngx_js_fetch.c
--- a/nginx/ngx_js_fetch.c	Wed Sep 01 20:43:56 2021 +0200
+++ b/nginx/ngx_js_fetch.c	Wed Oct 06 15:57:14 2021 +0300
@@ -1007,6 +1007,10 @@ ngx_js_http_ssl_handshake(ngx_js_http_t
         c->write->handler = ngx_js_http_write_handler;
         c->read->handler = ngx_js_http_read_handler;

+        if (c->read->ready) {
+            ngx_post_event(c->read, &ngx_posted_events);
+        }
+
         http->process = ngx_js_http_process_status_line;
         ngx_js_http_write_handler(c->write);

From mdounin at mdounin.ru  Wed Oct  6 13:11:52 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 6 Oct 2021 16:11:52 +0300
Subject: Sending a notification to the main nginx thread
In-Reply-To: 
References: 
Message-ID: 

Hello!

On Tue, Oct 05, 2021 at 02:04:14PM +0000, Eran Kornblau wrote:

> Hi all,
>
> I'm planning a module in which I want to send a notification from a side thread to the main nginx thread.
> I checked the implementation of the thread pool module, and saw that it uses ngx_notify for that.
> But, checking how that function is implemented (checked epoll), I saw that it can't really be used for any other purpose...
> If I send my function to ngx_notify, it will overwrite the thread pool handler, and can lead to race conditions...
>
> Googling for this problem, I saw this old patch -
> https://mailman.nginx.org/pipermail/nginx-devel/2016-August/008679.html
> which didn't get any replies...
>
> Was wondering - is there some alternative solution for what I'm trying to do?
> Will you consider applying this/some other patch to address this issue?

First of all, you may want to take a look at this warning in the development guide:

http://nginx.org/en/docs/dev/development_guide.html#threads_pitfalls

Quoting it here:

It is recommended to avoid using threads in nginx because it will definitely break things: most nginx functions are not thread-safe.
It is expected that a thread will be executing only system calls and thread-safe library functions. If you need to run some code that is not related to client request processing, the proper way is to schedule a timer in the init_process module handler and perform required actions in timer handler. Internally nginx makes use of threads to boost IO-related operations, but this is a special case with a lot of limitations. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Wed Oct 6 13:17:12 2021 From: mdounin at mdounin.ru (=?utf-8?q?Maxim_Dounin?=) Date: Wed, 06 Oct 2021 16:17:12 +0300 Subject: [PATCH] Changed ngx_chain_update_chains() to test tag first (ticket #2248) Message-ID: # HG changeset patch # User Maxim Dounin # Date 1633526031 -10800 # Wed Oct 06 16:13:51 2021 +0300 # Node ID ac42b4b31026ec24345331e9bd5c38ac4b6e7502 # Parent bfad703459b4e2416548ac66f548e96c2197d9cc Changed ngx_chain_update_chains() to test tag first (ticket #2248). Without this change, aio used with HTTP/2 can result in connection hang, as observed with "aio threads; aio_write on;" and proxying (ticket #2248). The problem is that HTTP/2 updates buffers outside of the output filters (notably, marks them as sent), and then posts a write event to call output filters. If a filter does not call the next one for some reason (for example, because of an AIO operation in progress), this might result in a state when the owner of a buffer already called ngx_chain_update_chains() and can reuse the buffer, the same buffer is still sitting in the busy chain of some other filter. In the particular case a buffer was sitting in output chain's ctx->busy, and was reused by even pipe. Output chain's ctx->busy was permanently blocked by it, and this resulted in connection hang. Fix is to change ngx_chain_update_chains() to skip buffers from other modules unconditionally, without trying to wait for these buffers to become empty. 
diff --git a/src/core/ngx_buf.c b/src/core/ngx_buf.c --- a/src/core/ngx_buf.c +++ b/src/core/ngx_buf.c @@ -203,16 +203,16 @@ ngx_chain_update_chains(ngx_pool_t *p, n while (*busy) { cl = *busy; - if (ngx_buf_size(cl->buf) != 0) { - break; - } - if (cl->buf->tag != tag) { *busy = cl->next; ngx_free_chain(p, cl); continue; } + if (ngx_buf_size(cl->buf) != 0) { + break; + } + cl->buf->pos = cl->buf->start; cl->buf->last = cl->buf->start; From xeioex at nginx.com Wed Oct 6 13:23:33 2021 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Wed, 06 Oct 2021 13:23:33 +0000 Subject: [njs] Style. Message-ID: details: https://hg.nginx.org/njs/rev/5aceb5eaf2b2 branches: changeset: 1713:5aceb5eaf2b2 user: Dmitry Volyntsev date: Wed Oct 06 13:16:09 2021 +0000 description: Style. diffstat: nginx/ngx_js.h | 8 ++++---- 1 files changed, 4 insertions(+), 4 deletions(-) diffs (27 lines): diff -r 15a26b25a328 -r 5aceb5eaf2b2 nginx/ngx_js.h --- a/nginx/ngx_js.h Wed Oct 06 15:57:14 2021 +0300 +++ b/nginx/ngx_js.h Wed Oct 06 13:16:09 2021 +0000 @@ -22,7 +22,7 @@ typedef ngx_pool_t *(*ngx_external_pool_pt)(njs_vm_t *vm, njs_external_ptr_t e); typedef void (*ngx_js_event_handler_pt)(njs_external_ptr_t e, - njs_vm_event_t vm_event, njs_value_t *args, njs_uint_t nargs); + njs_vm_event_t vm_event, njs_value_t *args, njs_uint_t nargs); typedef ngx_resolver_t *(*ngx_external_resolver_pt)(njs_vm_t *vm, njs_external_ptr_t e); typedef ngx_msec_t (*ngx_external_resolver_timeout_pt)(njs_vm_t *vm, @@ -33,11 +33,11 @@ typedef ngx_ssl_t *(*ngx_external_ssl_pt #define ngx_external_connection(vm, e) \ (*((ngx_connection_t **) ((u_char *) (e) + njs_vm_meta(vm, 0)))) #define ngx_external_pool(vm, e) \ - ((ngx_external_pool_pt) njs_vm_meta(vm, 1))(vm, e) + ((ngx_external_pool_pt) njs_vm_meta(vm, 1))(vm, e) #define ngx_external_resolver(vm, e) \ - ((ngx_external_resolver_pt) njs_vm_meta(vm, 2))(vm, e) + ((ngx_external_resolver_pt) njs_vm_meta(vm, 2))(vm, e) #define ngx_external_resolver_timeout(vm, e) \ - 
((ngx_external_resolver_timeout_pt) njs_vm_meta(vm, 3))(vm, e) + ((ngx_external_resolver_timeout_pt) njs_vm_meta(vm, 3))(vm, e) #define ngx_external_event_handler(vm, e) \ ((ngx_js_event_handler_pt) njs_vm_meta(vm, 4)) #define ngx_external_ssl(vm, e) \ From mdounin at mdounin.ru Wed Oct 6 17:12:16 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 06 Oct 2021 17:12:16 +0000 Subject: [nginx] Fixed $content_length cacheability with chunked (ticket #2252). Message-ID: details: https://hg.nginx.org/nginx/rev/ae7c767aa491 branches: changeset: 7930:ae7c767aa491 user: Maxim Dounin date: Wed Oct 06 18:01:42 2021 +0300 description: Fixed $content_length cacheability with chunked (ticket #2252). diffstat: src/http/ngx_http_variables.c | 4 ++++ 1 files changed, 4 insertions(+), 0 deletions(-) diffs (14 lines): diff -r bfad703459b4 -r ae7c767aa491 src/http/ngx_http_variables.c --- a/src/http/ngx_http_variables.c Wed Sep 22 10:20:00 2021 +0300 +++ b/src/http/ngx_http_variables.c Wed Oct 06 18:01:42 2021 +0300 @@ -1179,6 +1179,10 @@ ngx_http_variable_content_length(ngx_htt v->no_cacheable = 0; v->not_found = 0; + } else if (r->headers_in.chunked) { + v->not_found = 1; + v->no_cacheable = 1; + } else { v->not_found = 1; } From eran.kornblau at kaltura.com Wed Oct 6 19:53:00 2021 From: eran.kornblau at kaltura.com (Eran Kornblau) Date: Wed, 6 Oct 2021 19:53:00 +0000 Subject: Sending a notification to the main nginx thread In-Reply-To: References: Message-ID: > > -----Original Message----- > From: nginx-devel On Behalf Of Maxim Dounin > Sent: Wednesday, 6 October 2021 16:12 > To: nginx-devel at nginx.org > Subject: Re: Sending a notification to the main nginx thread > > Hello! 
>
> First of all, you may want to take a look at this warning in the development guide:
>
> http://nginx.org/en/docs/dev/development_guide.html#threads_pitfalls
>
> Quoting it here:
>
> It is recommended to avoid using threads in nginx because it will definitely break things: most nginx functions are not thread-safe. It is expected that a thread will be executing only system calls and thread-safe library functions.
> If you need to run some code that is not related to client request processing, the proper way is to schedule a timer in the init_process module handler and perform required actions in timer handler. Internally nginx makes use of threads to boost IO-related operations, but this is a special case with a lot of limitations.
>

Thanks Maxim, I completely get that; that is the reason I was looking for a way to send a notification to the main thread, and didn't just try to call nginx functions from some other thread.

In my case, I need to integrate with a 3rd-party library that has its own event loop, and it would require significant changes to the library to make it run inside nginx's event loop... So, my plan is to run it on a side thread and send notifications between the threads, which would trigger a handler on the main thread whenever new data arrives.

I can use ngx_notify for this, but if someone uses the module together with nginx's thread pool, or with some 3rd-party module that uses ngx_notify, it will break. I think I can live with that, but it would be nice to have a complete solution.
Eran

>
> --
> Maxim Dounin
> http://mdounin.ru/
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>

From arut at nginx.com  Thu Oct  7 11:36:13 2021
From: arut at nginx.com (Roman Arutyunyan)
Date: Thu, 07 Oct 2021 14:36:13 +0300
Subject: [PATCH 0 of 5] QUIC flood detection
Message-ID: 

This series adds support for flood detection in QUIC and HTTP/3, similar to HTTP/2.

- patch 1 removes client-side encoder support from HTTP/3 for simplicity
- patch 2 fixes a minor issue with $request_length calculation
- patch 3 adds HTTP/3 traffic-based flood detection
- patch 4 adds QUIC traffic-based flood detection
- patch 5 adds a limit on the number of frames, similar to HTTP/2

As for patch 3, both input and output traffic is analyzed, similar to HTTP/2. Probably only input should be analyzed, because the current HTTP/3 implementation does not seem to allow amplification (the only exception is Stream Cancellation, but keepalive_requests limits the damage anyway). Also, we can never be sure the output traffic we counted actually reached the client and was not rejected by a stream reset. We can discuss this later.
From arut at nginx.com Thu Oct 7 11:36:14 2021 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 07 Oct 2021 14:36:14 +0300 Subject: [PATCH 1 of 5] HTTP/3: removed client-side encoder support In-Reply-To: References: Message-ID: # HG changeset patch # User Roman Arutyunyan # Date 1633520939 -10800 # Wed Oct 06 14:48:59 2021 +0300 # Branch quic # Node ID d53039c3224e8227979c113f621e532aef7c0f9b # Parent 1ead7d64e9934c1a6c0d9dd3c5f1a3d643b926d6 HTTP/3: removed client-side encoder support. Dynamic tables are not used when generating responses anyway. diff --git a/src/http/v3/ngx_http_v3_streams.c b/src/http/v3/ngx_http_v3_streams.c --- a/src/http/v3/ngx_http_v3_streams.c +++ b/src/http/v3/ngx_http_v3_streams.c @@ -480,155 +480,6 @@ failed: ngx_int_t -ngx_http_v3_send_ref_insert(ngx_connection_t *c, ngx_uint_t dynamic, - ngx_uint_t index, ngx_str_t *value) -{ - u_char *p, buf[NGX_HTTP_V3_PREFIX_INT_LEN * 2]; - size_t n; - ngx_connection_t *ec; - - ngx_log_debug3(NGX_LOG_DEBUG_HTTP, c->log, 0, - "http3 client ref insert, %s[%ui] \"%V\"", - dynamic ? "dynamic" : "static", index, value); - - ec = ngx_http_v3_get_uni_stream(c, NGX_HTTP_V3_STREAM_ENCODER); - if (ec == NULL) { - return NGX_ERROR; - } - - p = buf; - - *p = (dynamic ? 0x80 : 0xc0); - p = (u_char *) ngx_http_v3_encode_prefix_int(p, index, 6); - - /* XXX option for huffman? 
*/ - *p = 0; - p = (u_char *) ngx_http_v3_encode_prefix_int(p, value->len, 7); - - n = p - buf; - - if (ec->send(ec, buf, n) != (ssize_t) n) { - goto failed; - } - - if (ec->send(ec, value->data, value->len) != (ssize_t) value->len) { - goto failed; - } - - return NGX_OK; - -failed: - - ngx_http_v3_close_uni_stream(ec); - - return NGX_ERROR; -} - - -ngx_int_t -ngx_http_v3_send_insert(ngx_connection_t *c, ngx_str_t *name, ngx_str_t *value) -{ - u_char buf[NGX_HTTP_V3_PREFIX_INT_LEN]; - size_t n; - ngx_connection_t *ec; - - ngx_log_debug2(NGX_LOG_DEBUG_HTTP, c->log, 0, - "http3 client insert \"%V\":\"%V\"", name, value); - - ec = ngx_http_v3_get_uni_stream(c, NGX_HTTP_V3_STREAM_ENCODER); - if (ec == NULL) { - return NGX_ERROR; - } - - /* XXX option for huffman? */ - buf[0] = 0x40; - n = (u_char *) ngx_http_v3_encode_prefix_int(buf, name->len, 5) - buf; - - if (ec->send(ec, buf, n) != (ssize_t) n) { - goto failed; - } - - if (ec->send(ec, name->data, name->len) != (ssize_t) name->len) { - goto failed; - } - - /* XXX option for huffman? 
*/ - buf[0] = 0; - n = (u_char *) ngx_http_v3_encode_prefix_int(buf, value->len, 7) - buf; - - if (ec->send(ec, buf, n) != (ssize_t) n) { - goto failed; - } - - if (ec->send(ec, value->data, value->len) != (ssize_t) value->len) { - goto failed; - } - - return NGX_OK; - -failed: - - ngx_http_v3_close_uni_stream(ec); - - return NGX_ERROR; -} - - -ngx_int_t -ngx_http_v3_send_set_capacity(ngx_connection_t *c, ngx_uint_t capacity) -{ - u_char buf[NGX_HTTP_V3_PREFIX_INT_LEN]; - size_t n; - ngx_connection_t *ec; - - ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0, - "http3 client set capacity %ui", capacity); - - ec = ngx_http_v3_get_uni_stream(c, NGX_HTTP_V3_STREAM_ENCODER); - if (ec == NULL) { - return NGX_ERROR; - } - - buf[0] = 0x20; - n = (u_char *) ngx_http_v3_encode_prefix_int(buf, capacity, 5) - buf; - - if (ec->send(ec, buf, n) != (ssize_t) n) { - ngx_http_v3_close_uni_stream(ec); - return NGX_ERROR; - } - - return NGX_OK; -} - - -ngx_int_t -ngx_http_v3_send_duplicate(ngx_connection_t *c, ngx_uint_t index) -{ - u_char buf[NGX_HTTP_V3_PREFIX_INT_LEN]; - size_t n; - ngx_connection_t *ec; - - ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0, - "http3 client duplicate %ui", index); - - ec = ngx_http_v3_get_uni_stream(c, NGX_HTTP_V3_STREAM_ENCODER); - if (ec == NULL) { - return NGX_ERROR; - } - - buf[0] = 0; - n = (u_char *) ngx_http_v3_encode_prefix_int(buf, index, 5) - buf; - - if (ec->send(ec, buf, n) != (ssize_t) n) { - ngx_http_v3_close_uni_stream(ec); - return NGX_ERROR; - } - - return NGX_OK; -} - - -ngx_int_t ngx_http_v3_send_ack_section(ngx_connection_t *c, ngx_uint_t stream_id) { u_char buf[NGX_HTTP_V3_PREFIX_INT_LEN]; diff --git a/src/http/v3/ngx_http_v3_streams.h b/src/http/v3/ngx_http_v3_streams.h --- a/src/http/v3/ngx_http_v3_streams.h +++ b/src/http/v3/ngx_http_v3_streams.h @@ -27,13 +27,6 @@ ngx_int_t ngx_http_v3_cancel_stream(ngx_ ngx_int_t ngx_http_v3_send_settings(ngx_connection_t *c); ngx_int_t ngx_http_v3_send_goaway(ngx_connection_t *c, uint64_t id); 
-ngx_int_t ngx_http_v3_send_ref_insert(ngx_connection_t *c, ngx_uint_t dynamic, - ngx_uint_t index, ngx_str_t *value); -ngx_int_t ngx_http_v3_send_insert(ngx_connection_t *c, ngx_str_t *name, - ngx_str_t *value); -ngx_int_t ngx_http_v3_send_set_capacity(ngx_connection_t *c, - ngx_uint_t capacity); -ngx_int_t ngx_http_v3_send_duplicate(ngx_connection_t *c, ngx_uint_t index); ngx_int_t ngx_http_v3_send_ack_section(ngx_connection_t *c, ngx_uint_t stream_id); ngx_int_t ngx_http_v3_send_cancel_stream(ngx_connection_t *c, From arut at nginx.com Thu Oct 7 11:36:15 2021 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 07 Oct 2021 14:36:15 +0300 Subject: [PATCH 2 of 5] HTTP/3: fixed request length calculation In-Reply-To: References: Message-ID: <1b87f4e196cce2b7aae3.1633606575@arut-laptop> # HG changeset patch # User Roman Arutyunyan # Date 1633521076 -10800 # Wed Oct 06 14:51:16 2021 +0300 # Branch quic # Node ID 1b87f4e196cce2b7aae33a63ca6dfc857b99f2b7 # Parent d53039c3224e8227979c113f621e532aef7c0f9b HTTP/3: fixed request length calculation. Previously, when request was blocked, r->request_length was not updated. 
diff --git a/src/http/v3/ngx_http_v3_request.c b/src/http/v3/ngx_http_v3_request.c --- a/src/http/v3/ngx_http_v3_request.c +++ b/src/http/v3/ngx_http_v3_request.c @@ -297,6 +297,8 @@ ngx_http_v3_process_request(ngx_event_t break; } + r->request_length += b->pos - p; + if (rc == NGX_BUSY) { if (rev->error) { ngx_http_close_request(r, NGX_HTTP_CLOSE); @@ -310,8 +312,6 @@ ngx_http_v3_process_request(ngx_event_t break; } - r->request_length += b->pos - p; - if (rc == NGX_AGAIN) { continue; } From arut at nginx.com Thu Oct 7 11:36:16 2021 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 07 Oct 2021 14:36:16 +0300 Subject: [PATCH 3 of 5] HTTP/3: traffic-based flood detection In-Reply-To: References: Message-ID: <31561ac584b74d29af9a.1633606576@arut-laptop> # HG changeset patch # User Roman Arutyunyan # Date 1633602162 -10800 # Thu Oct 07 13:22:42 2021 +0300 # Branch quic # Node ID 31561ac584b74d29af9a442afca47821a98217b2 # Parent 1b87f4e196cce2b7aae33a63ca6dfc857b99f2b7 HTTP/3: traffic-based flood detection. With this patch, all traffic over HTTP/3 bidi and uni streams is counted in the h3c->total_bytes field, and payload traffic is counted in the h3c->payload_bytes field. As long as total traffic is many times larger than payload traffic, we consider this to be a flood. Request header traffic is counted as if all fields are literal. Response header traffic is counted as is. 
diff --git a/src/http/v3/ngx_http_v3.c b/src/http/v3/ngx_http_v3.c --- a/src/http/v3/ngx_http_v3.c +++ b/src/http/v3/ngx_http_v3.c @@ -86,3 +86,22 @@ ngx_http_v3_cleanup_session(void *data) ngx_del_timer(&h3c->keepalive); } } + + +ngx_int_t +ngx_http_v3_check_flood(ngx_connection_t *c) +{ + ngx_http_v3_session_t *h3c; + + h3c = ngx_http_v3_get_session(c); + + if (h3c->total_bytes / 8 > h3c->payload_bytes + 1048576) { + ngx_log_error(NGX_LOG_INFO, c->log, 0, "http3 flood detected"); + + ngx_http_v3_finalize_connection(c, NGX_HTTP_V3_ERR_NO_ERROR, + "HTTP/3 flood detected"); + return NGX_ERROR; + } + + return NGX_OK; +} diff --git a/src/http/v3/ngx_http_v3.h b/src/http/v3/ngx_http_v3.h --- a/src/http/v3/ngx_http_v3.h +++ b/src/http/v3/ngx_http_v3.h @@ -128,6 +128,9 @@ struct ngx_http_v3_session_s { uint64_t max_push_id; uint64_t goaway_push_id; + off_t total_bytes; + off_t payload_bytes; + ngx_uint_t goaway; /* unsigned goaway:1; */ ngx_connection_t *known_streams[NGX_HTTP_V3_MAX_KNOWN_STREAM]; @@ -136,6 +139,7 @@ struct ngx_http_v3_session_s { void ngx_http_v3_init(ngx_connection_t *c); ngx_int_t ngx_http_v3_init_session(ngx_connection_t *c); +ngx_int_t ngx_http_v3_check_flood(ngx_connection_t *c); ngx_int_t ngx_http_v3_read_request_body(ngx_http_request_t *r); ngx_int_t ngx_http_v3_read_unbuffered_request_body(ngx_http_request_t *r); diff --git a/src/http/v3/ngx_http_v3_filter_module.c b/src/http/v3/ngx_http_v3_filter_module.c --- a/src/http/v3/ngx_http_v3_filter_module.c +++ b/src/http/v3/ngx_http_v3_filter_module.c @@ -101,6 +101,7 @@ ngx_http_v3_header_filter(ngx_http_reque ngx_list_part_t *part; ngx_table_elt_t *header; ngx_connection_t *c; + ngx_http_v3_session_t *h3c; ngx_http_v3_filter_ctx_t *ctx; ngx_http_core_loc_conf_t *clcf; ngx_http_core_srv_conf_t *cscf; @@ -120,6 +121,8 @@ ngx_http_v3_header_filter(ngx_http_reque return NGX_OK; } + h3c = ngx_http_v3_get_session(r->connection); + if (r->method == NGX_HTTP_HEAD) { r->header_only = 1; } @@ -531,6 +534,8 
@@ ngx_http_v3_header_filter(ngx_http_reque n = b->last - b->pos; + h3c->payload_bytes += n; + len = ngx_http_v3_encode_varlen_int(NULL, NGX_HTTP_V3_FRAME_HEADERS) + ngx_http_v3_encode_varlen_int(NULL, n); @@ -571,6 +576,9 @@ ngx_http_v3_header_filter(ngx_http_reque b->last = (u_char *) ngx_http_v3_encode_varlen_int(b->last, r->headers_out.content_length_n); + h3c->payload_bytes += r->headers_out.content_length_n; + h3c->total_bytes += r->headers_out.content_length_n; + cl = ngx_alloc_chain_link(r->pool); if (cl == NULL) { return NGX_ERROR; @@ -590,6 +598,10 @@ ngx_http_v3_header_filter(ngx_http_reque ngx_http_set_ctx(r, ctx, ngx_http_v3_filter_module); } + for (cl = out; cl; cl = cl->next) { + h3c->total_bytes += cl->buf->last - cl->buf->pos; + } + return ngx_http_write_filter(r, out); } @@ -1096,9 +1108,12 @@ static ngx_chain_t * ngx_http_v3_create_push_promise(ngx_http_request_t *r, ngx_str_t *path, uint64_t push_id) { - size_t n, len; - ngx_buf_t *b; - ngx_chain_t *hl, *cl; + size_t n, len; + ngx_buf_t *b; + ngx_chain_t *hl, *cl; + ngx_http_v3_session_t *h3c; + + h3c = ngx_http_v3_get_session(r->connection); ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "http3 create push promise id:%uL", push_id); @@ -1233,6 +1248,8 @@ ngx_http_v3_create_push_promise(ngx_http n = b->last - b->pos; + h3c->payload_bytes += n; + len = ngx_http_v3_encode_varlen_int(NULL, NGX_HTTP_V3_FRAME_PUSH_PROMISE) + ngx_http_v3_encode_varlen_int(NULL, n); @@ -1265,6 +1282,7 @@ ngx_http_v3_body_filter(ngx_http_request ngx_int_t rc; ngx_buf_t *b; ngx_chain_t *out, *cl, *tl, **ll; + ngx_http_v3_session_t *h3c; ngx_http_v3_filter_ctx_t *ctx; if (in == NULL) { @@ -1276,6 +1294,8 @@ ngx_http_v3_body_filter(ngx_http_request return ngx_http_next_body_filter(r, in); } + h3c = ngx_http_v3_get_session(r->connection); + out = NULL; ll = &out; @@ -1340,6 +1360,8 @@ ngx_http_v3_body_filter(ngx_http_request tl->next = out; out = tl; + + h3c->payload_bytes += size; } if (cl->buf->last_buf) { @@ 
-1356,6 +1378,10 @@ ngx_http_v3_body_filter(ngx_http_request *ll = NULL; } + for (cl = out; cl; cl = cl->next) { + h3c->total_bytes += cl->buf->last - cl->buf->pos; + } + rc = ngx_http_next_body_filter(r, out); ngx_chain_update_chains(r->pool, &ctx->free, &ctx->busy, &out, @@ -1369,13 +1395,16 @@ static ngx_chain_t * ngx_http_v3_create_trailers(ngx_http_request_t *r, ngx_http_v3_filter_ctx_t *ctx) { - size_t len, n; - u_char *p; - ngx_buf_t *b; - ngx_uint_t i; - ngx_chain_t *cl, *hl; - ngx_list_part_t *part; - ngx_table_elt_t *header; + size_t len, n; + u_char *p; + ngx_buf_t *b; + ngx_uint_t i; + ngx_chain_t *cl, *hl; + ngx_list_part_t *part; + ngx_table_elt_t *header; + ngx_http_v3_session_t *h3c; + + h3c = ngx_http_v3_get_session(r->connection); len = 0; @@ -1461,6 +1490,8 @@ ngx_http_v3_create_trailers(ngx_http_req n = b->last - b->pos; + h3c->payload_bytes += n; + hl = ngx_chain_get_free_buf(r->pool, &ctx->free); if (hl == NULL) { return NULL; diff --git a/src/http/v3/ngx_http_v3_request.c b/src/http/v3/ngx_http_v3_request.c --- a/src/http/v3/ngx_http_v3_request.c +++ b/src/http/v3/ngx_http_v3_request.c @@ -218,6 +218,7 @@ ngx_http_v3_process_request(ngx_event_t ngx_int_t rc; ngx_connection_t *c; ngx_http_request_t *r; + ngx_http_v3_session_t *h3c; ngx_http_core_srv_conf_t *cscf; ngx_http_v3_parse_headers_t *st; @@ -233,6 +234,8 @@ ngx_http_v3_process_request(ngx_event_t return; } + h3c = ngx_http_v3_get_session(c); + st = &r->v3_parse->headers; b = r->header_in; @@ -298,6 +301,12 @@ ngx_http_v3_process_request(ngx_event_t } r->request_length += b->pos - p; + h3c->total_bytes += b->pos - p; + + if (ngx_http_v3_check_flood(c) != NGX_OK) { + ngx_http_close_request(r, NGX_HTTP_CLOSE); + break; + } if (rc == NGX_BUSY) { if (rev->error) { @@ -318,6 +327,10 @@ ngx_http_v3_process_request(ngx_event_t /* rc == NGX_OK || rc == NGX_DONE */ + h3c->payload_bytes += ngx_http_v3_encode_field_l(NULL, + &st->field_rep.field.name, + &st->field_rep.field.value); + if 
(ngx_http_v3_process_header(r, &st->field_rep.field.name, &st->field_rep.field.value) != NGX_OK) @@ -1080,6 +1093,7 @@ ngx_http_v3_request_body_filter(ngx_http ngx_buf_t *b; ngx_uint_t last; ngx_chain_t *cl, *out, *tl, **ll; + ngx_http_v3_session_t *h3c; ngx_http_request_body_t *rb; ngx_http_core_loc_conf_t *clcf; ngx_http_core_srv_conf_t *cscf; @@ -1088,6 +1102,8 @@ ngx_http_v3_request_body_filter(ngx_http rb = r->request_body; st = &r->v3_parse->body; + h3c = ngx_http_v3_get_session(r->connection); + if (rb->rest == -1) { ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, @@ -1135,6 +1151,11 @@ ngx_http_v3_request_body_filter(ngx_http rc = ngx_http_v3_parse_data(r->connection, st, cl->buf); r->request_length += cl->buf->pos - p; + h3c->total_bytes += cl->buf->pos - p; + + if (ngx_http_v3_check_flood(r->connection) != NGX_OK) { + return NGX_HTTP_CLOSE; + } if (rc == NGX_AGAIN) { continue; @@ -1178,6 +1199,8 @@ ngx_http_v3_request_body_filter(ngx_http { rb->received += st->length; r->request_length += st->length; + h3c->total_bytes += st->length; + h3c->payload_bytes += st->length; if (st->length < 8) { @@ -1222,12 +1245,16 @@ ngx_http_v3_request_body_filter(ngx_http cl->buf->pos += (size_t) st->length; rb->received += st->length; r->request_length += st->length; + h3c->total_bytes += st->length; + h3c->payload_bytes += st->length; st->length = 0; } else { st->length -= size; rb->received += size; r->request_length += size; + h3c->total_bytes += size; + h3c->payload_bytes += size; cl->buf->pos = cl->buf->last; } diff --git a/src/http/v3/ngx_http_v3_streams.c b/src/http/v3/ngx_http_v3_streams.c --- a/src/http/v3/ngx_http_v3_streams.c +++ b/src/http/v3/ngx_http_v3_streams.c @@ -171,6 +171,7 @@ ngx_http_v3_uni_read_handler(ngx_event_t ngx_buf_t b; ngx_int_t rc; ngx_connection_t *c; + ngx_http_v3_session_t *h3c; ngx_http_v3_uni_stream_t *us; c = rev->data; @@ -207,6 +208,14 @@ ngx_http_v3_uni_read_handler(ngx_event_t b.pos = buf; b.last = buf + n; + h3c = 
ngx_http_v3_get_session(c); + h3c->total_bytes += n; + + if (ngx_http_v3_check_flood(c) != NGX_OK) { + ngx_http_v3_close_uni_stream(c); + return; + } + rc = ngx_http_v3_parse_uni(c, &us->parse, &b); if (rc == NGX_DONE) { @@ -282,6 +291,9 @@ ngx_http_v3_create_push_stream(ngx_conne p = (u_char *) ngx_http_v3_encode_varlen_int(p, push_id); n = p - buf; + h3c = ngx_http_v3_get_session(c); + h3c->total_bytes += n; + if (sc->send(sc, buf, n) != (ssize_t) n) { goto failed; } @@ -291,7 +303,6 @@ ngx_http_v3_create_push_stream(ngx_conne goto failed; } - h3c = ngx_http_v3_get_session(c); h3c->npushing++; cln->handler = ngx_http_v3_push_cleanup; @@ -383,6 +394,9 @@ ngx_http_v3_get_uni_stream(ngx_connectio n = (u_char *) ngx_http_v3_encode_varlen_int(buf, type) - buf; + h3c = ngx_http_v3_get_session(c); + h3c->total_bytes += n; + if (sc->send(sc, buf, n) != (ssize_t) n) { goto failed; } @@ -403,6 +417,7 @@ ngx_http_v3_send_settings(ngx_connection u_char *p, buf[NGX_HTTP_V3_VARLEN_INT_LEN * 6]; size_t n; ngx_connection_t *cc; + ngx_http_v3_session_t *h3c; ngx_http_v3_srv_conf_t *h3scf; ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0, "http3 send settings"); @@ -431,6 +446,9 @@ ngx_http_v3_send_settings(ngx_connection p = (u_char *) ngx_http_v3_encode_varlen_int(p, h3scf->max_blocked_streams); n = p - buf; + h3c = ngx_http_v3_get_session(c); + h3c->total_bytes += n; + if (cc->send(cc, buf, n) != (ssize_t) n) { goto failed; } @@ -448,9 +466,10 @@ failed: ngx_int_t ngx_http_v3_send_goaway(ngx_connection_t *c, uint64_t id) { - u_char *p, buf[NGX_HTTP_V3_VARLEN_INT_LEN * 3]; - size_t n; - ngx_connection_t *cc; + u_char *p, buf[NGX_HTTP_V3_VARLEN_INT_LEN * 3]; + size_t n; + ngx_connection_t *cc; + ngx_http_v3_session_t *h3c; ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0, "http3 send goaway %uL", id); @@ -465,6 +484,9 @@ ngx_http_v3_send_goaway(ngx_connection_t p = (u_char *) ngx_http_v3_encode_varlen_int(p, id); n = p - buf; + h3c = ngx_http_v3_get_session(c); + h3c->total_bytes += n; + 
if (cc->send(cc, buf, n) != (ssize_t) n) { goto failed; } @@ -482,9 +504,10 @@ failed: ngx_int_t ngx_http_v3_send_ack_section(ngx_connection_t *c, ngx_uint_t stream_id) { - u_char buf[NGX_HTTP_V3_PREFIX_INT_LEN]; - size_t n; - ngx_connection_t *dc; + u_char buf[NGX_HTTP_V3_PREFIX_INT_LEN]; + size_t n; + ngx_connection_t *dc; + ngx_http_v3_session_t *h3c; ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0, "http3 client ack section %ui", stream_id); @@ -497,6 +520,9 @@ ngx_http_v3_send_ack_section(ngx_connect buf[0] = 0x80; n = (u_char *) ngx_http_v3_encode_prefix_int(buf, stream_id, 7) - buf; + h3c = ngx_http_v3_get_session(c); + h3c->total_bytes += n; + if (dc->send(dc, buf, n) != (ssize_t) n) { ngx_http_v3_close_uni_stream(dc); return NGX_ERROR; @@ -509,9 +535,10 @@ ngx_http_v3_send_ack_section(ngx_connect ngx_int_t ngx_http_v3_send_cancel_stream(ngx_connection_t *c, ngx_uint_t stream_id) { - u_char buf[NGX_HTTP_V3_PREFIX_INT_LEN]; - size_t n; - ngx_connection_t *dc; + u_char buf[NGX_HTTP_V3_PREFIX_INT_LEN]; + size_t n; + ngx_connection_t *dc; + ngx_http_v3_session_t *h3c; ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0, "http3 client cancel stream %ui", stream_id); @@ -524,6 +551,9 @@ ngx_http_v3_send_cancel_stream(ngx_conne buf[0] = 0x40; n = (u_char *) ngx_http_v3_encode_prefix_int(buf, stream_id, 6) - buf; + h3c = ngx_http_v3_get_session(c); + h3c->total_bytes += n; + if (dc->send(dc, buf, n) != (ssize_t) n) { ngx_http_v3_close_uni_stream(dc); return NGX_ERROR; @@ -536,9 +566,10 @@ ngx_http_v3_send_cancel_stream(ngx_conne ngx_int_t ngx_http_v3_send_inc_insert_count(ngx_connection_t *c, ngx_uint_t inc) { - u_char buf[NGX_HTTP_V3_PREFIX_INT_LEN]; - size_t n; - ngx_connection_t *dc; + u_char buf[NGX_HTTP_V3_PREFIX_INT_LEN]; + size_t n; + ngx_connection_t *dc; + ngx_http_v3_session_t *h3c; ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0, "http3 client increment insert count %ui", inc); @@ -551,6 +582,9 @@ ngx_http_v3_send_inc_insert_count(ngx_co buf[0] = 0; n = (u_char *) 
ngx_http_v3_encode_prefix_int(buf, inc, 6) - buf; + h3c = ngx_http_v3_get_session(c); + h3c->total_bytes += n; + if (dc->send(dc, buf, n) != (ssize_t) n) { ngx_http_v3_close_uni_stream(dc); return NGX_ERROR; From arut at nginx.com Thu Oct 7 11:36:17 2021 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 07 Oct 2021 14:36:17 +0300 Subject: [PATCH 4 of 5] QUIC: traffic-based flood detection In-Reply-To: References: Message-ID: # HG changeset patch # User Roman Arutyunyan # Date 1633602816 -10800 # Thu Oct 07 13:33:36 2021 +0300 # Branch quic # Node ID e20f00b8ac9005621993ea19375b1646c9182e7b # Parent 31561ac584b74d29af9a442afca47821a98217b2 QUIC: traffic-based flood detection. With this patch, all traffic over a QUIC connection is compared to traffic over QUIC streams. As long as total traffic is many times larger than stream traffic, we consider this to be a flood. diff --git a/src/event/quic/ngx_event_quic.c b/src/event/quic/ngx_event_quic.c --- a/src/event/quic/ngx_event_quic.c +++ b/src/event/quic/ngx_event_quic.c @@ -662,13 +662,17 @@ ngx_quic_close_timer_handler(ngx_event_t static ngx_int_t ngx_quic_input(ngx_connection_t *c, ngx_buf_t *b, ngx_quic_conf_t *conf) { - u_char *p; - ngx_int_t rc; - ngx_uint_t good; - ngx_quic_header_t pkt; + size_t size; + u_char *p; + ngx_int_t rc; + ngx_uint_t good; + ngx_quic_header_t pkt; + ngx_quic_connection_t *qc; good = 0; + size = b->last - b->pos; + p = b->pos; while (p < b->last) { @@ -701,7 +705,8 @@ ngx_quic_input(ngx_connection_t *c, ngx_ if (rc == NGX_DONE) { /* stop further processing */ - return NGX_DECLINED; + good = 0; + break; } if (rc == NGX_OK) { @@ -733,7 +738,27 @@ ngx_quic_input(ngx_connection_t *c, ngx_ p = b->pos; } - return good ? 
NGX_OK : NGX_DECLINED; + if (!good) { + return NGX_DECLINED; + } + + qc = ngx_quic_get_connection(c); + + if (qc) { + qc->received += size; + + if ((uint64_t) (c->sent + qc->received) / 8 > + (qc->streams.sent + qc->streams.recv_last) + 1048576) + { + ngx_log_error(NGX_LOG_INFO, c->log, 0, "quic flood detected"); + + qc->error = NGX_QUIC_ERR_NO_ERROR; + qc->error_reason = "QUIC flood detected"; + return NGX_ERROR; + } + } + + return NGX_OK; } diff --git a/src/event/quic/ngx_event_quic_connection.h b/src/event/quic/ngx_event_quic_connection.h --- a/src/event/quic/ngx_event_quic_connection.h +++ b/src/event/quic/ngx_event_quic_connection.h @@ -236,6 +236,8 @@ struct ngx_quic_connection_s { ngx_quic_streams_t streams; ngx_quic_congestion_t congestion; + off_t received; + ngx_uint_t error; enum ssl_encryption_level_t error_level; ngx_uint_t error_ftype; From arut at nginx.com Thu Oct 7 11:36:18 2021 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 07 Oct 2021 14:36:18 +0300 Subject: [PATCH 5 of 5] QUIC: limited the total number of frames In-Reply-To: References: Message-ID: <25aeebb9432182a6246f.1633606578@arut-laptop> # HG changeset patch # User Roman Arutyunyan # Date 1633603050 -10800 # Thu Oct 07 13:37:30 2021 +0300 # Branch quic # Node ID 25aeebb9432182a6246fedba6b1024f3d61e959b # Parent e20f00b8ac9005621993ea19375b1646c9182e7b QUIC: limited the total number of frames. Exceeding 10000 allocated frames is considered a flood. 
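The 10000-frame cap described above is a plain counter guard; a standalone model (struct and function names are illustrative, not the nginx ones):

```c
#include <stddef.h>

/* Standalone model of the 10000-frame cap: a connection may hold at most
 * MAX_FRAMES allocated frames; an attempt beyond that is treated as flood.
 * Illustrative names, not the nginx structures. */
#define MAX_FRAMES 10000

typedef struct {
    size_t nframes;             /* models qc->nframes */
} conn_model;

/* Returns 1 if a new frame may be allocated, 0 once the cap is hit
 * (the patch logs "quic flood detected" on this path). */
static int
frame_alloc_allowed(conn_model *qc)
{
    if (qc->nframes >= MAX_FRAMES) {
        return 0;
    }

    qc->nframes++;
    return 1;
}

/* Counts how many allocations succeed on a fresh connection. */
static size_t
frames_until_flood(void)
{
    conn_model qc = { 0 };
    size_t n = 0;

    while (frame_alloc_allowed(&qc)) {
        n++;
    }

    return n;
}
```

In the patch itself the counter is qc->nframes in ngx_quic_connection_s, and the failure path makes ngx_quic_alloc_frame() return NULL.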
diff --git a/src/event/quic/ngx_event_quic_connection.h b/src/event/quic/ngx_event_quic_connection.h --- a/src/event/quic/ngx_event_quic_connection.h +++ b/src/event/quic/ngx_event_quic_connection.h @@ -228,10 +228,8 @@ struct ngx_quic_connection_s { ngx_chain_t *free_bufs; ngx_buf_t *free_shadow_bufs; -#ifdef NGX_QUIC_DEBUG_ALLOC ngx_uint_t nframes; ngx_uint_t nbufs; -#endif ngx_quic_streams_t streams; ngx_quic_congestion_t congestion; diff --git a/src/event/quic/ngx_event_quic_frames.c b/src/event/quic/ngx_event_quic_frames.c --- a/src/event/quic/ngx_event_quic_frames.c +++ b/src/event/quic/ngx_event_quic_frames.c @@ -38,18 +38,22 @@ ngx_quic_alloc_frame(ngx_connection_t *c, "quic reuse frame n:%ui", qc->nframes); #endif - } else { + } else if (qc->nframes < 10000) { frame = ngx_palloc(c->pool, sizeof(ngx_quic_frame_t)); if (frame == NULL) { return NULL; } -#ifdef NGX_QUIC_DEBUG_ALLOC ++qc->nframes; +#ifdef NGX_QUIC_DEBUG_ALLOC ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0, "quic alloc frame n:%ui", qc->nframes); #endif + + } else { + ngx_log_error(NGX_LOG_INFO, c->log, 0, "quic flood detected"); + return NULL; } ngx_memzero(frame, sizeof(ngx_quic_frame_t)); @@ -372,9 +376,9 @@ ngx_quic_alloc_buf(ngx_connection_t *c) cl->buf = b; -#ifdef NGX_QUIC_DEBUG_ALLOC ++qc->nbufs; +#ifdef NGX_QUIC_DEBUG_ALLOC ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0, "quic alloc buffer n:%ui", qc->nbufs); #endif From arut at nginx.com Thu Oct 7 11:45:05 2021 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 7 Oct 2021 14:45:05 +0300 Subject: [PATCH 0 of 5] QUIC flood detection In-Reply-To: References: Message-ID: <20211007114505.hxncyiigz5z6uycf@Romans-MacBook-Pro.local> On Thu, Oct 07, 2021 at 02:36:13PM +0300, Roman Arutyunyan wrote: > This series adds support for flood detection in QUIC and HTTP/3 similar to > HTTP/2.
> > - patch 1 removes client-side encoder support from HTTP/3 for simplicity > - patch 2 fixes a minor issue with $request_length calculation > - patch 3 adds HTTP/3 traffic-based flood detection > - patch 4 adds QUIC traffic-based flood detection > - patch 5 adds a limit on the number of frames similar to HTTP/2 > > As for patch 3, both input and output traffic is analyzed similar to HTTP/2. > Probably only input should be analyzed because the current HTTP/3 implementation > does not seem to allow amplification (the only exception is Stream Cancellation, > but keepalive_requests limits the damage anyway). Also, we can never be sure > the output traffic we counted actually reached the client and was not rejected > by stream reset. We can discuss this later. Testing: I patched nghttp3/ngtcp2 to enable flooding: examples/client --http3-flood=10000000 127.0.0.1 8443 https://example.com:8443/bar examples/client --quic-flood=10000000 127.0.0.1 8443 https://example.com:8443/bar Patches (quite dirty) are attached. With --http3-flood, a big reserved (0x1f + 0x21, see quic-http 34, section 7.2.8) frame is sent before the HEADERS frame. With --quic-flood, a big number of PING frames are sent.
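The traffic-based checks in patches 3 and 4 reduce to one comparison: a connection is flagged once its total bytes exceed eight times the useful payload/stream bytes plus a fixed 1 MiB allowance. A standalone sketch using the constants from the patches (the function name is ours, not nginx's):

```c
#include <stdint.h>

/* total_bytes / 8 > payload_bytes + 1 MiB  =>  flood.
 * Mirrors the checks in ngx_http_v3_check_flood() and ngx_quic_input()
 * above; standalone sketch, not nginx code. */
static int
flood_detected(uint64_t total_bytes, uint64_t payload_bytes)
{
    return total_bytes / 8 > payload_bytes + 1048576;
}
```

The 1 MiB term lets small exchanges (handshake, control frames) through; only once a connection has moved several megabytes does the 8:1 ratio start to bite.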
-- Roman Arutyunyan -------------- next part -------------- diff --git a/lib/nghttp3_conn.c b/lib/nghttp3_conn.c index ab608a2..93f8b52 100644 --- a/lib/nghttp3_conn.c +++ b/lib/nghttp3_conn.c @@ -2012,7 +2012,7 @@ int nghttp3_conn_add_ack_offset(nghttp3_conn *conn, int64_t stream_id, static int conn_submit_headers_data(nghttp3_conn *conn, nghttp3_stream *stream, const nghttp3_nv *nva, size_t nvlen, const nghttp3_data_reader *dr) { - int rv; + int rv, i; nghttp3_nv *nnva; nghttp3_frame_entry frent; @@ -2021,6 +2021,22 @@ static int conn_submit_headers_data(nghttp3_conn *conn, nghttp3_stream *stream, return rv; } + for (i = 0; i < nvlen; i++) { + if (strcmp((char *) nva[i].name, "x-flood") == 0) { + break; + } + } + + if (i < nvlen) { + frent.fr.hd.type = NGHTTP3_FRAME_FLOOD; + frent.fr.flood.size = atoi((char *) nva[i].value); + + rv = nghttp3_stream_frq_add(stream, &frent); + if (rv != 0) { + return rv; + } + } + frent.fr.hd.type = NGHTTP3_FRAME_HEADERS; frent.fr.headers.nva = nnva; frent.fr.headers.nvlen = nvlen; diff --git a/lib/nghttp3_frame.c b/lib/nghttp3_frame.c index 38c395e..dd27f36 100644 --- a/lib/nghttp3_frame.c +++ b/lib/nghttp3_frame.c @@ -79,6 +79,17 @@ uint8_t *nghttp3_frame_write_goaway(uint8_t *p, return p; } +uint8_t *nghttp3_frame_write_flood(uint8_t *p, + const nghttp3_frame_flood *fr) { + p = nghttp3_frame_write_hd(p, &fr->hd); + + for (int i = 0; i < fr->size; i++) { + *p++ = 0xfe; + } + + return p; +} + size_t nghttp3_frame_write_goaway_len(int64_t *ppayloadlen, const nghttp3_frame_goaway *fr) { size_t payloadlen = nghttp3_put_varint_len(fr->id); @@ -89,6 +100,16 @@ size_t nghttp3_frame_write_goaway_len(int64_t *ppayloadlen, nghttp3_put_varint_len((int64_t)payloadlen) + payloadlen; } +size_t nghttp3_frame_write_flood_len(int64_t *ppayloadlen, + const nghttp3_frame_flood *fr) { + size_t payloadlen = fr->size; + + *ppayloadlen = (int64_t)payloadlen; + + return nghttp3_put_varint_len(NGHTTP3_FRAME_FLOOD) + + 
nghttp3_put_varint_len((int64_t)payloadlen) + payloadlen; +} + uint8_t * nghttp3_frame_write_priority_update(uint8_t *p, const nghttp3_frame_priority_update *fr) { diff --git a/lib/nghttp3_frame.h b/lib/nghttp3_frame.h index 216b346..06861a0 100644 --- a/lib/nghttp3_frame.h +++ b/lib/nghttp3_frame.h @@ -46,6 +46,7 @@ typedef enum nghttp3_frame_type { https://tools.ietf.org/html/draft-ietf-httpbis-priority-03 */ NGHTTP3_FRAME_PRIORITY_UPDATE = 0x0f0700, NGHTTP3_FRAME_PRIORITY_UPDATE_PUSH_ID = 0x0f0701, + NGHTTP3_FRAME_FLOOD = 0x1f + 0x21 } nghttp3_frame_type; typedef enum nghttp3_h2_reserved_type { @@ -95,6 +96,11 @@ typedef struct nghttp3_frame_goaway { int64_t id; } nghttp3_frame_goaway; +typedef struct nghttp3_frame_flood { + nghttp3_frame_hd hd; + size_t size; +} nghttp3_frame_flood; + typedef struct nghttp3_frame_priority_update { nghttp3_frame_hd hd; /* pri_elem_id is stream ID if hd.type == @@ -111,6 +117,7 @@ typedef union nghttp3_frame { nghttp3_frame_headers headers; nghttp3_frame_settings settings; nghttp3_frame_goaway goaway; + nghttp3_frame_flood flood; nghttp3_frame_priority_update priority_update; } nghttp3_frame; @@ -154,6 +161,15 @@ size_t nghttp3_frame_write_settings_len(int64_t *pppayloadlen, uint8_t *nghttp3_frame_write_goaway(uint8_t *dest, const nghttp3_frame_goaway *fr); +/* + * nghttp3_frame_write_flood writes FLOOD frame |fr| to |dest|. + * This function assumes that |dest| has enough space to write |fr|. + * + * This function returns |dest| plus the number of bytes written. + */ +uint8_t *nghttp3_frame_write_flood(uint8_t *dest, + const nghttp3_frame_flood *fr); + /* * nghttp3_frame_write_goaway_len returns the number of bytes required * to write |fr|. fr->hd.length is ignored. 
This function stores @@ -162,6 +178,14 @@ uint8_t *nghttp3_frame_write_goaway(uint8_t *dest, size_t nghttp3_frame_write_goaway_len(int64_t *ppayloadlen, const nghttp3_frame_goaway *fr); +/* + * nghttp3_frame_write_flood_len returns the number of bytes required + * to write |fr|. fr->hd.length is ignored. This function stores + * payload length in |*ppayloadlen|. + */ +size_t nghttp3_frame_write_flood_len(int64_t *ppayloadlen, + const nghttp3_frame_flood *fr); + /* * nghttp3_frame_write_priority_update writes PRIORITY_UPDATE frame * |fr| to |dest|. This function assumes that |dest| has enough space diff --git a/lib/nghttp3_stream.c b/lib/nghttp3_stream.c index c91be7e..19ddc46 100644 --- a/lib/nghttp3_stream.c +++ b/lib/nghttp3_stream.c @@ -281,6 +281,12 @@ int nghttp3_stream_fill_outq(nghttp3_stream *stream) { return rv; } break; + case NGHTTP3_FRAME_FLOOD: + rv = nghttp3_stream_write_flood(stream, frent); + if (rv != 0) { + return rv; + } + break; case NGHTTP3_FRAME_PRIORITY_UPDATE: rv = nghttp3_stream_write_priority_update(stream, frent); if (rv != 0) { @@ -390,6 +396,31 @@ int nghttp3_stream_write_goaway(nghttp3_stream *stream, return nghttp3_stream_outq_add(stream, &tbuf); } +int nghttp3_stream_write_flood(nghttp3_stream *stream, + nghttp3_frame_entry *frent) { + nghttp3_frame_flood *fr = &frent->fr.flood; + size_t len; + int rv; + nghttp3_buf *chunk; + nghttp3_typed_buf tbuf; + + len = nghttp3_frame_write_flood_len(&fr->hd.length, fr); + + rv = nghttp3_stream_ensure_chunk(stream, len); + if (rv != 0) { + return rv; + } + + chunk = nghttp3_stream_get_chunk(stream); + typed_buf_shared_init(&tbuf, chunk); + + chunk->last = nghttp3_frame_write_flood(chunk->last, fr); + + tbuf.buf.last = chunk->last; + + return nghttp3_stream_outq_add(stream, &tbuf); +} + int nghttp3_stream_write_priority_update(nghttp3_stream *stream, nghttp3_frame_entry *frent) { nghttp3_frame_priority_update *fr = &frent->fr.priority_update; diff --git a/lib/nghttp3_stream.h 
b/lib/nghttp3_stream.h index 047475e..a07a763 100644 --- a/lib/nghttp3_stream.h +++ b/lib/nghttp3_stream.h @@ -301,6 +301,9 @@ int nghttp3_stream_write_settings(nghttp3_stream *stream, int nghttp3_stream_write_goaway(nghttp3_stream *stream, nghttp3_frame_entry *frent); +int nghttp3_stream_write_flood(nghttp3_stream *stream, + nghttp3_frame_entry *frent); + int nghttp3_stream_write_priority_update(nghttp3_stream *stream, nghttp3_frame_entry *frent); -------------- next part -------------- diff --git a/examples/client.cc b/examples/client.cc index 990688eb..556a5168 100644 --- a/examples/client.cc +++ b/examples/client.cc @@ -1611,7 +1611,7 @@ int Client::submit_http_request(const Stream *stream) { const auto &req = stream->req; - std::array nva{ + std::array nva{ util::make_nv(":method", config.http_method), util::make_nv(":scheme", req.scheme), util::make_nv(":authority", req.authority), @@ -1623,6 +1623,11 @@ int Client::submit_http_request(const Stream *stream) { content_length_str = std::to_string(config.datalen); nva[nvlen++] = util::make_nv("content-length", content_length_str); } + if (config.http3_flood) { + static char buf[1024]; + snprintf(buf, sizeof(buf), "%d", (int) config.http3_flood); + nva[nvlen++] = util::make_nv("x-flood", std::string(buf)); + } if (!config.quiet) { debug::print_http_request_headers(stream->stream_id, nva.data(), nvlen); @@ -1937,6 +1942,10 @@ int Client::setup_httpconn() { return -1; } + if (config.quic_flood) { + ngxtcp2_conn_flood(conn_, config.quic_flood); + } + nghttp3_callbacks callbacks{ ::http_acked_stream_data, ::http_stream_close, ::http_recv_data, ::http_deferred_consume, ::http_begin_headers, ::http_recv_header, @@ -2384,6 +2393,8 @@ int main(int argc, char **argv) { {"max-window", required_argument, &flag, 32}, {"max-stream-window", required_argument, &flag, 33}, {"scid", required_argument, &flag, 34}, + {"http3-flood", required_argument, &flag, 35}, + {"quic-flood", required_argument, &flag, 36}, {nullptr, 0, nullptr, 
0}, }; @@ -2672,6 +2683,24 @@ int main(int argc, char **argv) { config.scid_present = true; break; } + case 35: + // --http3-flood + if (auto n = util::parse_uint_iec(optarg); !n) { + std::cerr << "http3-flood: invalid argument" << std::endl; + exit(EXIT_FAILURE); + } else { + config.http3_flood = *n; + } + break; + case 36: + // --quic-flood + if (auto n = util::parse_uint_iec(optarg); !n) { + std::cerr << "quic-flood: invalid argument" << std::endl; + exit(EXIT_FAILURE); + } else { + config.quic_flood = *n; + } + break; } break; default: diff --git a/examples/client_base.h b/examples/client_base.h index e118bac2..87ba246a 100644 --- a/examples/client_base.h +++ b/examples/client_base.h @@ -163,6 +163,10 @@ struct Config { std::string_view sni; // initial_rtt is an initial RTT. ngtcp2_duration initial_rtt; + // HTTP/3 flood + uint64_t http3_flood; + // QUIC flood + uint64_t quic_flood; }; struct Buffer { diff --git a/lib/includes/ngtcp2/ngtcp2.h b/lib/includes/ngtcp2/ngtcp2.h index 235d6e6b..e737215a 100644 --- a/lib/includes/ngtcp2/ngtcp2.h +++ b/lib/includes/ngtcp2/ngtcp2.h @@ -5173,6 +5173,13 @@ NGTCP2_EXTERN void ngtcp2_path_copy(ngtcp2_path *dest, const ngtcp2_path *src); */ NGTCP2_EXTERN int ngtcp2_path_eq(const ngtcp2_path *a, const ngtcp2_path *b); +/** + * @function + * + * `ngtcp2_conn_flood` floods with PING frames. 
+ */ +NGTCP2_EXTERN int ngxtcp2_conn_flood(ngtcp2_conn *conn, size_t size); + #ifdef __cplusplus } #endif diff --git a/lib/ngtcp2_conn.c b/lib/ngtcp2_conn.c index b4b4145b..2ea05132 100644 --- a/lib/ngtcp2_conn.c +++ b/lib/ngtcp2_conn.c @@ -8844,6 +8844,28 @@ static int conn_enqueue_handshake_done(ngtcp2_conn *conn) { return 0; } +/* + * ngtcp2_conn_flood floods with PING frames + */ +int ngxtcp2_conn_flood(ngtcp2_conn *conn, size_t size) { + ngtcp2_pktns *pktns = &conn->pktns; + ngtcp2_frame_chain *nfrc; + int rv; + + while (size--) { + rv = ngtcp2_frame_chain_new(&nfrc, conn->mem); + if (rv != 0) { + return rv; + } + + nfrc->fr.type = NGTCP2_FRAME_PING; + nfrc->next = pktns->tx.frq; + pktns->tx.frq = nfrc; + } + + return 0; +} + /** * @function * From mdounin at mdounin.ru Thu Oct 7 13:07:12 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 7 Oct 2021 16:07:12 +0300 Subject: Sending a notification to the main nginx thread In-Reply-To: References: Message-ID: Hello! On Wed, Oct 06, 2021 at 07:53:00PM +0000, Eran Kornblau wrote: > > > > -----Original Message----- > > From: nginx-devel On Behalf Of Maxim Dounin > > Sent: Wednesday, 6 October 2021 16:12 > > To: nginx-devel at nginx.org > > Subject: Re: Sending a notification to the main nginx thread > > > > Hello! > > > > First of all, you may want to take a look at this warning in the development guide: > > > > http://nginx.org/en/docs/dev/development_guide.html#threads_pitfalls > > > > Quoting it here: > > > > It is recommended to avoid using threads in nginx because it will definitely break things: most nginx functions are not thread-safe.
It is expected that a thread will be executing only system calls and thread-safe library functions. > > If you need to run some code that is not related to client request processing, the proper way is to schedule a timer in the init_process module handler and perform required actions in timer handler. Internally nginx makes use of threads to boost IO-related operations, but this is a special case with a lot of limitations. > > > Thanks Maxim, I completely get that, that is the reason I was looking for a way to send a notification to the main thread, and didn't just try to call nginx functions from some other thread. > In my case, I need to integrate with some 3rd party library that has its own event loop, it would require significant changes to the library to make it run inside nginx's event loop... > So, my plan is to run it on a side thread, and send notifications between the threads, which would trigger some handler on the main thread whenever new data arrives. > I can use ngx_notify for this, but if someone will use the module and also nginx's thread pool or some 3rd party module that uses ngx_notify, it will break. > I think I can live with that, but would be nice to have a solution that is complete. To re-iterate: inside a "side thread" you are not allowed to call any non-thread-safe libc functions. It is reasonable to assume this excludes all 3rd party libraries with their own event loops. -- Maxim Dounin http://mdounin.ru/ From eran.kornblau at kaltura.com Thu Oct 7 15:16:37 2021 From: eran.kornblau at kaltura.com (Eran Kornblau) Date: Thu, 7 Oct 2021 15:16:37 +0000 Subject: Sending a notification to the main nginx thread In-Reply-To: References: Message-ID: > -----Original Message----- > From: nginx-devel On Behalf Of Maxim Dounin > Sent: Thursday, 7 October 2021 16:07 > To: nginx-devel at nginx.org > Subject: Re: Sending a notification to the main nginx thread > > Hello!
> > To re-iterate: inside a "side thread" you are not allowed to call any non-thread-safe libc functions. It is reasonable to assume this excludes all 3rd party libraries with their own event loops. This is a valid point. I checked it out, with the specific build options I'm using on Linux, these are the non-thread-safe libc functions used by nginx - (list of all non-thread-safe functions taken from https://man7.org/linux/man-pages/man7/pthreads.7.html, assuming POSIX.1-2008) - $ (objdump -T /usr/local/nginx/sbin/nginx | grep GLIBC | awk '{print $NF}' ; cat /tmp/libc-non-thread-safe) | sort | uniq -c | grep -vw 1 2 dlerror 2 getenv 2 getgrnam 2 getpwnam 2 localtime 2 strerror First 4 functions are called only early on nginx startup from what I see, so not relevant in this context. That leaves me only with strerror & localtime. I verified these 2 functions are not used by the specific library I'm linking against. So, assuming I don't use any nginx functions on the side thread (other than ngx_notify), it feels quite safe to me...
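The handoff described here (a side thread that only produces a wakeup, with the real work done back on the main thread) can be modeled with an eventfd, the same primitive nginx's epoll code uses for ngx_notify on Linux when eventfd is available. A minimal Linux-only sketch, illustrative rather than nginx code:

```c
#include <poll.h>
#include <pthread.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* The side thread limits itself to a single write() on an eventfd;
 * the "main loop" polls the descriptor and runs the handler in its
 * own context. Model only, not nginx code. */

static void *
side_thread(void *arg)
{
    int       fd = *(int *) arg;
    uint64_t  one = 1;

    (void) write(fd, &one, sizeof(one));    /* plain system call, thread-safe */
    return NULL;
}

/* Returns 1 if the notification was delivered and consumed. */
static int
notify_once(void)
{
    int            fd;
    uint64_t       n = 0;
    pthread_t      tid;
    struct pollfd  pfd;

    fd = eventfd(0, 0);
    if (fd == -1) {
        return 0;
    }

    if (pthread_create(&tid, NULL, side_thread, &fd) != 0) {
        close(fd);
        return 0;
    }

    pfd.fd = fd;
    pfd.events = POLLIN;

    /* "main loop": wait for the wakeup, then consume the counter */
    if (poll(&pfd, 1, 5000) != 1 || read(fd, &n, sizeof(n)) != sizeof(n)) {
        n = 0;
    }

    pthread_join(tid, NULL);
    close(fd);

    return n == 1;
}
```

The key property is the one Eran relies on: the side thread touches nothing but a file descriptor, so no non-thread-safe state is shared with the event loop.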
Thanks, Eran > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > From jiri.setnicka at cdn77.com Thu Oct 7 16:37:11 2021 From: jiri.setnicka at cdn77.com (Jiří Setnička) Date: Thu, 7 Oct 2021 18:37:11 +0200 Subject: Possible error on revalidate in ngx_http_upstream Message-ID: <7f42d31a-60fe-8521-5337-d6b6a9948978@cdn77.com> Hello, I use nginx as a proxy with enabled cache. I encountered strange behavior on revalidate. When upstream does not return any caching headers it is ok - file is cached with default cachetime and on revalidate the r->cache->valid_sec is updated to now + default cachetime. Also when upstream consistently returns caching headers it is still ok - file is cached according to caching headers and on revalidate the r->cache->valid_sec is updated by value from 304 response caching headers. Problem is when upstream previously returned absolute caching headers on 200 response (so the file is cached according to these headers and these headers are saved into cache file on disk) but later it changed its behavior and on 304 response it does not return any caching headers.
In such a case, I would expect that now + default cachetime would be used as the new r->cache->valid_sec, but the old absolute time is used instead, and this results in a revalidate on each request. In ngx_http_upstream_test_next(...), in the revalidate part, the cache time from the upstream 304 response is first saved to a temporary variable (valid = r->cache->valid_sec), then the request is reinited and r->cache->valid_sec is set according to the headers in the cached file. The problem is when valid == 0 (no caching info from upstream) and there is an absolute time in the cached file headers. This patch should fix this behavior - the time computed from the cached file is used only when it is in the future; otherwise, the time calculated by ngx_http_file_cache_valid(...) is used. Thanks for your feedback Jiri Setnicka # HG changeset patch # User Jiří Setnička # Date 1633624103 -7200 #      Thu Oct 07 18:28:23 2021 +0200 # Node ID 7149a1553b48a7403a8b8cea09580b103aab23b1 # Parent  ae7c767aa491fa55d3168dfc028a22f43ac8cf89 Do not use cache->valid_sec in the past from cached file when revalidating diff -r ae7c767aa491 -r 7149a1553b48 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Wed Oct 06 18:01:42 2021 +0300 +++ b/src/http/ngx_http_upstream.c Thu Oct 07 18:28:23 2021 +0200 @@ -2606,7 +2606,7 @@ rc = NGX_HTTP_INTERNAL_SERVER_ERROR; } - if (valid == 0) { + if (valid == 0 && r->cache->valid_sec >= now) { valid = r->cache->valid_sec; updating = r->cache->updating_sec; error = r->cache->error_sec; From mdounin at mdounin.ru Thu Oct 7 17:17:00 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 7 Oct 2021 20:17:00 +0300 Subject: Sending a notification to the main nginx thread In-Reply-To: References: Message-ID: Hello!
On Thu, Oct 07, 2021 at 03:16:37PM +0000, Eran Kornblau wrote: > > -----Original Message----- > > From: nginx-devel On Behalf Of Maxim Dounin > > Sent: Thursday, 7 October 2021 16:07 > > To: nginx-devel at nginx.org > > Subject: Re: Sending a notification to the main nginx thread > > > > Hello! > > > > To re-iterate: inside a "side thread" you are not allowed to call any non-thread-safe libc functions. It is reasonable to assume this excludes all 3rd party libraries with their own event loops. > > This is a valid point. > > I checked it out, with the specific build options I'm using on Linux, these are the non-thread-safe libc functions used by nginx - > (list of all non-thread-safe functions taken from https://man7.org/linux/man-pages/man7/pthreads.7.html, assuming POSIX.1-2008) - > > $ (objdump -T /usr/local/nginx/sbin/nginx | grep GLIBC | awk '{print $NF}' ; cat /tmp/libc-non-thread-safe) | sort | uniq -c | grep -vw 1 > 2 dlerror > 2 getenv > 2 getgrnam > 2 getpwnam > 2 localtime > 2 strerror > > First 4 functions are called only early on nginx startup from what I see, so not relevant in this context. > That leaves me only with strerror & localtime. > > I verified these 2 functions are not used by the specific library I'm linking against. > So, assuming I don't use any nginx functions on the side thread (other than ngx_notify), it feels quite safe to me... In no particular order: - Assuming nginx uses only POSIX functions is wrong, it does use platform-specific functions and various portable functions not specified by POSIX, such as - The above objdump results look incorrect: for example, nginx certainly uses readdir(), which is in the POSIX non-thread-safe list, but not in your list. - There are other libraries nginx uses, which makes the problem much worse. If you insist on using threads in your module - you are free to do so, you were warned and it's your choice.
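For the two functions left on Eran's list, POSIX defines reentrant counterparts that write into caller-supplied storage and are safe to call from a side thread; a small sketch, not nginx code:

```c
#include <time.h>

/* localtime() returns a pointer to static storage and is on the
 * non-thread-safe list quoted above; localtime_r() fills a buffer
 * the caller owns, so concurrent threads cannot trample each other. */
static int
current_year_threadsafe(void)
{
    struct tm  tm_buf;
    time_t     now = time(NULL);

    if (localtime_r(&now, &tm_buf) == NULL) {
        return -1;
    }

    return tm_buf.tm_year + 1900;
}
```

strerror() has the same problem and the same style of fix (strerror_r()), though its XSI and GNU variants differ in return type, so the buffer-filling form needs the right feature-test macros.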
--
Maxim Dounin
http://mdounin.ru/

From dnj0496 at gmail.com  Thu Oct  7 17:27:26 2021
From: dnj0496 at gmail.com (Dk Jack)
Date: Thu, 7 Oct 2021 10:27:26 -0700
Subject: Sending a notification to the main nginx thread
In-Reply-To: 
References: 
Message-ID: 

I haven't played with them myself, but have you considered sub-requests? Since you are using a library with its own event loop, perhaps it's best to run it in its own process and use nginx sub-requests to bridge the two processes? Seems doable.

https://nginx.org/en/docs/dev/development_guide.html#http_subrequests

Dk.

On Wed, Oct 6, 2021 at 12:53 PM Eran Kornblau wrote:

> > -----Original Message-----
> > From: nginx-devel On Behalf Of Maxim Dounin
> > Sent: Wednesday, 6 October 2021 16:12
> > To: nginx-devel at nginx.org
> > Subject: Re: Sending a notification to the main nginx thread
> >
> > Hello!
> >
> > First of all, you may want to take a look at this warning in the development guide:
> >
> > http://nginx.org/en/docs/dev/development_guide.html#threads_pitfalls
> >
> > Quoting it here:
> >
> > It is recommended to avoid using threads in nginx because it will definitely break things: most nginx functions are not thread-safe. It is expected that a thread will be executing only system calls and thread-safe library functions. If you need to run some code that is not related to client request processing, the proper way is to schedule a timer in the init_process module handler and perform required actions in timer handler. Internally nginx makes use of threads to boost IO-related operations, but this is a special case with a lot of limitations.
>
> Thanks Maxim, I completely get that; that is the reason I was looking for a
> way to send a notification to the main thread, and didn't just try to call
> nginx functions from some other thread.
> In my case, I need to integrate with a 3rd party library that has its
> own event loop; it would require significant changes to the library to make
> it run inside nginx's event loop...
> So, my plan is to run it on a side thread and send notifications between
> the threads, which would trigger some handler on the main thread whenever
> new data arrives.
> I can use ngx_notify for this, but if someone uses the module together with
> nginx's thread pool or some 3rd party module that uses ngx_notify, it will
> break.
> I think I can live with that, but it would be nice to have a solution that
> is complete.
>
> Eran
>
> > --
> > Maxim Dounin
> > http://mdounin.ru/
> > _______________________________________________
> > nginx-devel mailing list
> > nginx-devel at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx-devel
>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>
-------------- next part --------------
An HTML attachment was scrubbed...
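[Editorial note: the hand-off described above - a side thread produces data, the main thread handles it after a wake-up - can be sketched outside nginx with a plain eventfd, which is roughly what ngx_notify() uses internally on Linux. A minimal Linux-only sketch, not nginx code; names are illustrative:]

```c
#include <pthread.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

static int  notify_fd;      /* owned by the "main loop" thread   */
static int  shared_data;    /* stands in for a locked work queue */

/* Side thread: publish the data, then wake the main loop.  write(2)
 * on an eventfd is thread-safe, so the side thread never touches libc
 * functions that the event-loop thread might also be using. */
static void *
side_thread(void *arg)
{
    uint64_t  one = 1;

    (void) arg;

    shared_data = 42;                          /* hand-off */
    (void) write(notify_fd, &one, sizeof(one));
    return NULL;
}

/* "Main loop" side: block until notified, then consume the data.  In
 * nginx the fd would instead be registered with epoll and drained from
 * an event handler on the worker's main thread. */
static int
notify_roundtrip(void)
{
    pthread_t  tid;
    uint64_t   counter;

    notify_fd = eventfd(0, 0);
    if (notify_fd == -1) {
        return -1;
    }

    if (pthread_create(&tid, NULL, side_thread, NULL) != 0) {
        close(notify_fd);
        return -1;
    }

    (void) read(notify_fd, &counter, sizeof(counter));  /* blocks */

    pthread_join(tid, NULL);
    close(notify_fd);

    return shared_data;
}
```

The key property is that only the fd write crosses the thread boundary; all real processing stays on the thread that owns the event loop.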
URL: 

From eran.kornblau at kaltura.com  Thu Oct  7 19:46:51 2021
From: eran.kornblau at kaltura.com (Eran Kornblau)
Date: Thu, 7 Oct 2021 19:46:51 +0000
Subject: Sending a notification to the main nginx thread
In-Reply-To: 
References: 
Message-ID: 

> > -----Original Message-----
> From: nginx-devel On Behalf Of Maxim Dounin
> Sent: Thursday, 7 October 2021 20:17
> To: nginx-devel at nginx.org
> Subject: Re: Sending a notification to the main nginx thread
>
> Hello!
>
> In no particular order:
>
> - Assuming nginx uses only POSIX functions is wrong, it does use
> platform-specific functions and various portable functions not
> specified by POSIX, such as
>
> - The above objdump results look incorrect: for example, nginx
> certainly uses readdir(), which is in the POSIX non-thread-safe
> list, but not in your list.
>
> - There are other libraries nginx uses, which makes the problem
> much worse.
>
> If you insist on using threads in your module - you are free to do so, you were warned and it's your choice.
>

Thanks Maxim, I did some more research following your feedback, just for my understanding... I'm sharing the results here in case someone finds it useful.

First of all, regarding readdir, you are right, of course. The reason it was missing is that it shows up as readdir64 in the import table, while the list of non-thread-safe POSIX functions lists only readdir.

I ran a looser check - grepping for the unsafe POSIX functions, but without assuming an exact match -

objdump -T /usr/local/nginx/sbin/nginx | grep -v ngx_ | grep -e asctime -e basename ...
-e wctomb

0000000000000000  DF *UND*  0000000000000000  GLIBC_2.2.5 getenv
0000000000000000  DF *UND*  0000000000000000  GLIBC_2.2.5 localtime
0000000000000000  DF *UND*  0000000000000000  GLIBC_2.2.5 getpwnam
0000000000000000  DF *UND*  0000000000000000  GLIBC_2.2.5 getgrnam
0000000000000000  DF *UND*  0000000000000000  GLIBC_2.2.5 readdir64
0000000000000000  DF *UND*  0000000000000000  GLIBC_2.2.5 strerror
0000000000000000  DF *UND*  0000000000000000  GLIBC_2.2.5 dlerror
0000000000000000  DF *UND*  0000000000000000  GLIBC_2.2.5 srandom
0000000000000000  DF *UND*  0000000000000000  GLIBC_2.2.5 random
0000000000000000  DF *UND*  0000000000000000  GLIBC_2.2.5 localtime_r
0000000000000000  DF *UND*  0000000000000000  GLIBC_2.2.5 crypt_r
0000000000000000  DF *UND*  0000000000000000  GLIBC_2.2.5 gmtime_r

Other than readdir64, this added only a few false positives.

Regarding external dependencies, I looked at the basic deps used by nginx (I didn't check the more esoteric deps, such as libxslt etc.). zlib & pcre claim to be thread-safe; openssl seems more problematic... I didn't dig too deep, but it seems that, at least on older versions, you have to explicitly provide some callbacks to make it thread-safe.
I then proceeded to pull a list of all POSIX functions -

(for i in {a..z}; do curl -s https://pubs.opengroup.org/onlinepubs/9699919799/idx/i$i.html | grep '()' | awk '-F(' '{print $1}' | awk '-F>' '{print $NF}'; done) > /tmp/posix-list

Compiled nginx with some basic settings -

auto/configure --with-file-aio --with-http_ssl_module --with-http_v2_module --with-stream --with-debug --with-threads --with-cc-opt="-O0" ; make install

And checked the imports while excluding POSIX, zlib, pcre & openssl -

(objdump -T /usr/local/nginx/sbin/nginx | grep UND | grep -v OPENSSL | grep -v pcre_ | grep -v deflate | awk '{print $NF}' ; cat /tmp/posix-list /tmp/posix-list) | sort | uniq -c | grep -w 1
      1 accept4
      1 __cmsg_nxthdr
      1 crypt_r
      1 epoll_create
      1 epoll_ctl
      1 epoll_wait
      1 __errno_location
      1 eventfd
      1 ftruncate64
      1 __fxstat64
      1 __fxstatat64
      1 getpagesize
      1 getrlimit64
      1 glob64
      1 globfree64
      1 __gmon_start__
      1 initgroups
      1 _ITM_deregisterTMCloneTable
      1 _ITM_registerTMCloneTable
      1 _Jv_RegisterClasses
      1 __libc_start_main
      1 __lxstat64
      1 mmap64
      1 open64
      1 openat64
      1 posix_fadvise64
      1 prctl
      1 pread64
      1 pwrite64
      1 pwritev64
      1 readdir64
      1 sched_setaffinity
      1 sendfile64
      1 setrlimit64
      1 __stack_chk_fail
      1 statfs64
      1 syscall
      1 usleep
      1 __xstat64

Other than readdir64, which was already discussed, I don't spot any non-thread-safe functions, but maybe I'm missing something (I have to admit I'm not familiar with some of the weird ones here...).

Bottom line: if we put aside openssl, which is a bit unclear (I don't use it in my use case, so I don't care much ;-)), the problematic library functions used by nginx (at least on Linux...) seem to be strerror, localtime and readdir.

Even though I agree with the general statement that thread use in nginx should be discouraged, sometimes it's the lesser evil... IMHO, it would be better to update nginx to use the thread-safe / _r variants on platforms that support them. I don't see any downside to it, other than a couple more #if's in the code...
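[Editorial note: the _r variants mentioned here differ only in taking caller-owned storage instead of returning pointers to static buffers. A minimal portable sketch; names are illustrative, not nginx code:]

```c
#define _POSIX_C_SOURCE 200112L  /* request the XSI int-returning strerror_r */

#include <string.h>
#include <time.h>

/* localtime_r() fills a caller-supplied struct tm instead of a static one. */
static size_t
format_local_time(time_t t, char *buf, size_t len)
{
    struct tm  tm;

    if (localtime_r(&t, &tm) == NULL) {
        return 0;
    }

    return strftime(buf, len, "%Y-%m-%d %H:%M:%S", &tm);
}

/* strerror_r() writes the message into a caller-owned buffer.  Note that
 * glibc also ships a nonstandard char *-returning variant when _GNU_SOURCE
 * is defined - exactly the kind of portability #if alluded to above. */
static const char *
errno_text(int err, char *buf, size_t len)
{
    buf[0] = '\0';

    if (strerror_r(err, buf, len) != 0) {
        return "unknown error";
    }

    return buf;
}
```

Either function may be called concurrently from any number of threads, since all state lives in arguments.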
The world would have been better if such unsafe functions had never been born :)

Thank you for your time!

Eran

> --
> Maxim Dounin
> http://mdounin.ru/
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>

From alexander.borisov at nginx.com  Fri Oct  8 09:33:46 2021
From: alexander.borisov at nginx.com (Alexander Borisov)
Date: Fri, 08 Oct 2021 09:33:46 +0000
Subject: [njs] Fixed unhandled promise rejection in handle events.
Message-ID: 

details:   https://hg.nginx.org/njs/rev/b5d102eb81c1
branches:  
changeset: 1714:b5d102eb81c1
user:      Alexander Borisov
date:      Fri Oct 08 12:32:42 2021 +0300
description:
Fixed unhandled promise rejection in handle events.

This closes #423 issue on GitHub.
diffstat: src/njs.h | 3 +++ src/njs_shell.c | 2 +- src/njs_vm.c | 26 +++++++++++--------------- test/js/promise_rejection_tracker.js | 1 + test/njs_expect_test.exp | 4 ++++ 5 files changed, 20 insertions(+), 16 deletions(-) diffs (84 lines): diff -r 5aceb5eaf2b2 -r b5d102eb81c1 src/njs.h --- a/src/njs.h Wed Oct 06 13:16:09 2021 +0000 +++ b/src/njs.h Fri Oct 08 12:32:42 2021 +0300 @@ -265,6 +265,9 @@ NJS_EXPORT njs_int_t njs_vm_posted(njs_v #define njs_vm_pending(vm) (njs_vm_waiting(vm) || njs_vm_posted(vm)) +#define njs_vm_unhandled_rejection(vm) \ + ((vm)->options.unhandled_rejection == NJS_VM_OPT_UNHANDLED_REJECTION_THROW \ + && (vm)->promise_reason != NULL && (vm)->promise_reason->length != 0) /* * Runs the specified function with provided arguments. diff -r 5aceb5eaf2b2 -r b5d102eb81c1 src/njs_shell.c --- a/src/njs_shell.c Wed Oct 06 13:16:09 2021 +0000 +++ b/src/njs_shell.c Fri Oct 08 12:32:42 2021 +0300 @@ -868,7 +868,7 @@ njs_process_script(njs_opts_t *opts, njs } for ( ;; ) { - if (!njs_vm_pending(vm)) { + if (!njs_vm_pending(vm) && !njs_vm_unhandled_rejection(vm)) { break; } diff -r 5aceb5eaf2b2 -r b5d102eb81c1 src/njs_vm.c --- a/src/njs_vm.c Wed Oct 06 13:16:09 2021 +0000 +++ b/src/njs_vm.c Fri Oct 08 12:32:42 2021 +0300 @@ -504,24 +504,20 @@ njs_vm_handle_events(njs_vm_t *vm) } } - if (vm->options.unhandled_rejection - == NJS_VM_OPT_UNHANDLED_REJECTION_THROW) - { - if (vm->promise_reason != NULL && vm->promise_reason->length != 0) { - ret = njs_value_to_string(vm, &string, - &vm->promise_reason->start[0]); - if (njs_slow_path(ret != NJS_OK)) { - return ret; - } + if (njs_vm_unhandled_rejection(vm)) { + ret = njs_value_to_string(vm, &string, + &vm->promise_reason->start[0]); + if (njs_slow_path(ret != NJS_OK)) { + return ret; + } - njs_string_get(&string, &str); - njs_vm_error(vm, "unhandled promise rejection: %V", &str); + njs_string_get(&string, &str); + njs_vm_error(vm, "unhandled promise rejection: %V", &str); - njs_mp_free(vm->mem_pool, 
vm->promise_reason); - vm->promise_reason = NULL; + njs_mp_free(vm->mem_pool, vm->promise_reason); + vm->promise_reason = NULL; - return NJS_ERROR; - } + return NJS_ERROR; } for ( ;; ) { diff -r 5aceb5eaf2b2 -r b5d102eb81c1 test/js/promise_rejection_tracker.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/promise_rejection_tracker.js Fri Oct 08 12:32:42 2021 +0300 @@ -0,0 +1,1 @@ +Promise.reject(1); \ No newline at end of file diff -r 5aceb5eaf2b2 -r b5d102eb81c1 test/njs_expect_test.exp --- a/test/njs_expect_test.exp Wed Oct 06 13:16:09 2021 +0000 +++ b/test/njs_expect_test.exp Fri Oct 08 12:32:42 2021 +0300 @@ -1085,6 +1085,10 @@ njs_run {"./test/js/promise_reject_catch njs_run {"./test/js/promise_reject_post_catch.js"} \ "Error: unhandled promise rejection: undefined" +njs_run {"./test/js/promise_rejection_tracker.js"} \ +"Thrown: +Error: unhandled promise rejection: 1" + njs_run {"./test/js/promise_all.js"} \ "resolved:\\\[\\\['one','two'],\\\['three','four']]" From xeioex at nginx.com Fri Oct 8 13:42:04 2021 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Fri, 08 Oct 2021 13:42:04 +0000 Subject: [njs] Types: updated TS definitions. Message-ID: details: https://hg.nginx.org/njs/rev/9291aef80a73 branches: changeset: 1715:9291aef80a73 user: Dmitry Volyntsev date: Fri Oct 08 13:40:58 2021 +0000 description: Types: updated TS definitions. diffstat: ts/ngx_core.d.ts | 8 +++++++- 1 files changed, 7 insertions(+), 1 deletions(-) diffs (25 lines): diff -r b5d102eb81c1 -r 9291aef80a73 ts/ngx_core.d.ts --- a/ts/ngx_core.d.ts Fri Oct 08 12:32:42 2021 +0300 +++ b/ts/ngx_core.d.ts Fri Oct 08 13:40:58 2021 +0000 @@ -95,6 +95,12 @@ interface NgxFetchOptions { * Request method, by default the GET method is used. */ method?: NjsStringLike; + /** + * Enables or disables verification of the HTTPS server certificate, + * by default is true. 
+ * @since 0.7.0 + */ + verify?: boolean; } interface NgxObject { @@ -111,7 +117,7 @@ interface NgxObject { /** * Makes a request to fetch an URL. * Returns a Promise that resolves with the NgxResponse object. - * Only the http:// scheme is supported, redirects are not handled. + * Since 0.7.0 HTTPS is supported, redirects are not handled. * @param url URL of a resource to fetch. * @param options An object containing additional settings. * @since 0.5.1 From xeioex at nginx.com Fri Oct 8 13:42:06 2021 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Fri, 08 Oct 2021 13:42:06 +0000 Subject: [njs] Modules: introduced common ngx_js_retval(). Message-ID: details: https://hg.nginx.org/njs/rev/5e3973c2216d branches: changeset: 1716:5e3973c2216d user: Dmitry Volyntsev date: Fri Oct 08 13:41:00 2021 +0000 description: Modules: introduced common ngx_js_retval(). diffstat: nginx/ngx_http_js_module.c | 29 ++++++++++++-------------- nginx/ngx_js.c | 47 +++++++++++++++++++++++++++++++++++++++---- nginx/ngx_js.h | 2 + nginx/ngx_stream_js_module.c | 39 +++++++++++++++++++---------------- 4 files changed, 78 insertions(+), 39 deletions(-) diffs (332 lines): diff -r 9291aef80a73 -r 5e3973c2216d nginx/ngx_http_js_module.c --- a/nginx/ngx_http_js_module.c Fri Oct 08 13:40:58 2021 +0000 +++ b/nginx/ngx_http_js_module.c Fri Oct 08 13:41:00 2021 +0000 @@ -50,6 +50,7 @@ typedef struct { ngx_log_t *log; ngx_uint_t done; ngx_int_t status; + njs_opaque_value_t retval; njs_opaque_value_t request; njs_opaque_value_t request_body; njs_opaque_value_t response_body; @@ -910,7 +911,6 @@ ngx_http_js_body_filter(ngx_http_request size_t len; u_char *p; ngx_int_t rc; - njs_str_t exception; njs_int_t ret, pending; ngx_buf_t *b; ngx_chain_t *out, *cl; @@ -988,11 +988,6 @@ ngx_http_js_body_filter(ngx_http_request 3); if (rc == NGX_ERROR) { - njs_vm_retval_string(ctx->vm, &exception); - - ngx_log_error(NGX_LOG_ERR, c->log, 0, "js exception: %*s", - exception.length, exception.start); - return NGX_ERROR; 
} @@ -1044,7 +1039,7 @@ ngx_http_js_variable_set(ngx_http_reques ngx_int_t rc; njs_int_t pending; - njs_str_t value; + ngx_str_t value; ngx_http_js_ctx_t *ctx; rc = ngx_http_js_init_vm(r); @@ -1078,15 +1073,15 @@ ngx_http_js_variable_set(ngx_http_reques return NGX_ERROR; } - if (njs_vm_retval_string(ctx->vm, &value) != NJS_OK) { + if (ngx_js_retval(ctx->vm, &ctx->retval, &value) != NGX_OK) { return NGX_ERROR; } - v->len = value.length; + v->len = value.len; v->valid = 1; v->no_cacheable = 0; v->not_found = 0; - v->data = value.start; + v->data = value.data; return NGX_OK; } @@ -1123,7 +1118,7 @@ static ngx_int_t ngx_http_js_init_vm(ngx_http_request_t *r) { njs_int_t rc; - njs_str_t exception; + ngx_str_t exception; ngx_http_js_ctx_t *ctx; ngx_pool_cleanup_t *cln; ngx_http_js_main_conf_t *jmcf; @@ -1141,6 +1136,8 @@ ngx_http_js_init_vm(ngx_http_request_t * return NGX_ERROR; } + njs_value_invalid_set(njs_value_arg(&ctx->retval)); + ngx_http_set_ctx(r, ctx, ngx_http_js_module); } @@ -1164,10 +1161,10 @@ ngx_http_js_init_vm(ngx_http_request_t * cln->data = ctx; if (njs_vm_start(ctx->vm) == NJS_ERROR) { - njs_vm_retval_string(ctx->vm, &exception); + ngx_js_retval(ctx->vm, NULL, &exception); ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, - "js exception: %*s", exception.length, exception.start); + "js exception: %V", &exception); return NGX_ERROR; } @@ -3403,7 +3400,7 @@ ngx_http_js_handle_vm_event(ngx_http_req njs_value_t *args, njs_uint_t nargs) { njs_int_t rc; - njs_str_t exception; + ngx_str_t exception; ngx_http_js_ctx_t *ctx; ctx = ngx_http_get_module_ctx(r, ngx_http_js_module); @@ -3417,10 +3414,10 @@ ngx_http_js_handle_vm_event(ngx_http_req (ngx_int_t) rc, vm_event); if (rc == NJS_ERROR) { - njs_vm_retval_string(ctx->vm, &exception); + ngx_js_retval(ctx->vm, NULL, &exception); ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, - "js exception: %*s", exception.length, exception.start); + "js exception: %V", &exception); ngx_http_finalize_request(r, NGX_ERROR); } 
diff -r 9291aef80a73 -r 5e3973c2216d nginx/ngx_js.c --- a/nginx/ngx_js.c Fri Oct 08 13:40:58 2021 +0000 +++ b/nginx/ngx_js.c Fri Oct 08 13:41:00 2021 +0000 @@ -69,7 +69,9 @@ ngx_int_t ngx_js_call(njs_vm_t *vm, ngx_str_t *fname, ngx_log_t *log, njs_opaque_value_t *args, njs_uint_t nargs) { - njs_str_t name, exception; + njs_int_t ret; + njs_str_t name; + ngx_str_t exception; njs_function_t *func; name.start = fname->data; @@ -82,16 +84,51 @@ ngx_js_call(njs_vm_t *vm, ngx_str_t *fna return NGX_ERROR; } - if (njs_vm_call(vm, func, njs_value_arg(args), nargs) != NJS_OK) { - njs_vm_retval_string(vm, &exception); + ret = njs_vm_call(vm, func, njs_value_arg(args), nargs); + if (ret == NJS_ERROR) { + ngx_js_retval(vm, NULL, &exception); ngx_log_error(NGX_LOG_ERR, log, 0, - "js exception: %*s", exception.length, exception.start); + "js exception: %V", &exception); + + return NGX_ERROR; + } + + ret = njs_vm_run(vm); + if (ret == NJS_ERROR) { + ngx_js_retval(vm, NULL, &exception); + + ngx_log_error(NGX_LOG_ERR, log, 0, + "js exception: %V", &exception); return NGX_ERROR; } - return njs_vm_run(vm); + return (ret == NJS_AGAIN) ? 
NGX_AGAIN : NGX_OK; +} + + +ngx_int_t +ngx_js_retval(njs_vm_t *vm, njs_opaque_value_t *retval, ngx_str_t *s) +{ + njs_int_t ret; + njs_str_t str; + + if (retval != NULL && njs_value_is_valid(njs_value_arg(retval))) { + ret = njs_vm_value_string(vm, &str, njs_value_arg(retval)); + + } else { + ret = njs_vm_retval_string(vm, &str); + } + + if (ret != NJS_OK) { + return NGX_ERROR; + } + + s->data = str.start; + s->len = str.length; + + return NGX_OK; } diff -r 9291aef80a73 -r 5e3973c2216d nginx/ngx_js.h --- a/nginx/ngx_js.h Fri Oct 08 13:40:58 2021 +0000 +++ b/nginx/ngx_js.h Fri Oct 08 13:41:00 2021 +0000 @@ -51,6 +51,8 @@ typedef ngx_ssl_t *(*ngx_external_ssl_pt ngx_int_t ngx_js_call(njs_vm_t *vm, ngx_str_t *fname, ngx_log_t *log, njs_opaque_value_t *args, njs_uint_t nargs); +ngx_int_t ngx_js_retval(njs_vm_t *vm, njs_opaque_value_t *retval, + ngx_str_t *s); njs_int_t ngx_js_ext_log(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, njs_index_t level); diff -r 9291aef80a73 -r 5e3973c2216d nginx/ngx_stream_js_module.c --- a/nginx/ngx_stream_js_module.c Fri Oct 08 13:40:58 2021 +0000 +++ b/nginx/ngx_stream_js_module.c Fri Oct 08 13:41:00 2021 +0000 @@ -52,6 +52,7 @@ typedef struct { typedef struct { njs_vm_t *vm; + njs_opaque_value_t retval; njs_opaque_value_t args[3]; ngx_buf_t *buf; ngx_chain_t **last_out; @@ -511,7 +512,7 @@ ngx_stream_js_preread_handler(ngx_stream static ngx_int_t ngx_stream_js_phase_handler(ngx_stream_session_t *s, ngx_str_t *name) { - njs_str_t exception; + ngx_str_t exception; njs_int_t ret; ngx_int_t rc; ngx_connection_t *c; @@ -550,10 +551,10 @@ ngx_stream_js_phase_handler(ngx_stream_s ret = ngx_stream_js_run_event(s, ctx, &ctx->events[NGX_JS_EVENT_UPLOAD]); if (ret != NJS_OK) { - njs_vm_retval_string(ctx->vm, &exception); + ngx_js_retval(ctx->vm, NULL, &exception); - ngx_log_error(NGX_LOG_ERR, c->log, 0, "js exception: %*s", - exception.length, exception.start); + ngx_log_error(NGX_LOG_ERR, c->log, 0, "js exception: %V", + &exception); return 
NGX_ERROR; } @@ -583,7 +584,7 @@ static ngx_int_t ngx_stream_js_body_filter(ngx_stream_session_t *s, ngx_chain_t *in, ngx_uint_t from_upstream) { - njs_str_t exception; + ngx_str_t exception; njs_int_t ret; ngx_int_t rc; ngx_chain_t *out, *cl, **busy; @@ -635,10 +636,10 @@ ngx_stream_js_body_filter(ngx_stream_ses if (event->ev != NULL) { ret = ngx_stream_js_run_event(s, ctx, event); if (ret != NJS_OK) { - njs_vm_retval_string(ctx->vm, &exception); + ngx_js_retval(ctx->vm, NULL, &exception); - ngx_log_error(NGX_LOG_ERR, c->log, 0, "js exception: %*s", - exception.length, exception.start); + ngx_log_error(NGX_LOG_ERR, c->log, 0, "js exception: %V", + &exception); return NGX_ERROR; } @@ -693,7 +694,7 @@ ngx_stream_js_variable_set(ngx_stream_se ngx_int_t rc; njs_int_t pending; - njs_str_t value; + ngx_str_t value; ngx_stream_js_ctx_t *ctx; rc = ngx_stream_js_init_vm(s); @@ -727,15 +728,15 @@ ngx_stream_js_variable_set(ngx_stream_se return NGX_ERROR; } - if (njs_vm_retval_string(ctx->vm, &value) != NJS_OK) { + if (ngx_js_retval(ctx->vm, &ctx->retval, &value) != NGX_OK) { return NGX_ERROR; } - v->len = value.length; + v->len = value.len; v->valid = 1; v->no_cacheable = 0; v->not_found = 0; - v->data = value.start; + v->data = value.data; return NGX_OK; } @@ -772,7 +773,7 @@ static ngx_int_t ngx_stream_js_init_vm(ngx_stream_session_t *s) { njs_int_t rc; - njs_str_t exception; + ngx_str_t exception; ngx_pool_cleanup_t *cln; ngx_stream_js_ctx_t *ctx; ngx_stream_js_main_conf_t *jmcf; @@ -790,6 +791,8 @@ ngx_stream_js_init_vm(ngx_stream_session return NGX_ERROR; } + njs_value_invalid_set(njs_value_arg(&ctx->retval)); + ngx_stream_set_ctx(s, ctx, ngx_stream_js_module); } @@ -811,10 +814,10 @@ ngx_stream_js_init_vm(ngx_stream_session cln->data = s; if (njs_vm_start(ctx->vm) == NJS_ERROR) { - njs_vm_retval_string(ctx->vm, &exception); + ngx_js_retval(ctx->vm, NULL, &exception); ngx_log_error(NGX_LOG_ERR, s->connection->log, 0, - "js exception: %*s", exception.length, 
exception.start); + "js exception: %V", &exception); return NGX_ERROR; } @@ -1433,7 +1436,7 @@ ngx_stream_js_handle_event(ngx_stream_se njs_value_t *args, njs_uint_t nargs) { njs_int_t rc; - njs_str_t exception; + ngx_str_t exception; ngx_stream_js_ctx_t *ctx; ctx = ngx_stream_get_module_ctx(s, ngx_stream_js_module); @@ -1443,10 +1446,10 @@ ngx_stream_js_handle_event(ngx_stream_se rc = njs_vm_run(ctx->vm); if (rc == NJS_ERROR) { - njs_vm_retval_string(ctx->vm, &exception); + ngx_js_retval(ctx->vm, NULL, &exception); ngx_log_error(NGX_LOG_ERR, s->connection->log, 0, - "js exception: %*s", exception.length, exception.start); + "js exception: %V", &exception); ngx_stream_finalize_session(s, NGX_STREAM_INTERNAL_SERVER_ERROR); } From xeioex at nginx.com Fri Oct 8 13:42:08 2021 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Fri, 08 Oct 2021 13:42:08 +0000 Subject: [njs] Modules: introduced setReturnValue() method. Message-ID: details: https://hg.nginx.org/njs/rev/fb3e13959b71 branches: changeset: 1717:fb3e13959b71 user: Dmitry Volyntsev date: Fri Oct 08 13:41:01 2021 +0000 description: Modules: introduced setReturnValue() method. 
diffstat: nginx/ngx_http_js_module.c | 36 ++++++++++++++++++++++++++++++++++++ nginx/ngx_stream_js_module.c | 36 ++++++++++++++++++++++++++++++++++++ 2 files changed, 72 insertions(+), 0 deletions(-) diffs (120 lines): diff -r 5e3973c2216d -r fb3e13959b71 nginx/ngx_http_js_module.c --- a/nginx/ngx_http_js_module.c Fri Oct 08 13:41:00 2021 +0000 +++ b/nginx/ngx_http_js_module.c Fri Oct 08 13:41:01 2021 +0000 @@ -143,6 +143,8 @@ static njs_int_t ngx_http_js_ext_send(nj njs_uint_t nargs, njs_index_t unused); static njs_int_t ngx_http_js_ext_send_buffer(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, njs_index_t unused); +static njs_int_t ngx_http_js_ext_set_return_value(njs_vm_t *vm, + njs_value_t *args, njs_uint_t nargs, njs_index_t unused); static njs_int_t ngx_http_js_ext_done(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, njs_index_t unused); static njs_int_t ngx_http_js_ext_finish(njs_vm_t *vm, njs_value_t *args, @@ -656,6 +658,17 @@ static njs_external_t ngx_http_js_ext_r { .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("setReturnValue"), + .writable = 1, + .configurable = 1, + .enumerable = 1, + .u.method = { + .native = ngx_http_js_ext_set_return_value, + } + }, + + { + .flags = NJS_EXTERN_METHOD, .name.string = njs_str("done"), .writable = 1, .configurable = 1, @@ -2154,6 +2167,29 @@ ngx_http_js_ext_send_buffer(njs_vm_t *vm static njs_int_t +ngx_http_js_ext_set_return_value(njs_vm_t *vm, njs_value_t *args, + njs_uint_t nargs, njs_index_t unused) +{ + ngx_http_js_ctx_t *ctx; + ngx_http_request_t *r; + + r = njs_vm_external(vm, ngx_http_js_request_proto_id, + njs_argument(args, 0)); + if (r == NULL) { + njs_vm_error(vm, "\"this\" is not an external"); + return NJS_ERROR; + } + + ctx = ngx_http_get_module_ctx(r, ngx_http_js_module); + + njs_value_assign(&ctx->retval, njs_arg(args, nargs, 1)); + njs_value_undefined_set(njs_vm_retval(vm)); + + return NJS_OK; +} + + +static njs_int_t ngx_http_js_ext_done(njs_vm_t *vm, njs_value_t *args, njs_uint_t 
nargs, njs_index_t unused) { diff -r 5e3973c2216d -r fb3e13959b71 nginx/ngx_stream_js_module.c --- a/nginx/ngx_stream_js_module.c Fri Oct 08 13:41:00 2021 +0000 +++ b/nginx/ngx_stream_js_module.c Fri Oct 08 13:41:01 2021 +0000 @@ -110,6 +110,8 @@ static njs_int_t ngx_stream_js_ext_off(n njs_uint_t nargs, njs_index_t unused); static njs_int_t ngx_stream_js_ext_send(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, njs_index_t unused); +static njs_int_t ngx_stream_js_ext_set_return_value(njs_vm_t *vm, + njs_value_t *args, njs_uint_t nargs, njs_index_t unused); static njs_int_t ngx_stream_js_ext_variables(njs_vm_t *vm, njs_object_prop_t *prop, njs_value_t *value, njs_value_t *setval, @@ -450,6 +452,17 @@ static njs_external_t ngx_stream_js_ext } }, + { + .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("setReturnValue"), + .writable = 1, + .configurable = 1, + .enumerable = 1, + .u.method = { + .native = ngx_stream_js_ext_set_return_value, + } + }, + }; @@ -1249,6 +1262,29 @@ ngx_stream_js_ext_send(njs_vm_t *vm, njs static njs_int_t +ngx_stream_js_ext_set_return_value(njs_vm_t *vm, njs_value_t *args, + njs_uint_t nargs, njs_index_t unused) +{ + ngx_stream_js_ctx_t *ctx; + ngx_stream_session_t *s; + + s = njs_vm_external(vm, ngx_stream_js_session_proto_id, + njs_argument(args, 0)); + if (s == NULL) { + njs_vm_error(vm, "\"this\" is not an external"); + return NJS_ERROR; + } + + ctx = ngx_stream_get_module_ctx(s, ngx_stream_js_module); + + njs_value_assign(&ctx->retval, njs_arg(args, nargs, 1)); + njs_value_undefined_set(njs_vm_retval(vm)); + + return NJS_OK; +} + + +static njs_int_t ngx_stream_js_ext_variables(njs_vm_t *vm, njs_object_prop_t *prop, njs_value_t *value, njs_value_t *setval, njs_value_t *retval) { From xeioex at nginx.com Fri Oct 8 13:51:17 2021 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Fri, 08 Oct 2021 13:51:17 +0000 Subject: [njs] Tests: added async tests support. 
Message-ID: details: https://hg.nginx.org/njs/rev/839307cc293a branches: changeset: 1718:839307cc293a user: Dmitry Volyntsev date: Fri Oct 08 13:50:50 2021 +0000 description: Tests: added async tests support. diffstat: src/njs.h | 2 + src/njs_value.c | 14 + src/test/njs_externals_test.c | 150 ++++++++++++++ src/test/njs_externals_test.h | 19 + src/test/njs_unit_test.c | 424 +++++++++++++++++++++++++++++++++++------ 5 files changed, 547 insertions(+), 62 deletions(-) diffs (798 lines): diff -r fb3e13959b71 -r 839307cc293a src/njs.h --- a/src/njs.h Fri Oct 08 13:41:01 2021 +0000 +++ b/src/njs.h Fri Oct 08 13:50:50 2021 +0000 @@ -380,6 +380,7 @@ NJS_EXPORT void njs_vm_memory_error(njs_ NJS_EXPORT void njs_value_undefined_set(njs_value_t *value); NJS_EXPORT void njs_value_null_set(njs_value_t *value); +NJS_EXPORT void njs_value_invalid_set(njs_value_t *value); NJS_EXPORT void njs_value_boolean_set(njs_value_t *value, int yn); NJS_EXPORT void njs_value_number_set(njs_value_t *value, double num); @@ -396,6 +397,7 @@ NJS_EXPORT njs_int_t njs_vm_prop_name(nj NJS_EXPORT njs_int_t njs_value_is_null(const njs_value_t *value); NJS_EXPORT njs_int_t njs_value_is_undefined(const njs_value_t *value); NJS_EXPORT njs_int_t njs_value_is_null_or_undefined(const njs_value_t *value); +NJS_EXPORT njs_int_t njs_value_is_valid(const njs_value_t *value); NJS_EXPORT njs_int_t njs_value_is_boolean(const njs_value_t *value); NJS_EXPORT njs_int_t njs_value_is_number(const njs_value_t *value); NJS_EXPORT njs_int_t njs_value_is_valid_number(const njs_value_t *value); diff -r fb3e13959b71 -r 839307cc293a src/njs_value.c --- a/src/njs_value.c Fri Oct 08 13:41:01 2021 +0000 +++ b/src/njs_value.c Fri Oct 08 13:50:50 2021 +0000 @@ -395,6 +395,13 @@ njs_value_null_set(njs_value_t *value) void +njs_value_invalid_set(njs_value_t *value) +{ + njs_set_invalid(value); +} + + +void njs_value_boolean_set(njs_value_t *value, int yn) { njs_set_boolean(value, yn); @@ -451,6 +458,13 @@ 
njs_value_is_null_or_undefined(const njs njs_int_t +njs_value_is_valid(const njs_value_t *value) +{ + return njs_is_valid(value); +} + + +njs_int_t njs_value_is_boolean(const njs_value_t *value) { return njs_is_boolean(value); diff -r fb3e13959b71 -r 839307cc293a src/test/njs_externals_test.c --- a/src/test/njs_externals_test.c Fri Oct 08 13:41:01 2021 +0000 +++ b/src/test/njs_externals_test.c Fri Oct 08 13:50:50 2021 +0000 @@ -373,6 +373,71 @@ njs_unit_test_r_method(njs_vm_t *vm, njs static njs_int_t +njs_unit_test_r_subrequest(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, + njs_index_t unused) +{ + njs_vm_event_t vm_event; + njs_function_t *callback; + njs_external_ev_t *ev; + njs_external_env_t *env; + njs_unit_test_req_t *r; + + r = njs_vm_external(vm, njs_external_r_proto_id, njs_argument(args, 0)); + if (r == NULL) { + njs_type_error(vm, "\"this\" is not an external"); + return NJS_ERROR; + } + + callback = njs_value_function(njs_arg(args, nargs, 1)); + if (callback == NULL) { + njs_type_error(vm, "argument is not callable"); + return NJS_ERROR; + } + + vm_event = njs_vm_add_event(vm, callback, 1, NULL, NULL); + if (vm_event == NULL) { + njs_internal_error(vm, "njs_vm_add_event() failed"); + return NJS_ERROR; + } + + ev = njs_mp_alloc(vm->mem_pool, sizeof(njs_external_ev_t)); + if (ev == NULL) { + njs_memory_error(vm); + return NJS_ERROR; + } + + ev->vm_event = vm_event; + ev->data = r; + ev->nargs = 1; + njs_value_assign(&ev->args[0], njs_argument(args, 0)); + + env = vm->external; + + njs_queue_insert_tail(&env->events, &ev->link); + + njs_set_undefined(&vm->retval); + + return NJS_OK; +} + + +static njs_int_t +njs_unit_test_r_retval(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, + njs_index_t unused) +{ + njs_external_env_t *env; + + env = vm->external; + + njs_value_assign(&env->retval, njs_arg(args, nargs, 1)); + + njs_set_undefined(&vm->retval); + + return NJS_OK; +} + + +static njs_int_t njs_unit_test_r_create(njs_vm_t *vm, njs_value_t *args, 
njs_uint_t nargs, njs_index_t unused) { @@ -583,6 +648,28 @@ static njs_external_t njs_unit_test_r_e }, { + .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("subrequest"), + .writable = 1, + .configurable = 1, + .enumerable = 1, + .u.method = { + .native = njs_unit_test_r_subrequest, + } + }, + + { + .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("retval"), + .writable = 1, + .configurable = 1, + .enumerable = 1, + .u.method = { + .native = njs_unit_test_r_retval, + } + }, + + { .flags = NJS_EXTERN_OBJECT, .name.string = njs_str("props"), .enumerable = 1, @@ -755,3 +842,66 @@ njs_externals_init(njs_vm_t *vm) { return njs_externals_init_internal(vm, &njs_test_requests[1], 3, 0); } + + +njs_int_t +njs_external_env_init(njs_external_env_t *env) +{ + if (env != NULL) { + njs_value_invalid_set(&env->retval); + njs_queue_init(&env->events); + } + + return NJS_OK; +} + + +njs_int_t +njs_external_process_events(njs_vm_t *vm, njs_external_env_t *env) +{ + njs_queue_t *events; + njs_queue_link_t *link; + njs_external_ev_t *ev; + + events = &env->events; + + for ( ;; ) { + link = njs_queue_first(events); + + if (link == njs_queue_tail(events)) { + break; + } + + ev = njs_queue_link_data(link, njs_external_ev_t, link); + + njs_queue_remove(&ev->link); + ev->link.prev = NULL; + ev->link.next = NULL; + + njs_vm_post_event(vm, ev->vm_event, &ev->args[0], ev->nargs); + } + + return NJS_OK; +} + + +njs_int_t +njs_external_call(njs_vm_t *vm, const njs_str_t *fname, njs_value_t *args, + njs_uint_t nargs) +{ + njs_int_t ret; + njs_function_t *func; + + func = njs_vm_function(vm, fname); + if (func == NULL) { + njs_stderror("njs_external_call(): function \"%V\" not found\n", fname); + return NJS_ERROR; + } + + ret = njs_vm_call(vm, func, args, nargs); + if (ret == NJS_ERROR) { + return NJS_ERROR; + } + + return njs_vm_run(vm); +} diff -r fb3e13959b71 -r 839307cc293a src/test/njs_externals_test.h --- a/src/test/njs_externals_test.h Fri Oct 08 13:41:01 2021 +0000 +++ 
b/src/test/njs_externals_test.h Fri Oct 08 13:50:50 2021 +0000 @@ -8,8 +8,27 @@ #define _NJS_EXTERNALS_TEST_H_INCLUDED_ +typedef struct { + njs_value_t retval; + njs_queue_t events; /* of njs_external_ev_t */ +} njs_external_env_t; + + +typedef struct { + njs_vm_event_t vm_event; + void *data; + njs_uint_t nargs; + njs_value_t args[3]; + njs_queue_link_t link; +} njs_external_ev_t; + + njs_int_t njs_externals_shared_init(njs_vm_t *vm); njs_int_t njs_externals_init(njs_vm_t *vm); +njs_int_t njs_external_env_init(njs_external_env_t *env); +njs_int_t njs_external_call(njs_vm_t *vm, const njs_str_t *fname, + njs_value_t *args, njs_uint_t nargs); +njs_int_t njs_external_process_events(njs_vm_t *vm, njs_external_env_t *env); #endif /* _NJS_EXTERNALS_TEST_H_INCLUDED_ */ diff -r fb3e13959b71 -r 839307cc293a src/test/njs_unit_test.c --- a/src/test/njs_unit_test.c Fri Oct 08 13:41:01 2021 +0000 +++ b/src/test/njs_unit_test.c Fri Oct 08 13:50:50 2021 +0000 @@ -20486,14 +20486,6 @@ static njs_unit_test_t njs_test[] = "new ctor();"), njs_str("[object AsyncFunction]") }, - { njs_str("let ctor = Object.getPrototypeOf(async function(){}).constructor;" - "let f = new ctor(); f()"), - njs_str("[object Promise]") }, - - { njs_str("let ctor = Object.getPrototypeOf(async function(){}).constructor;" - "let f = new ctor('x', 'await 1; return x'); f(1)"), - njs_str("[object Promise]") }, - { njs_str("let f = new Function('x', 'await 1; return x'); f(1)"), njs_str("SyntaxError: await is only valid in async functions in runtime:1") }, @@ -20510,9 +20502,6 @@ static njs_unit_test_t njs_test[] = "(async function() {f(await 111)})"), njs_str("SyntaxError: await in arguments not supported in 1") }, - { njs_str("Promise.all([async () => [await x('X')]])"), - njs_str("[object Promise]") }, - { njs_str("async () => [await x(1)(),]; async () => [await x(1)()]"), njs_str("[object AsyncFunction]") }, @@ -20937,8 +20926,59 @@ static njs_unit_test_t njs_externals_te { njs_str("$r.buffer instanceof 
Buffer"), njs_str("true") }, + + { njs_str("let ctor = Object.getPrototypeOf(async function(){}).constructor;" + "let f = new ctor();" + "$r.retval(f())"), + njs_str("[object Promise]") }, + + { njs_str("let ctor = Object.getPrototypeOf(async function(){}).constructor;" + "let f = new ctor('x', 'await 1; return x');" + "$r.retval(f(1))"), + njs_str("[object Promise]") }, + + { njs_str("let ctor = Object.getPrototypeOf(async function(){}).constructor;" + "let f = new ctor('x', 'await 1; return x');" + "f(1).then($r.retval)"), + njs_str("1") }, + + { njs_str("$r.retval(Promise.all([async () => [await x('X')]]))"), + njs_str("[object Promise]") }, + + { njs_str("let obj = { a: 1, b: 2};" + "function cb(r) { r.retval(obj.a); }" + "$r.subrequest(reply => cb(reply))"), + njs_str("1") }, }; + +static njs_unit_test_t njs_async_handler_test[] = +{ + { njs_str("globalThis.main = (function() {" + " function cb(r) { r.retval(1); }" + " function handler(r) {" + " r.subrequest(reply => cb(reply));" + " };" + " return {handler};" + "})();" + ), + njs_str("1") }, + +#if 0 /* FIXME */ + { njs_str("globalThis.main = (function() {" + " let obj = { a: 1, b: 2};" + " function cb(r) { r.retval(obj.a); }" + " function handler(r) {" + " r.subrequest(reply => cb(reply));" + " };" + " return {handler};" + "})();" + ), + njs_str("1") }, +#endif +}; + + static njs_unit_test_t njs_shared_test[] = { { njs_str("var cr = require('crypto'); cr.createHash"), @@ -21531,6 +21571,9 @@ typedef struct { njs_uint_t repeat; njs_bool_t unsafe; njs_bool_t backtrace; + njs_bool_t handler; + njs_bool_t async; + unsigned seed; } njs_opts_t; @@ -21540,6 +21583,27 @@ typedef struct { } njs_stat_t; +typedef struct { + njs_vm_t *vm; + njs_external_env_t *env; + njs_external_env_t env0; + + enum { + sw_start = 0, + sw_handler, + sw_loop, + sw_done + } state; +} njs_external_state_t; + + +typedef struct { + njs_external_state_t *states; + njs_uint_t size; + njs_uint_t current; +} njs_runtime_t; + + static void 
njs_unit_test_report(njs_str_t *name, njs_stat_t *prev, njs_stat_t *current) { @@ -21555,20 +21619,240 @@ njs_unit_test_report(njs_str_t *name, nj static njs_int_t +njs_external_state_init(njs_vm_t *vm, njs_external_state_t *s, njs_opts_t *opts) +{ + njs_int_t ret; + + if (opts->externals) { + s->env = &s->env0; + + ret = njs_external_env_init(s->env); + if (ret != NJS_OK) { + njs_stderror("njs_external_env_init() failed\n"); + return NJS_ERROR; + } + + } else { + s->env = NULL; + } + + s->vm = njs_vm_clone(vm, s->env); + if (s->vm == NULL) { + njs_stderror("njs_vm_clone() failed\n"); + return NJS_ERROR; + } + + if (opts->externals) { + ret = njs_externals_init(s->vm); + if (ret != NJS_OK) { + njs_stderror("njs_externals_init() failed\n"); + return NJS_ERROR; + } + } + + s->state = sw_start; + + return NJS_OK; +} + + +static njs_int_t +njs_external_retval(njs_external_state_t *state, njs_str_t *s) +{ + if (state->env != NULL && njs_value_is_valid(&state->env->retval)) { + return njs_vm_value_string(state->vm, s, &state->env->retval); + } + + return njs_vm_retval_string(state->vm, s); +} + + +static njs_runtime_t * +njs_runtime_init(njs_vm_t *vm, njs_opts_t *opts) +{ + njs_int_t ret; + njs_uint_t i; + njs_runtime_t *rt; + + rt = njs_mp_alloc(vm->mem_pool, sizeof(njs_runtime_t)); + if (rt == NULL) { + return NULL; + } + + rt->size = opts->repeat; + rt->states = njs_mp_alloc(vm->mem_pool, + sizeof(njs_external_state_t) * rt->size); + if (rt->states == NULL) { + return NULL; + } + + rt->current = 0; + srandom(opts->seed); + + for (i = 0; i < rt->size; i++) { + ret = njs_external_state_init(vm, &rt->states[i], opts); + if (ret != NJS_OK) { + njs_stderror("njs_external_state_init() failed\n"); + return NULL; + } + } + + return rt; +} + + +static njs_external_state_t * +njs_runtime_next_state(njs_runtime_t *rt, njs_opts_t *opts) +{ + unsigned next, n; + + n = 0; + next = ((opts->async) ? 
(unsigned) random() : rt->current++) % rt->size; + + while (rt->states[next].state == sw_done) { + next++; + next = next % rt->size; + + n++; + + if (n == rt->size) { + return NULL; + } + } + + return &rt->states[next]; +} + + +static void +njs_runtime_destroy(njs_runtime_t *rt) +{ + njs_uint_t i; + + for (i = 0; i < rt->size; i++) { + if (rt->states[i].vm != NULL) { + njs_vm_destroy(rt->states[i].vm); + } + } +} + + +static njs_int_t +njs_process_test(njs_external_state_t *state, njs_opts_t *opts, + njs_unit_test_t *expected) +{ + njs_int_t ret; + njs_str_t s; + njs_bool_t success; + njs_value_t request; + + static const njs_str_t handler_str = njs_str("main.handler"); + static const njs_str_t request_str = njs_str("$r"); + + switch (state->state) { + case sw_start: + state->state = sw_handler; + + ret = njs_vm_start(state->vm); + if (ret != NJS_OK) { + goto done; + } + + if (opts->async) { + return NJS_OK; + } + + /* Fall through. */ + case sw_handler: + state->state = sw_loop; + + if (opts->handler) { + ret = njs_vm_value(state->vm, &request_str, &request); + if (ret != NJS_OK) { + njs_stderror("njs_vm_value(\"%V\") failed\n", &request_str); + return NJS_ERROR; + } + + ret = njs_external_call(state->vm, &handler_str, &request, 1); + if (ret == NJS_ERROR) { + goto done; + } + + if (opts->async) { + return NJS_OK; + } + } + + /* Fall through. */ + case sw_loop: + default: + for ( ;; ) { + if (!njs_vm_pending(state->vm)) { + break; + } + + ret = njs_external_process_events(state->vm, state->env); + if (ret != NJS_OK) { + njs_stderror("njs_external_process_events() failed\n"); + return NJS_ERROR; + } + + if (njs_vm_waiting(state->vm) && !njs_vm_posted(state->vm)) { + /*TODO: async events. 
*/ + + njs_stderror("njs_process_test(): async events unsupported\n"); + return NJS_ERROR; + } + + (void) njs_vm_run(state->vm); + + if (opts->async) { + return NJS_OK; + } + } + } + +done: + + state->state = sw_done; + + if (njs_external_retval(state, &s) != NJS_OK) { + njs_stderror("njs_external_retval() failed\n"); + return NJS_ERROR; + } + + success = njs_strstr_eq(&expected->ret, &s); + if (!success) { + njs_stderror("njs(\"%V\")\nexpected: \"%V\"\n got: \"%V\"\n", + &expected->script, &expected->ret, &s); + + return NJS_DECLINED; + } + + njs_vm_destroy(state->vm); + state->vm = NULL; + + return NJS_OK; +} + + +static njs_int_t njs_unit_test(njs_unit_test_t tests[], size_t num, njs_str_t *name, njs_opts_t *opts, njs_stat_t *stat) { - u_char *start, *end; - njs_vm_t *vm, *nvm; - njs_int_t ret; - njs_str_t s; - njs_uint_t i, repeat; - njs_stat_t prev; - njs_bool_t success; - njs_vm_opt_t options; + u_char *start, *end; + njs_vm_t *vm; + njs_int_t ret; + njs_str_t s; + njs_bool_t success; + njs_uint_t i; + njs_stat_t prev; + njs_vm_opt_t options; + njs_runtime_t *rt; + njs_external_state_t *state; vm = NULL; - nvm = NULL; + rt = NULL; prev = *stat; @@ -21609,32 +21893,34 @@ njs_unit_test(njs_unit_test_t tests[], s njs_disassembler(vm); } - repeat = opts->repeat; - - do { - if (nvm != NULL) { - njs_vm_destroy(nvm); + rt = njs_runtime_init(vm, opts); + if (rt == NULL) { + njs_stderror("njs_runtime_init() failed\n"); + goto done; + } + + for ( ;; ) { + state = njs_runtime_next_state(rt, opts); + if (state == NULL) { + break; } - nvm = njs_vm_clone(vm, NULL); - if (nvm == NULL) { - njs_printf("njs_vm_clone() failed\n"); + ret = njs_process_test(state, opts, &tests[i]); + if (ret != NJS_OK) { + if (ret == NJS_DECLINED) { + break; + } + + njs_stderror("njs_process_test() failed\n"); goto done; } - - if (opts->externals) { - ret = njs_externals_init(nvm); - if (ret != NJS_OK) { - goto done; - } - } - - ret = njs_vm_start(nvm); - } while (--repeat != 0); - - if 
(njs_vm_retval_string(nvm, &s) != NJS_OK) { - njs_printf("njs_vm_retval_string() failed\n"); - goto done; + } + + success = (ret == NJS_OK); + + if (rt != NULL) { + njs_runtime_destroy(rt); + rt = NULL; } } else { @@ -21648,23 +21934,20 @@ njs_unit_test(njs_unit_test_t tests[], s s = njs_str_value("Error: " "Extra characters at the end of the script"); } - } - - success = njs_strstr_eq(&tests[i].ret, &s); - - if (!success) { - njs_printf("njs(\"%V\")\nexpected: \"%V\"\n got: \"%V\"\n", - &tests[i].script, &tests[i].ret, &s); - - stat->failed++; + + success = njs_strstr_eq(&tests[i].ret, &s); + if (!success) { + njs_stderror("njs(\"%V\")\nexpected: \"%V\"\n" + " got: \"%V\"\n", + &tests[i].script, &tests[i].ret, &s); + } + } + + if (success) { + stat->passed++; } else { - stat->passed++; - } - - if (nvm != NULL) { - njs_vm_destroy(nvm); - nvm = NULL; + stat->failed++; } njs_vm_destroy(vm); @@ -21675,8 +21958,8 @@ njs_unit_test(njs_unit_test_t tests[], s done: - if (nvm != NULL) { - njs_vm_destroy(nvm); + if (rt != NULL) { + njs_runtime_destroy(rt); } if (vm != NULL) { @@ -22784,7 +23067,7 @@ done: static njs_int_t -njs_get_options(njs_opts_t *opts, int argc, char **argv) +njs_options_parse(njs_opts_t *opts, int argc, char **argv) { char *p; njs_int_t i; @@ -22798,6 +23081,7 @@ njs_get_options(njs_opts_t *opts, int ar " -d print disassembled code.\n" " -f PATTERN1[|PATTERN2..] 
filter test suites to run.\n" " -r count overrides repeat count for tests.\n" + " -s seed sets seed for async tests.\n" " -v verbose mode.\n"; for (i = 1; i < argc; i++) { @@ -22839,6 +23123,15 @@ njs_get_options(njs_opts_t *opts, int ar njs_stderror("option \"-r\" requires argument\n"); return NJS_ERROR; + case 's': + if (++i < argc) { + opts->seed = atoi(argv[i]); + break; + } + + njs_stderror("option \"-s\" requires argument\n"); + return NJS_ERROR; + case 'v': opts->verbose = 1; break; @@ -22967,8 +23260,14 @@ static njs_test_suite_t njs_suites[] = njs_nitems(njs_externals_test), njs_unit_test }, + { njs_str("async handler"), + { .async = 1, .externals = 1, .handler = 1, .repeat = 4, .seed = 2, .unsafe = 1 }, + njs_async_handler_test, + njs_nitems(njs_async_handler_test), + njs_unit_test }, + { njs_str("shared"), - { .externals = 1, .repeat = 128, .unsafe = 1, .backtrace = 1 }, + { .externals = 1, .repeat = 128, .seed = 42, .unsafe = 1, .backtrace = 1 }, njs_shared_test, njs_nitems(njs_shared_test), njs_unit_test }, @@ -23022,7 +23321,7 @@ main(int argc, char **argv) njs_memzero(&opts, sizeof(njs_opts_t)); - ret = njs_get_options(&opts, argc, argv); + ret = njs_options_parse(&opts, argc, argv); if (ret != NJS_OK) { return (ret == NJS_DONE) ? EXIT_SUCCESS: EXIT_FAILURE; } @@ -23045,6 +23344,7 @@ main(int argc, char **argv) op.disassemble = opts.disassemble; op.repeat = opts.repeat ? opts.repeat : op.repeat; + op.seed = opts.seed ? opts.seed : op.seed; op.verbose = opts.verbose; ret = suite->run(suite->tests, suite->n, &suite->name, &op, &stat); From mathpal_n at fastmail.com Fri Oct 8 18:19:29 2021 From: mathpal_n at fastmail.com (Awdhesh Mathpal) Date: Fri, 08 Oct 2021 11:19:29 -0700 Subject: Extra data from upstream and keepalive connections Message-ID: Hello, Proxy module may not disable keepalive connection when upstream sends extra data with Content-Length:0 response header. 
This happens because of an incorrect assumption about the state of the upstream->keepalive flag at https://github.com/nginx/nginx/blame/master/src/http/modules/ngx_http_proxy_module.c#L2336
When the response Content-Length is 0, upstream->keepalive may get initialized to 1 depending on the Connection response header: https://github.com/nginx/nginx/blob/master/src/http/modules/ngx_http_proxy_module.c#L2002
To trigger this issue, nginx must be configured as follows:
- proxy buffering is disabled
- responses are processed by ngx_http_proxy_non_buffered_copy_filter (no nginx caching)
- the upstream keepalive directive is enabled
- the Content-Length response header from the upstream is 0
- the upstream sends a body/extra data
Under these conditions, the connection will be saved for the next request. Here is a patch that addresses this:
# HG changeset patch
# User Awdhesh Mathpal
# Date 1633659791 25200
# Thu Oct 07 19:23:11 2021 -0700
# Node ID ccf2ccd9724f7cff4363e81545b1af97aa881415
# Parent ae7c767aa491fa55d3168dfc028a22f43ac8cf89
proxy: Disable keepalive on extra data
When an upstream sends Content-Length: 0, upstream->keepalive is initialized eagerly on the basis of the Connection header. This can lead to keepalive being enabled on the connection. If in such a scenario the upstream sends extra data, the connection should not be reused.
diff -r ae7c767aa491 -r ccf2ccd9724f src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c Wed Oct 06 18:01:42 2021 +0300 +++ b/src/http/modules/ngx_http_proxy_module.c Thu Oct 07 19:23:11 2021 -0700 @@ -2337,6 +2337,7 @@ ngx_log_error(NGX_LOG_WARN, r->connection->log, 0, "upstream sent more data than specified in " "\"Content-Length\" header"); + u->keepalive = 0; return NGX_OK; } @@ -2370,7 +2371,7 @@ ngx_log_error(NGX_LOG_WARN, r->connection->log, 0, "upstream sent more data than specified in " "\"Content-Length\" header"); - + u->keepalive = 0; cl->buf->last = cl->buf->pos + u->length; u->length = 0;

Awdhesh

From doujiang24 at gmail.com Sat Oct 9 01:14:19 2021
From: doujiang24 at gmail.com (DeJiang Zhu)
Date: Sat, 9 Oct 2021 09:14:19 +0800
Subject: segfault when both use builtin and shared in ssl_session_cache
Message-ID: 

Hi, Nginx developers:

I'm investigating a segfault issue: it happens when both "builtin" and "shared" cache types are used in ssl_session_cache, and it disappears when only "shared" is used.
> > It's original reported here: > https://github.com/kubernetes/ingress-nginx/issues/7080#issuecomment-932293028 > And some more details here: > https://github.com/openssl/openssl/issues/16733#issue-1014329932 > > I haven't see any code on Nginx side that will directly manipulate the > session hash hash. > Could you please provide any suggestions? Thanks very much! By itself nginx does not try to manipulate OpenSSL's builtin session cache directly. Rather, nginx only controls if builtin cache is enabled and its size via SSL_CTX_set_session_cache_mode() and SSL_CTX_sess_set_cache_size(). Additionally, when nginx has reasons to remove a session, it calls SSL_CTX_remove_session() to remove a particular session. Note though that the links above indicate that you are using a fork rather than nginx itself, this might make a difference. Testing on vanilla nginx without any 3rd party modules might be a good idea, if it's possible. -- Maxim Dounin http://mdounin.ru/ From doujiang24 at gmail.com Sat Oct 9 07:16:12 2021 From: doujiang24 at gmail.com (DeJiang Zhu) Date: Sat, 9 Oct 2021 15:16:12 +0800 Subject: segfault when both use builtin and shared in ssl_session_cache In-Reply-To: References: Message-ID: Hello Maxim, On Sat, Oct 9, 2021 at 12:57 PM Maxim Dounin wrote: > Hello! > > On Sat, Oct 09, 2021 at 09:14:19AM +0800, DeJiang Zhu wrote: > > > Hi, Nginx developers: > > > > I'm investigating a segfault issue: it happens when both "builtin" and > > "shared" cache types are used in ssl_session_cache and it disappear when > > only use "shared". > > > > It's original reported here: > > > https://github.com/kubernetes/ingress-nginx/issues/7080#issuecomment-932293028 > > And some more details here: > > https://github.com/openssl/openssl/issues/16733#issue-1014329932 > > > > I haven't see any code on Nginx side that will directly manipulate the > > session hash hash. > > Could you please provide any suggestions? Thanks very much! 
> > By itself nginx does not try to manipulate OpenSSL's builtin > session cache directly. Rather, nginx only controls if builtin > cache is enabled and its size via SSL_CTX_set_session_cache_mode() > and SSL_CTX_sess_set_cache_size(). Additionally, when nginx has > reasons to remove a session, it calls SSL_CTX_remove_session() to > remove a particular session.

Got it. Thanks for your quick reply.

> Note though that the links above indicate that you are using a > fork rather than nginx itself, this might make a difference. > Testing on vanilla nginx without any 3rd party modules might be a > good idea, if it's possible.

AFAIK, ingress-nginx only enables "ssl_session_cache" for the session cache. It hasn't enabled `ssl_session_fetch/store_by_lua` from lua-nginx-module. It is only reproduced in some production cases, so it's hard to reproduce on vanilla Nginx. Anyway, thanks again; I will update here when I get more clues.

> --
> Maxim Dounin
> http://mdounin.ru/
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel

From artur at juraszek.xyz Sun Oct 10 19:43:16 2021
From: artur at juraszek.xyz (Artur Juraszek)
Date: Sun, 10 Oct 2021 21:43:16 +0200
Subject: [PATCH] Recognize image/avif in mime.types
Message-ID: 

# HG changeset patch
# User Artur Juraszek
# Date 1633893497 -7200
# Sun Oct 10 21:18:17 2021 +0200
# Node ID d62a7ff2ec94678392024b875bbadac149a0feaf
# Parent ae7c767aa491fa55d3168dfc028a22f43ac8cf89
Recognize image/avif in mime.types
It's an officially registered[1] image format that's now supported by most major web browsers.
[1] https://www.iana.org/assignments/media-types/media-types.xhtml

diff -r ae7c767aa491 -r d62a7ff2ec94 conf/mime.types --- a/conf/mime.types Wed Oct 06 18:01:42 2021 +0300 +++ b/conf/mime.types Sun Oct 10 21:18:17 2021 +0200 @@ -15,6 +15,7 @@ text/vnd.wap.wml wml; text/x-component htc; + image/avif avif; image/png png; image/svg+xml svg svgz; image/tiff tif tiff;

From alexander.borisov at nginx.com Mon Oct 11 14:47:02 2021
From: alexander.borisov at nginx.com (Alexander Borisov)
Date: Mon, 11 Oct 2021 14:47:02 +0000
Subject: [njs] Fixed copying of closures for declared functions.
Message-ID: 

details: https://hg.nginx.org/njs/rev/66bd2cc7fd87
branches:
changeset: 1719:66bd2cc7fd87
user: Alexander Borisov
date: Mon Oct 11 17:46:24 2021 +0300
description:
Fixed copying of closures for declared functions.

After 0a2a0b5a74f4 (0.6.0), referencing a closure value inside a nested function may result in a heap-use-after-free. For this to happen, the closure value has to be referenced in a function invoked asynchronously, for example in an r.subrequest() or setTimeout() handler.

The problem was that closure values of a nested function were assigned during the function call, and the memory shared between all the cloned VMs was used to store temporary assignments until the moment the declared function was referenced. When two VMs executed concurrently, the first VM might see the changes made by the second VM if the first one was suspended.

The fix is to copy all declared functions at the time of the call.

This closes issue #421 on GitHub.
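The triggering pattern from the description can be sketched in plain JavaScript, with setTimeout() standing in for r.subrequest(). This is a hypothetical, self-contained illustration: the njs-specific request object is replaced by an ordinary callback, and all names are invented for the sketch.

```javascript
// A nested function (cb) captures a closure value (obj) and is only
// invoked asynchronously, after the outer call has returned.  This is
// the shape of code that could hit the heap-use-after-free in cloned
// VMs; run in a correct engine, it simply shows the expected behavior.
const main = (function () {
    const obj = { a: 1, b: 2 };            // closure value
    function cb(done) { done(obj.a); }     // nested function referencing it
    function handler(done) {
        setTimeout(() => cb(done), 0);     // asynchronous invocation
    }
    return { handler };
})();

main.handler((v) => console.log(v));       // prints 1
```

This mirrors the new njs_async_handler_test cases above, where the obj variant was previously disabled with #if 0.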
diffstat: src/njs_function.c | 11 ++++++++++- src/njs_function.h | 2 +- src/njs_parser.h | 6 ++++++ src/njs_variable.c | 14 ++++++++------ src/test/njs_unit_test.c | 2 -- 5 files changed, 25 insertions(+), 10 deletions(-) diffs (127 lines): diff -r 839307cc293a -r 66bd2cc7fd87 src/njs_function.c --- a/src/njs_function.c Fri Oct 08 13:50:50 2021 +0000 +++ b/src/njs_function.c Mon Oct 11 17:46:24 2021 +0300 @@ -615,6 +615,7 @@ njs_function_lambda_call(njs_vm_t *vm) njs_value_t *args, **local, *value; njs_value_t **cur_local, **cur_closures, **cur_temp; njs_function_t *function; + njs_declaration_t *declr; njs_function_lambda_t *lambda; frame = (njs_frame_t *) vm->top_frame; @@ -680,7 +681,15 @@ njs_function_lambda_call(njs_vm_t *vm) while (n != 0) { n--; - function = njs_function(lambda->declarations[n]); + declr = &lambda->declarations[n]; + value = njs_scope_value(vm, declr->index); + + *value = *declr->value; + + function = njs_function_value_copy(vm, value); + if (njs_slow_path(function == NULL)) { + return NJS_ERROR; + } ret = njs_function_capture_closure(vm, function, function->u.lambda); if (njs_slow_path(ret != NJS_OK)) { diff -r 839307cc293a -r 66bd2cc7fd87 src/njs_function.h --- a/src/njs_function.h Fri Oct 08 13:50:50 2021 +0000 +++ b/src/njs_function.h Mon Oct 11 17:46:24 2021 +0300 @@ -14,7 +14,7 @@ struct njs_function_lambda_s { uint32_t nlocal; uint32_t temp; - njs_value_t **declarations; + njs_declaration_t *declarations; uint32_t ndeclarations; njs_index_t self; diff -r 839307cc293a -r 66bd2cc7fd87 src/njs_parser.h --- a/src/njs_parser.h Fri Oct 08 13:50:50 2021 +0000 +++ b/src/njs_parser.h Mon Oct 11 17:46:24 2021 +0300 @@ -104,6 +104,12 @@ typedef struct { } njs_parser_rbtree_node_t; +typedef struct { + njs_value_t *value; + njs_index_t index; +} njs_declaration_t; + + typedef njs_int_t (*njs_parser_traverse_cb_t)(njs_vm_t *vm, njs_parser_node_t *node, void *ctx); diff -r 839307cc293a -r 66bd2cc7fd87 src/njs_variable.c --- a/src/njs_variable.c Fri 
Oct 08 13:50:50 2021 +0000 +++ b/src/njs_variable.c Mon Oct 11 17:46:24 2021 +0300 @@ -9,7 +9,7 @@ #include -static njs_value_t **njs_variable_scope_function_add(njs_parser_t *parser, +static njs_declaration_t *njs_variable_scope_function_add(njs_parser_t *parser, njs_parser_scope_t *scope); static njs_parser_scope_t *njs_variable_scope_find(njs_parser_t *parser, njs_parser_scope_t *scope, uintptr_t unique_id, njs_variable_type_t type); @@ -39,8 +39,8 @@ njs_variable_function_add(njs_parser_t * uintptr_t unique_id, njs_variable_type_t type) { njs_bool_t ctor; - njs_value_t **declr; njs_variable_t *var; + njs_declaration_t *declr; njs_parser_scope_t *root; njs_function_lambda_t *lambda; @@ -76,10 +76,12 @@ njs_variable_function_add(njs_parser_t * return NULL; } - *declr = &var->value; - var->index = njs_scope_index(root->type, root->items, NJS_LEVEL_LOCAL, type); + + declr->value = &var->value; + declr->index = var->index; + root->items++; } @@ -90,12 +92,12 @@ njs_variable_function_add(njs_parser_t * } -static njs_value_t ** +static njs_declaration_t * njs_variable_scope_function_add(njs_parser_t *parser, njs_parser_scope_t *scope) { if (scope->declarations == NULL) { scope->declarations = njs_arr_create(parser->vm->mem_pool, 1, - sizeof(njs_value_t *)); + sizeof(njs_declaration_t)); if (njs_slow_path(scope->declarations == NULL)) { return NULL; } diff -r 839307cc293a -r 66bd2cc7fd87 src/test/njs_unit_test.c --- a/src/test/njs_unit_test.c Fri Oct 08 13:50:50 2021 +0000 +++ b/src/test/njs_unit_test.c Mon Oct 11 17:46:24 2021 +0300 @@ -20964,7 +20964,6 @@ static njs_unit_test_t njs_async_handle ), njs_str("1") }, -#if 0 /* FIXME */ { njs_str("globalThis.main = (function() {" " let obj = { a: 1, b: 2};" " function cb(r) { r.retval(obj.a); }" @@ -20975,7 +20974,6 @@ static njs_unit_test_t njs_async_handle "})();" ), njs_str("1") }, -#endif }; From xeioex at nginx.com Mon Oct 11 15:08:58 2021 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Mon, 11 Oct 2021 15:08:58 
+0000 Subject: [njs] Introduced WebCrypto API according to W3C spec. Message-ID: details: https://hg.nginx.org/njs/rev/a4c3c333c05d branches: changeset: 1720:a4c3c333c05d user: Dmitry Volyntsev date: Mon Oct 11 15:06:15 2021 +0000 description: Introduced WebCrypto API according to W3C spec. The following methods were implemented: crypto.getRandomValues() crypto.subtle.importKey() format: raw, pkcs8, spki algorithm: AES-CBC, AES-CTR, AES-GCM, ECDSA, HKDF, HMAC, PBKDF2, RSASSA-PKCS1-v1_5, RSA-OAEP, RSA-PSS crypto.subtle.decrypt() crypto.subtle.encrypt() algorithm: AES-CBC, AES-CTR, AES-GCM, RSA-OAEP crypto.subtle.deriveBits() crypto.subtle.deriveKey() algorithm: HKDF, PBKDF2 crypto.subtle.digest() algorithm: SHA-1, SHA-256, SHA-384, SHA-512 crypto.subtle.sign() crypto.subtle.verify() algorithm: ECDSA, HMAC, RSASSA-PKCS1-v1_5, RSA-PSS diffstat: auto/make | 5 +- auto/openssl | 56 + auto/sources | 1 + auto/summary | 4 + configure | 3 +- external/njs_webcrypto.c | 2666 +++++++++++++++++++++ external/njs_webcrypto.h | 15 + nginx/config | 11 +- nginx/ngx_js.c | 7 + src/njs_shell.c | 12 + src/njs_str.c | 37 + src/njs_str.h | 9 + src/test/njs_externals_test.c | 14 + src/test/njs_unit_test.c | 49 +- test/njs_expect_test.exp | 32 + test/ts/test.ts | 25 +- test/webcrypto/README.rst | 136 + test/webcrypto/aes.js | 123 + test/webcrypto/aes_decoding.js | 116 + test/webcrypto/aes_gcm_enc.js | 51 + test/webcrypto/derive.js | 149 + test/webcrypto/digest.js | 88 + test/webcrypto/ec.pkcs8 | 5 + test/webcrypto/ec.spki | 4 + test/webcrypto/ec2.pkcs8 | 5 + test/webcrypto/ec2.spki | 4 + test/webcrypto/rsa.js | 106 + test/webcrypto/rsa.pkcs8 | 16 + test/webcrypto/rsa.pkcs8.broken | 16 + test/webcrypto/rsa.spki | 6 + test/webcrypto/rsa.spki.broken | 6 + test/webcrypto/rsa2.pkcs8 | 16 + test/webcrypto/rsa2.spki | 6 + test/webcrypto/rsa_decoding.js | 81 + test/webcrypto/sign.js | 282 ++ test/webcrypto/text.base64.aes-cbc128.enc | 1 + test/webcrypto/text.base64.aes-cbc256.enc | 1 + 
test/webcrypto/text.base64.aes-ctr128.enc | 1 + test/webcrypto/text.base64.aes-ctr256.enc | 1 + test/webcrypto/text.base64.aes-gcm128-96.enc | 1 + test/webcrypto/text.base64.aes-gcm128-extra.enc | 1 + test/webcrypto/text.base64.aes-gcm128.enc | 1 + test/webcrypto/text.base64.aes-gcm256.enc | 1 + test/webcrypto/text.base64.rsa-oaep.enc | 3 + test/webcrypto/text.base64.sha1.ecdsa.sig | 2 + test/webcrypto/text.base64.sha1.hmac.sig | 1 + test/webcrypto/text.base64.sha1.pkcs1.sig | 3 + test/webcrypto/text.base64.sha1.rsa-pss.16.sig | 3 + test/webcrypto/text.base64.sha256.ecdsa.sig | 2 + test/webcrypto/text.base64.sha256.hmac.sig | 1 + test/webcrypto/text.base64.sha256.hmac.sig.broken | 1 + test/webcrypto/text.base64.sha256.pkcs1.sig | 3 + test/webcrypto/text.base64.sha256.rsa-pss.0.sig | 3 + test/webcrypto/text.base64.sha256.rsa-pss.32.sig | 3 + test/webcrypto/verify.js | 207 + ts/index.d.ts | 1 + ts/njs_core.d.ts | 2 +- ts/njs_webcrypto.d.ts | 226 + 58 files changed, 4615 insertions(+), 16 deletions(-) diffs (truncated from 5014 to 1000 lines): diff -r 66bd2cc7fd87 -r a4c3c333c05d auto/make --- a/auto/make Mon Oct 11 17:46:24 2021 +0300 +++ b/auto/make Mon Oct 11 15:06:15 2021 +0000 @@ -75,7 +75,7 @@ cat << END >> $NJS_MAKEFILE $NJS_BUILD_DIR/njs: \\ $NJS_BUILD_DIR/libnjs.a \\ - src/njs_shell.c + src/njs_shell.c external/njs_webcrypto.h external/njs_webcrypto.c \$(NJS_LINK) -o $NJS_BUILD_DIR/njs \$(NJS_CFLAGS) \\ $NJS_LIB_AUX_CFLAGS \$(NJS_LIB_INCS) -Injs \\ src/njs_shell.c \\ @@ -159,7 +159,8 @@ njs_dep_post=`njs_gen_dep_post $njs_dep cat << END >> $NJS_MAKEFILE -$NJS_BUILD_DIR/$njs_externals_obj: $njs_src +$NJS_BUILD_DIR/$njs_externals_obj: \\ + $njs_src external/njs_webcrypto.h external/njs_webcrypto.c \$(NJS_CC) -c \$(NJS_CFLAGS) $NJS_LIB_AUX_CFLAGS \\ \$(NJS_LIB_INCS) -Injs \\ -o $NJS_BUILD_DIR/$njs_externals_obj \\ diff -r 66bd2cc7fd87 -r a4c3c333c05d auto/openssl --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/auto/openssl Mon Oct 11 15:06:15 2021 +0000 @@ 
-0,0 +1,56 @@ + +# Copyright (C) Dmitry Volyntsev +# Copyright (C) NGINX, Inc. + + +NJS_OPENSSL_LIB= +NJS_HAVE_OPENSSL=NO + + +njs_found=no + + +njs_feature="OpenSSL library" +njs_feature_name=NJS_HAVE_OPENSSL +njs_feature_run=yes +njs_feature_incs= +njs_feature_libs="-lcrypto" +njs_feature_test="#include + + int main() { + OpenSSL_add_all_algorithms(); + return 0; + }" +. auto/feature + + +if [ $njs_found = yes ]; then + njs_feature="OpenSSL HKDF" + njs_feature_name=NJS_HAVE_OPENSSL_HKDF + njs_feature_test="#include + #include + + int main(void) { + EVP_PKEY_CTX *pctx = EVP_PKEY_CTX_new_id(EVP_PKEY_HKDF, NULL); + + EVP_PKEY_CTX_set_hkdf_md(pctx, EVP_sha256()); + EVP_PKEY_CTX_free(pctx); + + return 0; + }" + . auto/feature + + njs_feature="OpenSSL EVP_MD_CTX_new()" + njs_feature_name=NJS_HAVE_OPENSSL_EVP_MD_CTX_NEW + njs_feature_test="#include + + int main(void) { + EVP_MD_CTX *ctx = EVP_MD_CTX_new(); + EVP_MD_CTX_free(ctx); + return 0; + }" + . auto/feature + + NJS_HAVE_OPENSSL=YES + NJS_OPENSSL_LIB="$njs_feature_libs" +fi diff -r 66bd2cc7fd87 -r a4c3c333c05d auto/sources --- a/auto/sources Mon Oct 11 17:46:24 2021 +0300 +++ b/auto/sources Mon Oct 11 15:06:15 2021 +0000 @@ -2,6 +2,7 @@ NJS_LIB_SRCS=" \ src/njs_diyfp.c \ src/njs_dtoa.c \ src/njs_dtoa_fixed.c \ + src/njs_str.c \ src/njs_strtod.c \ src/njs_murmur_hash.c \ src/njs_djb_hash.c \ diff -r 66bd2cc7fd87 -r a4c3c333c05d auto/summary --- a/auto/summary Mon Oct 11 17:46:24 2021 +0300 +++ b/auto/summary Mon Oct 11 15:06:15 2021 +0000 @@ -15,6 +15,10 @@ if [ $NJS_HAVE_READLINE = YES ]; then echo " + using readline library: $NJS_READLINE_LIB" fi +if [ $NJS_HAVE_OPENSSL = YES ]; then + echo " + using OpenSSL library: $NJS_OPENSSL_LIB" +fi + echo echo " njs build dir: $NJS_BUILD_DIR" diff -r 66bd2cc7fd87 -r a4c3c333c05d configure --- a/configure Mon Oct 11 17:46:24 2021 +0300 +++ b/configure Mon Oct 11 15:06:15 2021 +0000 @@ -26,12 +26,13 @@ set -u . auto/explicit_bzero . auto/pcre . auto/readline +. auto/openssl . 
auto/sources NJS_LIB_AUX_CFLAGS="$NJS_PCRE_CFLAGS" NJS_LIBS="$NJS_LIBRT" -NJS_LIB_AUX_LIBS="$NJS_PCRE_LIB" +NJS_LIB_AUX_LIBS="$NJS_PCRE_LIB $NJS_OPENSSL_LIB" . auto/make diff -r 66bd2cc7fd87 -r a4c3c333c05d external/njs_webcrypto.c --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/external/njs_webcrypto.c Mon Oct 11 15:06:15 2021 +0000 @@ -0,0 +1,2666 @@ + +/* + * Copyright (C) Dmitry Volyntsev + * Copyright (C) NGINX, Inc. + */ + + +#include +#include "njs_webcrypto.h" + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#if NJS_HAVE_OPENSSL_HKDF +#include +#endif + +#if NJS_HAVE_OPENSSL_EVP_MD_CTX_NEW +#define njs_evp_md_ctx_new() EVP_MD_CTX_new(); +#define njs_evp_md_ctx_free(_ctx) EVP_MD_CTX_free(_ctx); +#else +#define njs_evp_md_ctx_new() EVP_MD_CTX_create(); +#define njs_evp_md_ctx_free(_ctx) EVP_MD_CTX_destroy(_ctx); +#endif + + +typedef enum { + NJS_KEY_FORMAT_RAW = 1 << 1, + NJS_KEY_FORMAT_PKCS8 = 1 << 2, + NJS_KEY_FORMAT_SPKI = 1 << 3, + NJS_KEY_FORMAT_JWK = 1 << 4, + NJS_KEY_FORMAT_UNKNOWN = 1 << 5, +} njs_webcrypto_key_format_t; + + +typedef enum { + NJS_KEY_USAGE_DECRYPT = 1 << 1, + NJS_KEY_USAGE_DERIVE_BITS = 1 << 2, + NJS_KEY_USAGE_DERIVE_KEY = 1 << 3, + NJS_KEY_USAGE_ENCRYPT = 1 << 4, + NJS_KEY_USAGE_GENERATE_KEY = 1 << 5, + NJS_KEY_USAGE_SIGN = 1 << 6, + NJS_KEY_USAGE_VERIFY = 1 << 7, + NJS_KEY_USAGE_WRAP_KEY = 1 << 8, + NJS_KEY_USAGE_UNSUPPORTED = 1 << 9, + NJS_KEY_USAGE_UNWRAP_KEY = 1 << 10, +} njs_webcrypto_key_usage_t; + + +typedef enum { + NJS_ALGORITHM_RSA_OAEP, + NJS_ALGORITHM_AES_GCM, + NJS_ALGORITHM_AES_CTR, + NJS_ALGORITHM_AES_CBC, + NJS_ALGORITHM_RSASSA_PKCS1_v1_5, + NJS_ALGORITHM_RSA_PSS, + NJS_ALGORITHM_ECDSA, + NJS_ALGORITHM_ECDH, + NJS_ALGORITHM_PBKDF2, + NJS_ALGORITHM_HKDF, + NJS_ALGORITHM_HMAC, +} njs_webcrypto_alg_t; + + +typedef enum { + NJS_HASH_SHA1, + NJS_HASH_SHA256, + NJS_HASH_SHA384, + NJS_HASH_SHA512, +} njs_webcrypto_hash_t; + + +typedef enum { + NJS_CURVE_P256, + NJS_CURVE_P384, 
+ NJS_CURVE_P521, +} njs_webcrypto_curve_t; + + +typedef struct { + njs_str_t name; + uintptr_t value; +} njs_webcrypto_entry_t; + + +typedef struct { + njs_webcrypto_alg_t type; + unsigned usage; + unsigned fmt; +} njs_webcrypto_algorithm_t; + + +typedef struct { + njs_webcrypto_algorithm_t *alg; + unsigned usage; + njs_webcrypto_hash_t hash; + njs_webcrypto_curve_t curve; + + EVP_PKEY *pkey; + njs_str_t raw; +} njs_webcrypto_key_t; + + +typedef int (*EVP_PKEY_cipher_init_t)(EVP_PKEY_CTX *ctx); +typedef int (*EVP_PKEY_cipher_t)(EVP_PKEY_CTX *ctx, unsigned char *out, + size_t *outlen, const unsigned char *in, size_t inlen); + + +static njs_int_t njs_ext_cipher(njs_vm_t *vm, njs_value_t *args, + njs_uint_t nargs, njs_index_t unused); +static njs_int_t njs_cipher_pkey(njs_vm_t *vm, njs_str_t *data, + njs_webcrypto_key_t *key, njs_index_t encrypt); +static njs_int_t njs_cipher_aes_gcm(njs_vm_t *vm, njs_str_t *data, + njs_webcrypto_key_t *key, njs_value_t *options, njs_bool_t encrypt); +static njs_int_t njs_cipher_aes_ctr(njs_vm_t *vm, njs_str_t *data, + njs_webcrypto_key_t *key, njs_value_t *options, njs_bool_t encrypt); +static njs_int_t njs_cipher_aes_cbc(njs_vm_t *vm, njs_str_t *data, + njs_webcrypto_key_t *key, njs_value_t *options, njs_bool_t encrypt); +static njs_int_t njs_ext_derive(njs_vm_t *vm, njs_value_t *args, + njs_uint_t nargs, njs_index_t derive_key); +static njs_int_t njs_ext_digest(njs_vm_t *vm, njs_value_t *args, + njs_uint_t nargs, njs_index_t unused); +static njs_int_t njs_ext_export_key(njs_vm_t *vm, njs_value_t *args, + njs_uint_t nargs, njs_index_t unused); +static njs_int_t njs_ext_generate_key(njs_vm_t *vm, njs_value_t *args, + njs_uint_t nargs, njs_index_t unused); +static njs_int_t njs_ext_import_key(njs_vm_t *vm, njs_value_t *args, + njs_uint_t nargs, njs_index_t unused); +static njs_int_t njs_ext_sign(njs_vm_t *vm, njs_value_t *args, + njs_uint_t nargs, njs_index_t verify); +static njs_int_t njs_ext_unwrap_key(njs_vm_t *vm, njs_value_t 
*args, + njs_uint_t nargs, njs_index_t unused); +static njs_int_t njs_ext_wrap_key(njs_vm_t *vm, njs_value_t *args, + njs_uint_t nargs, njs_index_t unused); +static njs_int_t njs_ext_get_random_values(njs_vm_t *vm, njs_value_t *args, + njs_uint_t nargs, njs_index_t unused); + +static void njs_webcrypto_cleanup_pkey(void *data); +static njs_webcrypto_key_format_t njs_key_format(njs_vm_t *vm, + njs_value_t *value, njs_str_t *format); +static njs_int_t njs_key_usage(njs_vm_t *vm, njs_value_t *value, + unsigned *mask); +static njs_webcrypto_algorithm_t *njs_key_algorithm(njs_vm_t *vm, + njs_value_t *value); +static njs_str_t *njs_algorithm_string(njs_webcrypto_algorithm_t *algorithm); +static njs_int_t njs_algorithm_hash(njs_vm_t *vm, njs_value_t *value, + njs_webcrypto_hash_t *hash); +static const EVP_MD *njs_algorithm_hash_digest(njs_webcrypto_hash_t hash); +static njs_int_t njs_algorithm_curve(njs_vm_t *vm, njs_value_t *value, + njs_webcrypto_curve_t *curve); + +static njs_int_t njs_webcrypto_result(njs_vm_t *vm, njs_value_t *result, + njs_int_t rc); +static void njs_webcrypto_error(njs_vm_t *vm, const char *fmt, ...); + +static njs_webcrypto_entry_t njs_webcrypto_alg[] = { + +#define njs_webcrypto_algorithm(type, usage_mask, fmt_mask) \ + (uintptr_t) & (njs_webcrypto_algorithm_t) { type, usage_mask, fmt_mask } + + { + njs_str("RSA-OAEP"), + njs_webcrypto_algorithm(NJS_ALGORITHM_RSA_OAEP, + NJS_KEY_USAGE_ENCRYPT | + NJS_KEY_USAGE_DECRYPT | + NJS_KEY_USAGE_WRAP_KEY | + NJS_KEY_USAGE_UNWRAP_KEY | + NJS_KEY_USAGE_GENERATE_KEY, + NJS_KEY_FORMAT_PKCS8 | + NJS_KEY_FORMAT_SPKI) + }, + + { + njs_str("AES-GCM"), + njs_webcrypto_algorithm(NJS_ALGORITHM_AES_GCM, + NJS_KEY_USAGE_ENCRYPT | + NJS_KEY_USAGE_DECRYPT | + NJS_KEY_USAGE_WRAP_KEY | + NJS_KEY_USAGE_UNWRAP_KEY | + NJS_KEY_USAGE_GENERATE_KEY, + NJS_KEY_FORMAT_RAW) + }, + + { + njs_str("AES-CTR"), + njs_webcrypto_algorithm(NJS_ALGORITHM_AES_CTR, + NJS_KEY_USAGE_ENCRYPT | + NJS_KEY_USAGE_DECRYPT | + NJS_KEY_USAGE_WRAP_KEY | 
+ NJS_KEY_USAGE_UNWRAP_KEY | + NJS_KEY_USAGE_GENERATE_KEY, + NJS_KEY_FORMAT_RAW) + }, + + { + njs_str("AES-CBC"), + njs_webcrypto_algorithm(NJS_ALGORITHM_AES_CBC, + NJS_KEY_USAGE_ENCRYPT | + NJS_KEY_USAGE_DECRYPT | + NJS_KEY_USAGE_WRAP_KEY | + NJS_KEY_USAGE_UNWRAP_KEY | + NJS_KEY_USAGE_GENERATE_KEY, + NJS_KEY_FORMAT_RAW) + }, + + { + njs_str("RSASSA-PKCS1-v1_5"), + njs_webcrypto_algorithm(NJS_ALGORITHM_RSASSA_PKCS1_v1_5, + NJS_KEY_USAGE_SIGN | + NJS_KEY_USAGE_VERIFY | + NJS_KEY_USAGE_GENERATE_KEY, + NJS_KEY_FORMAT_PKCS8 | + NJS_KEY_FORMAT_SPKI) + }, + + { + njs_str("RSA-PSS"), + njs_webcrypto_algorithm(NJS_ALGORITHM_RSA_PSS, + NJS_KEY_USAGE_SIGN | + NJS_KEY_USAGE_VERIFY | + NJS_KEY_USAGE_GENERATE_KEY, + NJS_KEY_FORMAT_PKCS8 | + NJS_KEY_FORMAT_SPKI) + }, + + { + njs_str("ECDSA"), + njs_webcrypto_algorithm(NJS_ALGORITHM_ECDSA, + NJS_KEY_USAGE_SIGN | + NJS_KEY_USAGE_VERIFY | + NJS_KEY_USAGE_GENERATE_KEY, + NJS_KEY_FORMAT_PKCS8 | + NJS_KEY_FORMAT_SPKI) + }, + + { + njs_str("ECDH"), + njs_webcrypto_algorithm(NJS_ALGORITHM_ECDH, + NJS_KEY_USAGE_DERIVE_KEY | + NJS_KEY_USAGE_DERIVE_BITS | + NJS_KEY_USAGE_GENERATE_KEY | + NJS_KEY_USAGE_UNSUPPORTED, + NJS_KEY_FORMAT_UNKNOWN) + }, + + { + njs_str("PBKDF2"), + njs_webcrypto_algorithm(NJS_ALGORITHM_PBKDF2, + NJS_KEY_USAGE_DERIVE_KEY | + NJS_KEY_USAGE_DERIVE_BITS, + NJS_KEY_FORMAT_RAW) + }, + + { + njs_str("HKDF"), + njs_webcrypto_algorithm(NJS_ALGORITHM_HKDF, + NJS_KEY_USAGE_DERIVE_KEY | + NJS_KEY_USAGE_DERIVE_BITS, + NJS_KEY_FORMAT_RAW) + }, + + { + njs_str("HMAC"), + njs_webcrypto_algorithm(NJS_ALGORITHM_HMAC, + NJS_KEY_USAGE_GENERATE_KEY | + NJS_KEY_USAGE_SIGN | + NJS_KEY_USAGE_VERIFY, + NJS_KEY_FORMAT_RAW) + }, + + { + njs_null_str, + 0 + } +}; + + +static njs_webcrypto_entry_t njs_webcrypto_hash[] = { + { njs_str("SHA-256"), NJS_HASH_SHA256 }, + { njs_str("SHA-384"), NJS_HASH_SHA384 }, + { njs_str("SHA-512"), NJS_HASH_SHA512 }, + { njs_str("SHA-1"), NJS_HASH_SHA1 }, + { njs_null_str, 0 } +}; + + +static 
njs_webcrypto_entry_t njs_webcrypto_curve[] = { + { njs_str("P-256"), NJS_CURVE_P256 }, + { njs_str("P-384"), NJS_CURVE_P384 }, + { njs_str("P-521"), NJS_CURVE_P521 }, + { njs_null_str, 0 } +}; + + +static njs_webcrypto_entry_t njs_webcrypto_usage[] = { + { njs_str("decrypt"), NJS_KEY_USAGE_DECRYPT }, + { njs_str("deriveBits"), NJS_KEY_USAGE_DERIVE_BITS }, + { njs_str("deriveKey"), NJS_KEY_USAGE_DERIVE_KEY }, + { njs_str("encrypt"), NJS_KEY_USAGE_ENCRYPT }, + { njs_str("sign"), NJS_KEY_USAGE_SIGN }, + { njs_str("unwrapKey"), NJS_KEY_USAGE_UNWRAP_KEY }, + { njs_str("verify"), NJS_KEY_USAGE_VERIFY }, + { njs_str("wrapKey"), NJS_KEY_USAGE_WRAP_KEY }, + { njs_null_str, 0 } +}; + + +static njs_external_t njs_ext_webcrypto_crypto_key[] = { + + { + .flags = NJS_EXTERN_PROPERTY | NJS_EXTERN_SYMBOL, + .name.symbol = NJS_SYMBOL_TO_STRING_TAG, + .u.property = { + .value = "CryptoKey", + } + }, +}; + + +static njs_external_t njs_ext_subtle_webcrypto[] = { + + { + .flags = NJS_EXTERN_PROPERTY | NJS_EXTERN_SYMBOL, + .name.symbol = NJS_SYMBOL_TO_STRING_TAG, + .u.property = { + .value = "SubtleCrypto", + } + }, + + { + .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("decrypt"), + .writable = 1, + .configurable = 1, + .enumerable = 1, + .u.method = { + .native = njs_ext_cipher, + .magic8 = 0, + } + }, + + { + .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("deriveBits"), + .writable = 1, + .configurable = 1, + .enumerable = 1, + .u.method = { + .native = njs_ext_derive, + .magic8 = 0, + } + }, + + { + .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("deriveKey"), + .writable = 1, + .configurable = 1, + .enumerable = 1, + .u.method = { + .native = njs_ext_derive, + .magic8 = 1, + } + }, + + { + .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("digest"), + .writable = 1, + .configurable = 1, + .enumerable = 1, + .u.method = { + .native = njs_ext_digest, + } + }, + + { + .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("encrypt"), + .writable = 1, + .configurable 
= 1, + .enumerable = 1, + .u.method = { + .native = njs_ext_cipher, + .magic8 = 1, + } + }, + + { + .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("exportKey"), + .writable = 1, + .configurable = 1, + .enumerable = 1, + .u.method = { + .native = njs_ext_export_key, + } + }, + + { + .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("generateKey"), + .writable = 1, + .configurable = 1, + .enumerable = 1, + .u.method = { + .native = njs_ext_generate_key, + } + }, + + { + .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("importKey"), + .writable = 1, + .configurable = 1, + .enumerable = 1, + .u.method = { + .native = njs_ext_import_key, + } + }, + + { + .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("sign"), + .writable = 1, + .configurable = 1, + .enumerable = 1, + .u.method = { + .native = njs_ext_sign, + } + }, + + { + .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("unwrapKey"), + .writable = 1, + .configurable = 1, + .enumerable = 1, + .u.method = { + .native = njs_ext_unwrap_key, + } + }, + + { + .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("verify"), + .writable = 1, + .configurable = 1, + .enumerable = 1, + .u.method = { + .native = njs_ext_sign, + .magic8 = 1, + } + }, + + { + .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("wrapKey"), + .writable = 1, + .configurable = 1, + .enumerable = 1, + .u.method = { + .native = njs_ext_wrap_key, + } + }, + +}; + +static njs_external_t njs_ext_webcrypto[] = { + + { + .flags = NJS_EXTERN_PROPERTY | NJS_EXTERN_SYMBOL, + .name.symbol = NJS_SYMBOL_TO_STRING_TAG, + .u.property = { + .value = "Crypto", + } + }, + + { + .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("getRandomValues"), + .writable = 1, + .configurable = 1, + .enumerable = 1, + .u.method = { + .native = njs_ext_get_random_values, + } + }, + + { + .flags = NJS_EXTERN_OBJECT, + .name.string = njs_str("subtle"), + .enumerable = 1, + .writable = 1, + .u.object = { + .enumerable = 1, + .properties = 
njs_ext_subtle_webcrypto, + .nproperties = njs_nitems(njs_ext_subtle_webcrypto), + } + }, + +}; + + +static njs_int_t njs_webcrypto_crypto_key_proto_id; + + +static njs_int_t +njs_ext_cipher(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, + njs_index_t encrypt) +{ + unsigned mask; + njs_int_t ret; + njs_str_t data; + njs_value_t *options; + njs_webcrypto_key_t *key; + njs_webcrypto_algorithm_t *alg; + + options = njs_arg(args, nargs, 1); + alg = njs_key_algorithm(vm, options); + if (njs_slow_path(alg == NULL)) { + goto fail; + } + + key = njs_vm_external(vm, njs_webcrypto_crypto_key_proto_id, + njs_arg(args, nargs, 2)); + if (njs_slow_path(key == NULL)) { + njs_type_error(vm, "\"key\" is not a CryptoKey object"); + goto fail; + } + + mask = encrypt ? NJS_KEY_USAGE_ENCRYPT : NJS_KEY_USAGE_DECRYPT; + if (njs_slow_path(!(key->usage & mask))) { + njs_type_error(vm, "provide key does not support %s operation", + encrypt ? "encrypt" : "decrypt"); + goto fail; + } + + if (njs_slow_path(key->alg != alg)) { + njs_type_error(vm, "cannot %s using \"%V\" with \"%V\" key", + encrypt ? 
"encrypt" : "decrypt", + njs_algorithm_string(key->alg), + njs_algorithm_string(alg)); + goto fail; + } + + ret = njs_vm_value_to_bytes(vm, &data, njs_arg(args, nargs, 3)); + if (njs_slow_path(ret != NJS_OK)) { + goto fail; + } + + switch (alg->type) { + case NJS_ALGORITHM_RSA_OAEP: + ret = njs_cipher_pkey(vm, &data, key, encrypt); + break; + + case NJS_ALGORITHM_AES_GCM: + ret = njs_cipher_aes_gcm(vm, &data, key, options, encrypt); + break; + + case NJS_ALGORITHM_AES_CTR: + ret = njs_cipher_aes_ctr(vm, &data, key, options, encrypt); + break; + + case NJS_ALGORITHM_AES_CBC: + default: + ret = njs_cipher_aes_cbc(vm, &data, key, options, encrypt); + } + + return njs_webcrypto_result(vm, njs_vm_retval(vm), ret); + +fail: + + return njs_webcrypto_result(vm, njs_vm_retval(vm), NJS_ERROR); +} + + +static njs_int_t +njs_cipher_pkey(njs_vm_t *vm, njs_str_t *data, njs_webcrypto_key_t *key, + njs_index_t encrypt) +{ + u_char *dst; + size_t outlen; + njs_int_t ret; + const EVP_MD *md; + EVP_PKEY_CTX *ctx; + EVP_PKEY_cipher_t cipher; + EVP_PKEY_cipher_init_t init; + + ctx = EVP_PKEY_CTX_new(key->pkey, NULL); + if (njs_slow_path(ctx == NULL)) { + njs_webcrypto_error(vm, "EVP_PKEY_CTX_new() failed"); + return NJS_ERROR; + } + + if (encrypt) { + init = EVP_PKEY_encrypt_init; + cipher = EVP_PKEY_encrypt; + + } else { + init = EVP_PKEY_decrypt_init; + cipher = EVP_PKEY_decrypt; + } + + ret = init(ctx); + if (njs_slow_path(ret <= 0)) { + njs_webcrypto_error(vm, "EVP_PKEY_%scrypt_init() failed", + encrypt ? "en" : "de"); + ret = NJS_ERROR; + goto fail; + } + + md = njs_algorithm_hash_digest(key->hash); + + EVP_PKEY_CTX_set_rsa_padding(ctx, RSA_PKCS1_OAEP_PADDING); + EVP_PKEY_CTX_set_rsa_oaep_md(ctx, md); + EVP_PKEY_CTX_set_rsa_mgf1_md(ctx, md); + + ret = cipher(ctx, NULL, &outlen, data->start, data->length); + if (njs_slow_path(ret <= 0)) { + njs_webcrypto_error(vm, "EVP_PKEY_%scrypt() failed", + encrypt ? 
"en" : "de"); + ret = NJS_ERROR; + goto fail; + } + + dst = njs_mp_alloc(njs_vm_memory_pool(vm), outlen); + if (njs_slow_path(dst == NULL)) { + njs_memory_error(vm); + ret = NJS_ERROR; + goto fail; + } + + ret = cipher(ctx, dst, &outlen, data->start, data->length); + if (njs_slow_path(ret <= 0)) { + njs_webcrypto_error(vm, "EVP_PKEY_%scrypt() failed", + encrypt ? "en" : "de"); + ret = NJS_ERROR; + goto fail; + } + + ret = njs_vm_value_array_buffer_set(vm, njs_vm_retval(vm), dst, outlen); + +fail: + + EVP_PKEY_CTX_free(ctx); + + return ret; +} + + +static njs_int_t +njs_cipher_aes_gcm(njs_vm_t *vm, njs_str_t *data, njs_webcrypto_key_t *key, + njs_value_t *options, njs_bool_t encrypt) +{ + int len, outlen, dstlen; + u_char *dst, *p; + int64_t taglen; + njs_str_t iv, aad; + njs_int_t ret; + njs_value_t value; + EVP_CIPHER_CTX *ctx; + const EVP_CIPHER *cipher; + + static const njs_value_t string_iv = njs_string("iv"); + static const njs_value_t string_ad = njs_string("additionalData"); + static const njs_value_t string_tl = njs_string("tagLength"); + + switch (key->raw.length) { + case 16: + cipher = EVP_aes_128_gcm(); + break; + + case 32: + cipher = EVP_aes_256_gcm(); + break; + + default: + njs_type_error(vm, "AES-GCM Invalid key length"); + return NJS_ERROR; + } + + ret = njs_value_property(vm, options, njs_value_arg(&string_iv), &value); + if (njs_slow_path(ret != NJS_OK)) { + if (ret == NJS_DECLINED) { + njs_type_error(vm, "AES-GCM algorithm.iv is not provided"); + } + + return NJS_ERROR; + } + + ret = njs_vm_value_to_bytes(vm, &iv, &value); + if (njs_slow_path(ret != NJS_OK)) { + return NJS_ERROR; + } + + taglen = 128; + + ret = njs_value_property(vm, options, njs_value_arg(&string_tl), &value); + if (njs_slow_path(ret == NJS_ERROR)) { + return NJS_ERROR; + } + + if (njs_is_defined(&value)) { + ret = njs_value_to_integer(vm, &value, &taglen); + if (njs_slow_path(ret != NJS_OK)) { + return NJS_ERROR; + } + } + + if (njs_slow_path(taglen != 32 + && taglen != 64 + 
&& taglen != 96 + && taglen != 104 + && taglen != 112 + && taglen != 120 + && taglen != 128)) + { + njs_type_error(vm, "AES-GCM Invalid tagLength"); + return NJS_ERROR; + } + + taglen /= 8; + + if (njs_slow_path(!encrypt && (data->length < (size_t) taglen))) { + njs_type_error(vm, "AES-GCM data is too short"); + return NJS_ERROR; + } + + ctx = EVP_CIPHER_CTX_new(); + if (njs_slow_path(ctx == NULL)) { + njs_webcrypto_error(vm, "EVP_CIPHER_CTX_new() failed"); + return NJS_ERROR; + } + + ret = EVP_CipherInit_ex(ctx, cipher, NULL, NULL, NULL, encrypt); + if (njs_slow_path(ret <= 0)) { + njs_webcrypto_error(vm, "EVP_%sInit_ex() failed", + encrypt ? "Encrypt" : "Decrypt"); + ret = NJS_ERROR; + goto fail; + } + + ret = EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_SET_IVLEN, iv.length, NULL); + if (njs_slow_path(ret <= 0)) { + njs_webcrypto_error(vm, "EVP_CIPHER_CTX_ctrl() failed"); + ret = NJS_ERROR; + goto fail; + } + + ret = EVP_CipherInit_ex(ctx, NULL, NULL, key->raw.start, iv.start, + encrypt); + if (njs_slow_path(ret <= 0)) { + njs_webcrypto_error(vm, "EVP_%sInit_ex() failed", + encrypt ? "Encrypt" : "Decrypt"); + ret = NJS_ERROR; + goto fail; + } + + if (!encrypt) { + ret = EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_SET_TAG, taglen, + &data->start[data->length - taglen]); + if (njs_slow_path(ret <= 0)) { + njs_webcrypto_error(vm, "EVP_CIPHER_CTX_ctrl() failed"); + ret = NJS_ERROR; + goto fail; + } + } + + ret = njs_value_property(vm, options, njs_value_arg(&string_ad), &value); + if (njs_slow_path(ret == NJS_ERROR)) { + return NJS_ERROR; + } + + aad.length = 0; + + if (njs_is_defined(&value)) { + ret = njs_vm_value_to_bytes(vm, &aad, &value); + if (njs_slow_path(ret != NJS_OK)) { + return NJS_ERROR; + } + } + + if (aad.length != 0) { + ret = EVP_CipherUpdate(ctx, NULL, &outlen, aad.start, aad.length); + if (njs_slow_path(ret <= 0)) { + njs_webcrypto_error(vm, "EVP_%sUpdate() failed", + encrypt ? 
"Encrypt" : "Decrypt"); + ret = NJS_ERROR; + goto fail; + } + } + + dstlen = data->length + EVP_CIPHER_CTX_block_size(ctx) + taglen; + dst = njs_mp_alloc(njs_vm_memory_pool(vm), dstlen); + if (njs_slow_path(dst == NULL)) { + njs_memory_error(vm); + return NJS_ERROR; + } + + ret = EVP_CipherUpdate(ctx, dst, &outlen, data->start, + data->length - (encrypt ? 0 : taglen)); + if (njs_slow_path(ret <= 0)) { + njs_webcrypto_error(vm, "EVP_%sUpdate() failed", + encrypt ? "Encrypt" : "Decrypt"); + ret = NJS_ERROR; + goto fail; + } + + p = &dst[outlen]; + len = EVP_CIPHER_CTX_block_size(ctx); + + ret = EVP_CipherFinal_ex(ctx, p, &len); + if (njs_slow_path(ret <= 0)) { + njs_webcrypto_error(vm, "EVP_%sFinal_ex() failed", + encrypt ? "Encrypt" : "Decrypt"); + ret = NJS_ERROR; + goto fail; + } + + outlen += len; + p += len; + + if (encrypt) { + ret = EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, taglen, p); + if (njs_slow_path(ret <= 0)) { + njs_webcrypto_error(vm, "EVP_CIPHER_CTX_ctrl() failed"); + ret = NJS_ERROR; + goto fail; + } + + outlen += taglen; + } + + ret = njs_vm_value_array_buffer_set(vm, njs_vm_retval(vm), dst, outlen); + +fail: + + EVP_CIPHER_CTX_free(ctx); + + return ret; +} + + +static njs_int_t +njs_cipher_aes_ctr128(njs_vm_t *vm, const EVP_CIPHER *cipher, u_char *key, + u_char *data, size_t dlen, u_char *counter, u_char *dst, int *olen, + njs_bool_t encrypt) +{ + int len, outlen; + njs_int_t ret; + EVP_CIPHER_CTX *ctx; + + ctx = EVP_CIPHER_CTX_new(); + if (njs_slow_path(ctx == NULL)) { + njs_webcrypto_error(vm, "EVP_CIPHER_CTX_new() failed"); + return NJS_ERROR; + } + + ret = EVP_CipherInit_ex(ctx, cipher, NULL, key, counter, encrypt); + if (njs_slow_path(ret <= 0)) { + njs_webcrypto_error(vm, "EVP_%sInit_ex() failed", + encrypt ? 
"Encrypt" : "Decrypt"); + ret = NJS_ERROR; From mdounin at mdounin.ru Mon Oct 11 18:49:54 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 11 Oct 2021 21:49:54 +0300 Subject: Extra data from upstream and keepalive connections In-Reply-To: References: Message-ID: Hello! On Fri, Oct 08, 2021 at 11:19:29AM -0700, Awdhesh Mathpal wrote: > Hello, > > Proxy module may not disable keepalive connection when upstream sends extra data with Content-Length:0 response header. > > This happens because of an incorrect assumption on the state of the upstream->keepalive flag at https://github.com/nginx/nginx/blame/master/src/http/modules/ngx_http_proxy_module.c#L2336 > > When response content-length is 0, then upstream->keepalive may get initialized to 1 depending on the Connection response header. https://github.com/nginx/nginx/blob/master/src/http/modules/ngx_http_proxy_module.c#L2002 > > To trigger this issue, nginx must be configured as follows: > - proxy buffering is disabled > - responses are processed by ngx_http_proxy_non_buffered_copy_filter (no nginx caching) > - The upstream keepalive directive is enabled > - The content-length response header from upstream is 0 > - Upstream sends a body/extra data > > Under these conditions, the connection will be saved for next request. > > Here is a patch that addresses this: Thanks for the patch, see below for comments. > > # HG changeset patch > # User Awdhesh Mathpal > # Date 1633659791 25200 > # Thu Oct 07 19:23:11 2021 -0700 > # Node ID ccf2ccd9724f7cff4363e81545b1af97aa881415 > # Parent ae7c767aa491fa55d3168dfc028a22f43ac8cf89 > proxy: Disable keepalive on extra data > > When an upstream sends Content-Length:0, upstream->keepalive > is initialized eagerly on the basis of Connection header. This > can lead to keepalive being enabled on the connection. If in such > a scenario upstream sends extra data, then the connection should > not be reused. 
The "Content-Length: 0" case is not the only possible scenario when this
may happen.  A more accurate wording would be "a response without body".

It might also make sense to refer to the similar code in
ngx_http_proxy_copy_filter(), as well as the 83c4622053b0
(http://hg.nginx.org/nginx/rev/83c4622053b0) where this was missed.

Suggested commit log, also fixing minor style issues:

: Proxy: disabled keepalive on extra data in non-buffered mode.
:
: The u->keepalive flag is initialized early if the response has no body
: (or an empty body), and needs to be reset if there are any extra data,
: similarly to how it is done in ngx_http_proxy_copy_filter().  Missed
: in 83c4622053b0.

>
> diff -r ae7c767aa491 -r ccf2ccd9724f src/http/modules/ngx_http_proxy_module.c
> --- a/src/http/modules/ngx_http_proxy_module.c Wed Oct 06 18:01:42 2021 +0300
> +++ b/src/http/modules/ngx_http_proxy_module.c Thu Oct 07 19:23:11 2021 -0700
> @@ -2337,6 +2337,7 @@
>          ngx_log_error(NGX_LOG_WARN, r->connection->log, 0,
>                        "upstream sent more data than specified in "
>                        "\"Content-Length\" header");
> +        u->keepalive = 0;
>          return NGX_OK;
>      }
>

This part looks fine.

> @@ -2370,7 +2371,7 @@
>              ngx_log_error(NGX_LOG_WARN, r->connection->log, 0,
>                            "upstream sent more data than specified in "
>                            "\"Content-Length\" header");
> -
> +            u->keepalive = 0;
>              cl->buf->last = cl->buf->pos + u->length;
>              u->length = 0;
>

But this one shouldn't be needed, as this part cannot be reached
with u->keepalive set.  If you think it can, please elaborate.

Updated patch with the above commit-log changes and unneeded part
removed:

# HG changeset patch
# User Awdhesh Mathpal
# Date 1633659791 25200
#      Thu Oct 07 19:23:11 2021 -0700
# Node ID 055b2a8471171dfa16a5696524d6f740b213e660
# Parent  ae7c767aa491fa55d3168dfc028a22f43ac8cf89
Proxy: disabled keepalive on extra data in non-buffered mode.
The u->keepalive flag is initialized early if the response has no body
(or an empty body), and needs to be reset if there are any extra data,
similarly to how it is done in ngx_http_proxy_copy_filter().  Missed
in 83c4622053b0.

diff --git a/src/http/modules/ngx_http_proxy_module.c b/src/http/modules/ngx_http_proxy_module.c
--- a/src/http/modules/ngx_http_proxy_module.c
+++ b/src/http/modules/ngx_http_proxy_module.c
@@ -2337,6 +2337,7 @@ ngx_http_proxy_non_buffered_copy_filter(
         ngx_log_error(NGX_LOG_WARN, r->connection->log, 0,
                       "upstream sent more data than specified in "
                       "\"Content-Length\" header");
+        u->keepalive = 0;
         return NGX_OK;
     }
 

Please take a look.

-- 
Maxim Dounin
http://mdounin.ru/

From mdounin at mdounin.ru Mon Oct 11 18:53:33 2021
From: mdounin at mdounin.ru (=?utf-8?q?Maxim_Dounin?=)
Date: Mon, 11 Oct 2021 21:53:33 +0300
Subject: [PATCH] Removed CLOCK_MONOTONIC_COARSE support
Message-ID: <3217b92006f8807d1613.1633978413@vm-bsd.mdounin.ru>

# HG changeset patch
# User Maxim Dounin
# Date 1633978301 -10800
#      Mon Oct 11 21:51:41 2021 +0300
# Node ID 3217b92006f8807d16134246a064baab64fa7b32
# Parent  ae7c767aa491fa55d3168dfc028a22f43ac8cf89
Removed CLOCK_MONOTONIC_COARSE support.

While clock_gettime(CLOCK_MONOTONIC_COARSE) is faster than
clock_gettime(CLOCK_MONOTONIC), the latter is fast enough on Linux for
practical usage, and the difference is negligible compared to other costs
at each event loop iteration.  On the other hand, CLOCK_MONOTONIC_COARSE
causes various issues with typical CONFIG_HZ=250, notably very inaccurate
limit_rate handling in some edge cases (ticket #1678) and negative
difference between $request_time and $upstream_response_time
(ticket #1965).
diff --git a/src/core/ngx_times.c b/src/core/ngx_times.c
--- a/src/core/ngx_times.c
+++ b/src/core/ngx_times.c
@@ -200,10 +200,6 @@ ngx_monotonic_time(time_t sec, ngx_uint_
 
 #if defined(CLOCK_MONOTONIC_FAST)
     clock_gettime(CLOCK_MONOTONIC_FAST, &ts);
-
-#elif defined(CLOCK_MONOTONIC_COARSE)
-    clock_gettime(CLOCK_MONOTONIC_COARSE, &ts);
-
 #else
     clock_gettime(CLOCK_MONOTONIC, &ts);
 #endif

From mdounin at mdounin.ru Mon Oct 11 18:58:20 2021
From: mdounin at mdounin.ru (=?utf-8?q?Maxim_Dounin?=)
Date: Mon, 11 Oct 2021 21:58:20 +0300
Subject: [PATCH 1 of 4] Switched to using posted next events after sendfile_max_chunk
In-Reply-To:
References:
Message-ID:

# HG changeset patch
# User Maxim Dounin
# Date 1633978533 -10800
#      Mon Oct 11 21:55:33 2021 +0300
# Node ID d175cd09ac9d2bab7f7226eac3bfce196a296cc0
# Parent  ae7c767aa491fa55d3168dfc028a22f43ac8cf89
Switched to using posted next events after sendfile_max_chunk.

Previously, 1 millisecond delay was used instead.  In certain edge cases
this might result in noticeable performance degradation though, notably on
Linux with typical CONFIG_HZ=250 (so 1ms delay becomes 4ms),
sendfile_max_chunk 2m, and link speed above 2.5 Gbps.

Using posted next events removes the artificial delay and makes processing
fast in all cases.
diff --git a/src/http/ngx_http_write_filter_module.c b/src/http/ngx_http_write_filter_module.c
--- a/src/http/ngx_http_write_filter_module.c
+++ b/src/http/ngx_http_write_filter_module.c
@@ -331,8 +331,7 @@ ngx_http_write_filter(ngx_http_request_t
         && c->write->ready
         && c->sent - sent >= limit - (off_t) (2 * ngx_pagesize))
     {
-        c->write->delayed = 1;
-        ngx_add_timer(c->write, 1);
+        ngx_post_event(c->write, &ngx_posted_next_events);
     }
 
     for (cl = r->out; cl && cl != chain; /* void */) {

From mdounin at mdounin.ru Mon Oct 11 18:58:21 2021
From: mdounin at mdounin.ru (=?utf-8?q?Maxim_Dounin?=)
Date: Mon, 11 Oct 2021 21:58:21 +0300
Subject: [PATCH 2 of 4] Simplified sendfile_max_chunk handling
In-Reply-To:
References:
Message-ID: <489323e194e4c3b1a793.1633978701@vm-bsd.mdounin.ru>

# HG changeset patch
# User Maxim Dounin
# Date 1633978587 -10800
#      Mon Oct 11 21:56:27 2021 +0300
# Node ID 489323e194e4c3b1a7937c51bd4e1671c70f52f8
# Parent  d175cd09ac9d2bab7f7226eac3bfce196a296cc0
Simplified sendfile_max_chunk handling.

Previously, it was checked that sendfile_max_chunk was enabled and almost
whole sendfile_max_chunk was sent (see e67ef50c3176), to avoid delaying
connections where sendfile_max_chunk wasn't reached (for example, when
sending responses smaller than sendfile_max_chunk).  Now we instead check
if there are unsent data, and the connection is still ready for writing.
Additionally we also check c->write->delayed to ignore connections already
delayed by limit_rate.

This approach is believed to be more robust, and correctly handles not
only sendfile_max_chunk, but also internal limits of c->send_chain(),
such as sendfile() maximum supported length (ticket #1870).
diff --git a/src/http/ngx_http_write_filter_module.c b/src/http/ngx_http_write_filter_module.c
--- a/src/http/ngx_http_write_filter_module.c
+++ b/src/http/ngx_http_write_filter_module.c
@@ -321,16 +321,12 @@ ngx_http_write_filter(ngx_http_request_t
         delay = (ngx_msec_t) ((nsent - sent) * 1000 / r->limit_rate);
 
         if (delay > 0) {
-            limit = 0;
             c->write->delayed = 1;
             ngx_add_timer(c->write, delay);
         }
     }
 
-    if (limit
-        && c->write->ready
-        && c->sent - sent >= limit - (off_t) (2 * ngx_pagesize))
-    {
+    if (chain && c->write->ready && !c->write->delayed) {
         ngx_post_event(c->write, &ngx_posted_next_events);
     }
 

From mdounin at mdounin.ru Mon Oct 11 18:58:19 2021
From: mdounin at mdounin.ru (=?utf-8?q?Maxim_Dounin?=)
Date: Mon, 11 Oct 2021 21:58:19 +0300
Subject: [PATCH 0 of 4] sendfile_max_chunk series
Message-ID:

Hello!

Here is a patch series to improve sendfile_max_chunk support and use it
by default.  Mostly inspired by KTLS / SSL_sendfile() upcoming changes.

-- 
Maxim Dounin

From mdounin at mdounin.ru Mon Oct 11 18:58:22 2021
From: mdounin at mdounin.ru (=?utf-8?q?Maxim_Dounin?=)
Date: Mon, 11 Oct 2021 21:58:22 +0300
Subject: [PATCH 3 of 4] Upstream: sendfile_max_chunk support
In-Reply-To:
References:
Message-ID:

# HG changeset patch
# User Maxim Dounin
# Date 1633978615 -10800
#      Mon Oct 11 21:56:55 2021 +0300
# Node ID c7ef6ce9455b01ee1fdcfd7288c4ac5b3ef0de41
# Parent  489323e194e4c3b1a7937c51bd4e1671c70f52f8
Upstream: sendfile_max_chunk support.

Previously, connections to upstream servers used sendfile() if it was
enabled, but never honored sendfile_max_chunk.  This might result in
worker monopolization for a long time if large request bodies are
allowed.
diff --git a/src/core/ngx_output_chain.c b/src/core/ngx_output_chain.c
--- a/src/core/ngx_output_chain.c
+++ b/src/core/ngx_output_chain.c
@@ -803,6 +803,10 @@ ngx_chain_writer(void *data, ngx_chain_t
         return NGX_ERROR;
     }
 
+    if (chain && c->write->ready) {
+        ngx_post_event(c->write, &ngx_posted_next_events);
+    }
+
     for (cl = ctx->out; cl && cl != chain; /* void */) {
         ln = cl;
         cl = cl->next;

diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c
+++ b/src/http/ngx_http_upstream.c
@@ -1511,8 +1511,9 @@ ngx_http_upstream_check_broken_connectio
 static void
 ngx_http_upstream_connect(ngx_http_request_t *r, ngx_http_upstream_t *u)
 {
-    ngx_int_t          rc;
-    ngx_connection_t  *c;
+    ngx_int_t                  rc;
+    ngx_connection_t          *c;
+    ngx_http_core_loc_conf_t  *clcf;
 
     r->connection->log->action = "connecting to upstream";
 
@@ -1599,10 +1600,12 @@ ngx_http_upstream_connect(ngx_http_reque
 
     /* init or reinit the ngx_output_chain() and ngx_chain_writer() contexts */
 
+    clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module);
+
     u->writer.out = NULL;
     u->writer.last = &u->writer.out;
     u->writer.connection = c;
-    u->writer.limit = 0;
+    u->writer.limit = clcf->sendfile_max_chunk;
 
     if (u->request_sent) {
         if (ngx_http_upstream_reinit(r, u) != NGX_OK) {

From mdounin at mdounin.ru Mon Oct 11 18:58:23 2021
From: mdounin at mdounin.ru (=?utf-8?q?Maxim_Dounin?=)
Date: Mon, 11 Oct 2021 21:58:23 +0300
Subject: [PATCH 4 of 4] Changed default value of sendfile_max_chunk to 2m
In-Reply-To:
References:
Message-ID:

# HG changeset patch
# User Maxim Dounin
# Date 1633978667 -10800
#      Mon Oct 11 21:57:47 2021 +0300
# Node ID a6426f166fa41d23040e5b3aefb2d6340c10a53c
# Parent  c7ef6ce9455b01ee1fdcfd7288c4ac5b3ef0de41
Changed default value of sendfile_max_chunk to 2m.

The "sendfile_max_chunk" directive is important to prevent worker
monopolization by fast connections.
The 2m value implies a maximum delay of 200ms with 100 Mbps links, 20ms with 1 Gbps links, and 2ms on 10 Gbps links. It also seems to be a good value for disks. diff --git a/src/http/ngx_http_core_module.c b/src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c +++ b/src/http/ngx_http_core_module.c @@ -3720,7 +3720,7 @@ ngx_http_core_merge_loc_conf(ngx_conf_t ngx_conf_merge_value(conf->internal, prev->internal, 0); ngx_conf_merge_value(conf->sendfile, prev->sendfile, 0); ngx_conf_merge_size_value(conf->sendfile_max_chunk, - prev->sendfile_max_chunk, 0); + prev->sendfile_max_chunk, 2 * 1024 * 1024); ngx_conf_merge_size_value(conf->subrequest_output_buffer_size, prev->subrequest_output_buffer_size, (size_t) ngx_pagesize); From mathpal_n at fastmail.com Tue Oct 12 06:09:46 2021 From: mathpal_n at fastmail.com (Awdhesh Mathpal) Date: Mon, 11 Oct 2021 23:09:46 -0700 Subject: Extra data from upstream and keepalive connections In-Reply-To: References: Message-ID: <1724b523-2727-4acc-86c3-ce24551ed6f9@www.fastmail.com> Yes, Content-Length:0 is not the only case. The updated changeset looks good to me. > But this one shouldn't be needed, as this part cannot be reached > with u->keepalive set. If you think it can, please elaborate. Yes, that part cannot happen currently, as upstream->keepalive is not yet initialized. I added it to be explicit about the intent and to avoid any case where the condition of upstream->keepalive not being set becomes false in the future. Awdhesh ----- Original message ----- From: Maxim Dounin To: nginx-devel at nginx.org Subject: Re: Extra data from upstream and keepalive connections Date: Monday, October 11, 2021 11:49 Hello! On Fri, Oct 08, 2021 at 11:19:29AM -0700, Awdhesh Mathpal wrote: > Hello, > > The proxy module may not disable the keepalive connection when the upstream sends extra data with a Content-Length:0 response header.
> > This happens because of an incorrect assumption on the state of the upstream->keepalive flag at https://github.com/nginx/nginx/blame/master/src/http/modules/ngx_http_proxy_module.c#L2336 > > When the response content-length is 0, upstream->keepalive may get initialized to 1 depending on the Connection response header. https://github.com/nginx/nginx/blob/master/src/http/modules/ngx_http_proxy_module.c#L2002 > > To trigger this issue, nginx must be configured as follows: > - proxy buffering is disabled > - responses are processed by ngx_http_proxy_non_buffered_copy_filter (no nginx caching) > - the upstream keepalive directive is enabled > - the content-length response header from upstream is 0 > - the upstream sends a body/extra data > > Under these conditions, the connection will be saved for the next request. > > Here is a patch that addresses this: Thanks for the patch, see below for comments. > > # HG changeset patch > # User Awdhesh Mathpal > # Date 1633659791 25200 > # Thu Oct 07 19:23:11 2021 -0700 > # Node ID ccf2ccd9724f7cff4363e81545b1af97aa881415 > # Parent ae7c767aa491fa55d3168dfc028a22f43ac8cf89 > proxy: Disable keepalive on extra data > > When an upstream sends Content-Length:0, upstream->keepalive > is initialized eagerly on the basis of the Connection header. This > can lead to keepalive being enabled on the connection. If in such > a scenario the upstream sends extra data, then the connection should > not be reused. The "Content-Length: 0" case is not the only possible scenario when this may happen. It would be more accurate to say "a response without a body". It might also make sense to refer to the similar code in ngx_http_proxy_copy_filter(), as well as changeset 83c4622053b0 (http://hg.nginx.org/nginx/rev/83c4622053b0) where this was missed. Suggested commit log, also fixing minor style issues: : Proxy: disabled keepalive on extra data in non-buffered mode.
: : The u->keepalive flag is initialized early if the response has no body : (or an empty body), and needs to be reset if there are any extra data, : similarly to how it is done in ngx_http_proxy_copy_filter(). Missed : in 83c4622053b0. > > diff -r ae7c767aa491 -r ccf2ccd9724f src/http/modules/ngx_http_proxy_module.c > --- a/src/http/modules/ngx_http_proxy_module.c Wed Oct 06 18:01:42 2021 +0300 > +++ b/src/http/modules/ngx_http_proxy_module.c Thu Oct 07 19:23:11 2021 -0700 > @@ -2337,6 +2337,7 @@ > ngx_log_error(NGX_LOG_WARN, r->connection->log, 0, > "upstream sent more data than specified in " > "\"Content-Length\" header"); > + u->keepalive = 0; > return NGX_OK; > } > This part looks fine. > @@ -2370,7 +2371,7 @@ > ngx_log_error(NGX_LOG_WARN, r->connection->log, 0, > "upstream sent more data than specified in " > "\"Content-Length\" header"); > - > + u->keepalive = 0; > cl->buf->last = cl->buf->pos + u->length; > u->length = 0; > > But this one shouldn't be needed, as this part cannot be reached with u->keepalive set. If you think it can, please elaborate. Updated patch with the above commit-log changes and unneeded part removed: # HG changeset patch # User Awdhesh Mathpal # Date 1633659791 25200 # Thu Oct 07 19:23:11 2021 -0700 # Node ID 055b2a8471171dfa16a5696524d6f740b213e660 # Parent ae7c767aa491fa55d3168dfc028a22f43ac8cf89 Proxy: disabled keepalive on extra data in non-buffered mode. The u->keepalive flag is initialized early if the response has no body (or an empty body), and needs to be reset if there are any extra data, similarly to how it is done in ngx_http_proxy_copy_filter(). Missed in 83c4622053b0. 
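The rule the patch enforces — any body bytes beyond the declared Content-Length make the upstream connection unsafe to cache for reuse — can be modeled in isolation (structure and names below are mine, greatly simplified from the real filter):

```c
#include <stdint.h>

typedef struct {
    int       keepalive;  /* connection may be cached for reuse */
    uint64_t  length;     /* body bytes still expected */
} upstream_t;

/* Feed "n" received body bytes; data beyond the declared
 * Content-Length disqualifies the connection from keepalive. */
static void body_bytes(upstream_t *u, uint64_t n)
{
    if (n > u->length) {
        u->keepalive = 0;   /* upstream sent more than specified */
        u->length = 0;
        return;
    }

    u->length -= n;
}
```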
diff --git a/src/http/modules/ngx_http_proxy_module.c b/src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c +++ b/src/http/modules/ngx_http_proxy_module.c @@ -2337,6 +2337,7 @@ ngx_http_proxy_non_buffered_copy_filter( ngx_log_error(NGX_LOG_WARN, r->connection->log, 0, "upstream sent more data than specified in " "\"Content-Length\" header"); + u->keepalive = 0; return NGX_OK; } Please take a look. -- Maxim Dounin http://mdounin.ru/ _______________________________________________ nginx-devel mailing list nginx-devel at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Awdhesh Mathpal From sunzhiyong3210 at gmail.com Tue Oct 12 07:41:22 2021 From: sunzhiyong3210 at gmail.com (sun edward) Date: Tue, 12 Oct 2021 15:41:22 +0800 Subject: performance is affected after merge OCSP changeset Message-ID: Hi, There is a changeset fe919fd63b0b "client certificate validation with OCSP"; after merging this changeset, the performance seems not as good as before: the avg response time increased by about 50~60ms. Is there a way to optimize this problem? thanks & regards From pluknet at nginx.com Tue Oct 12 11:31:35 2021 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 12 Oct 2021 14:31:35 +0300 Subject: performance is affected after merge OCSP changeset In-Reply-To: References: Message-ID: > On 12 Oct 2021, at 10:41, sun edward wrote: > > Hi, > There is a changeset fe919fd63b0b "client certificate validation with OCSP"; after merging this changeset, the performance seems not as good as before: the avg response time increased by about 50~60ms. Is there a way to optimize this problem? > Are you referring to processing 0-RTT HTTP/3 requests? Anyway, please try this change and report back.
# HG changeset patch # User Sergey Kandaurov # Date 1634038108 -10800 # Tue Oct 12 14:28:28 2021 +0300 # Branch quic # Node ID af4bd86814fdd0a2da3f7b8a965c41923ebeedd5 # Parent 9d47948842a3fd1c658a9676e638ef66207ffdcd QUIC: speeding up processing 0-RTT. After fe919fd63b0b, processing 0-RTT was postponed until after handshake completion (typically seen as 2-RTT), with both ssl_ocsp on and off. This change allows starting OCSP checks with reused SSL handshakes, which eliminates one additional RTT and allows 0-RTT to be processed as expected. diff --git a/src/event/quic/ngx_event_quic_ssl.c b/src/event/quic/ngx_event_quic_ssl.c --- a/src/event/quic/ngx_event_quic_ssl.c +++ b/src/event/quic/ngx_event_quic_ssl.c @@ -410,6 +410,10 @@ ngx_quic_crypto_input(ngx_connection_t * return NGX_ERROR; } + if (SSL_session_reused(c->ssl->connection)) { + goto ocsp; + } + return NGX_OK; } @@ -463,6 +467,7 @@ ngx_quic_crypto_input(ngx_connection_t * return NGX_ERROR; } +ocsp: rc = ngx_ssl_ocsp_validate(c); if (rc == NGX_ERROR) { -- Sergey Kandaurov From vl at nginx.com Tue Oct 12 12:39:38 2021 From: vl at nginx.com (Vladimir Homutov) Date: Tue, 12 Oct 2021 15:39:38 +0300 Subject: [PATCH 4 of 5] QUIC: traffic-based flood detection In-Reply-To: References: Message-ID: On Thu, Oct 07, 2021 at 02:36:17PM +0300, Roman Arutyunyan wrote: > # HG changeset patch > # User Roman Arutyunyan > # Date 1633602816 -10800 > # Thu Oct 07 13:33:36 2021 +0300 > # Branch quic > # Node ID e20f00b8ac9005621993ea19375b1646c9182e7b > # Parent 31561ac584b74d29af9a442afca47821a98217b2 > QUIC: traffic-based flood detection. > > With this patch, all traffic over a QUIC connection is compared to traffic > over QUIC streams. As long as total traffic is many times larger than stream > traffic, we consider this to be a flood.
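The condition this patch introduces (visible in the diff quoted below) reduces to a simple ratio test. Here it is as a standalone function, with the connection counters flattened into plain parameters; the factor of 8 and the 1048576-byte allowance are taken directly from the patch, the function name is mine:

```c
#include <stdint.h>

/* The connection is flagged as a flood when total connection traffic
 * exceeds roughly 8x the traffic attributable to streams, plus a
 * fixed 1 MB allowance. */
static int quic_flood(uint64_t sent, uint64_t received,
                      uint64_t streams_sent, uint64_t streams_recv)
{
    return (sent + received) / 8 > streams_sent + streams_recv + 1048576;
}
```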
> > diff --git a/src/event/quic/ngx_event_quic.c b/src/event/quic/ngx_event_quic.c > --- a/src/event/quic/ngx_event_quic.c > +++ b/src/event/quic/ngx_event_quic.c > @@ -662,13 +662,17 @@ ngx_quic_close_timer_handler(ngx_event_t > static ngx_int_t > ngx_quic_input(ngx_connection_t *c, ngx_buf_t *b, ngx_quic_conf_t *conf) > { > - u_char *p; > - ngx_int_t rc; > - ngx_uint_t good; > - ngx_quic_header_t pkt; > + size_t size; > + u_char *p; > + ngx_int_t rc; > + ngx_uint_t good; > + ngx_quic_header_t pkt; > + ngx_quic_connection_t *qc; > > good = 0; > > + size = b->last - b->pos; > + > p = b->pos; > > while (p < b->last) { > @@ -701,7 +705,8 @@ ngx_quic_input(ngx_connection_t *c, ngx_ > > if (rc == NGX_DONE) { > /* stop further processing */ > - return NGX_DECLINED; > + good = 0; > + break; > } this chunk looks unnecessary: we will test 'good' after the loop and return NGX_DECLINED anyway in this case (good = 0). > > if (rc == NGX_OK) { > @@ -733,7 +738,27 @@ ngx_quic_input(ngx_connection_t *c, ngx_ > p = b->pos; > } > > - return good ? 
NGX_OK : NGX_DECLINED; > + if (!good) { > + return NGX_DECLINED; > + } > + > + qc = ngx_quic_get_connection(c); > + > + if (qc) { > + qc->received += size; > + > + if ((uint64_t) (c->sent + qc->received) / 8 > > + (qc->streams.sent + qc->streams.recv_last) + 1048576) > + { note: the comparison is intentionally similar to one used HTTP/2 for the same purposes > + ngx_log_error(NGX_LOG_INFO, c->log, 0, "quic flood detected"); > + > + qc->error = NGX_QUIC_ERR_NO_ERROR; > + qc->error_reason = "QUIC flood detected"; > + return NGX_ERROR; > + } > + } > + > + return NGX_OK; > } > > > diff --git a/src/event/quic/ngx_event_quic_connection.h b/src/event/quic/ngx_event_quic_connection.h > --- a/src/event/quic/ngx_event_quic_connection.h > +++ b/src/event/quic/ngx_event_quic_connection.h > @@ -236,6 +236,8 @@ struct ngx_quic_connection_s { > ngx_quic_streams_t streams; > ngx_quic_congestion_t congestion; > > + off_t received; > + > ngx_uint_t error; > enum ssl_encryption_level_t error_level; > ngx_uint_t error_ftype; As a whole, it seems to be working good enough. From vl at nginx.com Tue Oct 12 12:43:25 2021 From: vl at nginx.com (Vladimir Homutov) Date: Tue, 12 Oct 2021 15:43:25 +0300 Subject: [PATCH 5 of 5] QUIC: limited the total number of frames In-Reply-To: <25aeebb9432182a6246f.1633606578@arut-laptop> References: <25aeebb9432182a6246f.1633606578@arut-laptop> Message-ID: On Thu, Oct 07, 2021 at 02:36:18PM +0300, Roman Arutyunyan wrote: > # HG changeset patch > # User Roman Arutyunyan > # Date 1633603050 -10800 > # Thu Oct 07 13:37:30 2021 +0300 > # Branch quic > # Node ID 25aeebb9432182a6246fedba6b1024f3d61e959b > # Parent e20f00b8ac9005621993ea19375b1646c9182e7b > QUIC: limited the total number of frames. > > Exceeding 10000 allocated frames is considered a flood. 
> > diff --git a/src/event/quic/ngx_event_quic_connection.h b/src/event/quic/ngx_event_quic_connection.h > --- a/src/event/quic/ngx_event_quic_connection.h > +++ b/src/event/quic/ngx_event_quic_connection.h > @@ -228,10 +228,8 @@ struct ngx_quic_connection_s { > ngx_chain_t *free_bufs; > ngx_buf_t *free_shadow_bufs; > > -#ifdef NGX_QUIC_DEBUG_ALLOC > ngx_uint_t nframes; > ngx_uint_t nbufs; > -#endif nbufs are actually used only inside NGX_QUIC_DEBUG_ALLOC macro... > > ngx_quic_streams_t streams; > ngx_quic_congestion_t congestion; > diff --git a/src/event/quic/ngx_event_quic_frames.c b/src/event/quic/ngx_event_quic_frames.c > --- a/src/event/quic/ngx_event_quic_frames.c > +++ b/src/event/quic/ngx_event_quic_frames.c > @@ -38,18 +38,22 @@ ngx_quic_alloc_frame(ngx_connection_t *c > "quic reuse frame n:%ui", qc->nframes); > #endif > > - } else { > + } else if (qc->nframes < 10000) { > frame = ngx_palloc(c->pool, sizeof(ngx_quic_frame_t)); > if (frame == NULL) { > return NULL; > } > > -#ifdef NGX_QUIC_DEBUG_ALLOC > ++qc->nframes; > > +#ifdef NGX_QUIC_DEBUG_ALLOC > ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0, > "quic alloc frame n:%ui", qc->nframes); > #endif > + > + } else { > + ngx_log_error(NGX_LOG_INFO, c->log, 0, "quic flood detected"); > + return NULL; > } > > ngx_memzero(frame, sizeof(ngx_quic_frame_t)); > @@ -372,9 +376,9 @@ ngx_quic_alloc_buf(ngx_connection_t *c) > > cl->buf = b; > > -#ifdef NGX_QUIC_DEBUG_ALLOC > ++qc->nbufs; ... so this change seems unnecessary > > +#ifdef NGX_QUIC_DEBUG_ALLOC > ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0, > "quic alloc buffer n:%ui", qc->nbufs); > #endif note: again, the patch follows approach used in HTTP/2 for limiting number of allocated frames and uses same constant. as a whole, should be working. 
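Taken together, the review points above — the nframes counter now maintained unconditionally, and free-list reuse not counting toward the cap — give an allocation path along these lines (a simplified sketch; structure and names are mine, not nginx code):

```c
#include <stdlib.h>

#define MAX_FRAMES  10000   /* same constant HTTP/2 uses for its frames */

typedef struct frame_s  frame_t;

struct frame_s {
    frame_t  *next;
};

typedef struct {
    frame_t   *free;      /* frames returned for reuse */
    unsigned   nframes;   /* frames ever allocated fresh */
} qc_t;

/* Reused frames are free of charge; only fresh allocations count
 * toward the cap, so a peer whose frames get consumed and released
 * normally never hits the limit. */
static frame_t *alloc_frame(qc_t *qc)
{
    frame_t  *f;

    f = qc->free;

    if (f != NULL) {
        qc->free = f->next;
        return f;
    }

    if (qc->nframes >= MAX_FRAMES) {
        return NULL;                    /* treated as a flood */
    }

    f = malloc(sizeof(frame_t));
    if (f != NULL) {
        qc->nframes++;
    }

    return f;
}

static void free_frame(qc_t *qc, frame_t *f)
{
    f->next = qc->free;
    qc->free = f;
}
```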
From vl at nginx.com Tue Oct 12 12:45:30 2021 From: vl at nginx.com (Vladimir Homutov) Date: Tue, 12 Oct 2021 15:45:30 +0300 Subject: [PATCH 1 of 5] HTTP/3: removed client-side encoder support In-Reply-To: References: Message-ID: On Thu, Oct 07, 2021 at 02:36:14PM +0300, Roman Arutyunyan wrote: > # HG changeset patch > # User Roman Arutyunyan > # Date 1633520939 -10800 > # Wed Oct 06 14:48:59 2021 +0300 > # Branch quic > # Node ID d53039c3224e8227979c113f621e532aef7c0f9b > # Parent 1ead7d64e9934c1a6c0d9dd3c5f1a3d643b926d6 > HTTP/3: removed client-side encoder support. > > Dynamic tables are not used when generating responses anyway. > > diff --git a/src/http/v3/ngx_http_v3_streams.c b/src/http/v3/ngx_http_v3_streams.c > --- a/src/http/v3/ngx_http_v3_streams.c > +++ b/src/http/v3/ngx_http_v3_streams.c > @@ -480,155 +480,6 @@ failed: > > > ngx_int_t > -ngx_http_v3_send_ref_insert(ngx_connection_t *c, ngx_uint_t dynamic, > - ngx_uint_t index, ngx_str_t *value) > -{ > - u_char *p, buf[NGX_HTTP_V3_PREFIX_INT_LEN * 2]; > - size_t n; > - ngx_connection_t *ec; > - > - ngx_log_debug3(NGX_LOG_DEBUG_HTTP, c->log, 0, > - "http3 client ref insert, %s[%ui] \"%V\"", > - dynamic ? "dynamic" : "static", index, value); > - > - ec = ngx_http_v3_get_uni_stream(c, NGX_HTTP_V3_STREAM_ENCODER); > - if (ec == NULL) { > - return NGX_ERROR; > - } > - > - p = buf; > - > - *p = (dynamic ? 0x80 : 0xc0); > - p = (u_char *) ngx_http_v3_encode_prefix_int(p, index, 6); > - > - /* XXX option for huffman? 
*/ > - *p = 0; > - p = (u_char *) ngx_http_v3_encode_prefix_int(p, value->len, 7); > - > - n = p - buf; > - > - if (ec->send(ec, buf, n) != (ssize_t) n) { > - goto failed; > - } > - > - if (ec->send(ec, value->data, value->len) != (ssize_t) value->len) { > - goto failed; > - } > - > - return NGX_OK; > - > -failed: > - > - ngx_http_v3_close_uni_stream(ec); > - > - return NGX_ERROR; > -} > - > - > -ngx_int_t > -ngx_http_v3_send_insert(ngx_connection_t *c, ngx_str_t *name, ngx_str_t *value) > -{ > - u_char buf[NGX_HTTP_V3_PREFIX_INT_LEN]; > - size_t n; > - ngx_connection_t *ec; > - > - ngx_log_debug2(NGX_LOG_DEBUG_HTTP, c->log, 0, > - "http3 client insert \"%V\":\"%V\"", name, value); > - > - ec = ngx_http_v3_get_uni_stream(c, NGX_HTTP_V3_STREAM_ENCODER); > - if (ec == NULL) { > - return NGX_ERROR; > - } > - > - /* XXX option for huffman? */ > - buf[0] = 0x40; > - n = (u_char *) ngx_http_v3_encode_prefix_int(buf, name->len, 5) - buf; > - > - if (ec->send(ec, buf, n) != (ssize_t) n) { > - goto failed; > - } > - > - if (ec->send(ec, name->data, name->len) != (ssize_t) name->len) { > - goto failed; > - } > - > - /* XXX option for huffman? 
*/ > - buf[0] = 0; > - n = (u_char *) ngx_http_v3_encode_prefix_int(buf, value->len, 7) - buf; > - > - if (ec->send(ec, buf, n) != (ssize_t) n) { > - goto failed; > - } > - > - if (ec->send(ec, value->data, value->len) != (ssize_t) value->len) { > - goto failed; > - } > - > - return NGX_OK; > - > -failed: > - > - ngx_http_v3_close_uni_stream(ec); > - > - return NGX_ERROR; > -} > - > - > -ngx_int_t > -ngx_http_v3_send_set_capacity(ngx_connection_t *c, ngx_uint_t capacity) > -{ > - u_char buf[NGX_HTTP_V3_PREFIX_INT_LEN]; > - size_t n; > - ngx_connection_t *ec; > - > - ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0, > - "http3 client set capacity %ui", capacity); > - > - ec = ngx_http_v3_get_uni_stream(c, NGX_HTTP_V3_STREAM_ENCODER); > - if (ec == NULL) { > - return NGX_ERROR; > - } > - > - buf[0] = 0x20; > - n = (u_char *) ngx_http_v3_encode_prefix_int(buf, capacity, 5) - buf; > - > - if (ec->send(ec, buf, n) != (ssize_t) n) { > - ngx_http_v3_close_uni_stream(ec); > - return NGX_ERROR; > - } > - > - return NGX_OK; > -} > - > - > -ngx_int_t > -ngx_http_v3_send_duplicate(ngx_connection_t *c, ngx_uint_t index) > -{ > - u_char buf[NGX_HTTP_V3_PREFIX_INT_LEN]; > - size_t n; > - ngx_connection_t *ec; > - > - ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0, > - "http3 client duplicate %ui", index); > - > - ec = ngx_http_v3_get_uni_stream(c, NGX_HTTP_V3_STREAM_ENCODER); > - if (ec == NULL) { > - return NGX_ERROR; > - } > - > - buf[0] = 0; > - n = (u_char *) ngx_http_v3_encode_prefix_int(buf, index, 5) - buf; > - > - if (ec->send(ec, buf, n) != (ssize_t) n) { > - ngx_http_v3_close_uni_stream(ec); > - return NGX_ERROR; > - } > - > - return NGX_OK; > -} > - > - > -ngx_int_t > ngx_http_v3_send_ack_section(ngx_connection_t *c, ngx_uint_t stream_id) > { > u_char buf[NGX_HTTP_V3_PREFIX_INT_LEN]; > diff --git a/src/http/v3/ngx_http_v3_streams.h b/src/http/v3/ngx_http_v3_streams.h > --- a/src/http/v3/ngx_http_v3_streams.h > +++ b/src/http/v3/ngx_http_v3_streams.h > @@ -27,13 +27,6 @@ 
ngx_int_t ngx_http_v3_cancel_stream(ngx_ > > ngx_int_t ngx_http_v3_send_settings(ngx_connection_t *c); > ngx_int_t ngx_http_v3_send_goaway(ngx_connection_t *c, uint64_t id); > -ngx_int_t ngx_http_v3_send_ref_insert(ngx_connection_t *c, ngx_uint_t dynamic, > - ngx_uint_t index, ngx_str_t *value); > -ngx_int_t ngx_http_v3_send_insert(ngx_connection_t *c, ngx_str_t *name, > - ngx_str_t *value); > -ngx_int_t ngx_http_v3_send_set_capacity(ngx_connection_t *c, > - ngx_uint_t capacity); > -ngx_int_t ngx_http_v3_send_duplicate(ngx_connection_t *c, ngx_uint_t index); > ngx_int_t ngx_http_v3_send_ack_section(ngx_connection_t *c, > ngx_uint_t stream_id); > ngx_int_t ngx_http_v3_send_cancel_stream(ngx_connection_t *c, Looks good. From vl at nginx.com Tue Oct 12 12:46:03 2021 From: vl at nginx.com (Vladimir Homutov) Date: Tue, 12 Oct 2021 15:46:03 +0300 Subject: [PATCH 2 of 5] HTTP/3: fixed request length calculation In-Reply-To: <1b87f4e196cce2b7aae3.1633606575@arut-laptop> References: <1b87f4e196cce2b7aae3.1633606575@arut-laptop> Message-ID: On Thu, Oct 07, 2021 at 02:36:15PM +0300, Roman Arutyunyan wrote: > # HG changeset patch > # User Roman Arutyunyan > # Date 1633521076 -10800 > # Wed Oct 06 14:51:16 2021 +0300 > # Branch quic > # Node ID 1b87f4e196cce2b7aae33a63ca6dfc857b99f2b7 > # Parent d53039c3224e8227979c113f621e532aef7c0f9b > HTTP/3: fixed request length calculation. > > Previously, when request was blocked, r->request_length was not updated. 
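The fix quoted here is purely about ordering: the consumed bytes must be added to r->request_length before the result code is examined, so a blocked (NGX_BUSY) request still accounts for what it consumed. A toy model of that ordering (everything below is illustrative, not nginx code):

```c
#include <stddef.h>

enum { RC_OK, RC_BUSY, RC_AGAIN };

typedef struct {
    size_t  request_length;
} req_t;

/* One parsing step: "consumed" bytes were taken from the buffer and
 * "rc" is the parser's verdict. Accounting happens unconditionally,
 * before rc is acted upon, mirroring the fix. */
static int step(req_t *r, size_t consumed, int rc)
{
    r->request_length += consumed;

    return rc;
}
```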
> > diff --git a/src/http/v3/ngx_http_v3_request.c b/src/http/v3/ngx_http_v3_request.c > --- a/src/http/v3/ngx_http_v3_request.c > +++ b/src/http/v3/ngx_http_v3_request.c > @@ -297,6 +297,8 @@ ngx_http_v3_process_request(ngx_event_t > break; > } > > + r->request_length += b->pos - p; > + > if (rc == NGX_BUSY) { > if (rev->error) { > ngx_http_close_request(r, NGX_HTTP_CLOSE); > @@ -310,8 +312,6 @@ ngx_http_v3_process_request(ngx_event_t > break; > } > > - r->request_length += b->pos - p; > - > if (rc == NGX_AGAIN) { > continue; > } Looks good From pluknet at nginx.com Tue Oct 12 16:47:36 2021 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 12 Oct 2021 19:47:36 +0300 Subject: [PATCH 0 of 2] KTLS / SSL_sendfile() support In-Reply-To: References: Message-ID: > On 27 Sep 2021, at 16:18, Maxim Dounin wrote: > > Hello! > > This patch series add kernel TLS / SSL_sendfile() support. > Works on FreeBSD 13.0+ and Linux with kernel 4.13+ (at least 5.2 > is recommended, tested with 5.11). > > The following questions need additional testing/attention: > > - What about EINTR? Looks like it simply results in SSL_ERROR_WANT_WRITE, > so might need extra checking to make sure there will be another write > event. > > - What about SSL_sendfile(), early data and write blocking? > Ref. c->ssl->write_blocked, 7431:294162223c7c by pluknet at . > Looks like it is not a problem with SSL_sendfile(), but needs > further checking. > On that particular one. Indeed, it should not be an issue, since KTLS bypasses OpenSSL internals. For the record, I've reproduced the original issue fixed in 294162223c7c. For example, it could be reading discarded body sent separately in 1-RTT. Even with the fix backed out, reading with blocked sendfile works fine. 
2021/10/12 16:15:53 [debug] 38707#0: *2 SSL buf copy: 246 2021/10/12 16:15:53 [debug] 38707#0: *2 SSL to write: 246 2021/10/12 16:15:53 [debug] 38707#0: *2 SSL_write_early_data: 1, 246 2021/10/12 16:15:53 [debug] 38707#0: *2 SSL to sendfile: @0 1048576 2021/10/12 16:15:53 [debug] 38707#0: *2 SSL_sendfile: 45056 2021/10/12 16:15:53 [debug] 38707#0: *2 SSL to sendfile: @45056 1003520 2021/10/12 16:15:53 [debug] 38707#0: *2 SSL_sendfile: 40960 2021/10/12 16:15:53 [debug] 38707#0: *2 SSL to sendfile: @86016 962560 2021/10/12 16:15:53 [debug] 38707#0: *2 SSL_sendfile: 61440 2021/10/12 16:15:53 [debug] 38707#0: *2 SSL to sendfile: @147456 901120 2021/10/12 16:15:53 [debug] 38707#0: *2 SSL_sendfile: -1 2021/10/12 16:15:53 [debug] 38707#0: *2 SSL_get_error: 3 2021/10/12 16:15:53 [debug] 38707#0: *2 http write filter 0000000802259660 2021/10/12 16:15:53 [debug] 38707#0: *2 http copy filter: -2 "/file?" 2021/10/12 16:15:53 [debug] 38707#0: *2 http finalize request: -2, "/file?" a:1, c:2 2021/10/12 16:15:53 [debug] 38707#0: *2 event timer add: 13: 60000:707289850 2021/10/12 16:15:53 [debug] 38707#0: *2 kevent set event: 13: ft:-2 fl:0025 2021/10/12 16:15:53 [debug] 38707#0: timer delta: 1 2021/10/12 16:15:53 [debug] 38707#0: worker cycle 2021/10/12 16:15:53 [debug] 38707#0: kevent timer: 60000, changes: 1 2021/10/12 16:15:53 [debug] 38707#0: kevent events: 1 2021/10/12 16:15:53 [debug] 38707#0: kevent: 13: ft:-1 fl:0020 ff:00000000 d:138 ud:0000000802328841 2021/10/12 16:15:53 [debug] 38707#0: *2 http run request: "/file?" 
2021/10/12 16:15:53 [debug] 38707#0: *2 http read discarded body 2021/10/12 16:15:53 [debug] 38707#0: *2 SSL_read_early_data: 2, 0 2021/10/12 16:15:53 [debug] 38707#0: *2 SSL_read: 10 2021/10/12 16:15:53 [debug] 38707#0: *2 SSL_read: avail:128 For comparison (and to make sure I'm testing it right), disabling sendfile on unfixed nginx would reintroduce an error: 2021/10/12 16:33:41 [debug] 42445#0: *2 SSL_read_early_data: 2, 0 2021/10/12 16:33:41 [alert] 42445#0: *2 ignoring stale global SSL error (SSL: error:0A00010F:SSL routines::bad length) while sending response to client, client: 127.0.0.1, server: localhost, request: "GET /file HTTP/1.1", host: "localhost" 2021/10/12 16:33:41 [debug] 42445#0: *2 SSL_read: -1 2021/10/12 16:33:41 [debug] 42445#0: *2 SSL_get_error: 5 > - What about FreeBSD aio sendfile (aka SF_NODISKIO)? Might be > easy enough to support. > > Review and testing appreciated. > -- Sergey Kandaurov From mdounin at mdounin.ru Tue Oct 12 16:48:27 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 12 Oct 2021 16:48:27 +0000 Subject: [nginx] Proxy: disabled keepalive on extra data in non-buffered mode. Message-ID: details: https://hg.nginx.org/nginx/rev/055b2a847117 branches: changeset: 7931:055b2a847117 user: Awdhesh Mathpal date: Thu Oct 07 19:23:11 2021 -0700 description: Proxy: disabled keepalive on extra data in non-buffered mode. The u->keepalive flag is initialized early if the response has no body (or an empty body), and needs to be reset if there are any extra data, similarly to how it is done in ngx_http_proxy_copy_filter(). Missed in 83c4622053b0. 
diffstat: src/http/modules/ngx_http_proxy_module.c | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (11 lines): diff -r ae7c767aa491 -r 055b2a847117 src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c Wed Oct 06 18:01:42 2021 +0300 +++ b/src/http/modules/ngx_http_proxy_module.c Thu Oct 07 19:23:11 2021 -0700 @@ -2337,6 +2337,7 @@ ngx_http_proxy_non_buffered_copy_filter( ngx_log_error(NGX_LOG_WARN, r->connection->log, 0, "upstream sent more data than specified in " "\"Content-Length\" header"); + u->keepalive = 0; return NGX_OK; } From mdounin at mdounin.ru Tue Oct 12 17:04:56 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 12 Oct 2021 20:04:56 +0300 Subject: Extra data from upstream and keepalive connections In-Reply-To: <1724b523-2727-4acc-86c3-ce24551ed6f9@www.fastmail.com> References: <1724b523-2727-4acc-86c3-ce24551ed6f9@www.fastmail.com> Message-ID: Hello! On Mon, Oct 11, 2021 at 11:09:46PM -0700, Awdhesh Mathpal wrote: > Yes, Content-Length:0 is not the only case. The updated change > set looks good to me. Committed, thanks. https://hg.nginx.org/nginx/rev/055b2a847117 -- Maxim Dounin http://mdounin.ru/ From xeioex at nginx.com Tue Oct 12 17:24:50 2021 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 12 Oct 2021 17:24:50 +0000 Subject: [njs] SSL: fixed compatibility with OpenSSL 3.0. Message-ID: details: https://hg.nginx.org/njs/rev/8e335c2ac447 branches: changeset: 1721:8e335c2ac447 user: Dmitry Volyntsev date: Tue Oct 12 17:24:31 2021 +0000 description: SSL: fixed compatibility with OpenSSL 3.0. 
diffstat: auto/openssl | 26 +---------------------- external/njs_openssl.h | 53 ++++++++++++++++++++++++++++++++++++++++++++++++ external/njs_webcrypto.c | 28 ++---------------------- 3 files changed, 57 insertions(+), 50 deletions(-) diffs (145 lines): diff -r a4c3c333c05d -r 8e335c2ac447 auto/openssl --- a/auto/openssl Mon Oct 11 15:06:15 2021 +0000 +++ b/auto/openssl Tue Oct 12 17:24:31 2021 +0000 @@ -25,31 +25,7 @@ njs_feature_test="#include +#include +#include +#include +#include +#include +#include +#include +#include + +#if EVP_PKEY_HKDF +#include +#endif + + +#if (defined LIBRESSL_VERSION_NUMBER && OPENSSL_VERSION_NUMBER == 0x20000000L) +#undef OPENSSL_VERSION_NUMBER +#if (LIBRESSL_VERSION_NUMBER >= 0x2080000fL) +#define OPENSSL_VERSION_NUMBER 0x1010000fL +#else +#define OPENSSL_VERSION_NUMBER 0x1000107fL +#endif +#endif + + +#if (OPENSSL_VERSION_NUMBER >= 0x10100000L) +#define njs_evp_md_ctx_new() EVP_MD_CTX_new() +#define njs_evp_md_ctx_free(_ctx) EVP_MD_CTX_free(_ctx) +#else +#define njs_evp_md_ctx_new() EVP_MD_CTX_create() +#define njs_evp_md_ctx_free(_ctx) EVP_MD_CTX_destroy(_ctx) +#endif + + +#if (OPENSSL_VERSION_NUMBER < 0x30000000L && !defined ERR_peek_error_data) +#define ERR_peek_error_data(d, f) ERR_peek_error_line_data(NULL, NULL, d, f) +#endif + + +#endif /* _NJS_EXTERNAL_OPENSSL_H_INCLUDED_ */ diff -r a4c3c333c05d -r 8e335c2ac447 external/njs_webcrypto.c --- a/external/njs_webcrypto.c Mon Oct 11 15:06:15 2021 +0000 +++ b/external/njs_webcrypto.c Tue Oct 12 17:24:31 2021 +0000 @@ -7,29 +7,7 @@ #include #include "njs_webcrypto.h" - -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#if NJS_HAVE_OPENSSL_HKDF -#include -#endif - -#if NJS_HAVE_OPENSSL_EVP_MD_CTX_NEW -#define njs_evp_md_ctx_new() EVP_MD_CTX_new(); -#define njs_evp_md_ctx_free(_ctx) EVP_MD_CTX_free(_ctx); -#else -#define njs_evp_md_ctx_new() EVP_MD_CTX_create(); -#define njs_evp_md_ctx_free(_ctx) EVP_MD_CTX_destroy(_ctx); -#endif - +#include 
"njs_openssl.h" typedef enum { NJS_KEY_FORMAT_RAW = 1 << 1, @@ -1449,7 +1427,7 @@ njs_ext_derive(njs_vm_t *vm, njs_value_t break; case NJS_ALGORITHM_HKDF: -#ifdef NJS_HAVE_OPENSSL_HKDF +#ifdef EVP_PKEY_HKDF ret = njs_algorithm_hash(vm, aobject, &hash); if (njs_slow_path(ret == NJS_ERROR)) { goto fail; @@ -2588,7 +2566,7 @@ njs_webcrypto_error(njs_vm_t *vm, const for ( ;; ) { - n = ERR_peek_error_line_data(NULL, NULL, &data, &flags); + n = ERR_peek_error_data(&data, &flags); if (n == 0) { break; From mdounin at mdounin.ru Tue Oct 12 21:21:21 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 12 Oct 2021 21:21:21 +0000 Subject: [nginx] Synced ngx_http_subrequest() argument names (ticket #2255). Message-ID: details: https://hg.nginx.org/nginx/rev/01829d162095 branches: changeset: 7932:01829d162095 user: Maxim Dounin date: Tue Oct 12 23:18:18 2021 +0300 description: Synced ngx_http_subrequest() argument names (ticket #2255). diffstat: src/http/ngx_http_core_module.h | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines): diff -r 055b2a847117 -r 01829d162095 src/http/ngx_http_core_module.h --- a/src/http/ngx_http_core_module.h Thu Oct 07 19:23:11 2021 -0700 +++ b/src/http/ngx_http_core_module.h Tue Oct 12 23:18:18 2021 +0300 @@ -502,8 +502,8 @@ ngx_int_t ngx_http_gzip_ok(ngx_http_requ ngx_int_t ngx_http_subrequest(ngx_http_request_t *r, - ngx_str_t *uri, ngx_str_t *args, ngx_http_request_t **sr, - ngx_http_post_subrequest_t *psr, ngx_uint_t flags); + ngx_str_t *uri, ngx_str_t *args, ngx_http_request_t **psr, + ngx_http_post_subrequest_t *ps, ngx_uint_t flags); ngx_int_t ngx_http_internal_redirect(ngx_http_request_t *r, ngx_str_t *uri, ngx_str_t *args); ngx_int_t ngx_http_named_location(ngx_http_request_t *r, ngx_str_t *name); From gaoyan09 at baidu.com Wed Oct 13 06:46:53 2021 From: gaoyan09 at baidu.com (=?utf-8?B?R2FvLFlhbijlqpLkvZPkupEp?=) Date: Wed, 13 Oct 2021 06:46:53 +0000 Subject: Should continue when ngx_quic_bpf_group_add_socket 
failed with adding one socket during reloading Message-ID: <05AC6108-9E40-4D0C-AA9B-DABCF1E1A419@baidu.com> ngx_quic_bpf_module_init: Should nginx continue when ngx_quic_bpf_group_add_socket fails to add one socket during reloading? Gao,Yan(ACG VCP) From vl at nginx.com Wed Oct 13 08:15:04 2021 From: vl at nginx.com (Vladimir Homutov) Date: Wed, 13 Oct 2021 11:15:04 +0300 Subject: Should continue when ngx_quic_bpf_group_add_socket failed with adding one socket during reloading In-Reply-To: <05AC6108-9E40-4D0C-AA9B-DABCF1E1A419@baidu.com> References: <05AC6108-9E40-4D0C-AA9B-DABCF1E1A419@baidu.com> Message-ID: <1d6aa6a7-a6c1-bb8e-a1f2-7b9468b6a25c@nginx.com> On 13.10.2021 09:46, Gao,Yan (ACG VCP) wrote: > ngx_quic_bpf_module_init: > Should nginx continue when ngx_quic_bpf_group_add_socket fails to add one socket during reloading? > > Gao,Yan(ACG VCP) > Hello Gao Yan, this is a hard question. I would say that the only valid reason to fail there is hitting some kernel limit. Otherwise it is a bug in the code that should be fixed. If you fail to add sockets into the map on reload, you end up in an inconsistent state anyway, and there is not much you can do but restart nginx completely. I hope this helps. From vl at nginx.com Wed Oct 13 09:06:56 2021 From: vl at nginx.com (Vladimir Homutov) Date: Wed, 13 Oct 2021 12:06:56 +0300 Subject: [PATCH 3 of 5] HTTP/3: traffic-based flood detection In-Reply-To: <31561ac584b74d29af9a.1633606576@arut-laptop> References: <31561ac584b74d29af9a.1633606576@arut-laptop> Message-ID: On Thu, Oct 07, 2021 at 02:36:16PM +0300, Roman Arutyunyan wrote: > # HG changeset patch > # User Roman Arutyunyan > # Date 1633602162 -10800 > # Thu Oct 07 13:22:42 2021 +0300 > # Branch quic > # Node ID 31561ac584b74d29af9a442afca47821a98217b2 > # Parent 1b87f4e196cce2b7aae33a63ca6dfc857b99f2b7 > HTTP/3: traffic-based flood detection.
> > With this patch, all traffic over HTTP/3 bidi and uni streams is counted in > the h3c->total_bytes field, and payload traffic is counted in the > h3c->payload_bytes field. As long as total traffic is many times larger than > payload traffic, we consider this to be a flood. > > Request header traffic is counted as if all fields are literal. Response > header traffic is counted as is. [..] this looks more complex than the QUIC part, as we don't have a clear understanding of what 'payload' is. An attempt to count literal fields vs. bytes leads to situations where payload is greater than total due to en/decoding. It looks like it does no harm though, as the difference is not that big and we should not have something like a zip bomb here (i.e. decoded payload increases greatly in length, while total is quite small). I'm not sure that treating reserved frames as non-payload is a good idea. While we don't know what is there, the RFC tells us not to assume anything about their meaning. On the other hand, we can definitely consider a huge number of reserved frames a flood, as we make no progress with the request while receiving them and waste resources. Overall, it looks to be working, and I have no better ideas on how we can improve it. From arut at nginx.com Wed Oct 13 11:37:08 2021 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 13 Oct 2021 14:37:08 +0300 Subject: [PATCH 3 of 5] HTTP/3: traffic-based flood detection In-Reply-To: References: <31561ac584b74d29af9a.1633606576@arut-laptop> Message-ID: <20211013113708.enlvf4j3frea6swj@Romans-MacBook-Pro.local> On Wed, Oct 13, 2021 at 12:06:56PM +0300, Vladimir Homutov wrote: > On Thu, Oct 07, 2021 at 02:36:16PM +0300, Roman Arutyunyan wrote: > > # HG changeset patch > > # User Roman Arutyunyan > > # Date 1633602162 -10800 > > # Thu Oct 07 13:22:42 2021 +0300 > > # Branch quic > > # Node ID 31561ac584b74d29af9a442afca47821a98217b2 > > # Parent 1b87f4e196cce2b7aae33a63ca6dfc857b99f2b7 > > HTTP/3: traffic-based flood detection. 
> > > With this patch, all traffic over HTTP/3 bidi and uni streams is counted in > > the h3c->total_bytes field, and payload traffic is counted in the > > h3c->payload_bytes field. As long as total traffic is many times larger than > > payload traffic, we consider this to be a flood. > > > > Request header traffic is counted as if all fields are literal. Response > > header traffic is counted as is. > > [..] > > this looks more complex than the QUIC part, as we don't have a clear > understanding of what 'payload' is. Exactly. > An attempt to count literal fields vs. bytes leads to situations where > payload is greater than total due to en/decoding. It looks like > it does no harm though, as the difference is not that big and we > should not have something like a zip bomb here > (i.e. decoded payload increases greatly in length, while total is quite > small). Counting fields as literal: 1. simplifies the code; 2. takes into account bytes passed via the encoder stream, otherwise counted as burden, which technically they aren't. Overall, I like the idea of comparing the traffic we received with the size of the request, whatever method we use to calculate it. Cherry-picking the payload from the traffic is tricky in HTTP/3, since what logically constitutes the payload is spread over several streams. > I'm not sure that treating reserved frames as non-payload > is a good idea. While we don't know what is there, the RFC tells us > not to assume anything about their meaning. On the other hand, > we can definitely consider a huge number of reserved frames a flood, > as we make no progress with the request while receiving them > and waste resources. I assume reserved streams are potentially the main source of flooding. > Overall, it looks to be working, and I have no better ideas on how we can improve > it. 
> _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Roman Arutyunyan From arut at nginx.com Wed Oct 13 11:41:38 2021 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 13 Oct 2021 14:41:38 +0300 Subject: [PATCH 4 of 5] QUIC: traffic-based flood detection In-Reply-To: References: Message-ID: <20211013114138.53au4ophyqkxgjfh@Romans-MacBook-Pro.local> On Tue, Oct 12, 2021 at 03:39:38PM +0300, Vladimir Homutov wrote: > On Thu, Oct 07, 2021 at 02:36:17PM +0300, Roman Arutyunyan wrote: > > # HG changeset patch > > # User Roman Arutyunyan > > # Date 1633602816 -10800 > > # Thu Oct 07 13:33:36 2021 +0300 > > # Branch quic > > # Node ID e20f00b8ac9005621993ea19375b1646c9182e7b > > # Parent 31561ac584b74d29af9a442afca47821a98217b2 > > QUIC: traffic-based flood detection. > > > > With this patch, all traffic over a QUIC connection is compared to traffic > > over QUIC streams. As long as total traffic is many times larger than stream > > traffic, we consider this to be a flood. 
> > > > diff --git a/src/event/quic/ngx_event_quic.c b/src/event/quic/ngx_event_quic.c > > --- a/src/event/quic/ngx_event_quic.c > > +++ b/src/event/quic/ngx_event_quic.c > > @@ -662,13 +662,17 @@ ngx_quic_close_timer_handler(ngx_event_t > > static ngx_int_t > > ngx_quic_input(ngx_connection_t *c, ngx_buf_t *b, ngx_quic_conf_t *conf) > > { > > - u_char *p; > > - ngx_int_t rc; > > - ngx_uint_t good; > > - ngx_quic_header_t pkt; > > + size_t size; > > + u_char *p; > > + ngx_int_t rc; > > + ngx_uint_t good; > > + ngx_quic_header_t pkt; > > + ngx_quic_connection_t *qc; > > > > good = 0; > > > > + size = b->last - b->pos; > > + > > p = b->pos; > > > > while (p < b->last) { > > @@ -701,7 +705,8 @@ ngx_quic_input(ngx_connection_t *c, ngx_ > > > > if (rc == NGX_DONE) { > > /* stop further processing */ > > - return NGX_DECLINED; > > + good = 0; > > + break; > > } > > this chunk looks unnecessary: we will test 'good' after the loop and > return NGX_DECLINED anyway in this case (good = 0). Sure, thanks. Removed this one. > > if (rc == NGX_OK) { > > @@ -733,7 +738,27 @@ ngx_quic_input(ngx_connection_t *c, ngx_ > > p = b->pos; > > } > > > > - return good ? 
NGX_OK : NGX_DECLINED; > > + if (!good) { > > + return NGX_DECLINED; > > + } > > + > > + qc = ngx_quic_get_connection(c); > > + > > + if (qc) { > > + qc->received += size; > > + > > + if ((uint64_t) (c->sent + qc->received) / 8 > > > + (qc->streams.sent + qc->streams.recv_last) + 1048576) > > + { > > note: the comparison is intentionally similar to one used HTTP/2 for the > same purposes > > > + ngx_log_error(NGX_LOG_INFO, c->log, 0, "quic flood detected"); > > + > > + qc->error = NGX_QUIC_ERR_NO_ERROR; > > + qc->error_reason = "QUIC flood detected"; > > + return NGX_ERROR; > > + } > > + } > > + > > + return NGX_OK; > > } > > > > > > diff --git a/src/event/quic/ngx_event_quic_connection.h b/src/event/quic/ngx_event_quic_connection.h > > --- a/src/event/quic/ngx_event_quic_connection.h > > +++ b/src/event/quic/ngx_event_quic_connection.h > > @@ -236,6 +236,8 @@ struct ngx_quic_connection_s { > > ngx_quic_streams_t streams; > > ngx_quic_congestion_t congestion; > > > > + off_t received; > > + > > ngx_uint_t error; > > enum ssl_encryption_level_t error_level; > > ngx_uint_t error_ftype; > > As a whole, it seems to be working good enough. 
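For reference, the comparison used here can be modeled as a standalone predicate (a simplified sketch of the check added to ngx_quic_input(); the helper function itself is hypothetical, only the constants mirror the patch):

```c
#include <assert.h>
#include <stdint.h>

/* Flood heuristic from the patch above: the connection is considered
 * flooding when total traffic (bytes sent plus bytes received) exceeds
 * eight times the useful stream traffic plus a fixed 1 MB allowance. */
static int
quic_flood_detected(uint64_t sent, uint64_t received,
    uint64_t streams_sent, uint64_t streams_recv)
{
    return (sent + received) / 8 > streams_sent + streams_recv + 1048576;
}
```

The 1 MB slack keeps short connections (handshake only, little or no stream data yet) from tripping the check.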
> _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Roman Arutyunyan From arut at nginx.com Wed Oct 13 11:52:48 2021 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 13 Oct 2021 14:52:48 +0300 Subject: [PATCH 5 of 5] QUIC: limited the total number of frames In-Reply-To: References: <25aeebb9432182a6246f.1633606578@arut-laptop> Message-ID: <20211013115248.us5yo3oabakdycvb@Romans-MacBook-Pro.local> On Tue, Oct 12, 2021 at 03:43:25PM +0300, Vladimir Homutov wrote: > On Thu, Oct 07, 2021 at 02:36:18PM +0300, Roman Arutyunyan wrote: > > # HG changeset patch > > # User Roman Arutyunyan > > # Date 1633603050 -10800 > > # Thu Oct 07 13:37:30 2021 +0300 > > # Branch quic > > # Node ID 25aeebb9432182a6246fedba6b1024f3d61e959b > > # Parent e20f00b8ac9005621993ea19375b1646c9182e7b > > QUIC: limited the total number of frames. > > > > Exceeding 10000 allocated frames is considered a flood. > > > > diff --git a/src/event/quic/ngx_event_quic_connection.h b/src/event/quic/ngx_event_quic_connection.h > > --- a/src/event/quic/ngx_event_quic_connection.h > > +++ b/src/event/quic/ngx_event_quic_connection.h > > @@ -228,10 +228,8 @@ struct ngx_quic_connection_s { > > ngx_chain_t *free_bufs; > > ngx_buf_t *free_shadow_bufs; > > > > -#ifdef NGX_QUIC_DEBUG_ALLOC > > ngx_uint_t nframes; > > ngx_uint_t nbufs; > > -#endif > > nbufs are actually used only inside NGX_QUIC_DEBUG_ALLOC macro... We probably need to think about limiting nbufs too. Technically it's already limited by flow control, but if we only use a small portion of each buffer (like 1 byte), we can allocate much more than we need. This should probably be optimized. I'm already working on it in my stream buffering patchset. Until then let's leave it under the macro. 
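The amplification described here is easy to quantify with a back-of-the-envelope sketch (the window and buffer sizes below are made-up illustrative numbers, not nginx constants):

```c
#include <assert.h>
#include <stdint.h>

/* Worst-case memory held by stream buffers: flow control caps how many
 * bytes the peer may send, but not how many buffers those bytes are
 * spread across.  With one useful byte per buffer, memory consumption
 * is amplified by the full buffer size. */
static uint64_t
worst_case_buffer_memory(uint64_t flow_window, uint64_t buf_size,
    uint64_t used_per_buf)
{
    uint64_t nbufs = (flow_window + used_per_buf - 1) / used_per_buf;

    return nbufs * buf_size;
}
```

For example, a 64 KB flow-control window spread over 4 KB buffers holding one byte each pins 64K buffers, i.e. 256 MB, which is why a separate cap on nbufs (or tighter packing) looks attractive.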
> > ngx_quic_streams_t streams; > > ngx_quic_congestion_t congestion; > > diff --git a/src/event/quic/ngx_event_quic_frames.c b/src/event/quic/ngx_event_quic_frames.c > > --- a/src/event/quic/ngx_event_quic_frames.c > > +++ b/src/event/quic/ngx_event_quic_frames.c > > @@ -38,18 +38,22 @@ ngx_quic_alloc_frame(ngx_connection_t *c > > "quic reuse frame n:%ui", qc->nframes); > > #endif > > > > - } else { > > + } else if (qc->nframes < 10000) { > > frame = ngx_palloc(c->pool, sizeof(ngx_quic_frame_t)); > > if (frame == NULL) { > > return NULL; > > } > > > > -#ifdef NGX_QUIC_DEBUG_ALLOC > > ++qc->nframes; > > > > +#ifdef NGX_QUIC_DEBUG_ALLOC > > ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0, > > "quic alloc frame n:%ui", qc->nframes); > > #endif > > + > > + } else { > > + ngx_log_error(NGX_LOG_INFO, c->log, 0, "quic flood detected"); > > + return NULL; > > } > > > > ngx_memzero(frame, sizeof(ngx_quic_frame_t)); > > @@ -372,9 +376,9 @@ ngx_quic_alloc_buf(ngx_connection_t *c) > > > > cl->buf = b; > > > > -#ifdef NGX_QUIC_DEBUG_ALLOC > > ++qc->nbufs; > > ... so this change seems unnecessary > > > > > +#ifdef NGX_QUIC_DEBUG_ALLOC > > ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0, > > "quic alloc buffer n:%ui", qc->nbufs); > > #endif > > note: again, the patch follows approach used in HTTP/2 for limiting number of > allocated frames and uses same constant. > > as a whole, should be working. > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Roman Arutyunyan -------------- next part -------------- # HG changeset patch # User Roman Arutyunyan # Date 1634125611 -10800 # Wed Oct 13 14:46:51 2021 +0300 # Branch quic # Node ID 6acee7057a256068f73f70a6d85dd0106642bf94 # Parent c6bce9ed64c3ea3fe3d8bbfda3852ffa5c556e1a QUIC: limited the total number of frames. Exceeding 10000 allocated frames is considered a flood. 
diff --git a/src/event/quic/ngx_event_quic_connection.h b/src/event/quic/ngx_event_quic_connection.h --- a/src/event/quic/ngx_event_quic_connection.h +++ b/src/event/quic/ngx_event_quic_connection.h @@ -228,8 +228,8 @@ struct ngx_quic_connection_s { ngx_chain_t *free_bufs; ngx_buf_t *free_shadow_bufs; + ngx_uint_t nframes; #ifdef NGX_QUIC_DEBUG_ALLOC - ngx_uint_t nframes; ngx_uint_t nbufs; #endif diff --git a/src/event/quic/ngx_event_quic_frames.c b/src/event/quic/ngx_event_quic_frames.c --- a/src/event/quic/ngx_event_quic_frames.c +++ b/src/event/quic/ngx_event_quic_frames.c @@ -38,18 +38,22 @@ ngx_quic_alloc_frame(ngx_connection_t *c "quic reuse frame n:%ui", qc->nframes); #endif - } else { + } else if (qc->nframes < 10000) { frame = ngx_palloc(c->pool, sizeof(ngx_quic_frame_t)); if (frame == NULL) { return NULL; } -#ifdef NGX_QUIC_DEBUG_ALLOC ++qc->nframes; +#ifdef NGX_QUIC_DEBUG_ALLOC ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0, "quic alloc frame n:%ui", qc->nframes); #endif + + } else { + ngx_log_error(NGX_LOG_INFO, c->log, 0, "quic flood detected"); + return NULL; } ngx_memzero(frame, sizeof(ngx_quic_frame_t)); From mdounin at mdounin.ru Wed Oct 13 13:27:53 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 13 Oct 2021 16:27:53 +0300 Subject: [PATCH 0 of 2] KTLS / SSL_sendfile() support In-Reply-To: References: Message-ID: Hello! On Tue, Oct 12, 2021 at 07:47:36PM +0300, Sergey Kandaurov wrote: [...] > > - What about SSL_sendfile(), early data and write blocking? > > Ref. c->ssl->write_blocked, 7431:294162223c7c by pluknet at . > > Looks like it is not a problem with SSL_sendfile(), but needs > > further checking. > > > > On that particular one. > > Indeed, it should not be an issue, since KTLS bypasses OpenSSL internals. My concern here is alert dispatching and flushing part of the SSL_sendfile() function. 
It still does a lot in various OpenSSL write code paths, and I'm not sure it cannot trigger the same OpenSSL issue if blocking happens at the wrong moment. On the other hand, this is unlikely, and probably we can ignore this anyway. > For the record, I've reproduced the original issue fixed in 294162223c7c. > For example, it could be reading a discarded body sent separately in 1-RTT. > Even with the fix backed out, reading with blocked sendfile works fine. > > 2021/10/12 16:15:53 [debug] 38707#0: *2 SSL buf copy: 246 > 2021/10/12 16:15:53 [debug] 38707#0: *2 SSL to write: 246 > 2021/10/12 16:15:53 [debug] 38707#0: *2 SSL_write_early_data: 1, 246 > 2021/10/12 16:15:53 [debug] 38707#0: *2 SSL to sendfile: @0 1048576 > 2021/10/12 16:15:53 [debug] 38707#0: *2 SSL_sendfile: 45056 > 2021/10/12 16:15:53 [debug] 38707#0: *2 SSL to sendfile: @45056 1003520 > 2021/10/12 16:15:53 [debug] 38707#0: *2 SSL_sendfile: 40960 > 2021/10/12 16:15:53 [debug] 38707#0: *2 SSL to sendfile: @86016 962560 > 2021/10/12 16:15:53 [debug] 38707#0: *2 SSL_sendfile: 61440 > 2021/10/12 16:15:53 [debug] 38707#0: *2 SSL to sendfile: @147456 901120 > 2021/10/12 16:15:53 [debug] 38707#0: *2 SSL_sendfile: -1 > 2021/10/12 16:15:53 [debug] 38707#0: *2 SSL_get_error: 3 > 2021/10/12 16:15:53 [debug] 38707#0: *2 http write filter 0000000802259660 > 2021/10/12 16:15:53 [debug] 38707#0: *2 http copy filter: -2 "/file?" > 2021/10/12 16:15:53 [debug] 38707#0: *2 http finalize request: -2, "/file?" 
a:1, > c:2 > 2021/10/12 16:15:53 [debug] 38707#0: *2 event timer add: 13: 60000:707289850 > 2021/10/12 16:15:53 [debug] 38707#0: *2 kevent set event: 13: ft:-2 fl:0025 > 2021/10/12 16:15:53 [debug] 38707#0: timer delta: 1 > 2021/10/12 16:15:53 [debug] 38707#0: worker cycle > 2021/10/12 16:15:53 [debug] 38707#0: kevent timer: 60000, changes: 1 > 2021/10/12 16:15:53 [debug] 38707#0: kevent events: 1 > 2021/10/12 16:15:53 [debug] 38707#0: kevent: 13: ft:-1 fl:0020 ff:00000000 d:138 ud:0000000802328841 > 2021/10/12 16:15:53 [debug] 38707#0: *2 http run request: "/file?" > 2021/10/12 16:15:53 [debug] 38707#0: *2 http read discarded body > 2021/10/12 16:15:53 [debug] 38707#0: *2 SSL_read_early_data: 2, 0 > 2021/10/12 16:15:53 [debug] 38707#0: *2 SSL_read: 10 > 2021/10/12 16:15:53 [debug] 38707#0: *2 SSL_read: avail:128 > > For comparison (and to make sure I'm testing it right), > disabling sendfile on unfixed nginx would reintroduce an error: > > 2021/10/12 16:33:41 [debug] 42445#0: *2 SSL_read_early_data: 2, 0 > 2021/10/12 16:33:41 [alert] 42445#0: *2 ignoring stale global SSL error (SSL: error:0A00010F:SSL routines::bad length) while sending response to client, client: 127.0.0.1, server: localhost, request: "GET /file HTTP/1.1", host: "localhost" > 2021/10/12 16:33:41 [debug] 42445#0: *2 SSL_read: -1 > 2021/10/12 16:33:41 [debug] 42445#0: *2 SSL_get_error: 5 It would be great to make a test (may be disabled by default and/or with some comments on tuning needed to reproduce) for the original issue, to make sure we'll be able to check possible future OpenSSL fixes, if any. -- Maxim Dounin http://mdounin.ru/ From xeioex at nginx.com Wed Oct 13 16:56:51 2021 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Wed, 13 Oct 2021 16:56:51 +0000 Subject: [njs] SSL: fixed typo introduced in 8e335c2ac447. 
Message-ID: details: https://hg.nginx.org/njs/rev/776277c313be branches: changeset: 1722:776277c313be user: Dmitry Volyntsev date: Wed Oct 13 15:29:50 2021 +0000 description: SSL: fixed typo introduced in 8e335c2ac447. diffstat: external/njs_openssl.h | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff -r 8e335c2ac447 -r 776277c313be external/njs_openssl.h --- a/external/njs_openssl.h Tue Oct 12 17:24:31 2021 +0000 +++ b/external/njs_openssl.h Wed Oct 13 15:29:50 2021 +0000 @@ -21,7 +21,7 @@ #include #include -#if EVP_PKEY_HKDF +#ifdef EVP_PKEY_HKDF #include #endif From xeioex at nginx.com Wed Oct 13 16:56:53 2021 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Wed, 13 Oct 2021 16:56:53 +0000 Subject: [njs] SSL: fixed building with OpenSSL <= 1.0.1. Message-ID: details: https://hg.nginx.org/njs/rev/1b63a726fcea branches: changeset: 1723:1b63a726fcea user: Dmitry Volyntsev date: Wed Oct 13 16:31:00 2021 +0000 description: SSL: fixed building with OpenSSL <= 1.0.1. This closes #429 issue on Github. 
diffstat: external/njs_openssl.h | 3 +++ external/njs_webcrypto.c | 4 ++-- 2 files changed, 5 insertions(+), 2 deletions(-) diffs (34 lines): diff -r 776277c313be -r 1b63a726fcea external/njs_openssl.h --- a/external/njs_openssl.h Wed Oct 13 15:29:50 2021 +0000 +++ b/external/njs_openssl.h Wed Oct 13 16:31:00 2021 +0000 @@ -45,6 +45,9 @@ #endif +#define njs_bio_new_mem_buf(b, len) BIO_new_mem_buf((void *) b, len) + + #if (OPENSSL_VERSION_NUMBER < 0x30000000L && !defined ERR_peek_error_data) #define ERR_peek_error_data(d, f) ERR_peek_error_line_data(NULL, NULL, d, f) #endif diff -r 776277c313be -r 1b63a726fcea external/njs_webcrypto.c --- a/external/njs_webcrypto.c Wed Oct 13 15:29:50 2021 +0000 +++ b/external/njs_webcrypto.c Wed Oct 13 16:31:00 2021 +0000 @@ -598,7 +598,7 @@ njs_cipher_pkey(njs_vm_t *vm, njs_str_t md = njs_algorithm_hash_digest(key->hash); EVP_PKEY_CTX_set_rsa_padding(ctx, RSA_PKCS1_OAEP_PADDING); - EVP_PKEY_CTX_set_rsa_oaep_md(ctx, md); + EVP_PKEY_CTX_set_signature_md(ctx, md); EVP_PKEY_CTX_set_rsa_mgf1_md(ctx, md); ret = cipher(ctx, NULL, &outlen, data->start, data->length); @@ -1714,7 +1714,7 @@ njs_ext_import_key(njs_vm_t *vm, njs_val switch (fmt) { case NJS_KEY_FORMAT_PKCS8: - bio = BIO_new_mem_buf(start, key_data.length); + bio = njs_bio_new_mem_buf(start, key_data.length); if (njs_slow_path(bio == NULL)) { njs_webcrypto_error(vm, "BIO_new_mem_buf() failed"); goto fail; From xeioex at nginx.com Thu Oct 14 17:16:52 2021 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Thu, 14 Oct 2021 17:16:52 +0000 Subject: [njs] Style. Message-ID: details: https://hg.nginx.org/njs/rev/9502fed1bd6b branches: changeset: 1724:9502fed1bd6b user: Dmitry Volyntsev date: Thu Oct 14 15:18:47 2021 +0000 description: Style. 
diffstat: nginx/ngx_http_js_module.c | 24 ++++++++++++------------ nginx/ngx_stream_js_module.c | 24 ++++++++++++------------ 2 files changed, 24 insertions(+), 24 deletions(-) diffs (134 lines): diff -r 1b63a726fcea -r 9502fed1bd6b nginx/ngx_http_js_module.c --- a/nginx/ngx_http_js_module.c Wed Oct 13 16:31:00 2021 +0000 +++ b/nginx/ngx_http_js_module.c Thu Oct 14 15:18:47 2021 +0000 @@ -233,7 +233,7 @@ static char *ngx_http_js_merge_loc_conf( void *child); #if (NGX_HTTP_SSL) -static char * ngx_http_js_set_ssl(ngx_conf_t *cf, ngx_http_js_loc_conf_t *plcf); +static char * ngx_http_js_set_ssl(ngx_conf_t *cf, ngx_http_js_loc_conf_t *jlcf); #endif static ngx_ssl_t *ngx_http_js_ssl(njs_vm_t *vm, ngx_http_request_t *r); @@ -4087,7 +4087,7 @@ ngx_http_js_merge_loc_conf(ngx_conf_t *c #if (NGX_HTTP_SSL) static char * -ngx_http_js_set_ssl(ngx_conf_t *cf, ngx_http_js_loc_conf_t *plcf) +ngx_http_js_set_ssl(ngx_conf_t *cf, ngx_http_js_loc_conf_t *jlcf) { ngx_ssl_t *ssl; ngx_pool_cleanup_t *cln; @@ -4097,10 +4097,10 @@ ngx_http_js_set_ssl(ngx_conf_t *cf, ngx_ return NGX_CONF_ERROR; } - plcf->ssl = ssl; + jlcf->ssl = ssl; ssl->log = cf->log; - if (ngx_ssl_create(ssl, plcf->ssl_protocols, NULL) != NGX_OK) { + if (ngx_ssl_create(ssl, jlcf->ssl_protocols, NULL) != NGX_OK) { return NGX_CONF_ERROR; } @@ -4113,12 +4113,12 @@ ngx_http_js_set_ssl(ngx_conf_t *cf, ngx_ cln->handler = ngx_ssl_cleanup_ctx; cln->data = ssl; - if (ngx_ssl_ciphers(NULL, ssl, &plcf->ssl_ciphers, 0) != NGX_OK) { + if (ngx_ssl_ciphers(NULL, ssl, &jlcf->ssl_ciphers, 0) != NGX_OK) { return NGX_CONF_ERROR; } - if (ngx_ssl_trusted_certificate(cf, ssl, &plcf->ssl_trusted_certificate, - plcf->ssl_verify_depth) + if (ngx_ssl_trusted_certificate(cf, ssl, &jlcf->ssl_trusted_certificate, + jlcf->ssl_verify_depth) != NGX_OK) { return NGX_CONF_ERROR; @@ -4134,11 +4134,11 @@ static ngx_ssl_t * ngx_http_js_ssl(njs_vm_t *vm, ngx_http_request_t *r) { #if (NGX_HTTP_SSL) - ngx_http_js_loc_conf_t *plcf; - - plcf = 
ngx_http_get_module_loc_conf(r, ngx_http_js_module); - - return plcf->ssl; + ngx_http_js_loc_conf_t *jlcf; + + jlcf = ngx_http_get_module_loc_conf(r, ngx_http_js_module); + + return jlcf->ssl; #else return NULL; #endif diff -r 1b63a726fcea -r 9502fed1bd6b nginx/ngx_stream_js_module.c --- a/nginx/ngx_stream_js_module.c Wed Oct 13 16:31:00 2021 +0000 +++ b/nginx/ngx_stream_js_module.c Thu Oct 14 15:18:47 2021 +0000 @@ -147,7 +147,7 @@ static ngx_int_t ngx_stream_js_init(ngx_ #if (NGX_SSL) static char * ngx_stream_js_set_ssl(ngx_conf_t *cf, - ngx_stream_js_srv_conf_t *pscf); + ngx_stream_js_srv_conf_t *jscf); #endif static ngx_ssl_t *ngx_stream_js_ssl(njs_vm_t *vm, ngx_stream_session_t *s); @@ -2060,7 +2060,7 @@ ngx_stream_js_init(ngx_conf_t *cf) #if (NGX_SSL) static char * -ngx_stream_js_set_ssl(ngx_conf_t *cf, ngx_stream_js_srv_conf_t *pscf) +ngx_stream_js_set_ssl(ngx_conf_t *cf, ngx_stream_js_srv_conf_t *jscf) { ngx_ssl_t *ssl; ngx_pool_cleanup_t *cln; @@ -2070,10 +2070,10 @@ ngx_stream_js_set_ssl(ngx_conf_t *cf, ng return NGX_CONF_ERROR; } - pscf->ssl = ssl; + jscf->ssl = ssl; ssl->log = cf->log; - if (ngx_ssl_create(ssl, pscf->ssl_protocols, NULL) != NGX_OK) { + if (ngx_ssl_create(ssl, jscf->ssl_protocols, NULL) != NGX_OK) { return NGX_CONF_ERROR; } @@ -2086,12 +2086,12 @@ ngx_stream_js_set_ssl(ngx_conf_t *cf, ng cln->handler = ngx_ssl_cleanup_ctx; cln->data = ssl; - if (ngx_ssl_ciphers(NULL, ssl, &pscf->ssl_ciphers, 0) != NGX_OK) { + if (ngx_ssl_ciphers(NULL, ssl, &jscf->ssl_ciphers, 0) != NGX_OK) { return NGX_CONF_ERROR; } - if (ngx_ssl_trusted_certificate(cf, ssl, &pscf->ssl_trusted_certificate, - pscf->ssl_verify_depth) + if (ngx_ssl_trusted_certificate(cf, ssl, &jscf->ssl_trusted_certificate, + jscf->ssl_verify_depth) != NGX_OK) { return NGX_CONF_ERROR; @@ -2107,11 +2107,11 @@ static ngx_ssl_t * ngx_stream_js_ssl(njs_vm_t *vm, ngx_stream_session_t *s) { #if (NGX_SSL) - ngx_stream_js_srv_conf_t *pscf; - - pscf = ngx_stream_get_module_srv_conf(s, 
ngx_stream_js_module); - - return pscf->ssl; + ngx_stream_js_srv_conf_t *jscf; + + jscf = ngx_stream_get_module_srv_conf(s, ngx_stream_js_module); + + return jscf->ssl; #else return NULL; #endif From xeioex at nginx.com Thu Oct 14 17:16:54 2021 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Thu, 14 Oct 2021 17:16:54 +0000 Subject: [njs] Modules: fixed Response.headers getter in fetch API. Message-ID: details: https://hg.nginx.org/njs/rev/6545769f30bf branches: changeset: 1725:6545769f30bf user: Dmitry Volyntsev date: Thu Oct 14 17:16:10 2021 +0000 description: Modules: fixed Response.headers getter in fetch API. The issue manifested itself when a Response object is dumped using JSON.stringify() or njs.dump(). The Response headers were dumped as "null" values. diffstat: nginx/ngx_js_fetch.c | 3 +-- 1 files changed, 1 insertions(+), 2 deletions(-) diffs (13 lines): diff -r 9502fed1bd6b -r 6545769f30bf nginx/ngx_js_fetch.c --- a/nginx/ngx_js_fetch.c Thu Oct 14 15:18:47 2021 +0000 +++ b/nginx/ngx_js_fetch.c Thu Oct 14 17:16:10 2021 +0000 @@ -2207,8 +2207,7 @@ ngx_response_js_ext_header(njs_vm_t *vm, return NJS_ERROR; } - return ngx_response_js_ext_header_get(vm, value, &name, njs_vm_retval(vm), - 0); + return ngx_response_js_ext_header_get(vm, value, &name, retval, 0); } From mdounin at mdounin.ru Thu Oct 14 17:33:28 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 14 Oct 2021 20:33:28 +0300 Subject: Possible error on revalidate in ngx_http_upstream In-Reply-To: <7f42d31a-60fe-8521-5337-d6b6a9948978@cdn77.com> References: <7f42d31a-60fe-8521-5337-d6b6a9948978@cdn77.com> Message-ID: Hello! On Thu, Oct 07, 2021 at 06:37:11PM +0200, Jiří Setnička wrote: > I use nginx as a proxy with enabled cache. I encountered strange > behavior on revalidate. > > When upstream does not return any caching headers it is ok - file is > cached with default cachetime and on revalidate the r->cache->valid_sec > is updated to now + default cachetime. 
> > Also when upstream consistently returns caching headers it is still ok - > file is cached according to caching headers and on revalidate the > r->cache->valid_sec is updated by value from 304 response caching headers. > > Problem is when upstream previously returned absolute caching headers on > 200 response (so the file is cached according to these headers and these > headers are saved into cache file on disk) but later it changed its > behavior and on 304 response it does not return any caching headers. > In such a case, I would expect that now + default cachetime would be used > as the new r->cache->valid_sec, but old absolute time is used instead > and this results in a revalidate on each request. Per RFC 2616 (https://datatracker.ietf.org/doc/html/rfc2616#section-10.3.5): : The response MUST include the following header fields: : : ... : : - Expires, Cache-Control, and/or Vary, if the field-value might : differ from that sent in any previous response for the same : variant That is, as long as Expires is not present in the 304 response, per RFC 2616 it is correct to assume that the Expires in the original response still applies. This is what nginx currently does: if there are no Expires/Cache-Control in the 304 response, it uses Expires/Cache-Control from the original response, and if there are none either, it tries to use proxy_cache_valid. With this behaviour it is basically not possible to "remove" cache control headers, it is only possible to redefine them to something new explicitly returned. In RFC 7232 this was changed to (https://datatracker.ietf.org/doc/html/rfc7232#section-4.1): : The server generating a 304 response MUST generate any of the : following header fields that would have been sent in a 200 (OK) : response to the same request: Cache-Control, Content-Location, Date, : ETag, Expires, and Vary. This implies that the original response Expires and Cache-Control headers should not be used at all. 
With this approach, Expires and Cache-Control headers can be safely removed (in theory). Switching to RFC 7232 logic might be considered, though this should be done with care, as it can affect various upstream servers which follow RFC 2616 and does not generate Expire/Cache-Control if these are expected to match headers returned in the original response. On the other hand, I'm not sure this will help in your case, since Expires headers in the past are already automatically ignored, see below. > In ngx_http_upstream_test_next(...) in revalidate part there is firstly > cache time from upstream 304 response saved to temporal variable (valid > = r->cache->valid_sec) and then request is reinited and > r->cache->valid_sec is set according to headers in the cached file. > Problem is when value == 0 (no caching info from upstream) and there is > an absolute time in the cached file headers. > > This patch should fix this behavior - time computed from cached file is > used only when it is in the future otherwise, time calculated by > ngx_http_file_cache_valid(...) is used. As long as Expires is in the past, r->cache->valid_sec is not set and remains 0, see ngx_http_upstream_process_expires(). As such, suggested patch is a nop as long as standard Expires and Cache-Control headers are used: nginx will ignore Expires from the original response automatically, and will use proxy_cache_valid instead. Are you trying to address X-Accel-Expires with an absolute time in the past? Note that it is known to be specifically used to achieve the "revalidate on each request" behaviour, and the suggested change will break this. (Also, changing the X-Accel-Expires behaviour is better to be done in ngx_http_upstream_process_accel_expires(), rather than indirectly, in 304 response handling code.) 
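The difference between the two policies can be sketched as a toy header-merge rule (illustrative code only, not the actual nginx logic):

```c
#include <assert.h>
#include <string.h>

/* Effective Expires value after a 304 under the two RFCs discussed above.
 * RFC 2616: a field absent from the 304 is assumed unchanged, so the
 * stored value survives.  RFC 7232: the 304 must regenerate the field,
 * so an absent field effectively removes the stored one. */
static const char *
effective_expires(const char *stored, const char *from_304, int rfc7232)
{
    if (from_304 != NULL) {
        return from_304;             /* an explicit value always wins */
    }

    return rfc7232 ? NULL : stored;  /* carry over only under RFC 2616 */
}
```

Under the RFC 2616 rule the only way to drop a stored Expires is to send a replacement; under the RFC 7232 rule simply omitting it is enough.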
-- Maxim Dounin http://mdounin.ru/ From jiri.setnicka at cdn77.com Fri Oct 15 10:14:38 2021 From: jiri.setnicka at cdn77.com (Jiří Setnička) Date: Fri, 15 Oct 2021 12:14:38 +0200 Subject: Possible error on revalidate in ngx_http_upstream In-Reply-To: References: <7f42d31a-60fe-8521-5337-d6b6a9948978@cdn77.com> Message-ID: <626f4b2c-4523-b07d-e2aa-667b979a0ce7@cdn77.com> Hello! Thanks for your reply. I didn't realize the implications arising from the RFC mentioned. >> In ngx_http_upstream_test_next(...) in revalidate part there is firstly >> cache time from upstream 304 response saved to temporal variable (valid >> = r->cache->valid_sec) and then request is reinited and >> r->cache->valid_sec is set according to headers in the cached file. >> Problem is when value == 0 (no caching info from upstream) and there is >> an absolute time in the cached file headers. >> >> This patch should fix this behavior - time computed from cached file is >> used only when it is in the future otherwise, time calculated by >> ngx_http_file_cache_valid(...) is used. > As long as Expires is in the past, r->cache->valid_sec is not set > and remains 0, see ngx_http_upstream_process_expires(). As such, > suggested patch is a nop as long as standard Expires and > Cache-Control headers are used: nginx will ignore Expires from the > original response automatically, and will use proxy_cache_valid > instead. As you mentioned below, I deal with the X-Accel-Expires header. I didn't explicitly check ngx_http_upstream_process_expires() and I thought that the behavior is similar to X-Accel-Expires, sorry for that. > Are you trying to address X-Accel-Expires with an absolute time in > the past? Note that it is known to be specifically used to > achieve the "revalidate on each request" behaviour, and the > suggested change will break this. 
(Also, changing the > X-Accel-Expires behaviour is better to be done in > ngx_http_upstream_process_accel_expires(), rather than indirectly, > in 304 response handling code.) OK, I will look into the implementation in ngx_http_upstream_process_accel_expires(). Is the "revalidate on each request" behaviour intended as the right one, or is it considered a hack because there is no other way to do "revalidate on each request"? I did not find it in any documentation, only in some email threads and tickets [1]. Would you be interested in the updated patch or should I patch it only locally for my own use case? Best regards Jiri Setnicka [1] https://trac.nginx.org/nginx/ticket/1182 From mdounin at mdounin.ru Fri Oct 15 20:33:33 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 15 Oct 2021 23:33:33 +0300 Subject: Possible error on revalidate in ngx_http_upstream In-Reply-To: <626f4b2c-4523-b07d-e2aa-667b979a0ce7@cdn77.com> References: <7f42d31a-60fe-8521-5337-d6b6a9948978@cdn77.com> <626f4b2c-4523-b07d-e2aa-667b979a0ce7@cdn77.com> Message-ID: Hello! On Fri, Oct 15, 2021 at 12:14:38PM +0200, Jiří Setnička wrote: > Hello! > > Thanks for your reply. I didn't realize the implications arising from > the RFC mentioned. > > >> In ngx_http_upstream_test_next(...) in revalidate part there is firstly > >> cache time from upstream 304 response saved to temporal variable (valid > >> = r->cache->valid_sec) and then request is reinited and > >> r->cache->valid_sec is set according to headers in the cached file. > >> Problem is when value == 0 (no caching info from upstream) and there is > >> an absolute time in the cached file headers. > >> > >> This patch should fix this behavior - time computed from cached file is > >> used only when it is in the future otherwise, time calculated by > >> ngx_http_file_cache_valid(...) is used. > > As long as Expires is in the past, r->cache->valid_sec is not set > > and remains 0, see ngx_http_upstream_process_expires(). 
As such, > > suggested patch is a nop as long as standard Expires and > > Cache-Control headers are used: nginx will ignore Expires from the > > original response automatically, and will use proxy_cache_valid > > instead. > > As you mentioned below, I deal with the X-Accel-Expires header. I didn't > explicitly check ngx_http_upstream_process_expires() and I thought that > the behavior is similar as X-Accel-Expires, sorry for that. > > > Are you trying to address X-Accel-Expires with an absolute time in > > the past? Note that it is known to be specifically used to > > achieve the "revalidate on each request" behaviour, and the > > suggested change will break this. (Also, changing the > > X-Accel-Expires behaviour is better to be done in > > ngx_http_upstream_process_accel_expires(), rather than indirectly, > > in 304 response handling code.) > > Ok I will look into implementation in > ngx_http_upstream_process_accel_expires(). > > Is "revalidate on each request" behaviour intended as the right one, or > it is considered as a hack because there is no other way to do > "revalidate on each request"? I did not find it in any documentation, > only in some email threads and tickets [1]. > > Would you be interested in the updated patch or should I patch it only > locally for my own usecase? I don't think that X-Accel-Expires behaviour should be changed: while not really documented[*], it is known to be used as it is now, and currently there are no other ways to request revalidation on each request. [*] The only X-Accel-Expires documentation I'm aware of is in the original mod_accel docs (http://sysoev.ru/mod_accel/readme.html), and it only accepted relative time at that time. 
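[Editorial note: the "revalidate on each request" idiom discussed in this thread can be illustrated with a configuration sketch. The upstream and cache-zone names below are illustrative, not from the thread; the idea is that the backend sends X-Accel-Expires with an absolute time in the past, so the cached entry is immediately stale and nginx revalidates it on every request.]

```nginx
# Sketch only: "backend" and the "cache" keys_zone are illustrative names.
proxy_cache_path /var/cache/nginx keys_zone=cache:10m;

server {
    location / {
        proxy_pass http://backend;
        proxy_cache cache;

        # Refresh expired entries with conditional requests
        # (If-Modified-Since / If-None-Match) instead of full fetches.
        proxy_cache_revalidate on;
    }
}

# Backend response header (the "@" prefix sets an absolute time in
# seconds since the epoch, here far in the past):
#
#   X-Accel-Expires: @1
```

With such a response the cached entry expires immediately, so each client request triggers an upstream revalidation; this is the behaviour that the suggested patch would have broken.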
-- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Fri Oct 15 22:03:56 2021 From: mdounin at mdounin.ru (=?utf-8?q?Maxim_Dounin?=) Date: Sat, 16 Oct 2021 01:03:56 +0300 Subject: [PATCH] Upstream: fixed logging level of upstream invalid header errors Message-ID: <568299e3799dcd9ec361.1634335436@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1634321061 -10800 # Fri Oct 15 21:04:21 2021 +0300 # Node ID 568299e3799dcd9ec361c998935d267a33b17daf # Parent 01829d1620956241455867fd8ba28ba54eed5aa9 Upstream: fixed logging level of upstream invalid header errors. In b87b7092cedb (nginx 1.21.1), logging level of "upstream sent invalid header" errors was accidentally changed to "info". This change restores the "error" level, which is a proper logging level for upstream-side errors. diff --git a/src/http/modules/ngx_http_fastcgi_module.c b/src/http/modules/ngx_http_fastcgi_module.c --- a/src/http/modules/ngx_http_fastcgi_module.c +++ b/src/http/modules/ngx_http_fastcgi_module.c @@ -2021,7 +2021,7 @@ ngx_http_fastcgi_process_header(ngx_http /* rc == NGX_HTTP_PARSE_INVALID_HEADER */ - ngx_log_error(NGX_LOG_INFO, r->connection->log, 0, + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "upstream sent invalid header: \"%*s\\x%02xd...\"", r->header_end - r->header_name_start, r->header_name_start, *r->header_end); diff --git a/src/http/modules/ngx_http_proxy_module.c b/src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c +++ b/src/http/modules/ngx_http_proxy_module.c @@ -2021,7 +2021,7 @@ ngx_http_proxy_process_header(ngx_http_r /* rc == NGX_HTTP_PARSE_INVALID_HEADER */ - ngx_log_error(NGX_LOG_INFO, r->connection->log, 0, + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "upstream sent invalid header: \"%*s\\x%02xd...\"", r->header_end - r->header_name_start, r->header_name_start, *r->header_end); diff --git a/src/http/modules/ngx_http_scgi_module.c b/src/http/modules/ngx_http_scgi_module.c --- 
a/src/http/modules/ngx_http_scgi_module.c +++ b/src/http/modules/ngx_http_scgi_module.c @@ -1142,7 +1142,7 @@ ngx_http_scgi_process_header(ngx_http_re /* rc == NGX_HTTP_PARSE_INVALID_HEADER */ - ngx_log_error(NGX_LOG_INFO, r->connection->log, 0, + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "upstream sent invalid header: \"%*s\\x%02xd...\"", r->header_end - r->header_name_start, r->header_name_start, *r->header_end); diff --git a/src/http/modules/ngx_http_uwsgi_module.c b/src/http/modules/ngx_http_uwsgi_module.c --- a/src/http/modules/ngx_http_uwsgi_module.c +++ b/src/http/modules/ngx_http_uwsgi_module.c @@ -1363,7 +1363,7 @@ ngx_http_uwsgi_process_header(ngx_http_r /* rc == NGX_HTTP_PARSE_INVALID_HEADER */ - ngx_log_error(NGX_LOG_INFO, r->connection->log, 0, + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "upstream sent invalid header: \"%*s\\x%02xd...\"", r->header_end - r->header_name_start, r->header_name_start, *r->header_end); From serg.brester at sebres.de Mon Oct 18 06:43:18 2021 From: serg.brester at sebres.de (Dipl. Ing. Sergey Brester) Date: Mon, 18 Oct 2021 08:43:18 +0200 Subject: PCRE2 support? In-Reply-To: <20190125001242.GR1877@mdounin.ru> References: <20180918121231.843FC2C50D50@mail.nginx.com> <20180918155518.GQ56558@mdounin.ru> <0522327a-8b2c-f066-ddd6-392207ec6c1d@thomas-ward.net> <20190124182121.GP1877@mdounin.ru> <5f5a8d73-77b5-bf62-d963-5e2927aa72e1@gmail.com> <20190125001242.GR1877@mdounin.ru> Message-ID: Just for the record (and probably to reopen this discussion again). https://github.com/PhilipHazel/pcre2/issues/26 [3] shows a heavy bug in PCRE library (it is not safe to use it anymore, at least without jit) as well as the statement of the PCRE developer regarding the end of life for PCRE. Regards, Serg. 25.01.2019 01:12, Maxim Dounin wrote: > Hello! > > On Thu, Jan 24, 2019 at 10:47:48AM -0800, PGNet Dev wrote: > Well, this depends on your point of view. 
If a project which actually developed the library fails to introduce support to the new version of the library - for an external observer this suggests that there is something wrong with the new version. FUD 'suggestions' simply aren't needed. Sure, they aren't. What is wrong with PCRE2 is clear from the very start: it's a different library with different API. And supporting PCRE2 is a question of advantages of PCRE2 over PCRE. > The Exim project didn't develop the pcre2 library ... Philip Hazel did (https://www.pcre.org/current/doc/html/pcre2.html#SEC4 [1]), as a separate project. Philip Hazel developed both Exim and the PCRE library, "originally written for the Exim MTA". And PCRE2 claims to be a "major version" of the PCRE library. > Exim's last (? something newer out there?) rationale for not adopting it was simply, https://bugs.exim.org/show_bug.cgi?id=1878 [2] "The original PCRE support is not broken. If it is going to go away, then adding PCRE2 support becomes much more important, but I've seen nobody saying that yet." I've posted this link in my first response in this thread 4 month ago. The same rationale applies to any project already using the PCRE library. Links: ------ [1] https://www.pcre.org/current/doc/html/pcre2.html#SEC4 [2] https://bugs.exim.org/show_bug.cgi?id=1878 [3] https://github.com/PhilipHazel/pcre2/issues/26 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pluknet at nginx.com Mon Oct 18 11:02:42 2021 From: pluknet at nginx.com (Sergey Kandaurov) Date: Mon, 18 Oct 2021 14:02:42 +0300 Subject: [PATCH] Upstream: fixed logging level of upstream invalid header errors In-Reply-To: <568299e3799dcd9ec361.1634335436@vm-bsd.mdounin.ru> References: <568299e3799dcd9ec361.1634335436@vm-bsd.mdounin.ru> Message-ID: <23D752BF-2807-416E-A3F6-819F122FB185@nginx.com> > On 16 Oct 2021, at 01:03, Maxim Dounin wrote: > > # HG changeset patch > # User Maxim Dounin > # Date 1634321061 -10800 > # Fri Oct 15 21:04:21 2021 +0300 > # Node ID 568299e3799dcd9ec361c998935d267a33b17daf > # Parent 01829d1620956241455867fd8ba28ba54eed5aa9 > Upstream: fixed logging level of upstream invalid header errors. > > In b87b7092cedb (nginx 1.21.1), logging level of "upstream sent invalid > header" errors was accidentally changed to "info". This change restores > the "error" level, which is a proper logging level for upstream-side > errors. > > diff --git a/src/http/modules/ngx_http_fastcgi_module.c b/src/http/modules/ngx_http_fastcgi_module.c > --- a/src/http/modules/ngx_http_fastcgi_module.c > +++ b/src/http/modules/ngx_http_fastcgi_module.c > @@ -2021,7 +2021,7 @@ ngx_http_fastcgi_process_header(ngx_http > > /* rc == NGX_HTTP_PARSE_INVALID_HEADER */ > > - ngx_log_error(NGX_LOG_INFO, r->connection->log, 0, > + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, > "upstream sent invalid header: \"%*s\\x%02xd...\"", > r->header_end - r->header_name_start, > r->header_name_start, *r->header_end); > diff --git a/src/http/modules/ngx_http_proxy_module.c b/src/http/modules/ngx_http_proxy_module.c > --- a/src/http/modules/ngx_http_proxy_module.c > +++ b/src/http/modules/ngx_http_proxy_module.c > @@ -2021,7 +2021,7 @@ ngx_http_proxy_process_header(ngx_http_r > > /* rc == NGX_HTTP_PARSE_INVALID_HEADER */ > > - ngx_log_error(NGX_LOG_INFO, r->connection->log, 0, > + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, > "upstream sent invalid header: 
\"%*s\\x%02xd...\"", > r->header_end - r->header_name_start, > r->header_name_start, *r->header_end); > diff --git a/src/http/modules/ngx_http_scgi_module.c b/src/http/modules/ngx_http_scgi_module.c > --- a/src/http/modules/ngx_http_scgi_module.c > +++ b/src/http/modules/ngx_http_scgi_module.c > @@ -1142,7 +1142,7 @@ ngx_http_scgi_process_header(ngx_http_re > > /* rc == NGX_HTTP_PARSE_INVALID_HEADER */ > > - ngx_log_error(NGX_LOG_INFO, r->connection->log, 0, > + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, > "upstream sent invalid header: \"%*s\\x%02xd...\"", > r->header_end - r->header_name_start, > r->header_name_start, *r->header_end); > diff --git a/src/http/modules/ngx_http_uwsgi_module.c b/src/http/modules/ngx_http_uwsgi_module.c > --- a/src/http/modules/ngx_http_uwsgi_module.c > +++ b/src/http/modules/ngx_http_uwsgi_module.c > @@ -1363,7 +1363,7 @@ ngx_http_uwsgi_process_header(ngx_http_r > > /* rc == NGX_HTTP_PARSE_INVALID_HEADER */ > > - ngx_log_error(NGX_LOG_INFO, r->connection->log, 0, > + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, > "upstream sent invalid header: \"%*s\\x%02xd...\"", > r->header_end - r->header_name_start, > r->header_name_start, *r->header_end); > Looks good. -- Sergey Kandaurov From arut at nginx.com Mon Oct 18 12:48:27 2021 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 18 Oct 2021 15:48:27 +0300 Subject: [PATCH 0 of 3] HTTP/3 Stream Cancellation and friends Message-ID: The series implements some improvements in HTTP/3. On top of them Stream Cancellation send is added. 
- patch #1 improves throwing stream/connection errors - patch #2 adds connection reuse and delayed request allocation - patch #3 adds Stream Cancellation send From arut at nginx.com Mon Oct 18 12:48:28 2021 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 18 Oct 2021 15:48:28 +0300 Subject: [PATCH 1 of 3] HTTP/3: adjusted QUIC connection finalization In-Reply-To: References: Message-ID: <8739f475583031399879.1634561308@arut-laptop> # HG changeset patch # User Roman Arutyunyan # Date 1634559753 -10800 # Mon Oct 18 15:22:33 2021 +0300 # Branch quic # Node ID 8739f475583031399879ef0af2eb5af76008449e # Parent 404de224517e33f685613d6425dcdb3c8ef5b97e HTTP/3: adjusted QUIC connection finalization. When an HTTP/3 function returns an error in context of a QUIC stream, it's this function's responsibility now to finalize the entire QUIC connection with the right code, if required. Previously, QUIC connection finalization could be done both outside and inside such functions. The new rule follows a similar rule for logging, leads to cleaner code, and allows to provide more details about the error. While here, a few error cases are no longer treated as fatal and QUIC connection is no longer finalized in these cases. A few other cases now lead to stream reset instead of connection finalization. 
diff --git a/src/http/v3/ngx_http_v3.c b/src/http/v3/ngx_http_v3.c --- a/src/http/v3/ngx_http_v3.c +++ b/src/http/v3/ngx_http_v3.c @@ -33,7 +33,7 @@ ngx_http_v3_init_session(ngx_connection_ h3c = ngx_pcalloc(pc->pool, sizeof(ngx_http_v3_session_t)); if (h3c == NULL) { - return NGX_ERROR; + goto failed; } h3c->max_push_id = (uint64_t) -1; @@ -49,7 +49,7 @@ ngx_http_v3_init_session(ngx_connection_ cln = ngx_pool_cleanup_add(pc->pool, 0); if (cln == NULL) { - return NGX_ERROR; + goto failed; } cln->handler = ngx_http_v3_cleanup_session; @@ -58,6 +58,14 @@ ngx_http_v3_init_session(ngx_connection_ hc->v3_session = h3c; return ngx_http_v3_send_settings(c); + +failed: + + ngx_log_error(NGX_LOG_ERR, c->log, 0, "failed to create http3 session"); + + ngx_http_v3_finalize_connection(c, NGX_HTTP_V3_ERR_INTERNAL_ERROR, + "failed to create http3 session"); + return NGX_ERROR; } diff --git a/src/http/v3/ngx_http_v3_request.c b/src/http/v3/ngx_http_v3_request.c --- a/src/http/v3/ngx_http_v3_request.c +++ b/src/http/v3/ngx_http_v3_request.c @@ -65,8 +65,6 @@ ngx_http_v3_init(ngx_connection_t *c) ngx_http_core_srv_conf_t *cscf; if (ngx_http_v3_init_session(c) != NGX_OK) { - ngx_http_v3_finalize_connection(c, NGX_HTTP_V3_ERR_INTERNAL_ERROR, - "internal error"); ngx_http_close_connection(c); return; } @@ -110,8 +108,6 @@ ngx_http_v3_init(ngx_connection_t *c) h3c->goaway = 1; if (ngx_http_v3_send_goaway(c, (n + 1) << 2) != NGX_OK) { - ngx_http_v3_finalize_connection(c, NGX_HTTP_V3_ERR_INTERNAL_ERROR, - "goaway error"); ngx_http_close_connection(c); return; } @@ -287,15 +283,14 @@ ngx_http_v3_process_request(ngx_event_t rc = ngx_http_v3_parse_headers(c, st, b); if (rc > 0) { - ngx_http_v3_finalize_connection(c, rc, - "could not parse request headers"); + ngx_quic_reset_stream(c, rc); + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, + "client sent invalid header"); ngx_http_finalize_request(r, NGX_HTTP_BAD_REQUEST); break; } if (rc == NGX_ERROR) { - ngx_http_v3_finalize_connection(c, 
NGX_HTTP_V3_ERR_INTERNAL_ERROR, - "internal error"); ngx_http_close_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); break; } @@ -1167,17 +1162,13 @@ ngx_http_v3_request_body_filter(ngx_http } if (rc > 0) { - ngx_http_v3_finalize_connection(r->connection, rc, - "client sent invalid body"); + ngx_quic_reset_stream(r->connection, rc); ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "client sent invalid body"); return NGX_HTTP_BAD_REQUEST; } if (rc == NGX_ERROR) { - ngx_http_v3_finalize_connection(r->connection, - NGX_HTTP_V3_ERR_INTERNAL_ERROR, - "internal error"); return NGX_HTTP_INTERNAL_SERVER_ERROR; } diff --git a/src/http/v3/ngx_http_v3_streams.c b/src/http/v3/ngx_http_v3_streams.c --- a/src/http/v3/ngx_http_v3_streams.c +++ b/src/http/v3/ngx_http_v3_streams.c @@ -283,7 +283,7 @@ ngx_http_v3_create_push_stream(ngx_conne sc = ngx_quic_open_stream(c, 0); if (sc == NULL) { - return NULL; + goto failed; } p = buf; @@ -318,7 +318,13 @@ ngx_http_v3_create_push_stream(ngx_conne failed: - ngx_http_v3_close_uni_stream(sc); + ngx_log_error(NGX_LOG_ERR, c->log, 0, "failed to create push stream"); + + ngx_http_v3_finalize_connection(c, NGX_HTTP_V3_ERR_STREAM_CREATION_ERROR, + "failed to create push stream"); + if (sc) { + ngx_http_v3_close_uni_stream(sc); + } return NULL; } @@ -368,7 +374,7 @@ ngx_http_v3_get_uni_stream(ngx_connectio sc = ngx_quic_open_stream(c, 0); if (sc == NULL) { - return NULL; + goto failed; } sc->quic->cancelable = 1; @@ -405,7 +411,13 @@ ngx_http_v3_get_uni_stream(ngx_connectio failed: - ngx_http_v3_close_uni_stream(sc); + ngx_log_error(NGX_LOG_ERR, c->log, 0, "failed to create server stream"); + + ngx_http_v3_finalize_connection(c, NGX_HTTP_V3_ERR_STREAM_CREATION_ERROR, + "failed to create server stream"); + if (sc) { + ngx_http_v3_close_uni_stream(sc); + } return NULL; } @@ -424,7 +436,7 @@ ngx_http_v3_send_settings(ngx_connection cc = ngx_http_v3_get_uni_stream(c, NGX_HTTP_V3_STREAM_CONTROL); if (cc == NULL) { - return NGX_DECLINED; + return 
NGX_ERROR; } h3scf = ngx_http_v3_get_module_srv_conf(c, ngx_http_v3_module); @@ -457,6 +469,10 @@ ngx_http_v3_send_settings(ngx_connection failed: + ngx_log_error(NGX_LOG_ERR, c->log, 0, "failed to send settings"); + + ngx_http_v3_finalize_connection(c, NGX_HTTP_V3_ERR_EXCESSIVE_LOAD, + "failed to send settings"); ngx_http_v3_close_uni_stream(cc); return NGX_ERROR; @@ -475,7 +491,7 @@ ngx_http_v3_send_goaway(ngx_connection_t cc = ngx_http_v3_get_uni_stream(c, NGX_HTTP_V3_STREAM_CONTROL); if (cc == NULL) { - return NGX_DECLINED; + return NGX_ERROR; } n = ngx_http_v3_encode_varlen_int(NULL, id); @@ -495,6 +511,10 @@ ngx_http_v3_send_goaway(ngx_connection_t failed: + ngx_log_error(NGX_LOG_ERR, c->log, 0, "failed to send goaway"); + + ngx_http_v3_finalize_connection(c, NGX_HTTP_V3_ERR_EXCESSIVE_LOAD, + "failed to send goaway"); ngx_http_v3_close_uni_stream(cc); return NGX_ERROR; @@ -510,7 +530,7 @@ ngx_http_v3_send_ack_section(ngx_connect ngx_http_v3_session_t *h3c; ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0, - "http3 client ack section %ui", stream_id); + "http3 send section acknowledgement %ui", stream_id); dc = ngx_http_v3_get_uni_stream(c, NGX_HTTP_V3_STREAM_DECODER); if (dc == NULL) { @@ -524,11 +544,21 @@ ngx_http_v3_send_ack_section(ngx_connect h3c->total_bytes += n; if (dc->send(dc, buf, n) != (ssize_t) n) { - ngx_http_v3_close_uni_stream(dc); - return NGX_ERROR; + goto failed; } return NGX_OK; + +failed: + + ngx_log_error(NGX_LOG_ERR, c->log, 0, + "failed to send section acknowledgement"); + + ngx_http_v3_finalize_connection(c, NGX_HTTP_V3_ERR_EXCESSIVE_LOAD, + "failed to send section acknowledgement"); + ngx_http_v3_close_uni_stream(dc); + + return NGX_ERROR; } @@ -541,7 +571,7 @@ ngx_http_v3_send_cancel_stream(ngx_conne ngx_http_v3_session_t *h3c; ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0, - "http3 client cancel stream %ui", stream_id); + "http3 send stream cancellation %ui", stream_id); dc = ngx_http_v3_get_uni_stream(c, NGX_HTTP_V3_STREAM_DECODER); if 
(dc == NULL) { @@ -555,11 +585,20 @@ ngx_http_v3_send_cancel_stream(ngx_conne h3c->total_bytes += n; if (dc->send(dc, buf, n) != (ssize_t) n) { - ngx_http_v3_close_uni_stream(dc); - return NGX_ERROR; + goto failed; } return NGX_OK; + +failed: + + ngx_log_error(NGX_LOG_ERR, c->log, 0, "failed to send stream cancellation"); + + ngx_http_v3_finalize_connection(c, NGX_HTTP_V3_ERR_EXCESSIVE_LOAD, + "failed to send stream cancellation"); + ngx_http_v3_close_uni_stream(dc); + + return NGX_ERROR; } @@ -572,7 +611,7 @@ ngx_http_v3_send_inc_insert_count(ngx_co ngx_http_v3_session_t *h3c; ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0, - "http3 client increment insert count %ui", inc); + "http3 send insert count increment %ui", inc); dc = ngx_http_v3_get_uni_stream(c, NGX_HTTP_V3_STREAM_DECODER); if (dc == NULL) { @@ -586,11 +625,21 @@ ngx_http_v3_send_inc_insert_count(ngx_co h3c->total_bytes += n; if (dc->send(dc, buf, n) != (ssize_t) n) { - ngx_http_v3_close_uni_stream(dc); - return NGX_ERROR; + goto failed; } return NGX_OK; + +failed: + + ngx_log_error(NGX_LOG_ERR, c->log, 0, + "failed to send insert count increment"); + + ngx_http_v3_finalize_connection(c, NGX_HTTP_V3_ERR_EXCESSIVE_LOAD, + "failed to send insert count increment"); + ngx_http_v3_close_uni_stream(dc); + + return NGX_ERROR; } diff --git a/src/http/v3/ngx_http_v3_tables.c b/src/http/v3/ngx_http_v3_tables.c --- a/src/http/v3/ngx_http_v3_tables.c +++ b/src/http/v3/ngx_http_v3_tables.c @@ -589,6 +589,10 @@ ngx_http_v3_check_insert_count(ngx_conne if (h3c->nblocked == h3scf->max_blocked_streams) { ngx_log_error(NGX_LOG_INFO, c->log, 0, "client exceeded http3_max_blocked_streams limit"); + + ngx_http_v3_finalize_connection(c, + NGX_HTTP_V3_ERR_DECOMPRESSION_FAILED, + "too many blocked streams"); return NGX_HTTP_V3_ERR_DECOMPRESSION_FAILED; } From arut at nginx.com Mon Oct 18 12:48:29 2021 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 18 Oct 2021 15:48:29 +0300 Subject: [PATCH 2 of 3] HTTP/3: allowed QUIC 
stream connection reuse In-Reply-To: References: Message-ID: <8ae53c592c719af4f3ba.1634561309@arut-laptop> # HG changeset patch # User Roman Arutyunyan # Date 1634561226 -10800 # Mon Oct 18 15:47:06 2021 +0300 # Branch quic # Node ID 8ae53c592c719af4f3ba47dbd85f78be27aaf7db # Parent 8739f475583031399879ef0af2eb5af76008449e HTTP/3: allowed QUIC stream connection reuse. A QUIC stream connection is treated as reusable until first bytes of request arrive, which is also when the request object is now allocated. A connection closed as a result of draining is reset with the error code H3_REQUEST_REJECTED. Such behavior is allowed by quic-http-34: Once a request stream has been opened, the request MAY be cancelled by either endpoint. Clients cancel requests if the response is no longer of interest; servers cancel requests if they are unable to or choose not to respond. When the server cancels a request without performing any application processing, the request is considered "rejected." The server SHOULD abort its response stream with the error code H3_REQUEST_REJECTED. The client can treat requests rejected by the server as though they had never been sent at all, thereby allowing them to be retried later.
diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c +++ b/src/http/ngx_http_request.c @@ -3743,15 +3743,14 @@ ngx_http_free_request(ngx_http_request_t log->action = "closing request"; - if (r->connection->timedout) { + if (r->connection->timedout +#if (NGX_HTTP_QUIC) + && r->connection->quic == NULL +#endif + ) + { clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module); -#if (NGX_HTTP_V3) - if (r->connection->quic) { - (void) ngx_quic_reset_stream(r->connection, - NGX_HTTP_V3_ERR_GENERAL_PROTOCOL_ERROR); - } else -#endif if (clcf->reset_timedout_connection) { linger.l_onoff = 1; linger.l_linger = 0; @@ -3763,14 +3762,6 @@ ngx_http_free_request(ngx_http_request_t "setsockopt(SO_LINGER) failed"); } } - - } else if (!r->response_sent) { -#if (NGX_HTTP_V3) - if (r->connection->quic) { - (void) ngx_quic_reset_stream(r->connection, - NGX_HTTP_V3_ERR_INTERNAL_ERROR); - } -#endif } /* the various request strings were allocated from r->pool */ @@ -3830,6 +3821,12 @@ ngx_http_close_connection(ngx_connection #endif +#if (NGX_HTTP_V3) + if (ngx_http_v3_connection(c)) { + ngx_http_v3_reset_connection(c); + } +#endif + #if (NGX_STAT_STUB) (void) ngx_atomic_fetch_add(ngx_stat_active, -1); #endif diff --git a/src/http/v3/ngx_http_v3.h b/src/http/v3/ngx_http_v3.h --- a/src/http/v3/ngx_http_v3.h +++ b/src/http/v3/ngx_http_v3.h @@ -90,6 +90,9 @@ #define ngx_http_v3_shutdown_connection(c, code, reason) \ ngx_quic_shutdown_connection(c->quic->parent, code, reason) +#define ngx_http_v3_connection(c) \ + ((c)->quic ? 
ngx_http_quic_get_connection(c)->addr_conf->http3 : 0) + typedef struct { size_t max_table_capacity; @@ -138,6 +141,7 @@ struct ngx_http_v3_session_s { void ngx_http_v3_init(ngx_connection_t *c); +void ngx_http_v3_reset_connection(ngx_connection_t *c); ngx_int_t ngx_http_v3_init_session(ngx_connection_t *c); ngx_int_t ngx_http_v3_check_flood(ngx_connection_t *c); diff --git a/src/http/v3/ngx_http_v3_request.c b/src/http/v3/ngx_http_v3_request.c --- a/src/http/v3/ngx_http_v3_request.c +++ b/src/http/v3/ngx_http_v3_request.c @@ -10,6 +10,7 @@ #include +static void ngx_http_v3_wait_request_handler(ngx_event_t *rev); static void ngx_http_v3_cleanup_request(void *data); static void ngx_http_v3_process_request(ngx_event_t *rev); static ngx_int_t ngx_http_v3_process_header(ngx_http_request_t *r, @@ -53,12 +54,8 @@ static const struct { void ngx_http_v3_init(ngx_connection_t *c) { - size_t size; uint64_t n; - ngx_buf_t *b; ngx_event_t *rev; - ngx_pool_cleanup_t *cln; - ngx_http_request_t *r; ngx_http_connection_t *hc; ngx_http_v3_session_t *h3c; ngx_http_core_loc_conf_t *clcf; @@ -96,7 +93,7 @@ ngx_http_v3_init(ngx_connection_t *c) h3c = ngx_http_v3_get_session(c); if (h3c->goaway) { - ngx_quic_reset_stream(c, NGX_HTTP_V3_ERR_REQUEST_REJECTED); + c->close = 1; ngx_http_close_connection(c); return; } @@ -116,21 +113,57 @@ ngx_http_v3_init(ngx_connection_t *c) "reached maximum number of requests"); } - cln = ngx_pool_cleanup_add(c->pool, 0); - if (cln == NULL) { + rev = c->read; + rev->handler = ngx_http_v3_wait_request_handler; + c->write->handler = ngx_http_empty_handler; + + if (rev->ready) { + rev->handler(rev); + return; + } + + cscf = ngx_http_get_module_srv_conf(hc->conf_ctx, ngx_http_core_module); + + ngx_add_timer(rev, cscf->client_header_timeout); + ngx_reusable_connection(c, 1); + + if (ngx_handle_read_event(rev, 0) != NGX_OK) { + ngx_http_close_connection(c); + return; + } +} + + +static void +ngx_http_v3_wait_request_handler(ngx_event_t *rev) +{ + size_t size; + 
ssize_t n; + ngx_buf_t *b; + ngx_connection_t *c; + ngx_pool_cleanup_t *cln; + ngx_http_request_t *r; + ngx_http_connection_t *hc; + ngx_http_v3_session_t *h3c; + ngx_http_core_srv_conf_t *cscf; + + c = rev->data; + + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0, "http3 wait request handler"); + + if (rev->timedout) { + ngx_log_error(NGX_LOG_INFO, c->log, NGX_ETIMEDOUT, "client timed out"); + c->timedout = 1; ngx_http_close_connection(c); return; } - cln->handler = ngx_http_v3_cleanup_request; - cln->data = c; - - h3c->nrequests++; - - if (h3c->keepalive.timer_set) { - ngx_del_timer(&h3c->keepalive); + if (c->close) { + ngx_http_close_connection(c); + return; } + hc = c->data; cscf = ngx_http_get_module_srv_conf(hc->conf_ctx, ngx_http_core_module); size = cscf->client_header_buffer_size; @@ -159,8 +192,49 @@ ngx_http_v3_init(ngx_connection_t *c) b->end = b->last + size; } + n = c->recv(c, b->last, size); + + if (n == NGX_AGAIN) { + + if (!rev->timer_set) { + ngx_add_timer(rev, cscf->client_header_timeout); + ngx_reusable_connection(c, 1); + } + + if (ngx_handle_read_event(rev, 0) != NGX_OK) { + ngx_http_close_connection(c); + return; + } + + /* + * We are trying to not hold c->buffer's memory for an idle connection. 
+ */ + + if (ngx_pfree(c->pool, b->start) == NGX_OK) { + b->start = NULL; + } + + return; + } + + if (n == NGX_ERROR) { + ngx_http_close_connection(c); + return; + } + + if (n == 0) { + ngx_log_error(NGX_LOG_INFO, c->log, 0, + "client closed connection"); + ngx_http_close_connection(c); + return; + } + + b->last += n; + c->log->action = "reading client request"; + ngx_reusable_connection(c, 0); + r = ngx_http_create_request(c); if (r == NULL) { ngx_http_close_connection(c); @@ -171,7 +245,7 @@ ngx_http_v3_init(ngx_connection_t *c) r->v3_parse = ngx_pcalloc(r->pool, sizeof(ngx_http_v3_parse_t)); if (r->v3_parse == NULL) { - ngx_http_close_connection(c); + ngx_http_close_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } @@ -179,23 +253,59 @@ ngx_http_v3_init(ngx_connection_t *c) * cscf->large_client_header_buffers.num; c->data = r; - c->requests = n + 1; + c->requests = (c->quic->id >> 2) + 1; + + cln = ngx_pool_cleanup_add(r->pool, 0); + if (cln == NULL) { + ngx_http_close_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); + return; + } + + cln->handler = ngx_http_v3_cleanup_request; + cln->data = r; + + h3c = ngx_http_v3_get_session(c); + h3c->nrequests++; + + if (h3c->keepalive.timer_set) { + ngx_del_timer(&h3c->keepalive); + } - rev = c->read; rev->handler = ngx_http_v3_process_request; + ngx_http_v3_process_request(rev); +} - ngx_http_v3_process_request(rev); + +void +ngx_http_v3_reset_connection(ngx_connection_t *c) +{ + if (c->timedout) { + ngx_quic_reset_stream(c, NGX_HTTP_V3_ERR_GENERAL_PROTOCOL_ERROR); + + } else if (c->close) { + ngx_quic_reset_stream(c, NGX_HTTP_V3_ERR_REQUEST_REJECTED); + + } else if (c->requests == 0 || c->error) { + ngx_quic_reset_stream(c, NGX_HTTP_V3_ERR_INTERNAL_ERROR); + } } static void ngx_http_v3_cleanup_request(void *data) { - ngx_connection_t *c = data; + ngx_http_request_t *r = data; + ngx_connection_t *c; ngx_http_v3_session_t *h3c; ngx_http_core_loc_conf_t *clcf; + c = r->connection; + + if (!r->response_sent) { + c->error = 1; 
+ } + h3c = ngx_http_v3_get_session(c); if (--h3c->nrequests == 0) { diff --git a/src/http/v3/ngx_http_v3_streams.c b/src/http/v3/ngx_http_v3_streams.c --- a/src/http/v3/ngx_http_v3_streams.c +++ b/src/http/v3/ngx_http_v3_streams.c @@ -49,7 +49,8 @@ ngx_http_v3_init_uni_stream(ngx_connecti ngx_http_v3_finalize_connection(c, NGX_HTTP_V3_ERR_STREAM_CREATION_ERROR, "reached maximum number of uni streams"); - ngx_http_close_connection(c); + c->data = NULL; + ngx_http_v3_close_uni_stream(c); return; } @@ -57,7 +58,11 @@ ngx_http_v3_init_uni_stream(ngx_connecti us = ngx_pcalloc(c->pool, sizeof(ngx_http_v3_uni_stream_t)); if (us == NULL) { - ngx_http_close_connection(c); + ngx_http_v3_finalize_connection(c, + NGX_HTTP_V3_ERR_INTERNAL_ERROR, + "memory allocation error"); + c->data = NULL; + ngx_http_v3_close_uni_stream(c); return; } @@ -79,12 +84,12 @@ ngx_http_v3_close_uni_stream(ngx_connect ngx_http_v3_session_t *h3c; ngx_http_v3_uni_stream_t *us; - us = c->data; - h3c = ngx_http_v3_get_session(c); - ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0, "http3 close stream"); - if (us->index >= 0) { + us = c->data; + + if (us && us->index >= 0) { + h3c = ngx_http_v3_get_session(c); h3c->known_streams[us->index] = NULL; } From arut at nginx.com Mon Oct 18 12:48:30 2021 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 18 Oct 2021 15:48:30 +0300 Subject: [PATCH 3 of 3] HTTP/3: send Stream Cancellation instruction In-Reply-To: References: Message-ID: <9018cf33137a19df69e7.1634561310@arut-laptop> # HG changeset patch # User Roman Arutyunyan # Date 1634557691 -10800 # Mon Oct 18 14:48:11 2021 +0300 # Branch quic # Node ID 9018cf33137a19df69e70ee4a274164c226e7cbd # Parent 8ae53c592c719af4f3ba47dbd85f78be27aaf7db HTTP/3: send Stream Cancellation instruction. As per quic-qpack-21: When a stream is reset or reading is abandoned, the decoder emits a Stream Cancellation instruction. Previously the instruction was not sent. 
Now it's sent when closing QUIC stream connection if dynamic table capacity is non-zero and eof was not received from client. The latter condition means that a trailers section may still be on its way from client and the stream needs to be cancelled. diff --git a/src/http/v3/ngx_http_v3_request.c b/src/http/v3/ngx_http_v3_request.c --- a/src/http/v3/ngx_http_v3_request.c +++ b/src/http/v3/ngx_http_v3_request.c @@ -279,6 +279,14 @@ ngx_http_v3_wait_request_handler(ngx_eve void ngx_http_v3_reset_connection(ngx_connection_t *c) { + ngx_http_v3_srv_conf_t *h3scf; + + h3scf = ngx_http_v3_get_module_srv_conf(c, ngx_http_v3_module); + + if (h3scf->max_table_capacity > 0 && !c->read->eof) { + (void) ngx_http_v3_send_cancel_stream(c, c->quic->id); + } + if (c->timedout) { ngx_quic_reset_stream(c, NGX_HTTP_V3_ERR_GENERAL_PROTOCOL_ERROR); From mdounin at mdounin.ru Mon Oct 18 13:45:45 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 18 Oct 2021 16:45:45 +0300 Subject: PCRE2 support? In-Reply-To: References: <20180918121231.843FC2C50D50@mail.nginx.com> <20180918155518.GQ56558@mdounin.ru> <0522327a-8b2c-f066-ddd6-392207ec6c1d@thomas-ward.net> <20190124182121.GP1877@mdounin.ru> <5f5a8d73-77b5-bf62-d963-5e2927aa72e1@gmail.com> <20190125001242.GR1877@mdounin.ru> Message-ID: Hello! On Mon, Oct 18, 2021 at 08:43:18AM +0200, Dipl. Ing. Sergey Brester wrote: > Just for the record (and probably to reopen this discussion again). > > https://github.com/PhilipHazel/pcre2/issues/26 [3] shows a heavy bug in > PCRE library (it is not safe to use it anymore, at least without jit) as > well as the statement of the PCRE developer regarding the end of life > for PCRE. Thanks for the link. So PCRE is basically "gone away" now, and no longer gets even security support. While this particular bug does not seem to be critical for nginx, certainly it's a good reason to add PCRE2 support. 
I have some preliminary/incomplete patch sitting in my patch queue, and I'm going to revive this work as time permits. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Mon Oct 18 13:48:03 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 18 Oct 2021 13:48:03 +0000 Subject: [nginx] Upstream: fixed logging level of upstream invalid header errors. Message-ID: details: https://hg.nginx.org/nginx/rev/2f443cac3f1e branches: changeset: 7933:2f443cac3f1e user: Maxim Dounin date: Mon Oct 18 16:46:59 2021 +0300 description: Upstream: fixed logging level of upstream invalid header errors. In b87b7092cedb (nginx 1.21.1), logging level of "upstream sent invalid header" errors was accidentally changed to "info". This change restores the "error" level, which is a proper logging level for upstream-side errors. diffstat: src/http/modules/ngx_http_fastcgi_module.c | 2 +- src/http/modules/ngx_http_proxy_module.c | 2 +- src/http/modules/ngx_http_scgi_module.c | 2 +- src/http/modules/ngx_http_uwsgi_module.c | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diffs (48 lines): diff -r 01829d162095 -r 2f443cac3f1e src/http/modules/ngx_http_fastcgi_module.c --- a/src/http/modules/ngx_http_fastcgi_module.c Tue Oct 12 23:18:18 2021 +0300 +++ b/src/http/modules/ngx_http_fastcgi_module.c Mon Oct 18 16:46:59 2021 +0300 @@ -2021,7 +2021,7 @@ ngx_http_fastcgi_process_header(ngx_http /* rc == NGX_HTTP_PARSE_INVALID_HEADER */ - ngx_log_error(NGX_LOG_INFO, r->connection->log, 0, + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "upstream sent invalid header: \"%*s\\x%02xd...\"", r->header_end - r->header_name_start, r->header_name_start, *r->header_end); diff -r 01829d162095 -r 2f443cac3f1e src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c Tue Oct 12 23:18:18 2021 +0300 +++ b/src/http/modules/ngx_http_proxy_module.c Mon Oct 18 16:46:59 2021 +0300 @@ -2021,7 +2021,7 @@ ngx_http_proxy_process_header(ngx_http_r /* rc == 
NGX_HTTP_PARSE_INVALID_HEADER */ - ngx_log_error(NGX_LOG_INFO, r->connection->log, 0, + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "upstream sent invalid header: \"%*s\\x%02xd...\"", r->header_end - r->header_name_start, r->header_name_start, *r->header_end); diff -r 01829d162095 -r 2f443cac3f1e src/http/modules/ngx_http_scgi_module.c --- a/src/http/modules/ngx_http_scgi_module.c Tue Oct 12 23:18:18 2021 +0300 +++ b/src/http/modules/ngx_http_scgi_module.c Mon Oct 18 16:46:59 2021 +0300 @@ -1142,7 +1142,7 @@ ngx_http_scgi_process_header(ngx_http_re /* rc == NGX_HTTP_PARSE_INVALID_HEADER */ - ngx_log_error(NGX_LOG_INFO, r->connection->log, 0, + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "upstream sent invalid header: \"%*s\\x%02xd...\"", r->header_end - r->header_name_start, r->header_name_start, *r->header_end); diff -r 01829d162095 -r 2f443cac3f1e src/http/modules/ngx_http_uwsgi_module.c --- a/src/http/modules/ngx_http_uwsgi_module.c Tue Oct 12 23:18:18 2021 +0300 +++ b/src/http/modules/ngx_http_uwsgi_module.c Mon Oct 18 16:46:59 2021 +0300 @@ -1363,7 +1363,7 @@ ngx_http_uwsgi_process_header(ngx_http_r /* rc == NGX_HTTP_PARSE_INVALID_HEADER */ - ngx_log_error(NGX_LOG_INFO, r->connection->log, 0, + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "upstream sent invalid header: \"%*s\\x%02xd...\"", r->header_end - r->header_name_start, r->header_name_start, *r->header_end); From pluknet at nginx.com Mon Oct 18 15:26:47 2021 From: pluknet at nginx.com (Sergey Kandaurov) Date: Mon, 18 Oct 2021 18:26:47 +0300 Subject: [PATCH 2 of 2] SSL: SSL_sendfile() support with kernel TLS In-Reply-To: References: Message-ID: <8A19E0B0-8DF6-4DAA-8F9C-3E2F8EDA7087@nginx.com> > On 27 Sep 2021, at 16:18, Maxim Dounin wrote: > > # HG changeset patch > # User Maxim Dounin > # Date 1632717779 -10800 > # Mon Sep 27 07:42:59 2021 +0300 > # Node ID ff514bf17f7f2257dcf036c5c973b74672cefa9a > # Parent 8f0fd60c33c106fba5f1ce3cafe990f15fcccc0c > SSL: SSL_sendfile() support with 
kernel TLS. > > Requires OpenSSL 3.0 compiled with "enable-ktls" option. Further, KTLS > needs to be enabled in kernel, and in OpenSSL, either via OpenSSL > configuration file or with "ssl_conf_command Options KTLS;" in nginx > configuration. > > On FreeBSD, kernel TLS is available starting with FreeBSD 13.0, and > can be enabled with "sysctl kern.ipc.tls.enable=1" and "kldload ktls_ocf". I am not sure about mentioning ktls_ocf.ko in the commit message. The module is only present in FreeBSD 13.0, it was removed post 13.0, and the functionality is now always present in kernels with KERN_TLS: https://cgit.freebsd.org/src/commit/?id=21e3c1fbe246 Further, it is one of many options to enable KTLS. It could be better to refer to man ktls(4), instead: : On FreeBSD, kernel TLS is available starting with FreeBSD 13.0, and : can be enabled with "sysctl kern.ipc.tls.enable=1", see man ktls(4). (but I don't insist) > > On Linux, kernel TLS is available starting with kernel 4.13 (at least 5.2 > is recommended), and needs kernel compiled with CONFIG_TLS=y (with > CONFIG_TLS=m, which is used at least on Ubuntu 21.04 by default, > the tls module needs to be loaded with "modprobe tls"). > On Linux I observe a problem sending data with short socket buffer space. It is Ubuntu 20.04 (5.4.0) and 21.04 (5.11.0), with epoll and select event methods. As per tcpdump traces, it looks like the buffer cannot be pushed to the network, although it is reported as if it was sent. The simplest case I could grab is below, with ssl_buffer_size 4k and sndbuf 8k (note that unlike SSL_write(), buffers aren't limited with ssl_buffer_size). It doesn't get stuck starting with sndbuf 16k, so it might have something to do with how KTLS send buffers correspond to TCP send buffers. (In contrast, the FreeBSD sendfile is strictly constrained by the available send buffer space and hence shouldn't have this problem.) So it doesn't look like a major issue.
2021/10/18 14:16:44 [debug] 492598#0: *2 SSL to write: 514 2021/10/18 14:16:44 [debug] 492598#0: *2 SSL_write: 514 2021/10/18 14:16:44 [debug] 492598#0: *2 SSL to sendfile: @0 1048576 2021/10/18 14:16:44 [debug] 492598#0: *2 SSL_sendfile: 16384 2021/10/18 14:16:44 [debug] 492598#0: *2 SSL to sendfile: @16384 1032192 2021/10/18 14:16:44 [debug] 492598#0: *2 SSL_sendfile: -1 2021/10/18 14:16:44 [debug] 492598#0: *2 SSL_get_error: 3 2021/10/18 14:16:44 [debug] 492598#0: *2 http write filter 000055E11C3C42C8 2021/10/18 14:16:44 [debug] 492598#0: *2 http copy filter: -2 "/file?" 2021/10/18 14:16:44 [debug] 492598#0: *2 http writer output filter: -2, "/file?" 2021/10/18 14:16:44 [debug] 492598#0: *2 event timer: 3, old: 350488404, new: 350488516 2021/10/18 14:16:44 [debug] 492598#0: worker cycle 2021/10/18 14:16:44 [debug] 492598#0: select event: fd:6 wr:0 2021/10/18 14:16:44 [debug] 492598#0: select event: fd:7 wr:0 2021/10/18 14:16:44 [debug] 492598#0: select event: fd:8 wr:0 2021/10/18 14:16:44 [debug] 492598#0: select event: fd:3 wr:0 2021/10/18 14:16:44 [debug] 492598#0: select event: fd:3 wr:1 2021/10/18 14:16:44 [debug] 492598#0: max_fd: 8 2021/10/18 14:16:44 [debug] 492598#0: select timer: 59888 2021/10/18 14:16:44 [debug] 492598#0: select ready 1 2021/10/18 14:16:44 [debug] 492598#0: select write 3 2021/10/18 14:16:44 [debug] 492598#0: *2 post event 000055E11C6F5E30 2021/10/18 14:16:44 [debug] 492598#0: timer delta: 48 2021/10/18 14:16:44 [debug] 492598#0: posted event 000055E11C6F5E30 2021/10/18 14:16:44 [debug] 492598#0: *2 delete posted event 000055E11C6F5E30 2021/10/18 14:16:44 [debug] 492598#0: *2 http run request: "/file?" 2021/10/18 14:16:44 [debug] 492598#0: *2 http writer handler: "/file?" 2021/10/18 14:16:44 [debug] 492598#0: *2 http output filter "/file?" 2021/10/18 14:16:44 [debug] 492598#0: *2 http copy filter: "/file?" 
2021/10/18 14:16:44 [debug] 492598#0: *2 image filter 2021/10/18 14:16:44 [debug] 492598#0: *2 xslt filter body 2021/10/18 14:16:44 [debug] 492598#0: *2 http postpone filter "/file?" 0000000000000000 2021/10/18 14:16:44 [debug] 492598#0: *2 write old buf t:0 f:1 0000000000000000, pos 0000000000000000, size: 0 file: 16384, size: 1032192 2021/10/18 14:16:44 [debug] 492598#0: *2 write old buf t:0 f:0 0000000000000000, pos 000055E11BB28D59, size: 2 file: 0, size: 0 2021/10/18 14:16:44 [debug] 492598#0: *2 write old buf t:0 f:0 0000000000000000, pos 000055E11BB28D56, size: 5 file: 0, size: 0 2021/10/18 14:16:44 [debug] 492598#0: *2 http write filter: l:1 f:0 s:1032199 2021/10/18 14:16:44 [debug] 492598#0: *2 http write filter limit 0 2021/10/18 14:16:44 [debug] 492598#0: *2 SSL to sendfile: @16384 1032192 2021/10/18 14:16:44 [debug] 492598#0: *2 SSL_sendfile: 16384 2021/10/18 14:16:44 [debug] 492598#0: *2 SSL to sendfile: @32768 1015808 2021/10/18 14:16:44 [debug] 492598#0: *2 SSL_sendfile: -1 2021/10/18 14:16:44 [debug] 492598#0: *2 SSL_get_error: 3 2021/10/18 14:16:44 [debug] 492598#0: *2 http write filter 000055E11C3C42C8 2021/10/18 14:16:44 [debug] 492598#0: *2 http copy filter: -2 "/file?" 2021/10/18 14:16:44 [debug] 492598#0: *2 http writer output filter: -2, "/file?" 
2021/10/18 14:16:44 [debug] 492598#0: *2 event timer: 3, old: 350488404, new: 350488564 2021/10/18 14:16:44 [debug] 492598#0: worker cycle 2021/10/18 14:16:44 [debug] 492598#0: select event: fd:6 wr:0 2021/10/18 14:16:44 [debug] 492598#0: select event: fd:7 wr:0 2021/10/18 14:16:44 [debug] 492598#0: select event: fd:8 wr:0 2021/10/18 14:16:44 [debug] 492598#0: select event: fd:3 wr:0 2021/10/18 14:16:44 [debug] 492598#0: select event: fd:3 wr:1 2021/10/18 14:16:44 [debug] 492598#0: max_fd: 8 2021/10/18 14:16:44 [debug] 492598#0: select timer: 59840 2021/10/18 14:17:44 [debug] 492598#0: select ready 0 2021/10/18 14:17:44 [debug] 492598#0: timer delta: 59901 2021/10/18 14:17:44 [debug] 492598#0: *2 event timer del: 3: 350488404 2021/10/18 14:17:44 [debug] 492598#0: *2 http run request: "/file?" 2021/10/18 14:17:44 [debug] 492598#0: *2 http writer handler: "/file?" 2021/10/18 14:17:44 [info] 492598#0: *2 client timed out No more events, as though "fd:3 wr:1" is set as shown above. As per the logs, there were two successful SSL_sendfile() calls, following a short tail of the regular SSL_write(). This doesn't correspond to "openssl s_client" strace, though: select(4, [0 3], [], NULL, NULL) = 2 (in [0 3]) read(3, "\27\3\3\2\32", 5) = 5 read(3, "\336\331\33\223n\277:2\26Z\240p\273\f.W\324\317:~\3408F}\200~cX \362\306\275"..., 538) = 538 select(4, [0 3], [], NULL, NULL) = 2 (in [0 3]) read(3, "\27\3\3@\30", 5) = 5 read(3, "\336\331\33\223n\277:3,\33\264\17q\210\252\256\n\266\v^\244\247+\20\377j5d\336\250w\177"..., 16408) = 16408 The second buffer is postponed for some reason, until timeout: select(4, [3], [], NULL, NULL [.. wait for 60 seconds timeout ..] ) = 1 (in [3]) read(3, "\27\3\3@\30", 5) = 5 read(3, "\336\331\33\223n\277:4\22\276\36\0\336\274I\354\227\6\244>\16{\242\3366\254x]\367\21s\260"..., 16408) = 16408 This is confirmed by tcpdump (note a gap in timestamps), where the second SSL_sendfile()'s data is sent delayed with a FIN. 
14:16:44.911122 IP 127.0.0.1.8085 > 127.0.0.1.35024: Flags [P.], seq 1155142:1155685, ack 627, win 65392, options [nop,nop,TS val 1397093502 ecr 1397093502], length 543 14:16:44.916372 IP 127.0.0.1.35024 > 127.0.0.1.8085: Flags [.], ack 1155685, win 61875, options [nop,nop,TS val 1397093507 ecr 1397093502], length 0 14:16:44.916388 IP 127.0.0.1.8085 > 127.0.0.1.35024: Flags [.], seq 1155685:1172098, ack 627, win 65392, options [nop,nop,TS val 1397093507 ecr 1397093507], length 16413 14:16:44.958874 IP 127.0.0.1.35024 > 127.0.0.1.8085: Flags [.], ack 1172098, win 61875, options [nop,nop,TS val 1397093549 ecr 1397093507], length 0 14:17:44.859485 IP 127.0.0.1.8085 > 127.0.0.1.35024: Flags [F.], seq 1172098:1188511, ack 627, win 65392, options [nop,nop,TS val 1397153450 ecr 1397093549], length 16413 14:17:44.859734 IP 127.0.0.1.35024 > 127.0.0.1.8085: Flags [.], ack 1188512, win 49239, options [nop,nop,TS val 1397153450 ecr 1397153450], length 0 14:17:44.863155 IP 127.0.0.1.35024 > 127.0.0.1.8085: Flags [P.], seq 627:658, ack 1188512, win 49239, options [nop,nop,TS val 1397153454 ecr 1397153450], length 31 14:17:44.863171 IP 127.0.0.1.8085 > 127.0.0.1.35024: Flags [R], seq 1549272302, win 0, length 0 I've added additional debugging to SSL_sendfile() to see that sendfile() returns EBUSY (11). Sending the same volume of data with a regular sendfile (without SSL) doesn't exhibit such a problem: a write event is always reported. 
2021/10/18 14:47:42 [debug] 495187#0: *1 write old buf t:1 f:0 00007F6AA5FEC010, pos 00007F6AA6103F61, size: 648 file: 0, size: 0 2021/10/18 14:47:42 [debug] 495187#0: *1 write old buf t:0 f:1 0000000000000000, pos 0000000000000000, size: 0 file: 0, size: 1048576 2021/10/18 14:47:42 [debug] 495187#0: *1 write old buf t:0 f:0 0000000000000000, pos 0000000000000000, size: 0 file: 0, size: 0 2021/10/18 14:47:42 [debug] 495187#0: *1 http write filter: l:1 f:0 s:1049224 2021/10/18 14:47:42 [debug] 495187#0: *1 http write filter limit 0 2021/10/18 14:47:42 [debug] 495187#0: *1 writev: 648 of 648 2021/10/18 14:47:42 [debug] 495187#0: *1 sendfile: @0 1048576 2021/10/18 14:47:42 [debug] 495187#0: *1 sendfile: 32767 of 1048576 @0 2021/10/18 14:47:42 [debug] 495187#0: *1 sendfile: @32767 1015809 2021/10/18 14:47:42 [debug] 495187#0: *1 sendfile() is not ready (11: Resource temporarily unavailable) ... 2021/10/18 14:47:42 [debug] 495187#0: *1 sendfile: @32767 1015809 2021/10/18 14:47:42 [debug] 495187#0: *1 sendfile: 32767 of 1015809 @32767 2021/10/18 14:47:42 [debug] 495187#0: *1 sendfile: @65534 983042 2021/10/18 14:47:42 [debug] 495187#0: *1 sendfile() is not ready (11: Resource temporarily unavailable) ... ... 2021/10/18 14:47:43 [debug] 495187#0: *1 sendfile: @1048544 32 2021/10/18 14:47:43 [debug] 495187#0: *1 sendfile: 32 of 32 @1048544 The patches look good. 
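For reference, the constrained setup used in the traces above could be expressed with a server block along these lines (a sketch only; certificate paths and the document root are placeholders, and KTLS still has to be enabled in the kernel and in OpenSSL as described in the commit message):

```nginx
# Sketch of the repro setup: 4k SSL buffer, 8k socket send buffer.
# Certificate paths and root are placeholders.
server {
    listen 8085 ssl sndbuf=8k;

    ssl_certificate      cert.pem;
    ssl_certificate_key  cert.key;
    ssl_buffer_size      4k;

    # Enable kernel TLS in OpenSSL 3.0, per the commit message.
    ssl_conf_command     Options KTLS;

    location / {
        root /usr/share/nginx;
    }
}
```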
-- Sergey Kandaurov From pluknet at nginx.com Mon Oct 18 22:07:44 2021 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 19 Oct 2021 01:07:44 +0300 Subject: [PATCH 2 of 2] SSL: SSL_sendfile() support with kernel TLS In-Reply-To: <8A19E0B0-8DF6-4DAA-8F9C-3E2F8EDA7087@nginx.com> References: <8A19E0B0-8DF6-4DAA-8F9C-3E2F8EDA7087@nginx.com> Message-ID: > On 18 Oct 2021, at 18:26, Sergey Kandaurov wrote: > >> >> On 27 Sep 2021, at 16:18, Maxim Dounin wrote: >> >> # HG changeset patch >> # User Maxim Dounin >> # Date 1632717779 -10800 >> # Mon Sep 27 07:42:59 2021 +0300 >> # Node ID ff514bf17f7f2257dcf036c5c973b74672cefa9a >> # Parent 8f0fd60c33c106fba5f1ce3cafe990f15fcccc0c >> SSL: SSL_sendfile() support with kernel TLS. >> >> Requires OpenSSL 3.0 compiled with "enable-ktls" option. Further, KTLS >> needs to be enabled in kernel, and in OpenSSL, either via OpenSSL >> configuration file or with "ssl_conf_command Options KTLS;" in nginx >> configuration. >> >> On FreeBSD, kernel TLS is available starting with FreeBSD 13.0, and >> can be enabled with "sysctl kern.ipc.tls.enable=1" and "kldload ktls_ocf". > > [..] >> >> On Linux, kernel TLS is available starting with kernel 4.13 (at least 5.2 >> is recommended), and needs kernel compiled with CONFIG_TLS=y (with >> CONFIG_TLS=m, which is used at least on Ubuntu 21.04 by default, >> the tls module needs to be loaded with "modprobe tls"). >> > > On Linux I observe a problem sending data with short socket buffer space. For the record, there are interesting code paths with TLSv1.3 early data. 
A request is read in 0-RTT, with read handler left unlocked to discard body: 2021/10/18 21:34:47 [debug] 529189#0: *2 SSL_read_early_data: 1, 1 2021/10/18 21:34:47 [debug] 529189#0: *2 select del event fd:3 ev:8193 2021/10/18 21:34:47 [debug] 529189#0: *2 SSL: TLSv1.3, cipher: "TLS_AES_256_GCM_SHA384 TLSv1.3 Kx=any Au=any Enc=AESGCM(256) Mac=AEAD" 2021/10/18 21:34:47 [debug] 529189#0: *2 SSL reused session 2021/10/18 21:34:47 [debug] 529189#0: *2 BIO_get_ktls_send(): 1 2021/10/18 21:34:47 [debug] 529189#0: *2 reusable connection: 1 2021/10/18 21:34:47 [debug] 529189#0: *2 http wait request handler 2021/10/18 21:34:47 [debug] 529189#0: *2 malloc: 000055BB455E9380:1024 2021/10/18 21:34:47 [debug] 529189#0: *2 SSL_read_early_data: 1, 54 2021/10/18 21:34:47 [debug] 529189#0: *2 SSL_read_early_data: 0, 0 2021/10/18 21:34:47 [debug] 529189#0: *2 SSL_get_error: 2 2021/10/18 21:34:47 [debug] 529189#0: *2 reusable connection: 0 2021/10/18 21:34:47 [debug] 529189#0: *2 posix_memalign: 000055BB455CEFF0:4096 @16 2021/10/18 21:34:47 [debug] 529189#0: *2 http process request line 2021/10/18 21:34:47 [debug] 529189#0: *2 http request line: "GET /file HTTP/1.1" 2021/10/18 21:34:47 [debug] 529189#0: *2 http uri: "/file" 2021/10/18 21:34:47 [debug] 529189#0: *2 http args: "" 2021/10/18 21:34:47 [debug] 529189#0: *2 http exten: "" 2021/10/18 21:34:47 [debug] 529189#0: *2 posix_memalign: 000055BB4590D9B0:4096 @16 2021/10/18 21:34:47 [debug] 529189#0: *2 http process request header line 2021/10/18 21:34:47 [debug] 529189#0: *2 http header: "Host: localhost" 2021/10/18 21:34:47 [debug] 529189#0: *2 http header: "Content-Length: 10" 2021/10/18 21:34:47 [debug] 529189#0: *2 http header done [..] 
Buffers to send: 2021/10/18 21:34:47 [debug] 529189#0: *2 http chunk: 1048576 2021/10/18 21:34:47 [debug] 529189#0: *2 write old buf t:1 f:0 00007F19F2641010, pos 00007F19F2741010, size: 98810 file: 0, size: 0 2021/10/18 21:34:47 [debug] 529189#0: *2 write new buf t:1 f:0 000055BB4590E440, pos 000055BB4590E440, size: 8 file: 0, size: 0 2021/10/18 21:34:47 [debug] 529189#0: *2 write new buf t:0 f:1 0000000000000000, pos 0000000000000000, size: 0 file: 0, size: 1048576 2021/10/18 21:34:47 [debug] 529189#0: *2 write new buf t:0 f:0 0000000000000000, pos 000055BB43B53E39, size: 2 file: 0, size: 0 Header is eventually written after several write events. Note disabled reading due to a pending SSL_write_early_data() (will return to this later): 2021/10/18 21:34:47 [debug] 529189#0: *2 SSL buf copy: 1048576 2021/10/18 21:34:47 [debug] 529189#0: *2 SSL to write: 1048576 2021/10/18 21:34:47 [debug] 529189#0: *2 SSL_write_early_data: 0, 0 2021/10/18 21:34:47 [debug] 529189#0: *2 SSL_get_error: 3 2021/10/18 21:34:47 [debug] 529189#0: *2 SSL_write_early_data: want write [..] 2021/10/18 21:34:47 [debug] 529189#0: select ready 1 2021/10/18 21:34:47 [debug] 529189#0: select read 3 2021/10/18 21:34:47 [debug] 529189#0: *2 post event 000055BB458EB370 2021/10/18 21:34:47 [debug] 529189#0: timer delta: 8 2021/10/18 21:34:47 [debug] 529189#0: posted event 000055BB458EB370 2021/10/18 21:34:47 [debug] 529189#0: *2 delete posted event 000055BB458EB370 2021/10/18 21:34:47 [debug] 529189#0: *2 http run request: "/file?" 2021/10/18 21:34:47 [debug] 529189#0: *2 http read discarded body 2021/10/18 21:34:47 [debug] 529189#0: *2 select del event fd:3 ev:8193 [..] 
Eventually the buffer is written, reading is unlocked, then writing is immediately blocked on 2nd SSL_sendfile(): 2021/10/18 21:34:48 [debug] 529189#0: *2 SSL to write: 98818 2021/10/18 21:34:48 [debug] 529189#0: *2 SSL_write_early_data: 1, 98818 2021/10/18 21:34:48 [debug] 529189#0: *2 post event 000055BB458EB370 2021/10/18 21:34:48 [debug] 529189#0: *2 SSL to sendfile: @0 1048576 2021/10/18 21:34:48 [debug] 529189#0: *2 SSL_sendfile: 45056 2021/10/18 21:34:48 [debug] 529189#0: *2 SSL to sendfile: @45056 1003520 2021/10/18 21:34:48 [debug] 529189#0: *2 SSL_sendfile: -1 2021/10/18 21:34:48 [debug] 529189#0: *2 SSL_get_error: 3 2021/10/18 21:34:48 [debug] 529189#0: *2 http write filter 000055BB4590E4C8 2021/10/18 21:34:48 [debug] 529189#0: *2 http copy filter: -2 "/file?" 2021/10/18 21:34:48 [debug] 529189#0: *2 http writer output filter: -2, "/file?" 2021/10/18 21:34:48 [debug] 529189#0: *2 event timer: 3, old: 376771767, new: 376771919 2021/10/18 21:34:48 [debug] 529189#0: posted event 000055BB458EB370 2021/10/18 21:34:48 [debug] 529189#0: *2 delete posted event 000055BB458EB370 2021/10/18 21:34:48 [debug] 529189#0: *2 http run request: "/file?" 
2021/10/18 21:34:48 [debug] 529189#0: *2 http read discarded body 2021/10/18 21:34:48 [debug] 529189#0: shmtx lock 2021/10/18 21:34:48 [debug] 529189#0: slab alloc: 156 slot: 5 2021/10/18 21:34:48 [debug] 529189#0: slab alloc: 00007F19F275E200 2021/10/18 21:34:48 [debug] 529189#0: slab alloc: 128 slot: 4 2021/10/18 21:34:48 [debug] 529189#0: slab alloc: 00007F19F275C180 2021/10/18 21:34:48 [debug] 529189#0: *2 ssl new session: 7C15F467:32:156 2021/10/18 21:34:48 [debug] 529189#0: shmtx unlock 2021/10/18 21:34:48 [debug] 529189#0: *2 SSL_read_early_data: 2, 0 2021/10/18 21:34:48 [debug] 529189#0: *2 SSL_read: -1 2021/10/18 21:34:48 [debug] 529189#0: *2 SSL_get_error: 3 2021/10/18 21:34:48 [debug] 529189#0: *2 SSL_read: want write 2021/10/18 21:34:48 [debug] 529189#0: *2 event timer del: 3: 376715951 2021/10/18 21:34:48 [debug] 529189#0: *2 event timer add: 3: 5000:376716919 "SSL_read: want write" looks somewhat amusing to me. I've annotated certain OpenSSL parts to see what's happened: - first, it is SSL_sendfile() that blocks on writing, because sendfile() returned EAGAIN, and signals SSL_ERROR_WANT_WRITE - then SSL_read_early_data() is called. Per the return code, it is the last part of reading 0-RTT. All the handshake messages have been received by this point, which is confirmed with the SSL_CTX_sess_set_new_cb() callback that is always called post-handshake in TLSv1.3; here it is called while in SSL_read_early_data(), which means that we've already received Client Finished and are ready to receive ordinary application data (aka 1-RTT). It looks like there are no errors, but actually there's still a pending write from SSL_sendfile(). There's even a nested SSL_get_error() called from within SSL_read_early_data() to check why we couldn't actually write session ticket(s): https://github.com/openssl/openssl/blob/openssl-3.0.0/ssl/statem/statem_srvr.c#L988 I've noticed this after annotating SSL_get_error().
- then SSL_read() is called; now it is logged in nginx as an error signalled with SSL_ERROR_WANT_WRITE. It tries to write the session ticket(s) again, to no avail. Eventually, there is no longer a pending write on the next write event; it looks like the session ticket(s) were successfully sent under the hood. 2021/10/18 21:34:48 [debug] 529189#0: worker cycle 2021/10/18 21:34:48 [debug] 529189#0: select event: fd:6 wr:0 2021/10/18 21:34:48 [debug] 529189#0: select event: fd:7 wr:0 2021/10/18 21:34:48 [debug] 529189#0: select event: fd:8 wr:0 2021/10/18 21:34:48 [debug] 529189#0: select event: fd:3 wr:1 2021/10/18 21:34:48 [debug] 529189#0: max_fd: 8 2021/10/18 21:34:48 [debug] 529189#0: select timer: 5000 2021/10/18 21:34:48 [debug] 529189#0: select ready 1 2021/10/18 21:34:48 [debug] 529189#0: select write 3 2021/10/18 21:34:48 [debug] 529189#0: *2 post event 000055BB458F7380 2021/10/18 21:34:48 [debug] 529189#0: timer delta: 332 2021/10/18 21:34:48 [debug] 529189#0: posted event 000055BB458F7380 2021/10/18 21:34:48 [debug] 529189#0: *2 delete posted event 000055BB458F7380 2021/10/18 21:34:48 [debug] 529189#0: *2 SSL write handler 2021/10/18 21:34:48 [debug] 529189#0: *2 http run request: "/file?" 2021/10/18 21:34:48 [debug] 529189#0: *2 http read discarded body 2021/10/18 21:34:48 [debug] 529189#0: *2 SSL_read: 10 2021/10/18 21:34:48 [debug] 529189#0: *2 select del event fd:3 ev:4 2021/10/18 21:34:48 [debug] 529189#0: *2 post event 000055BB458F7380 2021/10/18 21:34:48 [debug] 529189#0: *2 SSL_read: avail:0 2021/10/18 21:34:48 [debug] 529189#0: *2 http finalize request: -4, "/file?" a:1, c:2 2021/10/18 21:34:48 [debug] 529189#0: *2 http request count:2 blk:0 2021/10/18 21:34:48 [debug] 529189#0: posted event 000055BB458F7380 2021/10/18 21:34:48 [debug] 529189#0: *2 delete posted event 000055BB458F7380 2021/10/18 21:34:48 [debug] 529189#0: *2 http run request: "/file?" 2021/10/18 21:34:48 [debug] 529189#0: *2 http writer handler: "/file?"
2021/10/18 21:34:48 [debug] 529189#0: *2 http output filter "/file?" 2021/10/18 21:34:48 [debug] 529189#0: *2 http copy filter: "/file?" 2021/10/18 21:34:48 [debug] 529189#0: *2 image filter 2021/10/18 21:34:48 [debug] 529189#0: *2 xslt filter body 2021/10/18 21:34:48 [debug] 529189#0: *2 http postpone filter "/file?" 0000000000000000 2021/10/18 21:34:48 [debug] 529189#0: *2 write old buf t:0 f:1 0000000000000000, pos 0000000000000000, size: 0 file: 45056, size: 1003520 2021/10/18 21:34:48 [debug] 529189#0: *2 write old buf t:0 f:0 0000000000000000, pos 000055BB43B53E39, size: 2 file: 0, size: 0 2021/10/18 21:34:48 [debug] 529189#0: *2 write old buf t:0 f:0 0000000000000000, pos 000055BB43B53E36, size: 5 file: 0, size: 0 2021/10/18 21:34:48 [debug] 529189#0: *2 http write filter: l:1 f:0 s:1003527 2021/10/18 21:34:48 [debug] 529189#0: *2 http write filter limit 0 2021/10/18 21:34:48 [debug] 529189#0: *2 SSL to sendfile: @45056 1003520 2021/10/18 21:34:48 [debug] 529189#0: *2 SSL_sendfile: 61440 [..] 
These are internally made annotations that roughly correspond to that part of the connection, for the record: before SSL_write_early_data WANT_WRITE #2 SSL_want_write - BIO_should_write SSL_ERROR_WANT_WRITE before SSL_write_early_data before SSL_sendfile before SSL_sendfile get_last_sys_error 11 WANT_WRITE #2 SSL_want_write - BIO_should_write SSL_ERROR_WANT_WRITE before SSL_read_early_data TLS_ST_SW_SESSION_TICKET -> SSL_get_error WANT_WRITE #2 SSL_want_write - BIO_should_write after SSL_read_early_data before SSL_read TLS_ST_SW_SESSION_TICKET -> SSL_get_error WANT_WRITE #2 SSL_want_write - BIO_should_write before ngx_ssl_handle_recv -> SSL_get_error WANT_WRITE #2 SSL_want_write - BIO_should_write SSL_ERROR_WANT_WRITE before SSL_read before SSL_sendfile -- Sergey Kandaurov From mdounin at mdounin.ru Tue Oct 19 01:54:32 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Oct 2021 04:54:32 +0300 Subject: [PATCH 2 of 2] SSL: SSL_sendfile() support with kernel TLS In-Reply-To: <8A19E0B0-8DF6-4DAA-8F9C-3E2F8EDA7087@nginx.com> References: <8A19E0B0-8DF6-4DAA-8F9C-3E2F8EDA7087@nginx.com> Message-ID: Hello! On Mon, Oct 18, 2021 at 06:26:47PM +0300, Sergey Kandaurov wrote: > > On 27 Sep 2021, at 16:18, Maxim Dounin wrote: > > > > # HG changeset patch > > # User Maxim Dounin > > # Date 1632717779 -10800 > > # Mon Sep 27 07:42:59 2021 +0300 > > # Node ID ff514bf17f7f2257dcf036c5c973b74672cefa9a > > # Parent 8f0fd60c33c106fba5f1ce3cafe990f15fcccc0c > > SSL: SSL_sendfile() support with kernel TLS. > > > > Requires OpenSSL 3.0 compiled with "enable-ktls" option. Further, KTLS > > needs to be enabled in kernel, and in OpenSSL, either via OpenSSL > > configuration file or with "ssl_conf_command Options KTLS;" in nginx > > configuration. > > > > On FreeBSD, kernel TLS is available starting with FreeBSD 13.0, and > > can be enabled with "sysctl kern.ipc.tls.enable=1" and "kldload ktls_ocf".
> The module is only present in FreeBSD 13.0, it was removed post 13.0, > and the functionality is now always present in kernels with KERN_TLS: > https://cgit.freebsd.org/src/commit/?id=21e3c1fbe246 > Further, it is one of many options to enable KTLS. > It could be better to refer to man ktls(4), instead: > > : On FreeBSD, kernel TLS is available starting with FreeBSD 13.0, and > : can be enabled with "sysctl kern.ipc.tls.enable=1", see man ktls(4). > > (but I don't insist) I would rather keep it explicitly mentioned, since it is a required step on FreeBSD 13, and this is the only FreeBSD release with KTLS so far. I don't object to adding a ktls(4) reference though; updated with the following: : On FreeBSD, kernel TLS is available starting with FreeBSD 13.0, and : can be enabled with "sysctl kern.ipc.tls.enable=1" and "kldload ktls_ocf" : to load a software backend, see man ktls(4) for details. > > On Linux, kernel TLS is available starting with kernel 4.13 (at least 5.2 > > is recommended), and needs kernel compiled with CONFIG_TLS=y (with > > CONFIG_TLS=m, which is used at least on Ubuntu 21.04 by default, > > the tls module needs to be loaded with "modprobe tls"). > > On Linux I observe a problem sending data with short socket buffer space. > It is Ubuntu 20.04 (5.4.0) and 21.04 (5.11.0), with epoll and select > event methods. As per tcpdump traces, it looks like the buffer cannot > be pushed to the network, although it is reported as if it was sent. > The simplest case I could grab is below, with ssl_buffer_size 4k and sndbuf 8k > (note that unlike SSL_write(), buffers aren't limited with ssl_buffer_size). You mean records? SSL buffer size limits buffering, and as a side effect it limits the maximum size of SSL records generated, since nginx always uses the buffer to call SSL_write(). With SSL_sendfile(), it does not limit records generated by sendfile, since the buffer is not used for SSL_sendfile().
(Just for the record, as of now there is no way to limit maximum record size with SSL_sendfile() (except maybe by calling SSL_sendfile() many times with small file fragments, but this approach looks awful), as there are no kernel interfaces to control maximum record size. Further, OpenSSL disables KTLS if SSL_CTX_set_max_send_fragment() is used with anything other than 16384, see tls1_change_cipher_state(), "ktls supports only the maximum fragment size". I don't think this is a major problem though.) > It doesn't get stuck starting with sndbuf 16k, so it might have something > to do with how KTLS send buffers correspond to TCP send buffers. > (In contrast, the FreeBSD sendfile is strictly constrained by the available > send buffer space and hence shouldn't have this problem.) > So it doesn't look like a major issue. I was able to reproduce this with sndbuf=32k over localhost on Ubuntu 21.04 (5.11.0-18-generic). It does not seem to happen with larger buffers, though it might be that I'm just not testing hard enough. Over (emulated) network I was able to reproduce this with sndbuf=24k, but not with larger buffers. [...] > I've added additional debugging to SSL_sendfile() > to see that sendfile() returns EBUSY (11). Nitpicking: EAGAIN, not EBUSY. EBUSY on Linux is 16, and sendfile() on Linux shouldn't return EBUSY. Overall, this looks like an issue in the Linux KTLS implementation, probably related to the socket buffer size. While it would be good to mitigate this on our side if possible, I don't see anything obvious (I've tried tcp_nodelay, but it doesn't help). [...]
-- Maxim Dounin http://mdounin.ru/ From sander at hoentjen.eu Tue Oct 19 08:47:27 2021 From: sander at hoentjen.eu (Sander Hoentjen) Date: Tue, 19 Oct 2021 10:47:27 +0200 Subject: [PATCH] Added support for proxying managesieve protocol In-Reply-To: <0aa27eba-7662-b164-d409-08710ed5a23e@hoentjen.eu> References: <58016a3a-c11d-bb31-2b05-916cb8124598@hoentjen.eu> <89afc309-4185-e0a1-d6d3-e9f25f65235a@hoentjen.eu> <911a2e90-5002-3e1c-eb9a-6be3f4e329fc@nginx.com> <0aa27eba-7662-b164-d409-08710ed5a23e@hoentjen.eu> Message-ID: <9ee2ed1e-e536-e965-fb6e-808014cfe882@hoentjen.eu> Bump On 06/05/2021 22:14, Sander Hoentjen wrote: > It is now May, albeit a year later ;) > > Any chance of this being accepted? If so I can bring it up to date on > github. In the meantime we have been using this for 18 months now, without > issues. > > Kind regards, > > Sander > > > On 16-04-2020 19:49, Sander Hoentjen wrote: >> Hi Maxim, >> >> Thanks for your response! I will wait till you have time to properly >> review my code. That is more important than getting it in fast. >> >> The code is available at >> https://github.com/AntagonistHQ/nginx/tree/sieve_v2, so at least it >> is out in the public for anyone interested. >> >> For now, I'll just wait and see what will happen in May. >> >> Thank you, >> >> Sander >> >> >> On 4/15/20 8:56 PM, Maxim Konovalov wrote: >>> Hi Sander, >>> >>> First of all, thanks for your code contribution and your work done >>> on it. >>> >>> Along with other tasks we are now in preparation for a new >>> development >>> branch 1.19 and don't have many developer resources to make a proper >>> review of the code. >>> >>> I'd suggest to put the module somewhere on external repo for now, e.g. >>> on github. I hope we'll be able to return to this topic later in May.
>>> >>> Thanks, >>> >>> Maxim >>> >>> On 14.04.2020 21:27, Sander Hoentjen wrote: >>>> Hello list, >>>> >>>> Since the Nginx development procedure is unknown to me: Did i do the >>>> right things to get my submission to be considered? What are the next >>>> steps? Will somebody review this, or reject it? Or is it possible that >>>> it just won't get any attention, and that this will mean it will >>>> not be >>>> considered? I hope I will at least get some feedback, even if it is a >>>> rejection :) >>>> >>>> Kind regards, >>>> >>>> Sander >>>> >>>> On 4/8/20 8:33 PM, Sander Hoentjen wrote: >>>>> Hello list, >>>>> >>>>> This is my attempt at adding support for the managesieve protocol. I >>>>> hope this is something that you would consider to add. Comments on >>>>> the >>>>> code are very welcome! Also, I hope I submitted this the right >>>>> way. If >>>>> I need to change anything, please let me know. >>>>> >>>>> Kind regards, >>>>> >>>>> Sander Hoentjen >>>>> >>>>> On 4/8/20 8:26 PM, Sander Hoentjen wrote: >>>>>> # HG changeset patch >>>>>> # User Sander Hoentjen >>>>>> # Date 1586369831 -7200 >>>>>> # Wed Apr 08 20:17:11 2020 +0200 >>>>>> # Node ID f1dffaf619688aaab90caf31781ebe27c3f79598 >>>>>> # Parent 0cb942c1c1aa98118076e72e0b89940e85e6291c >>>>>> Added support for proxying managesieve protocol >>>>>> >>> [...] 
>>> >>> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel >> > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From pluknet at nginx.com Tue Oct 19 10:07:56 2021 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 19 Oct 2021 13:07:56 +0300 Subject: performance is affected after merge OCSP changeset In-Reply-To: References: Message-ID: <59EC8F20-542D-4620-889B-05ADAC9E9864@nginx.com> > On 12 Oct 2021, at 14:31, Sergey Kandaurov wrote: > > >> On 12 Oct 2021, at 10:41, sun edward wrote: >> >> Hi, >> There is a changeset fe919fd63b0b "client certificate validation with OCSP" , after merge this changeset, the performance seems not as good as before, the avg response time increased about 50~60ms. is there a way to optimize this problem? >> > > Are you referring to processing 0-RTT HTTP/3 requests? > > Anyway, please try this change and report back. > > # HG changeset patch > # User Sergey Kandaurov > # Date 1634038108 -10800 > # Tue Oct 12 14:28:28 2021 +0300 > # Branch quic > # Node ID af4bd86814fdd0a2da3f7b8a965c41923ebeedd5 > # Parent 9d47948842a3fd1c658a9676e638ef66207ffdcd > QUIC: speeding up processing 0-RTT. > > After fe919fd63b0b, processing 0-RTT was postponed until after handshake > completion (typically seen as 2-RTT), including both ssl_ocsp on and off. > This change allows to start OCSP checks with reused SSL handshakes, > which eliminates 1 additional RTT allowing to process 0-RTT as expected. 
> > diff --git a/src/event/quic/ngx_event_quic_ssl.c b/src/event/quic/ngx_event_quic_ssl.c > --- a/src/event/quic/ngx_event_quic_ssl.c > +++ b/src/event/quic/ngx_event_quic_ssl.c > @@ -410,6 +410,10 @@ ngx_quic_crypto_input(ngx_connection_t * > return NGX_ERROR; > } > > + if (SSL_session_reused(c->ssl->connection)) { > + goto ocsp; > + } > + > return NGX_OK; > } > > @@ -463,6 +467,7 @@ ngx_quic_crypto_input(ngx_connection_t * > return NGX_ERROR; > } > > +ocsp: > rc = ngx_ssl_ocsp_validate(c); > > if (rc == NGX_ERROR) { > Below is alternative patch, it brings closer to how OCSP validation is done with SSL_read_early_data(), with its inherent design flaws. Namely, the case of regular SSL session reuse is still pessimized, but that shouldn't bring further slowdown with ssl_ocsp disabled, which is slow by itself. # HG changeset patch # User Sergey Kandaurov # Date 1634637049 -10800 # Tue Oct 19 12:50:49 2021 +0300 # Branch quic # Node ID 6f26d6656b4ef97a3a245354bd7fa9e5c8671237 # Parent 1798acc01970ae5a03f785b7679fe34c32adcfea QUIC: speeding up processing 0-RTT. After fe919fd63b0b, processing QUIC streams was postponed until after handshake completion, which means that 0-RTT is effectively off. With ssl_ocsp enabled, it could be further delayed. This differs to how SSL_read_early_data() works. This change unlocks processing streams on successful 0-RTT packet decryption. diff --git a/src/event/quic/ngx_event_quic.c b/src/event/quic/ngx_event_quic.c --- a/src/event/quic/ngx_event_quic.c +++ b/src/event/quic/ngx_event_quic.c @@ -989,6 +989,21 @@ ngx_quic_process_payload(ngx_connection_ } } + if (pkt->level == ssl_encryption_early_data && !qc->streams.initialized) { + rc = ngx_ssl_ocsp_validate(c); + + if (rc == NGX_ERROR) { + return NGX_ERROR; + } + + if (rc == NGX_AGAIN) { + c->ssl->handler = ngx_quic_init_streams; + + } else { + ngx_quic_init_streams(c); + } + } + if (pkt->level == ssl_encryption_handshake) { /* * RFC 9001, 4.9.1. 
Discarding Initial Keys diff --git a/src/event/quic/ngx_event_quic_ssl.c b/src/event/quic/ngx_event_quic_ssl.c --- a/src/event/quic/ngx_event_quic_ssl.c +++ b/src/event/quic/ngx_event_quic_ssl.c @@ -463,6 +463,11 @@ ngx_quic_crypto_input(ngx_connection_t * return NGX_ERROR; } + if (qc->streams.initialized) { + /* done while processing 0-RTT */ + return NGX_OK; + } + rc = ngx_ssl_ocsp_validate(c); if (rc == NGX_ERROR) { -- Sergey Kandaurov From pluknet at nginx.com Tue Oct 19 10:49:48 2021 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 19 Oct 2021 13:49:48 +0300 Subject: [PATCH 2 of 2] SSL: SSL_sendfile() support with kernel TLS In-Reply-To: References: <8A19E0B0-8DF6-4DAA-8F9C-3E2F8EDA7087@nginx.com> Message-ID: <0EEC19CF-B264-41F1-8A0F-3E8902F8F684@nginx.com> > On 19 Oct 2021, at 04:54, Maxim Dounin wrote: > > Hello! > > On Mon, Oct 18, 2021 at 06:26:47PM +0300, Sergey Kandaurov wrote: > >>> On 27 Sep 2021, at 16:18, Maxim Dounin wrote: >>> >>> # HG changeset patch >>> # User Maxim Dounin >>> # Date 1632717779 -10800 >>> # Mon Sep 27 07:42:59 2021 +0300 >>> # Node ID ff514bf17f7f2257dcf036c5c973b74672cefa9a >>> # Parent 8f0fd60c33c106fba5f1ce3cafe990f15fcccc0c >>> SSL: SSL_sendfile() support with kernel TLS. >>> >>> Requires OpenSSL 3.0 compiled with "enable-ktls" option. Further, KTLS >>> needs to be enabled in kernel, and in OpenSSL, either via OpenSSL >>> configuration file or with "ssl_conf_command Options KTLS;" in nginx >>> configuration. >>> >>> On FreeBSD, kernel TLS is available starting with FreeBSD 13.0, and >>> can be enabled with "sysctl kern.ipc.tls.enable=1" and "kldload ktls_ocf". >> >> I am not sure about mentioning ktls_ocf.ko in the commit message. >> The module is only present in FreeBSD 13.0, it was removed post 13.0, >> and the functionality is now always present in kernels with KERN_TLS: >> https://cgit.freebsd.org/src/commit/?id=21e3c1fbe246 >> Further, it is one of many options to enable KTLS. 
>> It could be better to refer to man ktls(4), instead: >> >> : On FreeBSD, kernel TLS is available starting with FreeBSD 13.0, and >> : can be enabled with "sysctl kern.ipc.tls.enable=1", see man ktls(4). >> >> (but I don't insist) > > I would rather keep it explicitly mentioned, since it is a > required step on FreeBSD 13, and this is the only FreeBSD release > with KTLS so far. I don't object adding ktls(4) reference though, > updated with the following: > > : On FreeBSD, kernel TLS is available starting with FreeBSD 13.0, and > : can be enabled with "sysctl kern.ipc.tls.enable=1" and "kldload ktls_ocf" > : to load a software backend, see man ktls(4) for details. > It looks good, thanks. >>> On Linux, kernel TLS is available starting with kernel 4.13 (at least 5.2 >>> is recommended), and needs kernel compiled with CONFIG_TLS=y (with >>> CONFIG_TLS=m, which is used at least on Ubuntu 21.04 by default, >>> the tls module needs to be loaded with "modprobe tls"). >> >> On Linux I observe a problem sending data with short socket buffer space. >> It is Ubuntu 20.04 (5.4.0) and 21.04 (5.11.0), with epoll and select >> event methods. As per tcpdump traces, it looks like the buffer cannot >> be pushed to the network, although it is reported as if it was sent. >> The simplest I could grab (see below) with ssl_buffer_size 4k and sndbuf 8k >> (note that unlike SSL_write(), buffers aren't limited with ssl_buffer_size). > > You mean records? SSL buffer size limits buffering, and as a side > effect it limits maximum size of SSL records generated, since > nginx always uses the buffer to call SSL_write(). With > SSL_sendfile(), it does not limit records generated by sendfile, > since the buffer is not used for SSL_sendfile(). 
> > (Just for the record, as of now there is no way to limit maximum > record size with SSL_sendfile() (except may be by calling > SSL_sendfile() many times with small file fragments, but this > approach looks awful), as there are no kernel interfaces to > control maximum record size. Further, OpenSSL disables KTLS if > SSL_CTX_set_max_send_fragment() is used with anything other than > 16384, see tls1_change_cipher_state(), "ktls supports only the > maximum fragment size". I don't think this is a major problem > though.) Ok, it was useful to know. > >> It doesn't stuck starting with sndbuf 16k, so it might have something >> with how KTLS send buffers correspond with TCP send buffers. >> (In contrast, the FreeBSD sendfile is strictly constrained by the available >> send buffer space and hence shouldn't have this problem.) >> So it doesn't look like a major issue. > > I was able to reproduce this with sndbuf=32k over localhost on > Ubuntu 21.04 (5.11.0-18-generic). Does not seem to happen with > larger buffers, but might be I'm just not testing it hard enough. > > Over (emulated) network I was able to reproduce this with > sndbuf=24k, but not with larger buffers. > > [...] > >> I've added additional debugging to SSL_sendfile() >> to see that sendfile() returns EBUSY (11). > > Nitpicking: EAGAIN, not EBUSY. EBUSY on Linux is 16, and > sendfile() on Linux shouldn't return EBUSY. Yes, surely EAGAIN. Thanks for noticing. > > Overall, this looks like an issue in Linux KTLS implementation, > probably related to the socket buffer size. While it would be > good to mitigate this on our side if possible, I don't see > anything obvious (I've tried tcp_nodelay, but it doesn't help). > > [...] I think so, too. 
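[Editorial note, not part of the original thread: the KTLS enablement steps scattered across the quoted commit message can be collected into one hedged configuration sketch. The certificate paths are placeholders; `ssl_conf_command Options KTLS;` and the OS prerequisites are taken verbatim from the commit message above.]

```nginx
# Sketch: serving static files via SSL_sendfile(), assuming nginx built
# with OpenSSL 3.0 ("enable-ktls") and kernel TLS enabled in the OS first:
#   FreeBSD 13.0:  sysctl kern.ipc.tls.enable=1; kldload ktls_ocf  (see ktls(4))
#   Linux >= 4.13: modprobe tls   (when the kernel uses CONFIG_TLS=m)

server {
    listen 443 ssl;

    ssl_certificate     cert.pem;      # placeholder paths
    ssl_certificate_key cert.key;

    # enable kernel TLS in OpenSSL; without this SSL_sendfile() is not used
    ssl_conf_command Options KTLS;

    sendfile on;
}
```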
-- Sergey Kandaurov From pluknet at nginx.com Tue Oct 19 12:07:24 2021 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 19 Oct 2021 15:07:24 +0300 Subject: [PATCH] Removed CLOCK_MONOTONIC_COARSE support In-Reply-To: <3217b92006f8807d1613.1633978413@vm-bsd.mdounin.ru> References: <3217b92006f8807d1613.1633978413@vm-bsd.mdounin.ru> Message-ID: <0EA142F7-4101-4DE6-BBDA-9E7F8CB97699@nginx.com> > On 11 Oct 2021, at 21:53, Maxim Dounin wrote: > > # HG changeset patch > # User Maxim Dounin > # Date 1633978301 -10800 > # Mon Oct 11 21:51:41 2021 +0300 > # Node ID 3217b92006f8807d16134246a064baab64fa7b32 > # Parent ae7c767aa491fa55d3168dfc028a22f43ac8cf89 > Removed CLOCK_MONOTONIC_COARSE support. > > While clock_gettime(CLOCK_MONOTONIC_COARSE) is faster than > clock_gettime(CLOCK_MONOTONIC), the latter is fast enough on Linux for > practical usage, and the difference is negligible compared to other costs > at each event loop iteration. On the other hand, CLOCK_MONOTONIC_COARSE > causes various issues with typical CONFIG_HZ=250, notably very inacurate "inaccurate" > limit_rate handling in some edge cases (ticket #1678) and negative difference > between $request_time and $upstream_response_time (ticket #1965). > > diff --git a/src/core/ngx_times.c b/src/core/ngx_times.c > --- a/src/core/ngx_times.c > +++ b/src/core/ngx_times.c > @@ -200,10 +200,6 @@ ngx_monotonic_time(time_t sec, ngx_uint_ > > #if defined(CLOCK_MONOTONIC_FAST) > clock_gettime(CLOCK_MONOTONIC_FAST, &ts); > - > -#elif defined(CLOCK_MONOTONIC_COARSE) > - clock_gettime(CLOCK_MONOTONIC_COARSE, &ts); > - > #else > clock_gettime(CLOCK_MONOTONIC, &ts); > #endif > While fast clock is certainly important in general, and _COARSE is faster even in userspace clock_gettime(), I tend to agree that it has too coarse granularity, which causes more harm than good. 
-- Sergey Kandaurov From xeioex at nginx.com Tue Oct 19 12:54:37 2021 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 19 Oct 2021 12:54:37 +0000 Subject: [njs] Version 0.7.0. Message-ID: details: https://hg.nginx.org/njs/rev/8418bd4a4ce3 branches: changeset: 1726:8418bd4a4ce3 user: Dmitry Volyntsev date: Tue Oct 19 12:24:13 2021 +0000 description: Version 0.7.0. diffstat: CHANGES | 22 ++++++++++++++++++++++ 1 files changed, 22 insertions(+), 0 deletions(-) diffs (29 lines): diff -r 6545769f30bf -r 8418bd4a4ce3 CHANGES --- a/CHANGES Thu Oct 14 17:16:10 2021 +0000 +++ b/CHANGES Tue Oct 19 12:24:13 2021 +0000 @@ -1,3 +1,25 @@ +Changes with njs 0.7.0 19 Oct 2021 + + nginx modules: + + *) Feature: added HTTPS support for Fetch API. + + *) Feature: added setReturnValue() method. + + Core: + + *) Feature: introduced Async/Await implementation. + + *) Feature: added WebCrypto API implementation. + + *) Bugfix: fixed copying of closures for declared + functions. The bug was introduced in 0.6.0. + + *) Bugfix: fixed unhandled promise rejection in handle + events. + + *) Bugfix: fixed Response.headers getter in Fetch API. 
+ Changes with njs 0.6.2 31 Aug 2021 nginx modules: From xeioex at nginx.com Tue Oct 19 12:54:39 2021 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 19 Oct 2021 12:54:39 +0000 Subject: [njs] Added tag 0.7.0 for changeset 8418bd4a4ce3 Message-ID: details: https://hg.nginx.org/njs/rev/2feef1dd21d0 branches: changeset: 1727:2feef1dd21d0 user: Dmitry Volyntsev date: Tue Oct 19 12:54:08 2021 +0000 description: Added tag 0.7.0 for changeset 8418bd4a4ce3 diffstat: .hgtags | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (8 lines): diff -r 8418bd4a4ce3 -r 2feef1dd21d0 .hgtags --- a/.hgtags Tue Oct 19 12:24:13 2021 +0000 +++ b/.hgtags Tue Oct 19 12:54:08 2021 +0000 @@ -45,3 +45,4 @@ 282b9412976ceee31eb12876f1499fe975e6f08c 742ebceef2b5d15febc093172fe6174e427b26c8 0.6.0 4adbe67b292af2adc0a6fde4ec6cb95dbba9470a 0.6.1 dfba7f61745c7454ffdd55303a793206d0a9a84a 0.6.2 +8418bd4a4ce3114d57b4d75f913e8c4912bf4b5d 0.7.0 From andre.romcke at gmail.com Tue Oct 19 13:32:57 2021 From: andre.romcke at gmail.com (=?UTF-8?B?QW5kcsOpIFLDuG1ja2U=?=) Date: Tue, 19 Oct 2021 15:32:57 +0200 Subject: [PATCH] Add image/avif to conf/mime.types In-Reply-To: References: Message-ID: man. 27. sep. 2021 kl. 10:55: > > Format is stable and broader AVIF support (& likely also adoption) is incoming: > > - About 1/2 size compared to jpeg, 2/3 of webp, and roughly 1/1 with JPEG XL* > - Already supported in Chrome and Firefox: > - Also in Chromium** so soon in Edge, Opera, ... > - And apparently landed in Webkit*** > > > Kind regards. 
> > > * JPEG XL bitstream is frozen, but still work in progress & not > supported out of the box: > - https://en.wikipedia.org/wiki/JPEG_XL > - https://caniuse.com/jpegxl > > ** https://bugs.chromium.org/p/chromium/issues/detail?id=960620 > > *** https://bugs.webkit.org/show_bug.cgi?id=207750 Patch inline as attachment did not seem to work: diff -r bfad703459b4 conf/mime.types --- a/conf/mime.types Wed Sep 22 10:20:00 2021 +0300 +++ b/conf/mime.types Mon Sep 27 10:13:55 2021 +0200 @@ -15,6 +15,7 @@ text/vnd.wap.wml wml; text/x-component htc; + image/avif avif; image/png png; image/svg+xml svg svgz; image/tiff tif tiff; From xeioex at nginx.com Wed Oct 20 13:27:30 2021 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Wed, 20 Oct 2021 13:27:30 +0000 Subject: [njs] Version bump. Message-ID: details: https://hg.nginx.org/njs/rev/9f5285c82b88 branches: changeset: 1728:9f5285c82b88 user: Dmitry Volyntsev date: Wed Oct 20 12:16:37 2021 +0000 description: Version bump. diffstat: src/njs.h | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff -r 2feef1dd21d0 -r 9f5285c82b88 src/njs.h --- a/src/njs.h Tue Oct 19 12:54:08 2021 +0000 +++ b/src/njs.h Wed Oct 20 12:16:37 2021 +0000 @@ -11,7 +11,7 @@ #include -#define NJS_VERSION "0.7.0" +#define NJS_VERSION "0.7.1" #include /* STDOUT_FILENO, STDERR_FILENO */ From xeioex at nginx.com Wed Oct 20 13:27:32 2021 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Wed, 20 Oct 2021 13:27:32 +0000 Subject: [njs] Stream: fixed build without --with-http_ssl_module. Message-ID: details: https://hg.nginx.org/njs/rev/b9bbb230fe4f branches: changeset: 1729:b9bbb230fe4f user: Dmitry Volyntsev date: Wed Oct 20 13:01:55 2021 +0000 description: Stream: fixed build without --with-http_ssl_module. This closes #434 issue on Github. 
diffstat: nginx/ngx_stream_js_module.c | 16 ++++++++-------- 1 files changed, 8 insertions(+), 8 deletions(-) diffs (73 lines): diff -r 9f5285c82b88 -r b9bbb230fe4f nginx/ngx_stream_js_module.c --- a/nginx/ngx_stream_js_module.c Wed Oct 20 12:16:37 2021 +0000 +++ b/nginx/ngx_stream_js_module.c Wed Oct 20 13:01:55 2021 +0000 @@ -34,7 +34,7 @@ typedef struct { ngx_str_t access; ngx_str_t preread; ngx_str_t filter; -#if (NGX_SSL) +#if (NGX_STREAM_SSL) ngx_ssl_t *ssl; ngx_str_t ssl_ciphers; ngx_uint_t ssl_protocols; @@ -145,13 +145,13 @@ static char *ngx_stream_js_merge_srv_con void *child); static ngx_int_t ngx_stream_js_init(ngx_conf_t *cf); -#if (NGX_SSL) +#if (NGX_STREAM_SSL) static char * ngx_stream_js_set_ssl(ngx_conf_t *cf, ngx_stream_js_srv_conf_t *jscf); #endif static ngx_ssl_t *ngx_stream_js_ssl(njs_vm_t *vm, ngx_stream_session_t *s); -#if (NGX_HTTP_SSL) +#if (NGX_STREAM_SSL) static ngx_conf_bitmask_t ngx_stream_js_ssl_protocols[] = { { ngx_string("TLSv1"), NGX_SSL_TLSv1 }, @@ -221,7 +221,7 @@ static ngx_command_t ngx_stream_js_comm offsetof(ngx_stream_js_srv_conf_t, filter), NULL }, -#if (NGX_SSL) +#if (NGX_STREAM_SSL) { ngx_string("js_fetch_ciphers"), NGX_STREAM_MAIN_CONF|NGX_STREAM_SRV_CONF|NGX_CONF_TAKE1, @@ -1992,7 +1992,7 @@ ngx_stream_js_create_srv_conf(ngx_conf_t * conf->ssl_trusted_certificate = { 0, NULL }; */ -#if (NGX_SSL) +#if (NGX_STREAM_SSL) conf->ssl_verify_depth = NGX_CONF_UNSET; #endif return conf; @@ -2009,7 +2009,7 @@ ngx_stream_js_merge_srv_conf(ngx_conf_t ngx_conf_merge_str_value(conf->preread, prev->preread, ""); ngx_conf_merge_str_value(conf->filter, prev->filter, ""); -#if (NGX_HTTP_SSL) +#if (NGX_STREAM_SSL) ngx_conf_merge_str_value(conf->ssl_ciphers, prev->ssl_ciphers, "DEFAULT"); ngx_conf_merge_bitmask_value(conf->ssl_protocols, prev->ssl_protocols, @@ -2057,7 +2057,7 @@ ngx_stream_js_init(ngx_conf_t *cf) } -#if (NGX_SSL) +#if (NGX_STREAM_SSL) static char * ngx_stream_js_set_ssl(ngx_conf_t *cf, ngx_stream_js_srv_conf_t *jscf) @@ 
-2106,7 +2106,7 @@ ngx_stream_js_set_ssl(ngx_conf_t *cf, ng static ngx_ssl_t * ngx_stream_js_ssl(njs_vm_t *vm, ngx_stream_session_t *s) { -#if (NGX_SSL) +#if (NGX_STREAM_SSL) ngx_stream_js_srv_conf_t *jscf; jscf = ngx_stream_get_module_srv_conf(s, ngx_stream_js_module); From vl at nginx.com Wed Oct 20 17:27:28 2021 From: vl at nginx.com (Vladimir Homutov) Date: Wed, 20 Oct 2021 17:27:28 +0000 Subject: [nginx] HTTP/2: removed support for NPN. Message-ID: details: https://hg.nginx.org/nginx/rev/61abb35bb8cf branches: changeset: 7934:61abb35bb8cf user: Vladimir Homutov date: Fri Oct 15 10:02:15 2021 +0300 description: HTTP/2: removed support for NPN. NPN was replaced with ALPN, published as RFC 7301 in July 2014. It used to negotiate SPDY (and, in transition, HTTP/2). NPN supported appeared in OpenSSL 1.0.1. It does not work with TLSv1.3 [1]. ALPN is supported since OpenSSL 1.0.2. The NPN support was dropped in Firefox 53 [2] and Chrome 51 [3]. [1] https://github.com/openssl/openssl/issues/3665. 
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1248198 [3] https://www.chromestatus.com/feature/5767920709795840 diffstat: src/http/modules/ngx_http_ssl_module.c | 59 ++------------------------------- src/http/ngx_http.c | 5 +- src/http/ngx_http_request.c | 14 +------- src/http/v2/ngx_http_v2.h | 3 +- 4 files changed, 9 insertions(+), 72 deletions(-) diffs (166 lines): diff -r 2f443cac3f1e -r 61abb35bb8cf src/http/modules/ngx_http_ssl_module.c --- a/src/http/modules/ngx_http_ssl_module.c Mon Oct 18 16:46:59 2021 +0300 +++ b/src/http/modules/ngx_http_ssl_module.c Fri Oct 15 10:02:15 2021 +0300 @@ -17,7 +17,7 @@ typedef ngx_int_t (*ngx_ssl_variable_han #define NGX_DEFAULT_CIPHERS "HIGH:!aNULL:!MD5" #define NGX_DEFAULT_ECDH_CURVE "auto" -#define NGX_HTTP_NPN_ADVERTISE "\x08http/1.1" +#define NGX_HTTP_ALPN_PROTO "\x08http/1.1" #ifdef TLSEXT_TYPE_application_layer_protocol_negotiation @@ -26,11 +26,6 @@ static int ngx_http_ssl_alpn_select(ngx_ const unsigned char *in, unsigned int inlen, void *arg); #endif -#ifdef TLSEXT_TYPE_next_proto_neg -static int ngx_http_ssl_npn_advertised(ngx_ssl_conn_t *ssl_conn, - const unsigned char **out, unsigned int *outlen, void *arg); -#endif - static ngx_int_t ngx_http_ssl_static_variable(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_ssl_variable(ngx_http_request_t *r, @@ -444,15 +439,14 @@ ngx_http_ssl_alpn_select(ngx_ssl_conn_t hc = c->data; if (hc->addr_conf->http2) { - srv = - (unsigned char *) NGX_HTTP_V2_ALPN_ADVERTISE NGX_HTTP_NPN_ADVERTISE; - srvlen = sizeof(NGX_HTTP_V2_ALPN_ADVERTISE NGX_HTTP_NPN_ADVERTISE) - 1; + srv = (unsigned char *) NGX_HTTP_V2_ALPN_PROTO NGX_HTTP_ALPN_PROTO; + srvlen = sizeof(NGX_HTTP_V2_ALPN_PROTO NGX_HTTP_ALPN_PROTO) - 1; } else #endif { - srv = (unsigned char *) NGX_HTTP_NPN_ADVERTISE; - srvlen = sizeof(NGX_HTTP_NPN_ADVERTISE) - 1; + srv = (unsigned char *) NGX_HTTP_ALPN_PROTO; + srvlen = sizeof(NGX_HTTP_ALPN_PROTO) - 1; } if 
(SSL_select_next_proto((unsigned char **) out, outlen, srv, srvlen, @@ -471,44 +465,6 @@ ngx_http_ssl_alpn_select(ngx_ssl_conn_t #endif -#ifdef TLSEXT_TYPE_next_proto_neg - -static int -ngx_http_ssl_npn_advertised(ngx_ssl_conn_t *ssl_conn, - const unsigned char **out, unsigned int *outlen, void *arg) -{ -#if (NGX_HTTP_V2 || NGX_DEBUG) - ngx_connection_t *c; - - c = ngx_ssl_get_connection(ssl_conn); - ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0, "SSL NPN advertised"); -#endif - -#if (NGX_HTTP_V2) - { - ngx_http_connection_t *hc; - - hc = c->data; - - if (hc->addr_conf->http2) { - *out = - (unsigned char *) NGX_HTTP_V2_NPN_ADVERTISE NGX_HTTP_NPN_ADVERTISE; - *outlen = sizeof(NGX_HTTP_V2_NPN_ADVERTISE NGX_HTTP_NPN_ADVERTISE) - 1; - - return SSL_TLSEXT_ERR_OK; - } - } -#endif - - *out = (unsigned char *) NGX_HTTP_NPN_ADVERTISE; - *outlen = sizeof(NGX_HTTP_NPN_ADVERTISE) - 1; - - return SSL_TLSEXT_ERR_OK; -} - -#endif - - static ngx_int_t ngx_http_ssl_static_variable(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data) @@ -792,11 +748,6 @@ ngx_http_ssl_merge_srv_conf(ngx_conf_t * SSL_CTX_set_alpn_select_cb(conf->ssl.ctx, ngx_http_ssl_alpn_select, NULL); #endif -#ifdef TLSEXT_TYPE_next_proto_neg - SSL_CTX_set_next_protos_advertised_cb(conf->ssl.ctx, - ngx_http_ssl_npn_advertised, NULL); -#endif - if (ngx_ssl_ciphers(cf, &conf->ssl, &conf->ciphers, conf->prefer_server_ciphers) != NGX_OK) diff -r 2f443cac3f1e -r 61abb35bb8cf src/http/ngx_http.c --- a/src/http/ngx_http.c Mon Oct 18 16:46:59 2021 +0300 +++ b/src/http/ngx_http.c Fri Oct 15 10:02:15 2021 +0300 @@ -1338,13 +1338,12 @@ ngx_http_add_address(ngx_conf_t *cf, ngx } #if (NGX_HTTP_V2 && NGX_HTTP_SSL \ - && !defined TLSEXT_TYPE_application_layer_protocol_negotiation \ - && !defined TLSEXT_TYPE_next_proto_neg) + && !defined TLSEXT_TYPE_application_layer_protocol_negotiation) if (lsopt->http2 && lsopt->ssl) { ngx_conf_log_error(NGX_LOG_WARN, cf, 0, "nginx was built with OpenSSL that lacks ALPN " - "and 
NPN support, HTTP/2 is not enabled for %V", + "support, HTTP/2 is not enabled for %V", &lsopt->addr_text); } diff -r 2f443cac3f1e -r 61abb35bb8cf src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c Mon Oct 18 16:46:59 2021 +0300 +++ b/src/http/ngx_http_request.c Fri Oct 15 10:02:15 2021 +0300 @@ -806,8 +806,7 @@ ngx_http_ssl_handshake_handler(ngx_conne c->ssl->no_wait_shutdown = 1; #if (NGX_HTTP_V2 \ - && (defined TLSEXT_TYPE_application_layer_protocol_negotiation \ - || defined TLSEXT_TYPE_next_proto_neg)) + && defined TLSEXT_TYPE_application_layer_protocol_negotiation) { unsigned int len; const unsigned char *data; @@ -817,19 +816,8 @@ ngx_http_ssl_handshake_handler(ngx_conne if (hc->addr_conf->http2) { -#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation SSL_get0_alpn_selected(c->ssl->connection, &data, &len); -#ifdef TLSEXT_TYPE_next_proto_neg - if (len == 0) { - SSL_get0_next_proto_negotiated(c->ssl->connection, &data, &len); - } -#endif - -#else /* TLSEXT_TYPE_next_proto_neg */ - SSL_get0_next_proto_negotiated(c->ssl->connection, &data, &len); -#endif - if (len == 2 && data[0] == 'h' && data[1] == '2') { ngx_http_v2_init(c->read); return; diff -r 2f443cac3f1e -r 61abb35bb8cf src/http/v2/ngx_http_v2.h --- a/src/http/v2/ngx_http_v2.h Mon Oct 18 16:46:59 2021 +0300 +++ b/src/http/v2/ngx_http_v2.h Fri Oct 15 10:02:15 2021 +0300 @@ -13,8 +13,7 @@ #include -#define NGX_HTTP_V2_ALPN_ADVERTISE "\x02h2" -#define NGX_HTTP_V2_NPN_ADVERTISE NGX_HTTP_V2_ALPN_ADVERTISE +#define NGX_HTTP_V2_ALPN_PROTO "\x02h2" #define NGX_HTTP_V2_STATE_BUFFER_SIZE 16 From vl at nginx.com Wed Oct 20 17:27:30 2021 From: vl at nginx.com (Vladimir Homutov) Date: Wed, 20 Oct 2021 17:27:30 +0000 Subject: [nginx] SSL: added $ssl_alpn_protocol variable. Message-ID: details: https://hg.nginx.org/nginx/rev/eb6c77e6d55d branches: changeset: 7935:eb6c77e6d55d user: Vladimir Homutov date: Thu Oct 14 11:46:23 2021 +0300 description: SSL: added $ssl_alpn_protocol variable. 
The variable contains protocol selected by ALPN during handshake and is empty otherwise. diffstat: src/event/ngx_event_openssl.c | 30 ++++++++++++++++++++++++++++++ src/event/ngx_event_openssl.h | 2 ++ src/http/modules/ngx_http_ssl_module.c | 3 +++ src/stream/ngx_stream_ssl_module.c | 3 +++ 4 files changed, 38 insertions(+), 0 deletions(-) diffs (78 lines): diff -r 61abb35bb8cf -r eb6c77e6d55d src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c Fri Oct 15 10:02:15 2021 +0300 +++ b/src/event/ngx_event_openssl.c Thu Oct 14 11:46:23 2021 +0300 @@ -4699,6 +4699,36 @@ ngx_ssl_get_server_name(ngx_connection_t ngx_int_t +ngx_ssl_get_alpn_protocol(ngx_connection_t *c, ngx_pool_t *pool, ngx_str_t *s) +{ +#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation + + unsigned int len; + const unsigned char *data; + + SSL_get0_alpn_selected(c->ssl->connection, &data, &len); + + if (len > 0) { + + s->data = ngx_pnalloc(pool, len); + if (s->data == NULL) { + return NGX_ERROR; + } + + ngx_memcpy(s->data, data, len); + s->len = len; + + return NGX_OK; + } + +#endif + + s->len = 0; + return NGX_OK; +} + + +ngx_int_t ngx_ssl_get_raw_certificate(ngx_connection_t *c, ngx_pool_t *pool, ngx_str_t *s) { size_t len; diff -r 61abb35bb8cf -r eb6c77e6d55d src/event/ngx_event_openssl.h --- a/src/event/ngx_event_openssl.h Fri Oct 15 10:02:15 2021 +0300 +++ b/src/event/ngx_event_openssl.h Thu Oct 14 11:46:23 2021 +0300 @@ -265,6 +265,8 @@ ngx_int_t ngx_ssl_get_early_data(ngx_con ngx_str_t *s); ngx_int_t ngx_ssl_get_server_name(ngx_connection_t *c, ngx_pool_t *pool, ngx_str_t *s); +ngx_int_t ngx_ssl_get_alpn_protocol(ngx_connection_t *c, ngx_pool_t *pool, + ngx_str_t *s); ngx_int_t ngx_ssl_get_raw_certificate(ngx_connection_t *c, ngx_pool_t *pool, ngx_str_t *s); ngx_int_t ngx_ssl_get_certificate(ngx_connection_t *c, ngx_pool_t *pool, diff -r 61abb35bb8cf -r eb6c77e6d55d src/http/modules/ngx_http_ssl_module.c --- a/src/http/modules/ngx_http_ssl_module.c Fri Oct 15 10:02:15 2021 
+0300 +++ b/src/http/modules/ngx_http_ssl_module.c Thu Oct 14 11:46:23 2021 +0300 @@ -358,6 +358,9 @@ static ngx_http_variable_t ngx_http_ssl { ngx_string("ssl_server_name"), NULL, ngx_http_ssl_variable, (uintptr_t) ngx_ssl_get_server_name, NGX_HTTP_VAR_CHANGEABLE, 0 }, + { ngx_string("ssl_alpn_protocol"), NULL, ngx_http_ssl_variable, + (uintptr_t) ngx_ssl_get_alpn_protocol, NGX_HTTP_VAR_CHANGEABLE, 0 }, + { ngx_string("ssl_client_cert"), NULL, ngx_http_ssl_variable, (uintptr_t) ngx_ssl_get_certificate, NGX_HTTP_VAR_CHANGEABLE, 0 }, diff -r 61abb35bb8cf -r eb6c77e6d55d src/stream/ngx_stream_ssl_module.c --- a/src/stream/ngx_stream_ssl_module.c Fri Oct 15 10:02:15 2021 +0300 +++ b/src/stream/ngx_stream_ssl_module.c Thu Oct 14 11:46:23 2021 +0300 @@ -266,6 +266,9 @@ static ngx_stream_variable_t ngx_stream { ngx_string("ssl_server_name"), NULL, ngx_stream_ssl_variable, (uintptr_t) ngx_ssl_get_server_name, NGX_STREAM_VAR_CHANGEABLE, 0 }, + { ngx_string("ssl_alpn_protocol"), NULL, ngx_stream_ssl_variable, + (uintptr_t) ngx_ssl_get_alpn_protocol, NGX_STREAM_VAR_CHANGEABLE, 0 }, + { ngx_string("ssl_client_cert"), NULL, ngx_stream_ssl_variable, (uintptr_t) ngx_ssl_get_certificate, NGX_STREAM_VAR_CHANGEABLE, 0 }, From vl at nginx.com Wed Oct 20 17:27:33 2021 From: vl at nginx.com (Vladimir Homutov) Date: Wed, 20 Oct 2021 17:27:33 +0000 Subject: [nginx] Stream: the "ssl_alpn" directive. Message-ID: details: https://hg.nginx.org/nginx/rev/b9e02e9b2f1d branches: changeset: 7936:b9e02e9b2f1d user: Vladimir Homutov date: Tue Oct 19 12:19:59 2021 +0300 description: Stream: the "ssl_alpn" directive. The directive sets the server list of supported application protocols and requires one of this protocols to be negotiated if client is using ALPN. 
diffstat: src/event/ngx_event_openssl.c | 3 + src/stream/ngx_stream_ssl_module.c | 117 +++++++++++++++++++++++++++++++++++++ src/stream/ngx_stream_ssl_module.h | 1 + 3 files changed, 121 insertions(+), 0 deletions(-) diffs (200 lines): diff -r eb6c77e6d55d -r b9e02e9b2f1d src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c Thu Oct 14 11:46:23 2021 +0300 +++ b/src/event/ngx_event_openssl.c Tue Oct 19 12:19:59 2021 +0300 @@ -3134,6 +3134,9 @@ ngx_ssl_connection_error(ngx_connection_ #ifdef SSL_R_CALLBACK_FAILED || n == SSL_R_CALLBACK_FAILED /* 234 */ #endif +#ifdef SSL_R_NO_APPLICATION_PROTOCOL + || n == SSL_R_NO_APPLICATION_PROTOCOL /* 235 */ +#endif || n == SSL_R_UNEXPECTED_MESSAGE /* 244 */ || n == SSL_R_UNEXPECTED_RECORD /* 245 */ || n == SSL_R_UNKNOWN_ALERT_TYPE /* 246 */ diff -r eb6c77e6d55d -r b9e02e9b2f1d src/stream/ngx_stream_ssl_module.c --- a/src/stream/ngx_stream_ssl_module.c Thu Oct 14 11:46:23 2021 +0300 +++ b/src/stream/ngx_stream_ssl_module.c Tue Oct 19 12:19:59 2021 +0300 @@ -25,6 +25,11 @@ static void ngx_stream_ssl_handshake_han #ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME int ngx_stream_ssl_servername(ngx_ssl_conn_t *ssl_conn, int *ad, void *arg); #endif +#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation +static int ngx_stream_ssl_alpn_select(ngx_ssl_conn_t *ssl_conn, + const unsigned char **out, unsigned char *outlen, + const unsigned char *in, unsigned int inlen, void *arg); +#endif #ifdef SSL_R_CERT_CB_ERROR static int ngx_stream_ssl_certificate(ngx_ssl_conn_t *ssl_conn, void *arg); #endif @@ -45,6 +50,8 @@ static char *ngx_stream_ssl_password_fil void *conf); static char *ngx_stream_ssl_session_cache(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); +static char *ngx_stream_ssl_alpn(ngx_conf_t *cf, ngx_command_t *cmd, + void *conf); static char *ngx_stream_ssl_conf_command_check(ngx_conf_t *cf, void *post, void *data); @@ -211,6 +218,13 @@ static ngx_command_t ngx_stream_ssl_com offsetof(ngx_stream_ssl_conf_t, conf_commands), 
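[Editorial note, not part of the patch: the directive stores each protocol name as a one-byte length prefix followed by the name itself, the list format SSL_select_next_proto() expects. A minimal standalone sketch of that packing (pack_alpn and the caller-provided output buffer are illustrative, not nginx API):]

```c
#include <stddef.h>
#include <string.h>

/* Pack protocol names into the length-prefixed wire format built by
 * ngx_stream_ssl_alpn(): one length byte, then the name.  Returns the
 * packed length, or 0 if a name exceeds 255 bytes (the directive's
 * "protocol too long" check).  The caller must size `out` accordingly. */
static size_t
pack_alpn(const char **protos, size_t n, unsigned char *out)
{
    size_t  i, len, total;

    total = 0;

    for (i = 0; i < n; i++) {
        len = strlen(protos[i]);

        if (len > 255) {
            return 0;
        }

        out[total++] = (unsigned char) len;
        memcpy(out + total, protos[i], len);
        total += len;
    }

    return total;
}
```

For example, packing { "h2", "http/1.1" } yields "\x02h2\x08http/1.1", the same byte sequence seen in the NGX_HTTP_V2_ALPN_PROTO and NGX_HTTP_ALPN_PROTO literals earlier in the thread.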
&ngx_stream_ssl_conf_command_post }, + { ngx_string("ssl_alpn"), + NGX_STREAM_MAIN_CONF|NGX_STREAM_SRV_CONF|NGX_CONF_1MORE, + ngx_stream_ssl_alpn, + NGX_STREAM_SRV_CONF_OFFSET, + 0, + NULL }, + ngx_null_command }; @@ -446,6 +460,46 @@ ngx_stream_ssl_servername(ngx_ssl_conn_t #endif +#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation + +static int +ngx_stream_ssl_alpn_select(ngx_ssl_conn_t *ssl_conn, const unsigned char **out, + unsigned char *outlen, const unsigned char *in, unsigned int inlen, + void *arg) +{ + ngx_str_t *alpn; +#if (NGX_DEBUG) + unsigned int i; + ngx_connection_t *c; + + c = ngx_ssl_get_connection(ssl_conn); + + for (i = 0; i < inlen; i += in[i] + 1) { + ngx_log_debug2(NGX_LOG_DEBUG_STREAM, c->log, 0, + "SSL ALPN supported by client: %*s", + (size_t) in[i], &in[i + 1]); + } + +#endif + + alpn = arg; + + if (SSL_select_next_proto((unsigned char **) out, outlen, alpn->data, + alpn->len, in, inlen) + != OPENSSL_NPN_NEGOTIATED) + { + return SSL_TLSEXT_ERR_ALERT_FATAL; + } + + ngx_log_debug2(NGX_LOG_DEBUG_STREAM, c->log, 0, + "SSL ALPN selected: %*s", (size_t) *outlen, *out); + + return SSL_TLSEXT_ERR_OK; +} + +#endif + + #ifdef SSL_R_CERT_CB_ERROR int @@ -605,6 +659,7 @@ ngx_stream_ssl_create_conf(ngx_conf_t *c * scf->client_certificate = { 0, NULL }; * scf->trusted_certificate = { 0, NULL }; * scf->crl = { 0, NULL }; + * scf->alpn = { 0, NULL }; * scf->ciphers = { 0, NULL }; * scf->shm_zone = NULL; */ @@ -663,6 +718,7 @@ ngx_stream_ssl_merge_conf(ngx_conf_t *cf ngx_conf_merge_str_value(conf->trusted_certificate, prev->trusted_certificate, ""); ngx_conf_merge_str_value(conf->crl, prev->crl, ""); + ngx_conf_merge_str_value(conf->alpn, prev->alpn, ""); ngx_conf_merge_str_value(conf->ecdh_curve, prev->ecdh_curve, NGX_DEFAULT_ECDH_CURVE); @@ -723,6 +779,13 @@ ngx_stream_ssl_merge_conf(ngx_conf_t *cf ngx_stream_ssl_servername); #endif +#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation + if (conf->alpn.len) { + 
SSL_CTX_set_alpn_select_cb(conf->ssl.ctx, ngx_stream_ssl_alpn_select, + &conf->alpn); + } +#endif + if (ngx_ssl_ciphers(cf, &conf->ssl, &conf->ciphers, conf->prefer_server_ciphers) != NGX_OK) @@ -1060,6 +1123,60 @@ invalid: static char * +ngx_stream_ssl_alpn(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) +{ +#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation + + ngx_stream_ssl_conf_t *scf = conf; + + u_char *p; + size_t len; + ngx_str_t *value; + ngx_uint_t i; + + if (scf->alpn.len) { + return "is duplicate"; + } + + value = cf->args->elts; + + len = 0; + + for (i = 1; i < cf->args->nelts; i++) { + + if (value[i].len > 255) { + return "protocol too long"; + } + + len += value[i].len + 1; + } + + scf->alpn.data = ngx_pnalloc(cf->pool, len); + if (scf->alpn.data == NULL) { + return NGX_CONF_ERROR; + } + + p = scf->alpn.data; + + for (i = 1; i < cf->args->nelts; i++) { + *p++ = value[i].len; + p = ngx_cpymem(p, value[i].data, value[i].len); + } + + scf->alpn.len = len; + + return NGX_CONF_OK; + +#else + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "the \"ssl_alpn\" directive requires OpenSSL " + "with ALPN support"); + return NGX_CONF_ERROR; +#endif +} + + +static char * ngx_stream_ssl_conf_command_check(ngx_conf_t *cf, void *post, void *data) { #ifndef SSL_CONF_FLAG_FILE diff -r eb6c77e6d55d -r b9e02e9b2f1d src/stream/ngx_stream_ssl_module.h --- a/src/stream/ngx_stream_ssl_module.h Thu Oct 14 11:46:23 2021 +0300 +++ b/src/stream/ngx_stream_ssl_module.h Tue Oct 19 12:19:59 2021 +0300 @@ -42,6 +42,7 @@ typedef struct { ngx_str_t client_certificate; ngx_str_t trusted_certificate; ngx_str_t crl; + ngx_str_t alpn; ngx_str_t ciphers; From vl at nginx.com Wed Oct 20 17:27:37 2021 From: vl at nginx.com (Vladimir Homutov) Date: Wed, 20 Oct 2021 17:27:37 +0000 Subject: [nginx] HTTP: connections with wrong ALPN protocols are now rejected. 
Message-ID: details: https://hg.nginx.org/nginx/rev/db6b630e6086 branches: changeset: 7937:db6b630e6086 user: Vladimir Homutov date: Wed Oct 20 09:50:02 2021 +0300 description: HTTP: connections with wrong ALPN protocols are now rejected. This is a recommended behavior by RFC 7301 and is useful for mitigation of protocol confusion attacks [1]. To avoid possible negative effects, list of supported protocols was extended to include all possible HTTP protocol ALPN IDs registered by IANA [2], i.e. "http/1.0" and "http/0.9". [1] https://alpaca-attack.com/ [2] https://www.iana.org/assignments/tls-extensiontype-values/ diffstat: src/http/modules/ngx_http_ssl_module.c | 13 ++++++------- 1 files changed, 6 insertions(+), 7 deletions(-) diffs (39 lines): diff -r b9e02e9b2f1d -r db6b630e6086 src/http/modules/ngx_http_ssl_module.c --- a/src/http/modules/ngx_http_ssl_module.c Tue Oct 19 12:19:59 2021 +0300 +++ b/src/http/modules/ngx_http_ssl_module.c Wed Oct 20 09:50:02 2021 +0300 @@ -17,7 +17,7 @@ typedef ngx_int_t (*ngx_ssl_variable_han #define NGX_DEFAULT_CIPHERS "HIGH:!aNULL:!MD5" #define NGX_DEFAULT_ECDH_CURVE "auto" -#define NGX_HTTP_ALPN_PROTO "\x08http/1.1" +#define NGX_HTTP_ALPN_PROTOS "\x08http/1.1\x08http/1.0\x08http/0.9" #ifdef TLSEXT_TYPE_application_layer_protocol_negotiation @@ -442,21 +442,20 @@ ngx_http_ssl_alpn_select(ngx_ssl_conn_t hc = c->data; if (hc->addr_conf->http2) { - srv = (unsigned char *) NGX_HTTP_V2_ALPN_PROTO NGX_HTTP_ALPN_PROTO; - srvlen = sizeof(NGX_HTTP_V2_ALPN_PROTO NGX_HTTP_ALPN_PROTO) - 1; - + srv = (unsigned char *) NGX_HTTP_V2_ALPN_PROTO NGX_HTTP_ALPN_PROTOS; + srvlen = sizeof(NGX_HTTP_V2_ALPN_PROTO NGX_HTTP_ALPN_PROTOS) - 1; } else #endif { - srv = (unsigned char *) NGX_HTTP_ALPN_PROTO; - srvlen = sizeof(NGX_HTTP_ALPN_PROTO) - 1; + srv = (unsigned char *) NGX_HTTP_ALPN_PROTOS; + srvlen = sizeof(NGX_HTTP_ALPN_PROTOS) - 1; } if (SSL_select_next_proto((unsigned char **) out, outlen, srv, srvlen, in, inlen) != OPENSSL_NPN_NEGOTIATED) { - 
return SSL_TLSEXT_ERR_NOACK; + return SSL_TLSEXT_ERR_ALERT_FATAL; } ngx_log_debug2(NGX_LOG_DEBUG_HTTP, c->log, 0, From vl at nginx.com Wed Oct 20 17:27:40 2021 From: vl at nginx.com (Vladimir Homutov) Date: Wed, 20 Oct 2021 17:27:40 +0000 Subject: [nginx] Mail: connections with wrong ALPN protocols are now rejected. Message-ID: details: https://hg.nginx.org/nginx/rev/dc955d274130 branches: changeset: 7938:dc955d274130 user: Vladimir Homutov date: Wed Oct 20 09:45:34 2021 +0300 description: Mail: connections with wrong ALPN protocols are now rejected. This is a recommended behavior by RFC 7301 and is useful for mitigation of protocol confusion attacks [1]. For POP3 and IMAP protocols IANA-assigned ALPN IDs are used [2]. For the SMTP protocol "smtp" is used. [1] https://alpaca-attack.com/ [2] https://www.iana.org/assignments/tls-extensiontype-values/ diffstat: src/mail/ngx_mail.h | 1 + src/mail/ngx_mail_imap_module.c | 1 + src/mail/ngx_mail_pop3_module.c | 1 + src/mail/ngx_mail_smtp_module.c | 1 + src/mail/ngx_mail_ssl_module.c | 58 +++++++++++++++++++++++++++++++++++++++++ 5 files changed, 62 insertions(+), 0 deletions(-) diffs (126 lines): diff -r db6b630e6086 -r dc955d274130 src/mail/ngx_mail.h --- a/src/mail/ngx_mail.h Wed Oct 20 09:50:02 2021 +0300 +++ b/src/mail/ngx_mail.h Wed Oct 20 09:45:34 2021 +0300 @@ -324,6 +324,7 @@ typedef ngx_int_t (*ngx_mail_parse_comma struct ngx_mail_protocol_s { ngx_str_t name; + ngx_str_t alpn; in_port_t port[4]; ngx_uint_t type; diff -r db6b630e6086 -r dc955d274130 src/mail/ngx_mail_imap_module.c --- a/src/mail/ngx_mail_imap_module.c Wed Oct 20 09:50:02 2021 +0300 +++ b/src/mail/ngx_mail_imap_module.c Wed Oct 20 09:45:34 2021 +0300 @@ -46,6 +46,7 @@ static ngx_str_t ngx_mail_imap_auth_met static ngx_mail_protocol_t ngx_mail_imap_protocol = { ngx_string("imap"), + ngx_string("\x04imap"), { 143, 993, 0, 0 }, NGX_MAIL_IMAP_PROTOCOL, diff -r db6b630e6086 -r dc955d274130 src/mail/ngx_mail_pop3_module.c --- 
a/src/mail/ngx_mail_pop3_module.c Wed Oct 20 09:50:02 2021 +0300 +++ b/src/mail/ngx_mail_pop3_module.c Wed Oct 20 09:45:34 2021 +0300 @@ -46,6 +46,7 @@ static ngx_str_t ngx_mail_pop3_auth_met static ngx_mail_protocol_t ngx_mail_pop3_protocol = { ngx_string("pop3"), + ngx_string("\x04pop3"), { 110, 995, 0, 0 }, NGX_MAIL_POP3_PROTOCOL, diff -r db6b630e6086 -r dc955d274130 src/mail/ngx_mail_smtp_module.c --- a/src/mail/ngx_mail_smtp_module.c Wed Oct 20 09:50:02 2021 +0300 +++ b/src/mail/ngx_mail_smtp_module.c Wed Oct 20 09:45:34 2021 +0300 @@ -39,6 +39,7 @@ static ngx_str_t ngx_mail_smtp_auth_met static ngx_mail_protocol_t ngx_mail_smtp_protocol = { ngx_string("smtp"), + ngx_string("\x04smtp"), { 25, 465, 587, 0 }, NGX_MAIL_SMTP_PROTOCOL, diff -r db6b630e6086 -r dc955d274130 src/mail/ngx_mail_ssl_module.c --- a/src/mail/ngx_mail_ssl_module.c Wed Oct 20 09:50:02 2021 +0300 +++ b/src/mail/ngx_mail_ssl_module.c Wed Oct 20 09:45:34 2021 +0300 @@ -14,6 +14,12 @@ #define NGX_DEFAULT_ECDH_CURVE "auto" +#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation +static int ngx_mail_ssl_alpn_select(ngx_ssl_conn_t *ssl_conn, + const unsigned char **out, unsigned char *outlen, + const unsigned char *in, unsigned int inlen, void *arg); +#endif + static void *ngx_mail_ssl_create_conf(ngx_conf_t *cf); static char *ngx_mail_ssl_merge_conf(ngx_conf_t *cf, void *parent, void *child); @@ -244,6 +250,54 @@ ngx_module_t ngx_mail_ssl_module = { static ngx_str_t ngx_mail_ssl_sess_id_ctx = ngx_string("MAIL"); +#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation + +static int +ngx_mail_ssl_alpn_select(ngx_ssl_conn_t *ssl_conn, const unsigned char **out, + unsigned char *outlen, const unsigned char *in, unsigned int inlen, + void *arg) +{ + unsigned int srvlen; + unsigned char *srv; + ngx_connection_t *c; + ngx_mail_session_t *s; + ngx_mail_core_srv_conf_t *cscf; +#if (NGX_DEBUG) + unsigned int i; +#endif + + c = ngx_ssl_get_connection(ssl_conn); + s = c->data; + +#if (NGX_DEBUG) + for 
(i = 0; i < inlen; i += in[i] + 1) { + ngx_log_debug2(NGX_LOG_DEBUG_MAIL, c->log, 0, + "SSL ALPN supported by client: %*s", + (size_t) in[i], &in[i + 1]); + } +#endif + + cscf = ngx_mail_get_module_srv_conf(s, ngx_mail_core_module); + + srv = cscf->protocol->alpn.data; + srvlen = cscf->protocol->alpn.len; + + if (SSL_select_next_proto((unsigned char **) out, outlen, srv, srvlen, + in, inlen) + != OPENSSL_NPN_NEGOTIATED) + { + return SSL_TLSEXT_ERR_ALERT_FATAL; + } + + ngx_log_debug2(NGX_LOG_DEBUG_MAIL, c->log, 0, + "SSL ALPN selected: %*s", (size_t) *outlen, *out); + + return SSL_TLSEXT_ERR_OK; +} + +#endif + + static void * ngx_mail_ssl_create_conf(ngx_conf_t *cf) { @@ -394,6 +448,10 @@ ngx_mail_ssl_merge_conf(ngx_conf_t *cf, cln->handler = ngx_ssl_cleanup_ctx; cln->data = &conf->ssl; +#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation + SSL_CTX_set_alpn_select_cb(conf->ssl.ctx, ngx_mail_ssl_alpn_select, NULL); +#endif + if (ngx_ssl_ciphers(cf, &conf->ssl, &conf->ciphers, conf->prefer_server_ciphers) != NGX_OK) From mdounin at mdounin.ru Thu Oct 21 03:32:38 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 21 Oct 2021 06:32:38 +0300 Subject: [PATCH] Add optional "mp4_exact_start" nginx config off/on to show video between keyframes In-Reply-To: <20241A9E-BDF1-42D8-9848-AF628717EFE3@archive.org> References: <20210628095320.px3ggmmoyjalyv5m@Romans-MacBook-Pro.local> <5F32216C-A041-454C-A73C-0E1C259E434C@archive.org> <20210930134811.epttik4joflf2qj6@Romans-MacBook-Pro.local> <20241A9E-BDF1-42D8-9848-AF628717EFE3@archive.org> Message-ID: Hello! On Mon, Oct 04, 2021 at 03:41:47PM -0700, Tracey Jaquith wrote: > Hi Roman, > > OK, thanks! > > I've tested this on macosx & linux, so far with: chrome, safari, Firefox and iOS. > > However, I'm seeing Firefox is having alternate behavior where it plays video from the prior keyframe, > without audio, until it hits the desired start time in at least one video, though it's not consistently doing this. 
> I suspect it's the edit list -- a nice solve for this. > I've had minor issues with edit lists in the past, for what that's worth. Thanks for testing. Just for the record: https://bugzilla.mozilla.org/show_bug.cgi?id=1735300 Hopefully this will be eventually fixed. [...] > And deep apologies... > > Another problem is track delay > > I *should have* mentioned when I initially wrote in, that I was aware of the a/v sync slight slip > -- and that in practice and running for over 3 months now, it hasn't seemed to be any kind of issue. > > Assuming: > * the average (US TV) video might be 29.97 fps > * and thus timescale / duration of 30000 / 1001 > * and that a typical max distance between keyframe GOPs w/ ffmpeg encoders and similar is 300 frames or about 10s > > Then: > * with a max of 10s between keyframes > * and 300 frames max would get "sped up" from 1001 => 1 > > Then we're looking at a maximum additional video frames duration of 1/100th of a second. > > (300 * 1001 / 30000) == 10.01 > > (300 * 1 / 30000) == 0.01 > > So the most the A/V sync could "drift" from those early added frames is 1/100th of a second, > where average might be 2-3x smaller than that. > In practice, it didn't seem noticeable -- > but I am quite impressed by your desire to minimize/eliminate that. > (In practice, from the broadcasters at least in the US, 1/100th of a second A/V slip is not uncommon). While it works quite well with timescale 30000, it is not uncommon for video tracks to have timescale 25 or so. For example, the test video in the ticket linked above uses timescale 24. With such a timescale, resulting desync will be much more noticeable. 
-- Maxim Dounin http://mdounin.ru/ From ts.stadler at gmx.de Thu Oct 21 09:36:00 2021 From: ts.stadler at gmx.de (Tobias Stadler) Date: Thu, 21 Oct 2021 11:36:00 +0200 Subject: http_proxy_module hooks Message-ID: Hello everyone, Does the http_proxy (or the http_upstream) module provide any hooks (for a 3rd party plugin) to intercept/process the request to the upstream server/the response received by the upstream server? Best regards Tobias From arut at nginx.com Thu Oct 21 12:15:06 2021 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 21 Oct 2021 15:15:06 +0300 Subject: performance is affected after merge OCSP changeset In-Reply-To: <59EC8F20-542D-4620-889B-05ADAC9E9864@nginx.com> References: <59EC8F20-542D-4620-889B-05ADAC9E9864@nginx.com> Message-ID: <20211021121506.fknsphvlhluk4qqh@Romans-MacBook-Pro.local> On Tue, Oct 19, 2021 at 01:07:56PM +0300, Sergey Kandaurov wrote: > > > On 12 Oct 2021, at 14:31, Sergey Kandaurov wrote: > > > > > >> On 12 Oct 2021, at 10:41, sun edward wrote: > >> > >> Hi, > >> There is a changeset fe919fd63b0b "client certificate validation with OCSP" , after merge this changeset, the performance seems not as good as before, the avg response time increased about 50~60ms. is there a way to optimize this problem? > >> > > > > Are you referring to processing 0-RTT HTTP/3 requests? > > > > Anyway, please try this change and report back. > > > > # HG changeset patch > > # User Sergey Kandaurov > > # Date 1634038108 -10800 > > # Tue Oct 12 14:28:28 2021 +0300 > > # Branch quic > > # Node ID af4bd86814fdd0a2da3f7b8a965c41923ebeedd5 > > # Parent 9d47948842a3fd1c658a9676e638ef66207ffdcd > > QUIC: speeding up processing 0-RTT. > > > > After fe919fd63b0b, processing 0-RTT was postponed until after handshake > > completion (typically seen as 2-RTT), including both ssl_ocsp on and off. > > This change allows to start OCSP checks with reused SSL handshakes, > > which eliminates 1 additional RTT allowing to process 0-RTT as expected. 
> > > > diff --git a/src/event/quic/ngx_event_quic_ssl.c b/src/event/quic/ngx_event_quic_ssl.c > > --- a/src/event/quic/ngx_event_quic_ssl.c > > +++ b/src/event/quic/ngx_event_quic_ssl.c > > @@ -410,6 +410,10 @@ ngx_quic_crypto_input(ngx_connection_t * > > return NGX_ERROR; > > } > > > > + if (SSL_session_reused(c->ssl->connection)) { > > + goto ocsp; > > + } > > + > > return NGX_OK; > > } > > > > @@ -463,6 +467,7 @@ ngx_quic_crypto_input(ngx_connection_t * > > return NGX_ERROR; > > } > > > > +ocsp: > > rc = ngx_ssl_ocsp_validate(c); > > > > if (rc == NGX_ERROR) { > > > > Below is alternative patch, it brings closer to how OCSP validation > is done with SSL_read_early_data(), with its inherent design flaws. > Namely, the case of regular SSL session reuse is still pessimized, > but that shouldn't bring further slowdown with ssl_ocsp disabled, > which is slow by itself. > > # HG changeset patch > # User Sergey Kandaurov > # Date 1634637049 -10800 > # Tue Oct 19 12:50:49 2021 +0300 > # Branch quic > # Node ID 6f26d6656b4ef97a3a245354bd7fa9e5c8671237 > # Parent 1798acc01970ae5a03f785b7679fe34c32adcfea > QUIC: speeding up processing 0-RTT. > > After fe919fd63b0b, processing QUIC streams was postponed until after handshake > completion, which means that 0-RTT is effectively off. With ssl_ocsp enabled, > it could be further delayed. This differs to how SSL_read_early_data() works. differs FROM ? > This change unlocks processing streams on successful 0-RTT packet decryption. 
> > diff --git a/src/event/quic/ngx_event_quic.c b/src/event/quic/ngx_event_quic.c > --- a/src/event/quic/ngx_event_quic.c > +++ b/src/event/quic/ngx_event_quic.c > @@ -989,6 +989,21 @@ ngx_quic_process_payload(ngx_connection_ > } > } > > + if (pkt->level == ssl_encryption_early_data && !qc->streams.initialized) { > + rc = ngx_ssl_ocsp_validate(c); > + > + if (rc == NGX_ERROR) { > + return NGX_ERROR; > + } > + > + if (rc == NGX_AGAIN) { > + c->ssl->handler = ngx_quic_init_streams; > + > + } else { > + ngx_quic_init_streams(c); > + } > + } > + > if (pkt->level == ssl_encryption_handshake) { > /* > * RFC 9001, 4.9.1. Discarding Initial Keys > diff --git a/src/event/quic/ngx_event_quic_ssl.c b/src/event/quic/ngx_event_quic_ssl.c > --- a/src/event/quic/ngx_event_quic_ssl.c > +++ b/src/event/quic/ngx_event_quic_ssl.c > @@ -463,6 +463,11 @@ ngx_quic_crypto_input(ngx_connection_t * > return NGX_ERROR; > } > > + if (qc->streams.initialized) { > + /* done while processing 0-RTT */ > + return NGX_OK; > + } > + > rc = ngx_ssl_ocsp_validate(c); > > if (rc == NGX_ERROR) { > > > -- > Sergey Kandaurov > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel It would be nice to always call ngx_ssl_ocsp_validate() from the same source file (presumably ngx_event_quic_ssl.c). But this does not seem to occur naturally so let's leave it as it is. Looks good. PS: Also, this can be further refactored to move ngx_ssl_ocsp_validate() inside ngx_quic_init_streams(). In this case we can only call ngx_quic_init_streams() both times. -- Roman Arutyunyan From arut at nginx.com Thu Oct 21 13:40:58 2021 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 21 Oct 2021 16:40:58 +0300 Subject: [PATCH 0 of 2] HTTP/3 Insert Count Increment delay Message-ID: The series introduces a delay for Insert Count Increment instruction. 
From arut at nginx.com Thu Oct 21 13:40:59 2021 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 21 Oct 2021 16:40:59 +0300 Subject: [PATCH 1 of 2] QUIC: allowed main QUIC connection for some operations In-Reply-To: References: Message-ID: <8b049432ef2dcdb8d1a8.1634823659@arut-laptop> # HG changeset patch # User Roman Arutyunyan # Date 1634219818 -10800 # Thu Oct 14 16:56:58 2021 +0300 # Branch quic # Node ID 8b049432ef2dcdb8d1a8ec1a5e41c0a340285b65 # Parent 404de224517e33f685613d6425dcdb3c8ef5b97e QUIC: allowed main QUIC connection for some operations. Operations like ngx_quic_open_stream(), ngx_http_quic_get_connection(), ngx_http_v3_finalize_connection(), ngx_http_v3_shutdown_connection() used to receive a QUIC stream connection. Now they can receive the main QUIC connection as well. This is useful when calling them out of a stream context. diff --git a/src/event/quic/ngx_event_quic_streams.c b/src/event/quic/ngx_event_quic_streams.c --- a/src/event/quic/ngx_event_quic_streams.c +++ b/src/event/quic/ngx_event_quic_streams.c @@ -35,11 +35,12 @@ ngx_connection_t * ngx_quic_open_stream(ngx_connection_t *c, ngx_uint_t bidi) { uint64_t id; - ngx_quic_stream_t *qs, *nqs; + ngx_connection_t *pc; + ngx_quic_stream_t *nqs; ngx_quic_connection_t *qc; - qs = c->quic; - qc = ngx_quic_get_connection(qs->parent); + pc = c->quic ? 
c->quic->parent : c; + qc = ngx_quic_get_connection(pc); if (bidi) { if (qc->streams.server_streams_bidi @@ -85,7 +86,7 @@ ngx_quic_open_stream(ngx_connection_t *c qc->streams.server_streams_uni++; } - nqs = ngx_quic_create_stream(qs->parent, id); + nqs = ngx_quic_create_stream(pc, id); if (nqs == NULL) { return NULL; } diff --git a/src/http/modules/ngx_http_quic_module.h b/src/http/modules/ngx_http_quic_module.h --- a/src/http/modules/ngx_http_quic_module.h +++ b/src/http/modules/ngx_http_quic_module.h @@ -19,7 +19,8 @@ #define ngx_http_quic_get_connection(c) \ - ((ngx_http_connection_t *) (c)->quic->parent->data) + ((ngx_http_connection_t *) ((c)->quic ? (c)->quic->parent->data \ + : (c)->data)) ngx_int_t ngx_http_quic_init(ngx_connection_t *c); diff --git a/src/http/v3/ngx_http_v3.c b/src/http/v3/ngx_http_v3.c --- a/src/http/v3/ngx_http_v3.c +++ b/src/http/v3/ngx_http_v3.c @@ -70,8 +70,8 @@ ngx_http_v3_keepalive_handler(ngx_event_ ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0, "http3 keepalive handler"); - ngx_quic_finalize_connection(c, NGX_HTTP_V3_ERR_NO_ERROR, - "keepalive timeout"); + ngx_http_v3_finalize_connection(c, NGX_HTTP_V3_ERR_NO_ERROR, + "keepalive timeout"); } diff --git a/src/http/v3/ngx_http_v3.h b/src/http/v3/ngx_http_v3.h --- a/src/http/v3/ngx_http_v3.h +++ b/src/http/v3/ngx_http_v3.h @@ -85,10 +85,12 @@ module) #define ngx_http_v3_finalize_connection(c, code, reason) \ - ngx_quic_finalize_connection(c->quic->parent, code, reason) + ngx_quic_finalize_connection((c)->quic ? (c)->quic->parent : (c), \ + code, reason) #define ngx_http_v3_shutdown_connection(c, code, reason) \ - ngx_quic_shutdown_connection(c->quic->parent, code, reason) + ngx_quic_shutdown_connection((c)->quic ? 
(c)->quic->parent : (c), \ + code, reason) typedef struct { From arut at nginx.com Thu Oct 21 13:41:00 2021 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 21 Oct 2021 16:41:00 +0300 Subject: [PATCH 2 of 2] HTTP/3: delayed Insert Count Increment instruction In-Reply-To: References: Message-ID: # HG changeset patch # User Roman Arutyunyan # Date 1634804424 -10800 # Thu Oct 21 11:20:24 2021 +0300 # Branch quic # Node ID e2d65b59ccb9035cbd619358a121ba5bcca3404a # Parent 8b049432ef2dcdb8d1a8ec1a5e41c0a340285b65 HTTP/3: delayed Insert Count Increment instruction. Sending the instruction is delayed until the end of the current event cycle. Delaying the instruction is allowed by quic-qpack-21, section 2.2.2.3. The goal is to reduce the amount of data sent back to client by accumulating inserts. diff --git a/src/http/v3/ngx_http_v3.c b/src/http/v3/ngx_http_v3.c --- a/src/http/v3/ngx_http_v3.c +++ b/src/http/v3/ngx_http_v3.c @@ -47,6 +47,10 @@ ngx_http_v3_init_session(ngx_connection_ h3c->keepalive.handler = ngx_http_v3_keepalive_handler; h3c->keepalive.cancelable = 1; + h3c->table.send_insert_count.log = pc->log; + h3c->table.send_insert_count.data = pc; + h3c->table.send_insert_count.handler = ngx_http_v3_inc_insert_count_handler; + cln = ngx_pool_cleanup_add(pc->pool, 0); if (cln == NULL) { return NGX_ERROR; @@ -85,6 +89,10 @@ ngx_http_v3_cleanup_session(void *data) if (h3c->keepalive.timer_set) { ngx_del_timer(&h3c->keepalive); } + + if (h3c->table.send_insert_count.posted) { + ngx_delete_posted_event(&h3c->table.send_insert_count); + } } diff --git a/src/http/v3/ngx_http_v3_parse.c b/src/http/v3/ngx_http_v3_parse.c --- a/src/http/v3/ngx_http_v3_parse.c +++ b/src/http/v3/ngx_http_v3_parse.c @@ -395,6 +395,8 @@ done: if (ngx_http_v3_send_ack_section(c, c->quic->id) != NGX_OK) { return NGX_ERROR; } + + ngx_http_v3_ack_insert_count(c, st->prefix.insert_count); } st->state = sw_start; diff --git a/src/http/v3/ngx_http_v3_tables.c b/src/http/v3/ngx_http_v3_tables.c --- 
a/src/http/v3/ngx_http_v3_tables.c +++ b/src/http/v3/ngx_http_v3_tables.c @@ -232,11 +232,9 @@ ngx_http_v3_insert(ngx_connection_t *c, dt->elts[dt->nelts++] = field; dt->size += size; - /* TODO increment can be sent less often */ + dt->insert_count++; - if (ngx_http_v3_send_inc_insert_count(c, 1) != NGX_OK) { - return NGX_ERROR; - } + ngx_post_event(&dt->send_insert_count, &ngx_posted_events); if (ngx_http_v3_new_entry(c) != NGX_OK) { return NGX_ERROR; @@ -246,6 +244,34 @@ ngx_http_v3_insert(ngx_connection_t *c, } +void +ngx_http_v3_inc_insert_count_handler(ngx_event_t *ev) +{ + ngx_connection_t *c; + ngx_http_v3_session_t *h3c; + ngx_http_v3_dynamic_table_t *dt; + + c = ev->data; + + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0, + "http3 inc insert count handler"); + + h3c = ngx_http_v3_get_session(c); + dt = &h3c->table; + + if (dt->insert_count > dt->ack_insert_count) { + if (ngx_http_v3_send_inc_insert_count(c, + dt->insert_count - dt->ack_insert_count) + != NGX_OK) + { + return; + } + + dt->ack_insert_count = dt->insert_count; + } +} + + ngx_int_t ngx_http_v3_set_capacity(ngx_connection_t *c, ngx_uint_t capacity) { @@ -603,6 +629,21 @@ ngx_http_v3_check_insert_count(ngx_conne } +void +ngx_http_v3_ack_insert_count(ngx_connection_t *c, uint64_t insert_count) +{ + ngx_http_v3_session_t *h3c; + ngx_http_v3_dynamic_table_t *dt; + + h3c = ngx_http_v3_get_session(c); + dt = &h3c->table; + + if (dt->ack_insert_count < insert_count) { + dt->ack_insert_count = insert_count; + } +} + + static void ngx_http_v3_unblock(void *data) { diff --git a/src/http/v3/ngx_http_v3_tables.h b/src/http/v3/ngx_http_v3_tables.h --- a/src/http/v3/ngx_http_v3_tables.h +++ b/src/http/v3/ngx_http_v3_tables.h @@ -26,9 +26,13 @@ typedef struct { ngx_uint_t base; size_t size; size_t capacity; + uint64_t insert_count; + uint64_t ack_insert_count; + ngx_event_t send_insert_count; } ngx_http_v3_dynamic_table_t; +void ngx_http_v3_inc_insert_count_handler(ngx_event_t *ev); void 
ngx_http_v3_cleanup_table(ngx_http_v3_session_t *h3c); ngx_int_t ngx_http_v3_ref_insert(ngx_connection_t *c, ngx_uint_t dynamic, ngx_uint_t index, ngx_str_t *value); @@ -46,6 +50,7 @@ ngx_int_t ngx_http_v3_decode_insert_coun ngx_uint_t *insert_count); ngx_int_t ngx_http_v3_check_insert_count(ngx_connection_t *c, ngx_uint_t insert_count); +void ngx_http_v3_ack_insert_count(ngx_connection_t *c, uint64_t insert_count); ngx_int_t ngx_http_v3_set_param(ngx_connection_t *c, uint64_t id, uint64_t value); From mdounin at mdounin.ru Thu Oct 21 15:40:40 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 21 Oct 2021 15:40:40 +0000 Subject: [nginx] Removed CLOCK_MONOTONIC_COARSE support. Message-ID: details: https://hg.nginx.org/nginx/rev/9e7de0547f09 branches: changeset: 7939:9e7de0547f09 user: Maxim Dounin date: Thu Oct 21 18:38:38 2021 +0300 description: Removed CLOCK_MONOTONIC_COARSE support. While clock_gettime(CLOCK_MONOTONIC_COARSE) is faster than clock_gettime(CLOCK_MONOTONIC), the latter is fast enough on Linux for practical usage, and the difference is negligible compared to other costs at each event loop iteration. On the other hand, CLOCK_MONOTONIC_COARSE causes various issues with typical CONFIG_HZ=250, notably very inaccurate limit_rate handling in some edge cases (ticket #1678) and negative difference between $request_time and $upstream_response_time (ticket #1965). 
diffstat: src/core/ngx_times.c | 4 ---- 1 files changed, 0 insertions(+), 4 deletions(-) diffs (14 lines): diff -r dc955d274130 -r 9e7de0547f09 src/core/ngx_times.c --- a/src/core/ngx_times.c Wed Oct 20 09:45:34 2021 +0300 +++ b/src/core/ngx_times.c Thu Oct 21 18:38:38 2021 +0300 @@ -200,10 +200,6 @@ ngx_monotonic_time(time_t sec, ngx_uint_ #if defined(CLOCK_MONOTONIC_FAST) clock_gettime(CLOCK_MONOTONIC_FAST, &ts); - -#elif defined(CLOCK_MONOTONIC_COARSE) - clock_gettime(CLOCK_MONOTONIC_COARSE, &ts); - #else clock_gettime(CLOCK_MONOTONIC, &ts); #endif From mdounin at mdounin.ru Thu Oct 21 15:40:51 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 21 Oct 2021 18:40:51 +0300 Subject: [PATCH] Removed CLOCK_MONOTONIC_COARSE support In-Reply-To: <0EA142F7-4101-4DE6-BBDA-9E7F8CB97699@nginx.com> References: <3217b92006f8807d1613.1633978413@vm-bsd.mdounin.ru> <0EA142F7-4101-4DE6-BBDA-9E7F8CB97699@nginx.com> Message-ID: Hello! On Tue, Oct 19, 2021 at 03:07:24PM +0300, Sergey Kandaurov wrote: > > On 11 Oct 2021, at 21:53, Maxim Dounin wrote: > > > > # HG changeset patch > > # User Maxim Dounin > > # Date 1633978301 -10800 > > # Mon Oct 11 21:51:41 2021 +0300 > > # Node ID 3217b92006f8807d16134246a064baab64fa7b32 > > # Parent ae7c767aa491fa55d3168dfc028a22f43ac8cf89 > > Removed CLOCK_MONOTONIC_COARSE support. > > > > While clock_gettime(CLOCK_MONOTONIC_COARSE) is faster than > > clock_gettime(CLOCK_MONOTONIC), the latter is fast enough on Linux for > > practical usage, and the difference is negligible compared to other costs > > at each event loop iteration. On the other hand, CLOCK_MONOTONIC_COARSE > > causes various issues with typical CONFIG_HZ=250, notably very inacurate > > "inaccurate" Fixed, thnx. > > limit_rate handling in some edge cases (ticket #1678) and negative difference > > between $request_time and $upstream_response_time (ticket #1965). 
> > > > diff --git a/src/core/ngx_times.c b/src/core/ngx_times.c > > --- a/src/core/ngx_times.c > > +++ b/src/core/ngx_times.c > > @@ -200,10 +200,6 @@ ngx_monotonic_time(time_t sec, ngx_uint_ > > > > #if defined(CLOCK_MONOTONIC_FAST) > > clock_gettime(CLOCK_MONOTONIC_FAST, &ts); > > - > > -#elif defined(CLOCK_MONOTONIC_COARSE) > > - clock_gettime(CLOCK_MONOTONIC_COARSE, &ts); > > - > > #else > > clock_gettime(CLOCK_MONOTONIC, &ts); > > #endif > > > > While fast clock is certainly important in general, > and _COARSE is faster even in userspace clock_gettime(), > I tend to agree that it has too coarse granularity, > which causes more harm than good. Committed, thnx for the review. -- Maxim Dounin http://mdounin.ru/ From bcodding at redhat.com Thu Oct 21 16:20:21 2021 From: bcodding at redhat.com (Benjamin Coddington) Date: Thu, 21 Oct 2021 12:20:21 -0400 Subject: NGX_STREAM_UPS_CONF handling Message-ID: <33D221CB-73D3-418B-BB35-C315115E0199@redhat.com> Hi devs, I'm new here, be gentle. I'm hacking up a tls offloading proxy for sunrpc, getting up to speed on nginx codebase, but I'm sinking too much time into figuring something: I'd like to use a flag configuration directive within an upstream config stanza: static ngx_command_t ngx_stream_rpc_tls_commands[] = { { ngx_string("rpc_tls_client"), NGX_STREAM_SRV_CONF|NGX_STREAM_UPS_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, NGX_STREAM_SRV_CONF_OFFSET, offsetof(ngx_stream_rpc_tls_conf_t, client), NULL }, ngx_null_command }; .. but when I go to merge server configs, the value is always unset. The debugger showed me that the upstream module does its own pass at module->create_srv_conf and ngx_conf_parse() so the configuration, while set, isn't there in my merge server config function, it's somewhere else and I'm not sure how to access it. I'm at the point where it feels like I'm "doing it wrong". 
What's the correct way to have both NGX_STREAM_SRV_CONF|NGX_CONF_FLAG and NGX_STREAM_SRV_CONF|NGX_STREAM_UPS_CONF|NGX_CONF_FLAG directives in the same module that can be handled by the same module's configuration? Ben From pluknet at nginx.com Thu Oct 21 16:41:05 2021 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 21 Oct 2021 19:41:05 +0300 Subject: performance is affected after merge OCSP changeset In-Reply-To: <20211021121506.fknsphvlhluk4qqh@Romans-MacBook-Pro.local> References: <59EC8F20-542D-4620-889B-05ADAC9E9864@nginx.com> <20211021121506.fknsphvlhluk4qqh@Romans-MacBook-Pro.local> Message-ID: <90E56409-D9A7-4A5E-980E-A2920E8E3305@nginx.com> > On 21 Oct 2021, at 15:15, Roman Arutyunyan wrote: > > On Tue, Oct 19, 2021 at 01:07:56PM +0300, Sergey Kandaurov wrote: >> >> [..] >> Below is alternative patch, it brings closer to how OCSP validation >> is done with SSL_read_early_data(), with its inherent design flaws. >> Namely, the case of regular SSL session reuse is still pessimized, >> but that shouldn't bring further slowdown with ssl_ocsp disabled, >> which is slow by itself. >> >> # HG changeset patch >> # User Sergey Kandaurov >> # Date 1634637049 -10800 >> # Tue Oct 19 12:50:49 2021 +0300 >> # Branch quic >> # Node ID 6f26d6656b4ef97a3a245354bd7fa9e5c8671237 >> # Parent 1798acc01970ae5a03f785b7679fe34c32adcfea >> QUIC: speeding up processing 0-RTT. >> >> After fe919fd63b0b, processing QUIC streams was postponed until after handshake >> completion, which means that 0-RTT is effectively off. With ssl_ocsp enabled, >> it could be further delayed. This differs to how SSL_read_early_data() works. > > differs FROM ? > >> This change unlocks processing streams on successful 0-RTT packet decryption. >> Both forms seem to be used, but "differs to" looks less popular. Rewrote it this way: This differs from how OCSP validation works with SSL_read_early_data(). 
>> diff --git a/src/event/quic/ngx_event_quic.c b/src/event/quic/ngx_event_quic.c >> --- a/src/event/quic/ngx_event_quic.c >> +++ b/src/event/quic/ngx_event_quic.c >> @@ -989,6 +989,21 @@ ngx_quic_process_payload(ngx_connection_ >> } >> } >> >> + if (pkt->level == ssl_encryption_early_data && !qc->streams.initialized) { >> + rc = ngx_ssl_ocsp_validate(c); >> + >> + if (rc == NGX_ERROR) { >> + return NGX_ERROR; >> + } >> + >> + if (rc == NGX_AGAIN) { >> + c->ssl->handler = ngx_quic_init_streams; >> + >> + } else { >> + ngx_quic_init_streams(c); >> + } >> + } >> + >> if (pkt->level == ssl_encryption_handshake) { >> /* >> * RFC 9001, 4.9.1. Discarding Initial Keys >> diff --git a/src/event/quic/ngx_event_quic_ssl.c b/src/event/quic/ngx_event_quic_ssl.c >> --- a/src/event/quic/ngx_event_quic_ssl.c >> +++ b/src/event/quic/ngx_event_quic_ssl.c >> @@ -463,6 +463,11 @@ ngx_quic_crypto_input(ngx_connection_t * >> return NGX_ERROR; >> } >> >> + if (qc->streams.initialized) { >> + /* done while processing 0-RTT */ >> + return NGX_OK; >> + } >> + >> rc = ngx_ssl_ocsp_validate(c); >> >> if (rc == NGX_ERROR) { >> > > It would be nice to always call ngx_ssl_ocsp_validate() from the same source > file (presumably ngx_event_quic_ssl.c). But this does not seem to occur > naturally so let's leave it as it is. > > Looks good. > > PS: Also, this can be further refactored to move ngx_ssl_ocsp_validate() inside > ngx_quic_init_streams(). In this case we can only call ngx_quic_init_streams() > both times. This is feasible, if init streams closer to obtaining 0-RTT secret. Actually, it is even better, I believe, and it's invoked just once regardless of the number of 0-RTT packets. Requirement for successful 0-RTT decryption doesn't buy us much. N.B. I decided to leave in place "quic init streams" debug. This is where streams are now actually initialized, and it looks reasonable to see that logged only once. 
# HG changeset patch # User Sergey Kandaurov # Date 1634832181 -10800 # Thu Oct 21 19:03:01 2021 +0300 # Branch quic # Node ID 11119f9fda599c890a93b348310f582e3c49ebb7 # Parent 1798acc01970ae5a03f785b7679fe34c32adcfea QUIC: refactored OCSP validation in preparation for 0-RTT support. diff --git a/src/event/quic/ngx_event_quic_ssl.c b/src/event/quic/ngx_event_quic_ssl.c --- a/src/event/quic/ngx_event_quic_ssl.c +++ b/src/event/quic/ngx_event_quic_ssl.c @@ -361,7 +361,6 @@ static ngx_int_t ngx_quic_crypto_input(ngx_connection_t *c, ngx_chain_t *data) { int n, sslerr; - ngx_int_t rc; ngx_buf_t *b; ngx_chain_t *cl; ngx_ssl_conn_t *ssl_conn; @@ -463,19 +462,10 @@ ngx_quic_crypto_input(ngx_connection_t * return NGX_ERROR; } - rc = ngx_ssl_ocsp_validate(c); - - if (rc == NGX_ERROR) { + if (ngx_quic_init_streams(c) != NGX_OK) { return NGX_ERROR; } - if (rc == NGX_AGAIN) { - c->ssl->handler = ngx_quic_init_streams; - return NGX_OK; - } - - ngx_quic_init_streams(c); - return NGX_OK; } diff --git a/src/event/quic/ngx_event_quic_streams.c b/src/event/quic/ngx_event_quic_streams.c --- a/src/event/quic/ngx_event_quic_streams.c +++ b/src/event/quic/ngx_event_quic_streams.c @@ -16,6 +16,7 @@ static ngx_quic_stream_t *ngx_quic_create_client_stream(ngx_connection_t *c, uint64_t id); static ngx_int_t ngx_quic_init_stream(ngx_quic_stream_t *qs); +static void ngx_quic_init_streams_handler(ngx_connection_t *c); static ngx_quic_stream_t *ngx_quic_create_stream(ngx_connection_t *c, uint64_t id); static void ngx_quic_empty_handler(ngx_event_t *ev); @@ -369,9 +370,37 @@ ngx_quic_init_stream(ngx_quic_stream_t * } -void +ngx_int_t ngx_quic_init_streams(ngx_connection_t *c) { + ngx_int_t rc; + ngx_quic_connection_t *qc; + + qc = ngx_quic_get_connection(c); + + if (qc->streams.initialized) { + return NGX_OK; + } + + rc = ngx_ssl_ocsp_validate(c); + + if (rc == NGX_ERROR) { + return NGX_ERROR; + } + + if (rc == NGX_AGAIN) { + c->ssl->handler = ngx_quic_init_streams_handler; + return NGX_OK; + } 
+ + ngx_quic_init_streams_handler(c); + + return NGX_OK; +} + +static void +ngx_quic_init_streams_handler(ngx_connection_t *c) +{ ngx_queue_t *q; ngx_quic_stream_t *qs; ngx_quic_connection_t *qc; diff --git a/src/event/quic/ngx_event_quic_streams.h b/src/event/quic/ngx_event_quic_streams.h --- a/src/event/quic/ngx_event_quic_streams.h +++ b/src/event/quic/ngx_event_quic_streams.h @@ -31,7 +31,7 @@ ngx_int_t ngx_quic_handle_stop_sending_f ngx_int_t ngx_quic_handle_max_streams_frame(ngx_connection_t *c, ngx_quic_header_t *pkt, ngx_quic_max_streams_frame_t *f); -void ngx_quic_init_streams(ngx_connection_t *c); +ngx_int_t ngx_quic_init_streams(ngx_connection_t *c); void ngx_quic_rbtree_insert_stream(ngx_rbtree_node_t *temp, ngx_rbtree_node_t *node, ngx_rbtree_node_t *sentinel); ngx_quic_stream_t *ngx_quic_find_stream(ngx_rbtree_t *rbtree, # HG changeset patch # User Sergey Kandaurov # Date 1634832186 -10800 # Thu Oct 21 19:03:06 2021 +0300 # Branch quic # Node ID b53e361bee7dfbb027507a717e6648234a06ef13 # Parent 11119f9fda599c890a93b348310f582e3c49ebb7 QUIC: speeding up processing 0-RTT. After fe919fd63b0b, processing QUIC streams was postponed until after handshake completion, which means that 0-RTT is effectively off. With ssl_ocsp enabled, it could be further delayed. This differs from how OCSP validation works with SSL_read_early_data(). With this change, processing QUIC streams is unlocked when obtaining 0-RTT secret. 
diff --git a/src/event/quic/ngx_event_quic_ssl.c b/src/event/quic/ngx_event_quic_ssl.c --- a/src/event/quic/ngx_event_quic_ssl.c +++ b/src/event/quic/ngx_event_quic_ssl.c @@ -71,8 +71,20 @@ ngx_quic_set_read_secret(ngx_ssl_conn_t secret_len, rsecret); #endif - return ngx_quic_keys_set_encryption_secret(c->pool, 0, qc->keys, level, - cipher, rsecret, secret_len); + if (ngx_quic_keys_set_encryption_secret(c->pool, 0, qc->keys, level, + cipher, rsecret, secret_len) + != 1) + { + return 0; + } + + if (level == ssl_encryption_early_data) { + if (ngx_quic_init_streams(c) != NGX_OK) { + return 0; + } + } + + return 1; } @@ -131,6 +143,10 @@ ngx_quic_set_encryption_secrets(ngx_ssl_ } if (level == ssl_encryption_early_data) { + if (ngx_quic_init_streams(c) != NGX_OK) { + return 0; + } + return 1; } -- Sergey Kandaurov From mdounin at mdounin.ru Thu Oct 21 19:46:32 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 21 Oct 2021 19:46:32 +0000 Subject: [nginx] Style: added missing "static" specifiers. Message-ID: details: https://hg.nginx.org/nginx/rev/46a02ed7c966 branches: changeset: 7940:46a02ed7c966 user: Maxim Dounin date: Thu Oct 21 18:43:13 2021 +0300 description: Style: added missing "static" specifiers. Mostly found by gcc -Wtraditional, per "non-static declaration of ... follows static declaration [-Wtraditional]" warnings. 
diffstat: src/event/ngx_event_openssl.c | 2 +- src/stream/ngx_stream_ssl_module.c | 7 ++++--- 2 files changed, 5 insertions(+), 4 deletions(-) diffs (43 lines): diff -r 9e7de0547f09 -r 46a02ed7c966 src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c Thu Oct 21 18:38:38 2021 +0300 +++ b/src/event/ngx_event_openssl.c Thu Oct 21 18:43:13 2021 +0300 @@ -2767,7 +2767,7 @@ ngx_ssl_write(ngx_connection_t *c, u_cha #ifdef SSL_READ_EARLY_DATA_SUCCESS -ssize_t +static ssize_t ngx_ssl_write_early(ngx_connection_t *c, u_char *data, size_t size) { int n, sslerr; diff -r 9e7de0547f09 -r 46a02ed7c966 src/stream/ngx_stream_ssl_module.c --- a/src/stream/ngx_stream_ssl_module.c Thu Oct 21 18:38:38 2021 +0300 +++ b/src/stream/ngx_stream_ssl_module.c Thu Oct 21 18:43:13 2021 +0300 @@ -23,7 +23,8 @@ static ngx_int_t ngx_stream_ssl_init_con ngx_connection_t *c); static void ngx_stream_ssl_handshake_handler(ngx_connection_t *c); #ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME -int ngx_stream_ssl_servername(ngx_ssl_conn_t *ssl_conn, int *ad, void *arg); +static int ngx_stream_ssl_servername(ngx_ssl_conn_t *ssl_conn, int *ad, + void *arg); #endif #ifdef TLSEXT_TYPE_application_layer_protocol_negotiation static int ngx_stream_ssl_alpn_select(ngx_ssl_conn_t *ssl_conn, @@ -451,7 +452,7 @@ ngx_stream_ssl_handshake_handler(ngx_con #ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME -int +static int ngx_stream_ssl_servername(ngx_ssl_conn_t *ssl_conn, int *ad, void *arg) { return SSL_TLSEXT_ERR_OK; @@ -502,7 +503,7 @@ ngx_stream_ssl_alpn_select(ngx_ssl_conn_ #ifdef SSL_R_CERT_CB_ERROR -int +static int ngx_stream_ssl_certificate(ngx_ssl_conn_t *ssl_conn, void *arg) { ngx_str_t cert, key; From mdounin at mdounin.ru Thu Oct 21 19:46:35 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 21 Oct 2021 19:46:35 +0000 Subject: [nginx] SSL: SSL_sendfile() support with kernel TLS. 
Message-ID: details: https://hg.nginx.org/nginx/rev/65946a191197 branches: changeset: 7941:65946a191197 user: Maxim Dounin date: Thu Oct 21 18:44:07 2021 +0300 description: SSL: SSL_sendfile() support with kernel TLS. Requires OpenSSL 3.0 compiled with "enable-ktls" option. Further, KTLS needs to be enabled in kernel, and in OpenSSL, either via OpenSSL configuration file or with "ssl_conf_command Options KTLS;" in nginx configuration. On FreeBSD, kernel TLS is available starting with FreeBSD 13.0, and can be enabled with "sysctl kern.ipc.tls.enable=1" and "kldload ktls_ocf" to load a software backend, see man ktls(4) for details. On Linux, kernel TLS is available starting with kernel 4.13 (at least 5.2 is recommended), and needs kernel compiled with CONFIG_TLS=y (with CONFIG_TLS=m, which is used at least on Ubuntu 21.04 by default, the tls module needs to be loaded with "modprobe tls"). diffstat: src/event/ngx_event_openssl.c | 209 ++++++++++++++++++++++++++++++++++++++++- src/event/ngx_event_openssl.h | 1 + src/http/ngx_http_request.c | 2 +- src/http/ngx_http_upstream.c | 8 +- 4 files changed, 211 insertions(+), 9 deletions(-) diffs (318 lines): diff -r 46a02ed7c966 -r 65946a191197 src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c Thu Oct 21 18:43:13 2021 +0300 +++ b/src/event/ngx_event_openssl.c Thu Oct 21 18:44:07 2021 +0300 @@ -47,6 +47,8 @@ static void ngx_ssl_write_handler(ngx_ev static ssize_t ngx_ssl_write_early(ngx_connection_t *c, u_char *data, size_t size); #endif +static ssize_t ngx_ssl_sendfile(ngx_connection_t *c, ngx_buf_t *file, + size_t size); static void ngx_ssl_read_handler(ngx_event_t *rev); static void ngx_ssl_shutdown_handler(ngx_event_t *ev); static void ngx_ssl_connection_error(ngx_connection_t *c, int sslerr, @@ -1764,6 +1766,16 @@ ngx_ssl_handshake(ngx_connection_t *c) #endif #endif +#ifdef BIO_get_ktls_send + + if (BIO_get_ktls_send(SSL_get_wbio(c->ssl->connection)) == 1) { + ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, 0, 
+ "BIO_get_ktls_send(): 1"); + c->ssl->sendfile = 1; + } + +#endif + rc = ngx_ssl_ocsp_validate(c); if (rc == NGX_ERROR) { @@ -1899,6 +1911,16 @@ ngx_ssl_try_early_data(ngx_connection_t c->read->ready = 1; c->write->ready = 1; +#ifdef BIO_get_ktls_send + + if (BIO_get_ktls_send(SSL_get_wbio(c->ssl->connection)) == 1) { + ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, 0, + "BIO_get_ktls_send(): 1"); + c->ssl->sendfile = 1; + } + +#endif + rc = ngx_ssl_ocsp_validate(c); if (rc == NGX_ERROR) { @@ -2502,10 +2524,11 @@ ngx_ssl_write_handler(ngx_event_t *wev) ngx_chain_t * ngx_ssl_send_chain(ngx_connection_t *c, ngx_chain_t *in, off_t limit) { - int n; - ngx_uint_t flush; - ssize_t send, size; - ngx_buf_t *buf; + int n; + ngx_uint_t flush; + ssize_t send, size, file_size; + ngx_buf_t *buf; + ngx_chain_t *cl; if (!c->ssl->buffer) { @@ -2579,6 +2602,11 @@ ngx_ssl_send_chain(ngx_connection_t *c, continue; } + if (in->buf->in_file && c->ssl->sendfile) { + flush = 1; + break; + } + size = in->buf->last - in->buf->pos; if (size > buf->end - buf->last) { @@ -2610,8 +2638,35 @@ ngx_ssl_send_chain(ngx_connection_t *c, size = buf->last - buf->pos; if (size == 0) { + + if (in && in->buf->in_file && send < limit) { + + /* coalesce the neighbouring file bufs */ + + cl = in; + file_size = (size_t) ngx_chain_coalesce_file(&cl, limit - send); + + n = ngx_ssl_sendfile(c, in->buf, file_size); + + if (n == NGX_ERROR) { + return NGX_CHAIN_ERROR; + } + + if (n == NGX_AGAIN) { + break; + } + + in = ngx_chain_update_sent(in, n); + + send += n; + flush = 0; + + continue; + } + buf->flush = 0; c->buffered &= ~NGX_SSL_BUFFERED; + return in; } @@ -2636,7 +2691,7 @@ ngx_ssl_send_chain(ngx_connection_t *c, buf->pos = buf->start; buf->last = buf->start; - if (in == NULL || send == limit) { + if (in == NULL || send >= limit) { break; } } @@ -2882,6 +2937,150 @@ ngx_ssl_write_early(ngx_connection_t *c, #endif +static ssize_t +ngx_ssl_sendfile(ngx_connection_t *c, ngx_buf_t *file, size_t size) +{ +#ifdef 
BIO_get_ktls_send + + int sslerr; + ssize_t n; + ngx_err_t err; + + ngx_ssl_clear_error(c->log); + + ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, + "SSL to sendfile: @%O %uz", + file->file_pos, size); + + ngx_set_errno(0); + + n = SSL_sendfile(c->ssl->connection, file->file->fd, file->file_pos, + size, 0); + + ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0, "SSL_sendfile: %d", n); + + if (n > 0) { + + if (c->ssl->saved_read_handler) { + + c->read->handler = c->ssl->saved_read_handler; + c->ssl->saved_read_handler = NULL; + c->read->ready = 1; + + if (ngx_handle_read_event(c->read, 0) != NGX_OK) { + return NGX_ERROR; + } + + ngx_post_event(c->read, &ngx_posted_events); + } + + c->sent += n; + + return n; + } + + if (n == 0) { + + /* + * if sendfile returns zero, then someone has truncated the file, + * so the offset became beyond the end of the file + */ + + ngx_log_error(NGX_LOG_ALERT, c->log, 0, + "SSL_sendfile() reported that \"%s\" was truncated at %O", + file->file->name.data, file->file_pos); + + return NGX_ERROR; + } + + sslerr = SSL_get_error(c->ssl->connection, n); + + if (sslerr == SSL_ERROR_ZERO_RETURN) { + + /* + * OpenSSL fails to return SSL_ERROR_SYSCALL if an error + * happens during writing after close_notify alert from the + * peer, and returns SSL_ERROR_ZERO_RETURN instead + */ + + sslerr = SSL_ERROR_SYSCALL; + } + + if (sslerr == SSL_ERROR_SSL + && ERR_GET_REASON(ERR_peek_error()) == SSL_R_UNINITIALIZED + && ngx_errno != 0) + { + /* + * OpenSSL fails to return SSL_ERROR_SYSCALL if an error + * happens in sendfile(), and returns SSL_ERROR_SSL with + * SSL_R_UNINITIALIZED reason instead + */ + + sslerr = SSL_ERROR_SYSCALL; + } + + err = (sslerr == SSL_ERROR_SYSCALL) ? 
ngx_errno : 0; + + ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0, "SSL_get_error: %d", sslerr); + + if (sslerr == SSL_ERROR_WANT_WRITE) { + + if (c->ssl->saved_read_handler) { + + c->read->handler = c->ssl->saved_read_handler; + c->ssl->saved_read_handler = NULL; + c->read->ready = 1; + + if (ngx_handle_read_event(c->read, 0) != NGX_OK) { + return NGX_ERROR; + } + + ngx_post_event(c->read, &ngx_posted_events); + } + + c->write->ready = 0; + return NGX_AGAIN; + } + + if (sslerr == SSL_ERROR_WANT_READ) { + + ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, 0, + "SSL_sendfile: want read"); + + c->read->ready = 0; + + if (ngx_handle_read_event(c->read, 0) != NGX_OK) { + return NGX_ERROR; + } + + /* + * we do not set the timer because there is already + * the write event timer + */ + + if (c->ssl->saved_read_handler == NULL) { + c->ssl->saved_read_handler = c->read->handler; + c->read->handler = ngx_ssl_read_handler; + } + + return NGX_AGAIN; + } + + c->ssl->no_wait_shutdown = 1; + c->ssl->no_send_shutdown = 1; + c->write->error = 1; + + ngx_ssl_connection_error(c, sslerr, err, "SSL_sendfile() failed"); + +#else + ngx_log_error(NGX_LOG_ALERT, c->log, 0, + "SSL_sendfile() not available"); +#endif + + return NGX_ERROR; +} + + static void ngx_ssl_read_handler(ngx_event_t *rev) { diff -r 46a02ed7c966 -r 65946a191197 src/event/ngx_event_openssl.h --- a/src/event/ngx_event_openssl.h Thu Oct 21 18:43:13 2021 +0300 +++ b/src/event/ngx_event_openssl.h Thu Oct 21 18:44:07 2021 +0300 @@ -109,6 +109,7 @@ struct ngx_ssl_connection_s { unsigned handshake_rejected:1; unsigned renegotiation:1; unsigned buffer:1; + unsigned sendfile:1; unsigned no_wait_shutdown:1; unsigned no_send_shutdown:1; unsigned shutdown_without_free:1; diff -r 46a02ed7c966 -r 65946a191197 src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c Thu Oct 21 18:43:13 2021 +0300 +++ b/src/http/ngx_http_request.c Thu Oct 21 18:44:07 2021 +0300 @@ -607,7 +607,7 @@ ngx_http_alloc_request(ngx_connection_t } #if 
(NGX_HTTP_SSL) - if (c->ssl) { + if (c->ssl && !c->ssl->sendfile) { r->main_filter_need_in_memory = 1; } #endif diff -r 46a02ed7c966 -r 65946a191197 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Thu Oct 21 18:43:13 2021 +0300 +++ b/src/http/ngx_http_upstream.c Thu Oct 21 18:44:07 2021 +0300 @@ -1683,9 +1683,6 @@ ngx_http_upstream_ssl_init_connection(ng return; } - c->sendfile = 0; - u->output.sendfile = 0; - if (u->conf->ssl_server_name || u->conf->ssl_verify) { if (ngx_http_upstream_ssl_name(r, u, c) != NGX_OK) { ngx_http_upstream_finalize_request(r, u, @@ -1791,6 +1788,11 @@ ngx_http_upstream_ssl_handshake(ngx_http } } + if (!c->ssl->sendfile) { + c->sendfile = 0; + u->output.sendfile = 0; + } + c->write->handler = ngx_http_upstream_handler; c->read->handler = ngx_http_upstream_handler; From mdounin at mdounin.ru Fri Oct 22 01:40:33 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 22 Oct 2021 04:40:33 +0300 Subject: NGX_STREAM_UPS_CONF handling In-Reply-To: <33D221CB-73D3-418B-BB35-C315115E0199@redhat.com> References: <33D221CB-73D3-418B-BB35-C315115E0199@redhat.com> Message-ID: Hello! On Thu, Oct 21, 2021 at 12:20:21PM -0400, Benjamin Coddington wrote: > Hi devs, I'm new here, be gentle. > > I'm hacking up a tls offloading proxy for sunrpc, getting up to speed on > nginx codebase, but I'm sinking too much time into figuring something: > > I'd like to use a flag configuration directive within an upstream config > stanza: > > static ngx_command_t > ngx_stream_rpc_tls_commands[] = { > > { ngx_string("rpc_tls_client"), > NGX_STREAM_SRV_CONF|NGX_STREAM_UPS_CONF|NGX_CONF_FLAG, > ngx_conf_set_flag_slot, > NGX_STREAM_SRV_CONF_OFFSET, > offsetof(ngx_stream_rpc_tls_conf_t, client), > NULL }, > > ngx_null_command > }; > > .. but when I go to merge server configs, the value is always unset. 
The > debugger showed me that the upstream module does its own pass at > > module->create_srv_conf > > and > > ngx_conf_parse() > > so the configuration, while set, isn't there in my merge server config > function, it's somewhere else and I'm not sure how to access it. > > I'm at the point where it feels like I'm "doing it wrong". What's the > correct way to have both NGX_STREAM_SRV_CONF|NGX_CONF_FLAG and > NGX_STREAM_SRV_CONF|NGX_STREAM_UPS_CONF|NGX_CONF_FLAG directives in the same > module that can be handled by the same module's configuration? The upstream{} block is a separate entity which isn't subject to configuration merging. Any initialization is expected to be handled in the uscf->peer.init_upstream handler, as set by balancers and called by ngx_stream_upstream_init_main_conf(). For examples on how to use it, check the code of various balancers. In particular, ngx_http_upstream_keepalive_module might be interesting, as it provides several configuration directives. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Mon Oct 25 17:59:07 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 25 Oct 2021 17:59:07 +0000 Subject: [nginx] MIME: added image/avif type. Message-ID: details: https://hg.nginx.org/nginx/rev/3f0ab7b6cd71 branches: changeset: 7942:3f0ab7b6cd71 user: Maxim Dounin date: Mon Oct 25 20:49:15 2021 +0300 description: MIME: added image/avif type. Prodded by Ryo Hirafuji, André Rømcke, Artur Juraszek.
diffstat: conf/mime.types | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (11 lines): diff -r 65946a191197 -r 3f0ab7b6cd71 conf/mime.types --- a/conf/mime.types Thu Oct 21 18:44:07 2021 +0300 +++ b/conf/mime.types Mon Oct 25 20:49:15 2021 +0300 @@ -15,6 +15,7 @@ types { text/vnd.wap.wml wml; text/x-component htc; + image/avif avif; image/png png; image/svg+xml svg svgz; image/tiff tif tiff; From mdounin at mdounin.ru Mon Oct 25 18:00:22 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 25 Oct 2021 21:00:22 +0300 Subject: [PATCH] Add image/avif to conf/mime.types In-Reply-To: References: Message-ID: Hello! On Tue, Oct 19, 2021 at 03:32:57PM +0200, André Rømcke wrote: > Mon, 27 Sep 2021 at 10:55: > > > > Format is stable and broader AVIF support (& likely also adoption) is incoming: > > > > - About 1/2 size compared to jpeg, 2/3 of webp, and roughly 1/1 with JPEG XL* > > - Already supported in Chrome and Firefox: > > - Also in Chromium** so soon in Edge, Opera, ... > > - And apparently landed in Webkit*** > > > > > > Kind regards. > > > > > > * JPEG XL bitstream is frozen, but still work in progress & not > > supported out of the box: > > - https://en.wikipedia.org/wiki/JPEG_XL > > - https://caniuse.com/jpegxl > > > > ** https://bugs.chromium.org/p/chromium/issues/detail?id=960620 > > > > *** https://bugs.webkit.org/show_bug.cgi?id=207750 > > Patch inline as attachment did not seem to work: > > diff -r bfad703459b4 conf/mime.types > --- a/conf/mime.types Wed Sep 22 10:20:00 2021 +0300 > +++ b/conf/mime.types Mon Sep 27 10:13:55 2021 +0200 > @@ -15,6 +15,7 @@ > text/vnd.wap.wml wml; > text/x-component htc; > > + image/avif avif; > image/png png; > image/svg+xml svg svgz; > image/tiff tif tiff; Added, thanks for prodding this.
https://hg.nginx.org/nginx/rev/3f0ab7b6cd71 -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Mon Oct 25 18:00:56 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 25 Oct 2021 21:00:56 +0300 Subject: [PATCH] Recognize image/avif in mime.types In-Reply-To: References: Message-ID: Hello! On Sun, Oct 10, 2021 at 09:43:16PM +0200, Artur Juraszek wrote: > # HG changeset patch > # User Artur Juraszek > # Date 1633893497 -7200 > # Sun Oct 10 21:18:17 2021 +0200 > # Node ID d62a7ff2ec94678392024b875bbadac149a0feaf > # Parent ae7c767aa491fa55d3168dfc028a22f43ac8cf89 > Recognize image/avif in mime.types > > It's an officially registered[1] image format that's now supported by most major web browsers. > > [1] https://www.iana.org/assignments/media-types/media-types.xhtml > > diff -r ae7c767aa491 -r d62a7ff2ec94 conf/mime.types > --- a/conf/mime.types Wed Oct 06 18:01:42 2021 +0300 > +++ b/conf/mime.types Sun Oct 10 21:18:17 2021 +0200 > @@ -15,6 +15,7 @@ > text/vnd.wap.wml wml; > text/x-component htc; > > + image/avif avif; > image/png png; > image/svg+xml svg svgz; > image/tiff tif tiff; Added, thanks for prodding this. https://hg.nginx.org/nginx/rev/3f0ab7b6cd71 -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue Oct 26 00:42:32 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 26 Oct 2021 03:42:32 +0300 Subject: [PATCH] Avoid unnecessary restriction on nohash http variables In-Reply-To: References: Message-ID: Hello! On Thu, Aug 19, 2021 at 09:34:39PM +0300, Alexey Radkov wrote: > # HG changeset patch > # User Alexey Radkov > # Date 1629395487 -10800 > # Thu Aug 19 20:51:27 2021 +0300 > # Node ID a1065b2252855730ed8e5368c88fe41a7ff5a698 > # Parent 13d0c1d26d47c203b1874ca1ffdb7a9ba7fd2d77 > Avoid unnecessary restriction on nohash http variables. 
> > When I use variables with long names albeit being tagged as > > NGX_HTTP_VARIABLE_NOHASH, Nginx says "could not build variables_hash, > > you should increase variables_hash_bucket_size: 64". It seems that this is > > an unnecessary restriction, as soon as the hash gets only built for variables > > with names[n].key.data == NULL (note that other pieces in ngx_hash_init() > > where the macro NGX_HASH_ELT_SIZE is used, are always guarded with this > > condition). This fix puts this same condition into the only unguarded piece: > > when testing against the hash_bucket_size. > > > > The issue arises after assignment of key[n].key.data = NULL without symmetric > > assignment of key[n].key.len in ngx_http_variables_init_vars(): after this, > > the key[n].key comes to an inconsistent state. Perhaps this was made > > intentionally, as hash initialization in other places seems to follow the > > same pattern (for instance, see how ngx_hash_init() gets called from > > ngx_http_upstream_hide_headers_hash()). > > > > Without this fix, I must put in the config "variables_hash_bucket_size 128;" > > even if the long-named variables are nohash. > > > > diff -r 13d0c1d26d47 -r a1065b225285 src/core/ngx_hash.c > > --- a/src/core/ngx_hash.c Fri Aug 13 03:57:47 2021 -0400 > > +++ b/src/core/ngx_hash.c Thu Aug 19 20:51:27 2021 +0300 > > @@ -274,6 +274,9 @@ > > } > > > > for (n = 0; n < nelts; n++) { > > + if (names[n].key.data == NULL) { > > + continue; > > + } > > if (hinit->bucket_size < NGX_HASH_ELT_SIZE(&names[n]) + sizeof(void *)) > > { > > ngx_log_error(NGX_LOG_EMERG, hinit->pool->log, 0, > > > Thanks for spotting this. Here is a version of the patch slightly cleaned up to better match style, please take a look if it looks good to you: # HG changeset patch # User Alexey Radkov # Date 1629395487 -10800 # Thu Aug 19 20:51:27 2021 +0300 # Node ID 2a7155733855d1c2ea1c1ded8d1a4ba654b533cb # Parent 3f0ab7b6cd71eb02b4714278cabcd2db5c79b3a9 Core: removed unnecessary restriction in hash initialization.
Hash initialization ignores elements with key.data set to NULL. Nevertheless, the initial hash bucket size check didn't skip them, resulting in unnecessary restrictions on, for example, variables with long names and with the NGX_HTTP_VARIABLE_NOHASH flag. Fix is to update the initial hash bucket size check to skip elements with key.data set to NULL, similarly to how it is done in other parts of the code. diff --git a/src/core/ngx_hash.c b/src/core/ngx_hash.c --- a/src/core/ngx_hash.c +++ b/src/core/ngx_hash.c @@ -274,6 +274,10 @@ ngx_hash_init(ngx_hash_init_t *hinit, ng } for (n = 0; n < nelts; n++) { + if (names[n].key.data == NULL) { + continue; + } + if (hinit->bucket_size < NGX_HASH_ELT_SIZE(&names[n]) + sizeof(void *)) { ngx_log_error(NGX_LOG_EMERG, hinit->pool->log, 0, -- Maxim Dounin http://mdounin.ru/ From alexey.radkov at gmail.com Tue Oct 26 06:15:37 2021 From: alexey.radkov at gmail.com (Alexey Radkov) Date: Tue, 26 Oct 2021 09:15:37 +0300 Subject: [PATCH] Avoid unnecessary restriction on nohash http variables In-Reply-To: References: Message-ID: Thanks! It looks good. Cheers, Alexey. Tue, 26 Oct 2021 at 03:42, Maxim Dounin : > Hello! > > On Thu, Aug 19, 2021 at 09:34:39PM +0300, Alexey Radkov wrote: > > > # HG changeset patch > > # User Alexey Radkov > > # Date 1629395487 -10800 > > # Thu Aug 19 20:51:27 2021 +0300 > > # Node ID a1065b2252855730ed8e5368c88fe41a7ff5a698 > > # Parent 13d0c1d26d47c203b1874ca1ffdb7a9ba7fd2d77 > > Avoid unnecessary restriction on nohash http variables. > > > > When I use variables with long names albeit being tagged as > > NGX_HTTP_VARIABLE_NOHASH, Nginx says "could not build variables_hash, > > you should increase variables_hash_bucket_size: 64". It seems that this > is > > an unnecessary restriction, as soon as the hash gets only built for > variables > > with names[n].key.data == NULL (note that other pieces in ngx_hash_init() > > where the macro NGX_HASH_ELT_SIZE is used, are always guarded with this > > condition).
This fix puts this same condition into the only unguarded > piece: > > when testing against the hash_bucket_size. > > > > The issue arises after assignment of key[n].key.data = NULL without > symmetric > > assignment of key[n].key.len in ngx_http_variables_init_vars(): after > this, > > the key[n].key comes to an inconsistent state. Perhaps this was made > > intentionally, as hash initialization in other places seems to follow the > > same pattern (for instance, see how ngx_hash_init() gets called from > > ngx_http_upstream_hide_headers_hash()). > > > > Without this fix, I must put in the config "variables_hash_bucket_size > 128;" > > even if the long-named variables are nohash. > > > > diff -r 13d0c1d26d47 -r a1065b225285 src/core/ngx_hash.c > > --- a/src/core/ngx_hash.c Fri Aug 13 03:57:47 2021 -0400 > > +++ b/src/core/ngx_hash.c Thu Aug 19 20:51:27 2021 +0300 > > @@ -274,6 +274,9 @@ > > } > > > > for (n = 0; n < nelts; n++) { > > + if (names[n].key.data == NULL) { > > + continue; > > + } > > if (hinit->bucket_size < NGX_HASH_ELT_SIZE(&names[n]) + > sizeof(void *)) > > { > > ngx_log_error(NGX_LOG_EMERG, hinit->pool->log, 0, > > > > Thanks for spotting this. > > Here is a version of the patch slightly cleaned up to better match > style, please take a look if looks good for you: > > # HG changeset patch > # User Alexey Radkov > # Date 1629395487 -10800 > # Thu Aug 19 20:51:27 2021 +0300 > # Node ID 2a7155733855d1c2ea1c1ded8d1a4ba654b533cb > # Parent 3f0ab7b6cd71eb02b4714278cabcd2db5c79b3a9 > Core: removed unnecessary restriction in hash initialization. > > Hash initialization ignores elements with key.data set to NULL. > Nevertheless, the initial hash bucket size check didn't skip them, > resulting in unnecessary restrictions on, for example, variables with > long names and with the NGX_HTTP_VARIABLE_NOHASH flag. 
> > Fix is to update the initial hash bucket size check to skip elements > with key.data set to NULL, similarly to how it is done in other parts > of the code. > > diff --git a/src/core/ngx_hash.c b/src/core/ngx_hash.c > --- a/src/core/ngx_hash.c > +++ b/src/core/ngx_hash.c > @@ -274,6 +274,10 @@ ngx_hash_init(ngx_hash_init_t *hinit, ng > } > > for (n = 0; n < nelts; n++) { > + if (names[n].key.data == NULL) { > + continue; > + } > + > if (hinit->bucket_size < NGX_HASH_ELT_SIZE(&names[n]) + > sizeof(void *)) > { > ngx_log_error(NGX_LOG_EMERG, hinit->pool->log, 0, > > > -- > Maxim Dounin > http://mdounin.ru/ From pluknet at nginx.com Tue Oct 26 11:35:01 2021 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 26 Oct 2021 14:35:01 +0300 Subject: [PATCH 1 of 4] Switched to using posted next events after sendfile_max_chunk In-Reply-To: References: Message-ID: <98BBE75D-7E8D-42C6-84DD-2B1694C68D05@nginx.com> > On 11 Oct 2021, at 21:58, Maxim Dounin wrote: > > # HG changeset patch > # User Maxim Dounin > # Date 1633978533 -10800 > # Mon Oct 11 21:55:33 2021 +0300 > # Node ID d175cd09ac9d2bab7f7226eac3bfce196a296cc0 > # Parent ae7c767aa491fa55d3168dfc028a22f43ac8cf89 > Switched to using posted next events after sendfile_max_chunk. > > Previously, 1 millisecond delay was used instead. In certain edge cases > this might result in noticeable performance degradation though, notably on > Linux with typical CONFIG_HZ=250 (so 1ms delay becomes 4ms), > sendfile_max_chunk 2m, and link speed above 2.5 Gbps. > > Using posted next events removes the artificial delay and makes processing > fast in all cases.
> > diff --git a/src/http/ngx_http_write_filter_module.c b/src/http/ngx_http_write_filter_module.c > --- a/src/http/ngx_http_write_filter_module.c > +++ b/src/http/ngx_http_write_filter_module.c > @@ -331,8 +331,7 @@ ngx_http_write_filter(ngx_http_request_t > && c->write->ready > && c->sent - sent >= limit - (off_t) (2 * ngx_pagesize)) > { > - c->write->delayed = 1; > - ngx_add_timer(c->write, 1); > + ngx_post_event(c->write, &ngx_posted_next_events); > } > > for (cl = r->out; cl && cl != chain; /* void */) { > A side note: removing c->write->delayed no longer prevents further writing from ngx_http_write_filter() within one worker cycle. For a (somewhat degenerate) example, if we stepped on the limit in ngx_http_send_header(), a subsequent ngx_http_output_filter() will write something more. Specifically, with sendfile_max_chunk 256k: : *1 write new buf t:1 f:0 00007F3CBA77F010, pos 00007F3CBA77F010, size: 1147353 file: 0, size: 0 : *1 http write filter: l:0 f:0 s:1147353 : *1 http write filter limit 262144 : *1 writev: 262144 of 262144 : *1 http write filter 000055596A8D1220 : *1 post event 000055596AC04970 : *1 add cleanup: 000055596A8D1368 : *1 http output filter "/file?" : *1 http copy filter: "/file?" : *1 image filter : *1 xslt filter body : *1 http postpone filter "/file?" 00007FFC91E2E090 : *1 write old buf t:1 f:0 00007F3CBA77F010, pos 00007F3CBA7BF010, size: 885209 file: 0, size: 0 : *1 write new buf t:0 f:1 0000000000000000, pos 0000000000000000, size: 0 file: 0, size: 1048576 : *1 http write filter: l:0 f:0 s:1933785 : *1 http write filter limit 262144 : *1 writev: 262144 of 262144 : *1 http write filter 000055596A8D1220 : *1 update posted event 000055596AC04970 : *1 http copy filter: -2 "/file?" : *1 call_sv: 0 : *1 perl handler done: 0 : *1 http output filter "/file?" : *1 http copy filter: "/file?" : *1 image filter : *1 xslt filter body : *1 http postpone filter "/file?" 
00007FFC91E2E470 : *1 write old buf t:1 f:0 00007F3CBA77F010, pos 00007F3CBA7FF010, size: 623065 file: 0, size: 0 : *1 write old buf t:0 f:1 0000000000000000, pos 0000000000000000, size: 0 file: 0, size: 1048576 : *1 write new buf t:0 f:0 0000000000000000, pos 0000000000000000, size: 0 file: 0, size: 0 : *1 http write filter: l:1 f:0 s:1671641 : *1 http write filter limit 262144 : *1 writev: 262144 of 262144 : *1 http write filter 000055596A8D1220 : *1 update posted event 000055596AC04970 : *1 http copy filter: -2 "/file?" : *1 http finalize request: 0, "/file?" a:1, c:2 This can also be achieved with multiple explicit $r->flush() in Perl, for simplicity, but that is assumed to be a controlled environment. For a less controlled environment, it could be a large response proxied from upstream, but this requires proxy buffers large enough that the data read in would exceed (in total) a configured limit. Although data is not transferred with a single write operation, it still tends to monopolize a worker process, but I don't think this should really harm in real use cases. -- Sergey Kandaurov From arut at nginx.com Tue Oct 26 11:48:03 2021 From: arut at nginx.com (Roman Arutyunyan) Date: Tue, 26 Oct 2021 14:48:03 +0300 Subject: performance is affected after merge OCSP changeset In-Reply-To: <90E56409-D9A7-4A5E-980E-A2920E8E3305@nginx.com> References: <59EC8F20-542D-4620-889B-05ADAC9E9864@nginx.com> <20211021121506.fknsphvlhluk4qqh@Romans-MacBook-Pro.local> <90E56409-D9A7-4A5E-980E-A2920E8E3305@nginx.com> Message-ID: <20211026114802.5djafcj3gpye7wnl@Romans-MacBook-Pro.local> On Thu, Oct 21, 2021 at 07:41:05PM +0300, Sergey Kandaurov wrote: > > > On 21 Oct 2021, at 15:15, Roman Arutyunyan wrote: > > > > On Tue, Oct 19, 2021 at 01:07:56PM +0300, Sergey Kandaurov wrote: > >> > >> [..] > >> Below is alternative patch, it brings closer to how OCSP validation > >> is done with SSL_read_early_data(), with its inherent design flaws.
> >> Namely, the case of regular SSL session reuse is still pessimized, > >> but that shouldn't bring further slowdown with ssl_ocsp disabled, > >> which is slow by itself. > >> > >> # HG changeset patch > >> # User Sergey Kandaurov > >> # Date 1634637049 -10800 > >> # Tue Oct 19 12:50:49 2021 +0300 > >> # Branch quic > >> # Node ID 6f26d6656b4ef97a3a245354bd7fa9e5c8671237 > >> # Parent 1798acc01970ae5a03f785b7679fe34c32adcfea > >> QUIC: speeding up processing 0-RTT. > >> > >> After fe919fd63b0b, processing QUIC streams was postponed until after handshake > >> completion, which means that 0-RTT is effectively off. With ssl_ocsp enabled, > >> it could be further delayed. This differs to how SSL_read_early_data() works. > > > > differs FROM ? > > > >> This change unlocks processing streams on successful 0-RTT packet decryption. > >> > > Both forms seem to be used, but "differs to" looks less popular. > Rewrote it this way: > > This differs from how OCSP validation works with SSL_read_early_data(). > > >> diff --git a/src/event/quic/ngx_event_quic.c b/src/event/quic/ngx_event_quic.c > >> --- a/src/event/quic/ngx_event_quic.c > >> +++ b/src/event/quic/ngx_event_quic.c > >> @@ -989,6 +989,21 @@ ngx_quic_process_payload(ngx_connection_ > >> } > >> } > >> > >> + if (pkt->level == ssl_encryption_early_data && !qc->streams.initialized) { > >> + rc = ngx_ssl_ocsp_validate(c); > >> + > >> + if (rc == NGX_ERROR) { > >> + return NGX_ERROR; > >> + } > >> + > >> + if (rc == NGX_AGAIN) { > >> + c->ssl->handler = ngx_quic_init_streams; > >> + > >> + } else { > >> + ngx_quic_init_streams(c); > >> + } > >> + } > >> + > >> if (pkt->level == ssl_encryption_handshake) { > >> /* > >> * RFC 9001, 4.9.1. 
Discarding Initial Keys > >> diff --git a/src/event/quic/ngx_event_quic_ssl.c b/src/event/quic/ngx_event_quic_ssl.c > >> --- a/src/event/quic/ngx_event_quic_ssl.c > >> +++ b/src/event/quic/ngx_event_quic_ssl.c > >> @@ -463,6 +463,11 @@ ngx_quic_crypto_input(ngx_connection_t * > >> return NGX_ERROR; > >> } > >> > >> + if (qc->streams.initialized) { > >> + /* done while processing 0-RTT */ > >> + return NGX_OK; > >> + } > >> + > >> rc = ngx_ssl_ocsp_validate(c); > >> > >> if (rc == NGX_ERROR) { > >> > > > > It would be nice to always call ngx_ssl_ocsp_validate() from the same source > > file (presumably ngx_event_quic_ssl.c). But this does not seem to occur > > naturally so let's leave it as it is. > > > > Looks good. > > > > PS: Also, this can be further refactored to move ngx_ssl_ocsp_validate() inside > > ngx_quic_init_streams(). In this case we can only call ngx_quic_init_streams() > > both times. > > This is feasible, if init streams closer to obtaining 0-RTT secret. > Actually, it is even better, I believe, and it's invoked just once > regardless of the number of 0-RTT packets. > Requirement for successful 0-RTT decryption doesn't buy us much. > > N.B. I decided to leave in place "quic init streams" debug. > This is where streams are now actually initialized, > and it looks reasonable to see that logged only once. > > # HG changeset patch > # User Sergey Kandaurov > # Date 1634832181 -10800 > # Thu Oct 21 19:03:01 2021 +0300 > # Branch quic > # Node ID 11119f9fda599c890a93b348310f582e3c49ebb7 > # Parent 1798acc01970ae5a03f785b7679fe34c32adcfea > QUIC: refactored OCSP validation in preparation for 0-RTT support. 
> > diff --git a/src/event/quic/ngx_event_quic_ssl.c b/src/event/quic/ngx_event_quic_ssl.c > --- a/src/event/quic/ngx_event_quic_ssl.c > +++ b/src/event/quic/ngx_event_quic_ssl.c > @@ -361,7 +361,6 @@ static ngx_int_t > ngx_quic_crypto_input(ngx_connection_t *c, ngx_chain_t *data) > { > int n, sslerr; > - ngx_int_t rc; > ngx_buf_t *b; > ngx_chain_t *cl; > ngx_ssl_conn_t *ssl_conn; > @@ -463,19 +462,10 @@ ngx_quic_crypto_input(ngx_connection_t * > return NGX_ERROR; > } > > - rc = ngx_ssl_ocsp_validate(c); > - > - if (rc == NGX_ERROR) { > + if (ngx_quic_init_streams(c) != NGX_OK) { > return NGX_ERROR; > } > > - if (rc == NGX_AGAIN) { > - c->ssl->handler = ngx_quic_init_streams; > - return NGX_OK; > - } > - > - ngx_quic_init_streams(c); > - > return NGX_OK; > } > > diff --git a/src/event/quic/ngx_event_quic_streams.c b/src/event/quic/ngx_event_quic_streams.c > --- a/src/event/quic/ngx_event_quic_streams.c > +++ b/src/event/quic/ngx_event_quic_streams.c > @@ -16,6 +16,7 @@ > static ngx_quic_stream_t *ngx_quic_create_client_stream(ngx_connection_t *c, > uint64_t id); > static ngx_int_t ngx_quic_init_stream(ngx_quic_stream_t *qs); > +static void ngx_quic_init_streams_handler(ngx_connection_t *c); > static ngx_quic_stream_t *ngx_quic_create_stream(ngx_connection_t *c, > uint64_t id); > static void ngx_quic_empty_handler(ngx_event_t *ev); > @@ -369,9 +370,37 @@ ngx_quic_init_stream(ngx_quic_stream_t * > } > > > -void > +ngx_int_t > ngx_quic_init_streams(ngx_connection_t *c) > { > + ngx_int_t rc; > + ngx_quic_connection_t *qc; > + > + qc = ngx_quic_get_connection(c); > + > + if (qc->streams.initialized) { > + return NGX_OK; > + } > + > + rc = ngx_ssl_ocsp_validate(c); > + > + if (rc == NGX_ERROR) { > + return NGX_ERROR; > + } > + > + if (rc == NGX_AGAIN) { > + c->ssl->handler = ngx_quic_init_streams_handler; > + return NGX_OK; > + } > + > + ngx_quic_init_streams_handler(c); > + > + return NGX_OK; > +} > + Missing an empty line. 
> +static void > +ngx_quic_init_streams_handler(ngx_connection_t *c) > +{ > ngx_queue_t *q; > ngx_quic_stream_t *qs; > ngx_quic_connection_t *qc; > diff --git a/src/event/quic/ngx_event_quic_streams.h b/src/event/quic/ngx_event_quic_streams.h > --- a/src/event/quic/ngx_event_quic_streams.h > +++ b/src/event/quic/ngx_event_quic_streams.h > @@ -31,7 +31,7 @@ ngx_int_t ngx_quic_handle_stop_sending_f > ngx_int_t ngx_quic_handle_max_streams_frame(ngx_connection_t *c, > ngx_quic_header_t *pkt, ngx_quic_max_streams_frame_t *f); > > -void ngx_quic_init_streams(ngx_connection_t *c); > +ngx_int_t ngx_quic_init_streams(ngx_connection_t *c); > void ngx_quic_rbtree_insert_stream(ngx_rbtree_node_t *temp, > ngx_rbtree_node_t *node, ngx_rbtree_node_t *sentinel); > ngx_quic_stream_t *ngx_quic_find_stream(ngx_rbtree_t *rbtree, > # HG changeset patch > # User Sergey Kandaurov > # Date 1634832186 -10800 > # Thu Oct 21 19:03:06 2021 +0300 > # Branch quic > # Node ID b53e361bee7dfbb027507a717e6648234a06ef13 > # Parent 11119f9fda599c890a93b348310f582e3c49ebb7 > QUIC: speeding up processing 0-RTT. > > After fe919fd63b0b, processing QUIC streams was postponed until after handshake > completion, which means that 0-RTT is effectively off. With ssl_ocsp enabled, > it could be further delayed. This differs from how OCSP validation works with > SSL_read_early_data(). With this change, processing QUIC streams is unlocked > when obtaining 0-RTT secret. 
> > diff --git a/src/event/quic/ngx_event_quic_ssl.c b/src/event/quic/ngx_event_quic_ssl.c > --- a/src/event/quic/ngx_event_quic_ssl.c > +++ b/src/event/quic/ngx_event_quic_ssl.c > @@ -71,8 +71,20 @@ ngx_quic_set_read_secret(ngx_ssl_conn_t > secret_len, rsecret); > #endif > > - return ngx_quic_keys_set_encryption_secret(c->pool, 0, qc->keys, level, > - cipher, rsecret, secret_len); > + if (ngx_quic_keys_set_encryption_secret(c->pool, 0, qc->keys, level, > + cipher, rsecret, secret_len) > + != 1) > + { > + return 0; > + } > + > + if (level == ssl_encryption_early_data) { > + if (ngx_quic_init_streams(c) != NGX_OK) { > + return 0; > + } > + } > + > + return 1; > } > > > @@ -131,6 +143,10 @@ ngx_quic_set_encryption_secrets(ngx_ssl_ > } > > if (level == ssl_encryption_early_data) { > + if (ngx_quic_init_streams(c) != NGX_OK) { > + return 0; > + } > + > return 1; > } > > > -- > Sergey Kandaurov > Looks good. -- Roman Arutyunyan From vbart at nginx.com Tue Oct 26 13:15:05 2021 From: vbart at nginx.com (Valentin Bartenev) Date: Tue, 26 Oct 2021 13:15:05 +0000 Subject: [njs] Removed surplus condition from Base64 decoded length counting. Message-ID: details: https://hg.nginx.org/njs/rev/264fb92817cd branches: changeset: 1730:264fb92817cd user: Valentin Bartenev date: Tue Oct 26 16:14:07 2021 +0300 description: Removed surplus condition from Base64 decoded length counting. 
diffstat: src/njs_string.c | 4 ---- 1 files changed, 0 insertions(+), 4 deletions(-) diffs (14 lines): diff -r b9bbb230fe4f -r 264fb92817cd src/njs_string.c --- a/src/njs_string.c Wed Oct 20 13:01:55 2021 +0000 +++ b/src/njs_string.c Tue Oct 26 16:14:07 2021 +0300 @@ -1885,10 +1885,6 @@ njs_decode_base64_length_core(const njs_ size_t len; for (len = 0; len < src->length; len++) { - if (src->start[len] == '=') { - break; - } - if (basis[src->start[len]] == 77) { break; } From pluknet at nginx.com Tue Oct 26 14:06:08 2021 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 26 Oct 2021 17:06:08 +0300 Subject: [PATCH 1 of 4] Switched to using posted next events after sendfile_max_chunk In-Reply-To: References: Message-ID: <7D95FEDA-7BDB-45F9-93D5-3CF898DFFC59@nginx.com> > On 11 Oct 2021, at 21:58, Maxim Dounin wrote: > > # HG changeset patch > # User Maxim Dounin > # Date 1633978533 -10800 > # Mon Oct 11 21:55:33 2021 +0300 > # Node ID d175cd09ac9d2bab7f7226eac3bfce196a296cc0 > # Parent ae7c767aa491fa55d3168dfc028a22f43ac8cf89 > Switched to using posted next events after sendfile_max_chunk. > > Previously, 1 millisecond delay was used instead. In certain edge cases > this might result in noticeable performance degradation though, notably on > Linux with typical CONFIG_HZ=250 (so 1ms delay becomes 4ms), Looks like the description will need to be adjusted after landing 9e7de0547f09 with CLOCK_MONOTONIC_COARSE removal, which is the one known to return the time at the last tick. > sendfile_max_chunk 2m, and link speed above 2.5 Gbps. > > Using posted next events removes the artificial delay and makes processing > fast in all cases. 
> > diff --git a/src/http/ngx_http_write_filter_module.c b/src/http/ngx_http_write_filter_module.c > --- a/src/http/ngx_http_write_filter_module.c > +++ b/src/http/ngx_http_write_filter_module.c > @@ -331,8 +331,7 @@ ngx_http_write_filter(ngx_http_request_t > && c->write->ready > && c->sent - sent >= limit - (off_t) (2 * ngx_pagesize)) > { > - c->write->delayed = 1; > - ngx_add_timer(c->write, 1); > + ngx_post_event(c->write, &ngx_posted_next_events); > } > > for (cl = r->out; cl && cl != chain; /* void */) { > -- Sergey Kandaurov From mdounin at mdounin.ru Tue Oct 26 14:15:56 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 26 Oct 2021 14:15:56 +0000 Subject: [nginx] Core: removed unnecessary restriction in hash initialization. Message-ID: details: https://hg.nginx.org/nginx/rev/2a7155733855 branches: changeset: 7943:2a7155733855 user: Alexey Radkov date: Thu Aug 19 20:51:27 2021 +0300 description: Core: removed unnecessary restriction in hash initialization. Hash initialization ignores elements with key.data set to NULL. Nevertheless, the initial hash bucket size check didn't skip them, resulting in unnecessary restrictions on, for example, variables with long names and with the NGX_HTTP_VARIABLE_NOHASH flag. Fix is to update the initial hash bucket size check to skip elements with key.data set to NULL, similarly to how it is done in other parts of the code. 
diffstat: src/core/ngx_hash.c | 4 ++++ 1 files changed, 4 insertions(+), 0 deletions(-) diffs (14 lines): diff -r 3f0ab7b6cd71 -r 2a7155733855 src/core/ngx_hash.c --- a/src/core/ngx_hash.c Mon Oct 25 20:49:15 2021 +0300 +++ b/src/core/ngx_hash.c Thu Aug 19 20:51:27 2021 +0300 @@ -274,6 +274,10 @@ ngx_hash_init(ngx_hash_init_t *hinit, ng } for (n = 0; n < nelts; n++) { + if (names[n].key.data == NULL) { + continue; + } + if (hinit->bucket_size < NGX_HASH_ELT_SIZE(&names[n]) + sizeof(void *)) { ngx_log_error(NGX_LOG_EMERG, hinit->pool->log, 0, From mdounin at mdounin.ru Tue Oct 26 14:16:50 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 26 Oct 2021 17:16:50 +0300 Subject: [PATCH] Avoid unnecessary restriction on nohash http variables In-Reply-To: References: Message-ID: Hello! On Tue, Oct 26, 2021 at 09:15:37AM +0300, Alexey Radkov wrote: > Thanks! It looks good. Committed, thanks. https://hg.nginx.org/nginx/rev/2a7155733855 -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue Oct 26 15:27:39 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 26 Oct 2021 18:27:39 +0300 Subject: [PATCH 1 of 4] Switched to using posted next events after sendfile_max_chunk In-Reply-To: <98BBE75D-7E8D-42C6-84DD-2B1694C68D05@nginx.com> References: <98BBE75D-7E8D-42C6-84DD-2B1694C68D05@nginx.com> Message-ID: Hello! On Tue, Oct 26, 2021 at 02:35:01PM +0300, Sergey Kandaurov wrote: > > > On 11 Oct 2021, at 21:58, Maxim Dounin wrote: > > > > # HG changeset patch > > # User Maxim Dounin > > # Date 1633978533 -10800 > > # Mon Oct 11 21:55:33 2021 +0300 > > # Node ID d175cd09ac9d2bab7f7226eac3bfce196a296cc0 > > # Parent ae7c767aa491fa55d3168dfc028a22f43ac8cf89 > > Switched to using posted next events after sendfile_max_chunk. > > > > Previously, 1 millisecond delay was used instead. 
In certain edge cases > > this might result in noticeable performance degradation though, notably on > > Linux with typical CONFIG_HZ=250 (so 1ms delay becomes 4ms), > > sendfile_max_chunk 2m, and link speed above 2.5 Gbps. > > > > Using posted next events removes the artificial delay and makes processing > > fast in all cases. > > > > diff --git a/src/http/ngx_http_write_filter_module.c b/src/http/ngx_http_write_filter_module.c > > --- a/src/http/ngx_http_write_filter_module.c > > +++ b/src/http/ngx_http_write_filter_module.c > > @@ -331,8 +331,7 @@ ngx_http_write_filter(ngx_http_request_t > > && c->write->ready > > && c->sent - sent >= limit - (off_t) (2 * ngx_pagesize)) > > { > > - c->write->delayed = 1; > > - ngx_add_timer(c->write, 1); > > + ngx_post_event(c->write, &ngx_posted_next_events); > > } > > > > for (cl = r->out; cl && cl != chain; /* void */) { > > > > A side note: removing c->write->delayed no longer prevents further > writing from ngx_http_write_filter() within one worker cycle. > For a (somewhat degenerate) example, if we stepped on the limit in > ngx_http_send_header(), a subsequent ngx_http_output_filter() will > write something more. Specifically, with sendfile_max_chunk 256k: [...] > This can also be achieved with multiple explicit $r->flush() in Perl, > for simplicity, but that is assumed to be a controlled environment. > For a less controlled environment, it could be a large response proxied > from upstream, but this requires large enough proxy buffers to read in > such that they would exceed (in total) a configured limit. > > Although data is not transferred with a single write operation, > it still tends to monopolize a worker proccess, but still > I don't think this should really harm in real use cases. As of currently implemented, sendfile_max_chunk cannot prevent all worker monopolization cases. 
Notably, when not using sendfile, but instead using output_buffers smaller than sendfile_max_chunk - it simply cannot prevent infinite sending if the network is faster than the disk. This change does not seem to make worker monopolization possible in cases where it previously wasn't possible. In particular, when using output_buffers, as long as buffer size is smaller than sendfile_max_chunk, worker monopolization is currently possible (regardless of the number of buffers), and this change doesn't make things worse. As long as buffer size is larger than sendfile_max_chunk, the exact behaviour and the amount of data sent might change, since ngx_output_chain() will attempt further reading and sending as long as it has free buffers - yet eventually it will exhaust all buffers anyway, and stop till the next event loop iteration. A proper solution for all possible cases of worker monopolization when reading from disk and writing to the network is probably implementing something similar to c->read->available on c->write, counting total bytes sent in the particular event loop iteration. This looks like a separate non-trivial task though. -- Maxim Dounin http://mdounin.ru/
From mdounin at mdounin.ru Tue Oct 26 15:51:43 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 26 Oct 2021 18:51:43 +0300
Subject: [PATCH 1 of 4] Switched to using posted next events after sendfile_max_chunk
In-Reply-To: <7D95FEDA-7BDB-45F9-93D5-3CF898DFFC59@nginx.com>
References: <7D95FEDA-7BDB-45F9-93D5-3CF898DFFC59@nginx.com>
Message-ID: Hello! On Tue, Oct 26, 2021 at 05:06:08PM +0300, Sergey Kandaurov wrote: > > On 11 Oct 2021, at 21:58, Maxim Dounin wrote: > > > > # HG changeset patch > > # User Maxim Dounin > > # Date 1633978533 -10800 > > # Mon Oct 11 21:55:33 2021 +0300 > > # Node ID d175cd09ac9d2bab7f7226eac3bfce196a296cc0 > > # Parent ae7c767aa491fa55d3168dfc028a22f43ac8cf89 > > Switched to using posted next events after sendfile_max_chunk. 
> > > > Previously, 1 millisecond delay was used instead. In certain edge cases > > this might result in noticeable performance degradation though, notably on > > Linux with typical CONFIG_HZ=250 (so 1ms delay becomes 4ms), > > Looks like the description will need to be adjusted > after landing 9e7de0547f09 with CLOCK_MONOTONIC_COARSE removal, > which is the one known to return the time at the last tick. As far as I understand, epoll_wait() timeout resolution is limited to ticks (https://man7.org/linux/man-pages/man7/time.7.html), so even with CLOCK_MONOTONIC_COARSE removal this statement is correct. (Actually, the initial idea was to remove CLOCK_MONOTONIC_COARSE to make sendfile_max_chunk faster with minimal changes, but this didn't work because of the timeout resolution in epoll_wait(). I've submitted the patch to remove CLOCK_MONOTONIC_COARSE anyway though.) -- Maxim Dounin http://mdounin.ru/ From pluknet at nginx.com Wed Oct 27 14:19:19 2021 From: pluknet at nginx.com (Sergey Kandaurov) Date: Wed, 27 Oct 2021 17:19:19 +0300 Subject: [PATCH 2 of 4] Simplified sendfile_max_chunk handling In-Reply-To: <489323e194e4c3b1a793.1633978701@vm-bsd.mdounin.ru> References: <489323e194e4c3b1a793.1633978701@vm-bsd.mdounin.ru> Message-ID: <918E7FCB-2172-4DEC-B19A-A5B71BF97B14@nginx.com> > On 11 Oct 2021, at 21:58, Maxim Dounin wrote: > > # HG changeset patch > # User Maxim Dounin > # Date 1633978587 -10800 > # Mon Oct 11 21:56:27 2021 +0300 > # Node ID 489323e194e4c3b1a7937c51bd4e1671c70f52f8 > # Parent d175cd09ac9d2bab7f7226eac3bfce196a296cc0 > Simplified sendfile_max_chunk handling. > > Previously, it was checked that sendfile_max_chunk was enabled and > almost whole sendfile_max_chunk was sent (see e67ef50c3176), to avoid > delaying connections where sendfile_max_chunk wasn't reached (for example, > when sending responses smaller than sendfile_max_chunk). Now we instead > check if there are unsent data, and the connection is still ready for writing. 
> Additionally we also check c->write->delayed to ignore connections already > delayed by limit_rate. > > This approach is believed to be more robust, and correctly handles > not only sendfile_max_chunk, but also internal limits of c->send_chain(), > such as sendfile() maximum supported length (ticket #1870). > > diff --git a/src/http/ngx_http_write_filter_module.c b/src/http/ngx_http_write_filter_module.c > --- a/src/http/ngx_http_write_filter_module.c > +++ b/src/http/ngx_http_write_filter_module.c > @@ -321,16 +321,12 @@ ngx_http_write_filter(ngx_http_request_t > delay = (ngx_msec_t) ((nsent - sent) * 1000 / r->limit_rate); > > if (delay > 0) { > - limit = 0; > c->write->delayed = 1; > ngx_add_timer(c->write, delay); > } > } > > - if (limit > - && c->write->ready > - && c->sent - sent >= limit - (off_t) (2 * ngx_pagesize)) > - { > + if (chain && c->write->ready && !c->write->delayed) { > ngx_post_event(c->write, &ngx_posted_next_events); > } > Looks good. Not strictly related to this change, so FYI. I noticed a stray writev() after Linux sendfile(), when it writes more than its internal limits. 2021/10/27 12:44:34 [debug] 1462058#0: *1 write old buf t:0 f:1 0000000000000000, pos 0000000000000000, size: 0 file: 416072437, size: 3878894859 2021/10/27 12:44:34 [debug] 1462058#0: *1 http write filter: l:1 f:0 s:3878894859 2021/10/27 12:44:34 [debug] 1462058#0: *1 http write filter limit 0 2021/10/27 12:44:34 [debug] 1462058#0: *1 sendfile: @416072437 2147482891 2021/10/27 12:44:34 [debug] 1462058#0: *1 sendfile: 2147479552 of 2147482891 @416072437 2021/10/27 12:44:34 [debug] 1462058#0: *1 writev: 0 of 0 2021/10/27 12:44:34 [debug] 1462058#0: *1 http write filter 0000561528695820 2021/10/27 12:44:34 [debug] 1462058#0: *1 post event 00005615289C2310 Here sendfile() partially sent 2147479552, which is above its internal limit NGX_SENDFILE_MAXSIZE - ngx_pagesize. On the second iteration, due to this, it falls back to writev() with zero-size headers. 
Then, with the patch applied, it posts the next write event, as designed (previously, it would seemingly stuck instead, such as in ticket #1870). -- Sergey Kandaurov From mdounin at mdounin.ru Wed Oct 27 19:19:25 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 27 Oct 2021 22:19:25 +0300 Subject: [PATCH 2 of 4] Simplified sendfile_max_chunk handling In-Reply-To: <918E7FCB-2172-4DEC-B19A-A5B71BF97B14@nginx.com> References: <489323e194e4c3b1a793.1633978701@vm-bsd.mdounin.ru> <918E7FCB-2172-4DEC-B19A-A5B71BF97B14@nginx.com> Message-ID: Hello! On Wed, Oct 27, 2021 at 05:19:19PM +0300, Sergey Kandaurov wrote: > > On 11 Oct 2021, at 21:58, Maxim Dounin wrote: > > > > # HG changeset patch > > # User Maxim Dounin > > # Date 1633978587 -10800 > > # Mon Oct 11 21:56:27 2021 +0300 > > # Node ID 489323e194e4c3b1a7937c51bd4e1671c70f52f8 > > # Parent d175cd09ac9d2bab7f7226eac3bfce196a296cc0 > > Simplified sendfile_max_chunk handling. > > > > Previously, it was checked that sendfile_max_chunk was enabled and > > almost whole sendfile_max_chunk was sent (see e67ef50c3176), to avoid > > delaying connections where sendfile_max_chunk wasn't reached (for example, > > when sending responses smaller than sendfile_max_chunk). Now we instead > > check if there are unsent data, and the connection is still ready for writing. > > Additionally we also check c->write->delayed to ignore connections already > > delayed by limit_rate. > > > > This approach is believed to be more robust, and correctly handles > > not only sendfile_max_chunk, but also internal limits of c->send_chain(), > > such as sendfile() maximum supported length (ticket #1870). 
> > > > diff --git a/src/http/ngx_http_write_filter_module.c b/src/http/ngx_http_write_filter_module.c > > --- a/src/http/ngx_http_write_filter_module.c > > +++ b/src/http/ngx_http_write_filter_module.c > > @@ -321,16 +321,12 @@ ngx_http_write_filter(ngx_http_request_t > > delay = (ngx_msec_t) ((nsent - sent) * 1000 / r->limit_rate); > > > > if (delay > 0) { > > - limit = 0; > > c->write->delayed = 1; > > ngx_add_timer(c->write, delay); > > } > > } > > > > - if (limit > > - && c->write->ready > > - && c->sent - sent >= limit - (off_t) (2 * ngx_pagesize)) > > - { > > + if (chain && c->write->ready && !c->write->delayed) { > > ngx_post_event(c->write, &ngx_posted_next_events); > > } > > > > Looks good. > > Not strictly related to this change, so FYI. I noticed a stray writev() > after Linux sendfile(), when it writes more than its internal limits. > > 2021/10/27 12:44:34 [debug] 1462058#0: *1 write old buf t:0 f:1 0000000000000000, > pos 0000000000000000, size: 0 file: 416072437, size: 3878894859 > 2021/10/27 12:44:34 [debug] 1462058#0: *1 http write filter: l:1 f:0 s:3878894859 > 2021/10/27 12:44:34 [debug] 1462058#0: *1 http write filter limit 0 > 2021/10/27 12:44:34 [debug] 1462058#0: *1 sendfile: @416072437 2147482891 > 2021/10/27 12:44:34 [debug] 1462058#0: *1 sendfile: 2147479552 of 2147482891 @416072437 > 2021/10/27 12:44:34 [debug] 1462058#0: *1 writev: 0 of 0 > 2021/10/27 12:44:34 [debug] 1462058#0: *1 http write filter 0000561528695820 > 2021/10/27 12:44:34 [debug] 1462058#0: *1 post event 00005615289C2310 > > Here sendfile() partially sent 2147479552, which is above its internal > limit NGX_SENDFILE_MAXSIZE - ngx_pagesize. On the second iteration, > due to this, it falls back to writev() with zero-size headers. > Then, with the patch applied, it posts the next write event, as designed > (previously, it would seemingly stuck instead, such as in ticket #1870). Interesting. 
Overall it looks harmless, but we may want to look further into why sendfile() only sent 2147479552 instead of 2147482891. It seems that 2147479552 is in pages (524287 x 4096) despite the fact that the initial offset is not page-aligned. We expect sendfile() to send page-aligned ranges instead (416072437 + 2147482891 == 625868 x 4096). Looking into the Linux sendfile() manpage suggests that 2,147,479,552 is a documented limit: sendfile() will transfer at most 0x7ffff000 (2,147,479,552) bytes, returning the number of bytes actually transferred. (This is true on both 32-bit and 64-bit systems.) This seems to be a mostly arbitrary limitation that appeared in Linux kernel 2.6.16 (https://github.com/torvalds/linux/commit/e28cc71572da38a5a12c1cfe4d7032017adccf69). Interestingly enough, the actual limitation is not 0x7ffff000 as documented, but instead MAX_RW_COUNT, which is defined as (INT_MAX & PAGE_MASK). This suggests that the behaviour will actually be different on platforms with larger pages. Something as simple as:

diff --git a/src/os/unix/ngx_linux_sendfile_chain.c b/src/os/unix/ngx_linux_sendfile_chain.c
--- a/src/os/unix/ngx_linux_sendfile_chain.c
+++ b/src/os/unix/ngx_linux_sendfile_chain.c
@@ -216,7 +216,6 @@ ngx_linux_sendfile_chain(ngx_connection_
              */

             send = prev_send + sent;
-            continue;
         }

         if (send >= limit || in == NULL) {

should be enough to resolve this additional 0-sized writev(). Untested though, as I don't have a test playground on hand where 2G sendfile() can be reached. It would be great if you could test it. Full patch:

# HG changeset patch
# User Maxim Dounin
# Date 1635361800 -10800
#      Wed Oct 27 22:10:00 2021 +0300
# Node ID 859447c7b7076b676a3421597514b324b708658d
# Parent 2a7155733855d1c2ea1c1ded8d1a4ba654b533cb
Fixed sendfile() limit handling on Linux.

On Linux starting with 2.6.16, sendfile() silently limits all operations to MAX_RW_COUNT, defined as (INT_MAX & PAGE_MASK). 
This incorrectly triggered the interrupt check, and resulted in 0-sized writev() on the next loop iteration. Fix is to make sure the limit is always checked, so we will return from the loop if the limit is already reached even if number of bytes sent is not exactly equal to the number of bytes we've tried to send. diff --git a/src/os/unix/ngx_linux_sendfile_chain.c b/src/os/unix/ngx_linux_sendfile_chain.c --- a/src/os/unix/ngx_linux_sendfile_chain.c +++ b/src/os/unix/ngx_linux_sendfile_chain.c @@ -216,7 +216,6 @@ ngx_linux_sendfile_chain(ngx_connection_ */ send = prev_send + sent; - continue; } if (send >= limit || in == NULL) { -- Maxim Dounin http://mdounin.ru/ From pluknet at nginx.com Wed Oct 27 19:54:56 2021 From: pluknet at nginx.com (Sergey Kandaurov) Date: Wed, 27 Oct 2021 22:54:56 +0300 Subject: [PATCH 3 of 4] Upstream: sendfile_max_chunk support In-Reply-To: References: Message-ID: <5A1436EB-41AA-4BF2-8FFE-92352984D6E9@nginx.com> > On 11 Oct 2021, at 21:58, Maxim Dounin wrote: > > # HG changeset patch > # User Maxim Dounin > # Date 1633978615 -10800 > # Mon Oct 11 21:56:55 2021 +0300 > # Node ID c7ef6ce9455b01ee1fdcfd7288c4ac5b3ef0de41 > # Parent 489323e194e4c3b1a7937c51bd4e1671c70f52f8 > Upstream: sendfile_max_chunk support. > > Previously, connections to upstream servers used sendfile() if it was > enabled, but never honored sendfile_max_chunk. This might result > in worker monopolization for a long time if large request bodies > are allowed. 
> > diff --git a/src/core/ngx_output_chain.c b/src/core/ngx_output_chain.c > --- a/src/core/ngx_output_chain.c > +++ b/src/core/ngx_output_chain.c > @@ -803,6 +803,10 @@ ngx_chain_writer(void *data, ngx_chain_t > return NGX_ERROR; > } > > + if (chain && c->write->ready) { > + ngx_post_event(c->write, &ngx_posted_next_events); > + } > + > for (cl = ctx->out; cl && cl != chain; /* void */) { > ln = cl; > cl = cl->next; > diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c > --- a/src/http/ngx_http_upstream.c > +++ b/src/http/ngx_http_upstream.c > @@ -1511,8 +1511,9 @@ ngx_http_upstream_check_broken_connectio > static void > ngx_http_upstream_connect(ngx_http_request_t *r, ngx_http_upstream_t *u) > { > - ngx_int_t rc; > - ngx_connection_t *c; > + ngx_int_t rc; > + ngx_connection_t *c; > + ngx_http_core_loc_conf_t *clcf; > > r->connection->log->action = "connecting to upstream"; > > @@ -1599,10 +1600,12 @@ ngx_http_upstream_connect(ngx_http_reque > > /* init or reinit the ngx_output_chain() and ngx_chain_writer() contexts */ > > + clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module); > + > u->writer.out = NULL; > u->writer.last = &u->writer.out; > u->writer.connection = c; > - u->writer.limit = 0; > + u->writer.limit = clcf->sendfile_max_chunk; > > if (u->request_sent) { > if (ngx_http_upstream_reinit(r, u) != NGX_OK) { > Just to reiterate, while this has most positive effect with Linux sendfile() on fast networks, it doesn't seem to help with typical proxying of request body buffered on disk, with sendfile disabled. In this case, request body is read in ngx_output_chain_copy_buf() iteratively in small client_body_buffer_size parts usually below the sendfile_max_chunk limit. (Yet, this can be limited with client_body_buffer_size configured above sendfile_max_chunk.) Also, it would barely effect sending body with chunked encoding, such as in HTTP/1.1, FastCGI, and with limited frame size in gRPC. 
Anyway, the change looks quite useful and natural to have. -- Sergey Kandaurov From pluknet at nginx.com Wed Oct 27 19:55:54 2021 From: pluknet at nginx.com (Sergey Kandaurov) Date: Wed, 27 Oct 2021 22:55:54 +0300 Subject: [PATCH 4 of 4] Changed default value of sendfile_max_chunk to 2m In-Reply-To: References: Message-ID: <258F4AB3-1E40-48F4-94AA-878272ECC1C3@nginx.com> > On 11 Oct 2021, at 21:58, Maxim Dounin wrote: > > # HG changeset patch > # User Maxim Dounin > # Date 1633978667 -10800 > # Mon Oct 11 21:57:47 2021 +0300 > # Node ID a6426f166fa41d23040e5b3aefb2d6340c10a53c > # Parent c7ef6ce9455b01ee1fdcfd7288c4ac5b3ef0de41 > Changed default value of sendfile_max_chunk to 2m. > > The "sendfile_max_chunk" directive is important to prevent worker > monopolization by fast connections. The 2m value implies maximum 200ms > delay with 100 Mbps links, 20ms delay with 1 Gbps links, and 2ms on > 10 Gbps links. It is also seems to be a good value for disks. > > diff --git a/src/http/ngx_http_core_module.c b/src/http/ngx_http_core_module.c > --- a/src/http/ngx_http_core_module.c > +++ b/src/http/ngx_http_core_module.c > @@ -3720,7 +3720,7 @@ ngx_http_core_merge_loc_conf(ngx_conf_t > ngx_conf_merge_value(conf->internal, prev->internal, 0); > ngx_conf_merge_value(conf->sendfile, prev->sendfile, 0); > ngx_conf_merge_size_value(conf->sendfile_max_chunk, > - prev->sendfile_max_chunk, 0); > + prev->sendfile_max_chunk, 2 * 1024 * 1024); > ngx_conf_merge_size_value(conf->subrequest_output_buffer_size, > prev->subrequest_output_buffer_size, > (size_t) ngx_pagesize); > Looks good. -- Sergey Kandaurov From mdounin at mdounin.ru Wed Oct 27 20:53:09 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 27 Oct 2021 23:53:09 +0300 Subject: [PATCH 3 of 4] Upstream: sendfile_max_chunk support In-Reply-To: <5A1436EB-41AA-4BF2-8FFE-92352984D6E9@nginx.com> References: <5A1436EB-41AA-4BF2-8FFE-92352984D6E9@nginx.com> Message-ID: Hello! 
On Wed, Oct 27, 2021 at 10:54:56PM +0300, Sergey Kandaurov wrote: > > > On 11 Oct 2021, at 21:58, Maxim Dounin wrote: > > > > # HG changeset patch > > # User Maxim Dounin > > # Date 1633978615 -10800 > > # Mon Oct 11 21:56:55 2021 +0300 > > # Node ID c7ef6ce9455b01ee1fdcfd7288c4ac5b3ef0de41 > > # Parent 489323e194e4c3b1a7937c51bd4e1671c70f52f8 > > Upstream: sendfile_max_chunk support. > > > > Previously, connections to upstream servers used sendfile() if it was > > enabled, but never honored sendfile_max_chunk. This might result > > in worker monopolization for a long time if large request bodies > > are allowed. > > > > diff --git a/src/core/ngx_output_chain.c b/src/core/ngx_output_chain.c > > --- a/src/core/ngx_output_chain.c > > +++ b/src/core/ngx_output_chain.c > > @@ -803,6 +803,10 @@ ngx_chain_writer(void *data, ngx_chain_t > > return NGX_ERROR; > > } > > > > + if (chain && c->write->ready) { > > + ngx_post_event(c->write, &ngx_posted_next_events); > > + } > > + > > for (cl = ctx->out; cl && cl != chain; /* void */) { > > ln = cl; > > cl = cl->next; > > diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c > > --- a/src/http/ngx_http_upstream.c > > +++ b/src/http/ngx_http_upstream.c > > @@ -1511,8 +1511,9 @@ ngx_http_upstream_check_broken_connectio > > static void > > ngx_http_upstream_connect(ngx_http_request_t *r, ngx_http_upstream_t *u) > > { > > - ngx_int_t rc; > > - ngx_connection_t *c; > > + ngx_int_t rc; > > + ngx_connection_t *c; > > + ngx_http_core_loc_conf_t *clcf; > > > > r->connection->log->action = "connecting to upstream"; > > > > @@ -1599,10 +1600,12 @@ ngx_http_upstream_connect(ngx_http_reque > > > > /* init or reinit the ngx_output_chain() and ngx_chain_writer() contexts */ > > > > + clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module); > > + > > u->writer.out = NULL; > > u->writer.last = &u->writer.out; > > u->writer.connection = c; > > - u->writer.limit = 0; > > + u->writer.limit = clcf->sendfile_max_chunk; > > > 
> if (u->request_sent) { > > if (ngx_http_upstream_reinit(r, u) != NGX_OK) { > > > > Just to reiterate, while this has most positive effect with Linux > sendfile() on fast networks, it doesn't seem to help with typical > proxying of request body buffered on disk, with sendfile disabled. > In this case, request body is read in ngx_output_chain_copy_buf() > iteratively in small client_body_buffer_size parts usually below > the sendfile_max_chunk limit. (Yet, this can be limited with > client_body_buffer_size configured above sendfile_max_chunk.) Similarly to the identical sendfile_max_chunk code in the write filter, this change is not expected to prevent worker monopolization in all cases, notably if sendfile is not used and the buffer configured is smaller than sendfile_max_chunk. It can, however, prevent worker monopolization with sendfile. Also, it will naturally prevent hangs on c->send_chain() internal limits, such as sendfile()'s 2G limit on Linux (while unlikely, it can happen if very large request bodies are allowed), since these internal limits are now handled identically to external ones. > Also, it would barely effect sending body with chunked encoding, > such as in HTTP/1.1, FastCGI, and with limited frame size in gRPC. As long as long enough body is readily available on disk, it actually will prevent sending more than sendfile_max_chunk in total. With request buffering switched off sending is anyway limited with c->read->available on client's connection. > Anyway, the change looks quite useful and natural to have. Sure. 
-- Maxim Dounin http://mdounin.ru/ From pluknet at nginx.com Wed Oct 27 21:50:25 2021 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 28 Oct 2021 00:50:25 +0300 Subject: [PATCH 2 of 4] Simplified sendfile_max_chunk handling In-Reply-To: References: <489323e194e4c3b1a793.1633978701@vm-bsd.mdounin.ru> <918E7FCB-2172-4DEC-B19A-A5B71BF97B14@nginx.com> Message-ID: <50C1B54C-2390-4611-9238-905066E5EC07@nginx.com> > On 27 Oct 2021, at 22:19, Maxim Dounin wrote: > > Hello! > > On Wed, Oct 27, 2021 at 05:19:19PM +0300, Sergey Kandaurov wrote: > >>> On 11 Oct 2021, at 21:58, Maxim Dounin wrote: >>> >>> # HG changeset patch >>> # User Maxim Dounin >>> # Date 1633978587 -10800 >>> # Mon Oct 11 21:56:27 2021 +0300 >>> # Node ID 489323e194e4c3b1a7937c51bd4e1671c70f52f8 >>> # Parent d175cd09ac9d2bab7f7226eac3bfce196a296cc0 >>> Simplified sendfile_max_chunk handling. >>> >>> Previously, it was checked that sendfile_max_chunk was enabled and >>> almost whole sendfile_max_chunk was sent (see e67ef50c3176), to avoid >>> delaying connections where sendfile_max_chunk wasn't reached (for example, >>> when sending responses smaller than sendfile_max_chunk). Now we instead >>> check if there are unsent data, and the connection is still ready for writing. >>> Additionally we also check c->write->delayed to ignore connections already >>> delayed by limit_rate. >>> >>> This approach is believed to be more robust, and correctly handles >>> not only sendfile_max_chunk, but also internal limits of c->send_chain(), >>> such as sendfile() maximum supported length (ticket #1870). 
>>> >>> diff --git a/src/http/ngx_http_write_filter_module.c b/src/http/ngx_http_write_filter_module.c >>> --- a/src/http/ngx_http_write_filter_module.c >>> +++ b/src/http/ngx_http_write_filter_module.c >>> @@ -321,16 +321,12 @@ ngx_http_write_filter(ngx_http_request_t >>> delay = (ngx_msec_t) ((nsent - sent) * 1000 / r->limit_rate); >>> >>> if (delay > 0) { >>> - limit = 0; >>> c->write->delayed = 1; >>> ngx_add_timer(c->write, delay); >>> } >>> } >>> >>> - if (limit >>> - && c->write->ready >>> - && c->sent - sent >= limit - (off_t) (2 * ngx_pagesize)) >>> - { >>> + if (chain && c->write->ready && !c->write->delayed) { >>> ngx_post_event(c->write, &ngx_posted_next_events); >>> } >>> >> >> Looks good. >> >> Not strictly related to this change, so FYI. I noticed a stray writev() >> after Linux sendfile(), when it writes more than its internal limits. >> >> 2021/10/27 12:44:34 [debug] 1462058#0: *1 write old buf t:0 f:1 0000000000000000, >> pos 0000000000000000, size: 0 file: 416072437, size: 3878894859 >> 2021/10/27 12:44:34 [debug] 1462058#0: *1 http write filter: l:1 f:0 s:3878894859 >> 2021/10/27 12:44:34 [debug] 1462058#0: *1 http write filter limit 0 >> 2021/10/27 12:44:34 [debug] 1462058#0: *1 sendfile: @416072437 2147482891 >> 2021/10/27 12:44:34 [debug] 1462058#0: *1 sendfile: 2147479552 of 2147482891 @416072437 >> 2021/10/27 12:44:34 [debug] 1462058#0: *1 writev: 0 of 0 >> 2021/10/27 12:44:34 [debug] 1462058#0: *1 http write filter 0000561528695820 >> 2021/10/27 12:44:34 [debug] 1462058#0: *1 post event 00005615289C2310 >> >> Here sendfile() partially sent 2147479552, which is above its internal >> limit NGX_SENDFILE_MAXSIZE - ngx_pagesize. On the second iteration, >> due to this, it falls back to writev() with zero-size headers. >> Then, with the patch applied, it posts the next write event, as designed >> (previously, it would seemingly stuck instead, such as in ticket #1870). > > Interesting. 
> > Overall it looks harmless, but we may want to look further why > sendfile() only sent 2147479552 instead of 2147482891. It seems > that 2147479552 is in pages (524287 x 4096) despite the fact that > the initial offset is not page-aligned. We expect sendfile() to > send page-aligned ranges instead (416072437 + 2147482891 == 625868 x 4096). > > Looking into Linux sendfile() manpage suggests that 2,147,479,552 > is a documented limit: > > sendfile() will transfer at most 0x7ffff000 (2,147,479,552) > bytes, returning the number of bytes actually transferred. > (This is true on both 32-bit and 64-bit systems.) > > This seems to be mostly arbitrary limitation appeared in Linux > kernel 2.6.16 > (https://github.com/torvalds/linux/commit/e28cc71572da38a5a12c1cfe4d7032017adccf69). > > Interesting enough, the actual limitation is not 0x7ffff000 as > documented, but instead MAX_RW_COUNT, which is defined as > (INT_MAX & PAGE_MASK). This suggests that the behaviour will be > actually different on platforms with larger pages. > > Something as simple as: > > diff --git a/src/os/unix/ngx_linux_sendfile_chain.c b/src/os/unix/ngx_linux_sendfile_chain.c > --- a/src/os/unix/ngx_linux_sendfile_chain.c > +++ b/src/os/unix/ngx_linux_sendfile_chain.c > @@ -216,7 +216,6 @@ ngx_linux_sendfile_chain(ngx_connection_ > */ > > send = prev_send + sent; > - continue; > } > > if (send >= limit || in == NULL) { > > should be enough to resolve this additional 0-sized writev(). > Untested though, as I don't have a test playground on hand where > 2G sendfile() can be reached. It would be great if you'll test > it. 
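The page arithmetic quoted above is easy to sanity-check; the following is just a verification sketch, assuming the 4 KiB page size implied by the debug logs in this thread:

```python
# Sanity check of the Linux sendfile() limit arithmetic discussed above.

INT_MAX = 2**31 - 1
PAGE_SIZE = 4096
PAGE_MASK = ~(PAGE_SIZE - 1)

# MAX_RW_COUNT = INT_MAX & PAGE_MASK, the silent per-call limit applied
# by Linux 2.6.16+ (documented as 0x7ffff000 in the sendfile(2) manpage).
max_rw_count = INT_MAX & PAGE_MASK
assert max_rw_count == 0x7ffff000 == 2147479552

# The partially sent amount is a whole number of pages (524287 x 4096) ...
assert 2147479552 == 524287 * PAGE_SIZE

# ... even though only the end of the requested range is page-aligned:
# offset + count from the debug log is 625868 x 4096.
assert 416072437 + 2147482891 == 625868 * PAGE_SIZE
```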
> That seems to help: 2021/10/27 20:36:31 [debug] 1498568#0: *1 write old buf t:1 f:0 000055D8D328FDB0, pos 000055D8D328FDB0, size: 252 file: 0, size: 0 2021/10/27 20:36:31 [debug] 1498568#0: *1 write new buf t:0 f:1 0000000000000000, pos 0000000000000000, size: 0 file: 0, size: 4294967296 2021/10/27 20:36:31 [debug] 1498568#0: *1 http write filter: l:1 f:0 s:4294967548 2021/10/27 20:36:31 [debug] 1498568#0: *1 http write filter limit 0 2021/10/27 20:36:31 [debug] 1498568#0: *1 writev: 252 of 252 [.. next ngx_linux_sendfile_chain() loop iteration ..] 2021/10/27 20:36:31 [debug] 1498568#0: *1 sendfile: @0 2147479552 2021/10/27 20:36:31 [debug] 1498568#0: *1 sendfile: 2147479552 of 2147479552 @0 [.. return from ngx_linux_sendfile_chain() on exceeded limit ..] 2021/10/27 20:36:31 [debug] 1498568#0: *1 http write filter 000055D8D329C8D0 2021/10/27 20:36:31 [debug] 1498568#0: *1 post event 000055D8D35CC660 > Full patch: > > # HG changeset patch > # User Maxim Dounin > # Date 1635361800 -10800 > # Wed Oct 27 22:10:00 2021 +0300 > # Node ID 859447c7b7076b676a3421597514b324b708658d > # Parent 2a7155733855d1c2ea1c1ded8d1a4ba654b533cb > Fixed sendfile() limit handling on Linux. > > On Linux starting with 2.6.16, sendfile() silently limits all operations > to MAX_RW_COUNT, defined as (INT_MAX & PAGE_MASK). This incorrectly > triggered the interrupt check, and resulted in 0-sized writev() on the > next loop iteration. > > Fix is to make sure the limit is always checked, so we will return from > the loop if the limit is already reached even if number of bytes sent is > not exactly equal to the number of bytes we've tried to send. 
> > diff --git a/src/os/unix/ngx_linux_sendfile_chain.c b/src/os/unix/ngx_linux_sendfile_chain.c > --- a/src/os/unix/ngx_linux_sendfile_chain.c > +++ b/src/os/unix/ngx_linux_sendfile_chain.c > @@ -216,7 +216,6 @@ ngx_linux_sendfile_chain(ngx_connection_ > */ > > send = prev_send + sent; > - continue; > } > > if (send >= limit || in == NULL) { > The change looks good to me. Btw, this should also stop exceeding the limit after several sendfile() calls each interrupted, on Linux 4.3+ (which is rather theoretical). It probably deserves updating comments in this file about the count parameter constraints. -- Sergey Kandaurov From mdounin at mdounin.ru Wed Oct 27 22:56:25 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 28 Oct 2021 01:56:25 +0300 Subject: [PATCH 2 of 4] Simplified sendfile_max_chunk handling In-Reply-To: <50C1B54C-2390-4611-9238-905066E5EC07@nginx.com> References: <489323e194e4c3b1a793.1633978701@vm-bsd.mdounin.ru> <918E7FCB-2172-4DEC-B19A-A5B71BF97B14@nginx.com> <50C1B54C-2390-4611-9238-905066E5EC07@nginx.com> Message-ID: Hello! On Thu, Oct 28, 2021 at 12:50:25AM +0300, Sergey Kandaurov wrote: > > On 27 Oct 2021, at 22:19, Maxim Dounin wrote: > > > > On Wed, Oct 27, 2021 at 05:19:19PM +0300, Sergey Kandaurov wrote: > > > >>> On 11 Oct 2021, at 21:58, Maxim Dounin wrote: > >>> > >>> # HG changeset patch > >>> # User Maxim Dounin > >>> # Date 1633978587 -10800 > >>> # Mon Oct 11 21:56:27 2021 +0300 > >>> # Node ID 489323e194e4c3b1a7937c51bd4e1671c70f52f8 > >>> # Parent d175cd09ac9d2bab7f7226eac3bfce196a296cc0 > >>> Simplified sendfile_max_chunk handling. > >>> > >>> Previously, it was checked that sendfile_max_chunk was enabled and > >>> almost whole sendfile_max_chunk was sent (see e67ef50c3176), to avoid > >>> delaying connections where sendfile_max_chunk wasn't reached (for example, > >>> when sending responses smaller than sendfile_max_chunk). Now we instead > >>> check if there are unsent data, and the connection is still ready for writing. 
> >>> Additionally we also check c->write->delayed to ignore connections already > >>> delayed by limit_rate. > >>> > >>> This approach is believed to be more robust, and correctly handles > >>> not only sendfile_max_chunk, but also internal limits of c->send_chain(), > >>> such as sendfile() maximum supported length (ticket #1870). > >>> > >>> diff --git a/src/http/ngx_http_write_filter_module.c b/src/http/ngx_http_write_filter_module.c > >>> --- a/src/http/ngx_http_write_filter_module.c > >>> +++ b/src/http/ngx_http_write_filter_module.c > >>> @@ -321,16 +321,12 @@ ngx_http_write_filter(ngx_http_request_t > >>> delay = (ngx_msec_t) ((nsent - sent) * 1000 / r->limit_rate); > >>> > >>> if (delay > 0) { > >>> - limit = 0; > >>> c->write->delayed = 1; > >>> ngx_add_timer(c->write, delay); > >>> } > >>> } > >>> > >>> - if (limit > >>> - && c->write->ready > >>> - && c->sent - sent >= limit - (off_t) (2 * ngx_pagesize)) > >>> - { > >>> + if (chain && c->write->ready && !c->write->delayed) { > >>> ngx_post_event(c->write, &ngx_posted_next_events); > >>> } > >>> > >> > >> Looks good. > >> > >> Not strictly related to this change, so FYI. I noticed a stray writev() > >> after Linux sendfile(), when it writes more than its internal limits. 
> >> > >> 2021/10/27 12:44:34 [debug] 1462058#0: *1 write old buf t:0 f:1 0000000000000000, > >> pos 0000000000000000, size: 0 file: 416072437, size: 3878894859 > >> 2021/10/27 12:44:34 [debug] 1462058#0: *1 http write filter: l:1 f:0 s:3878894859 > >> 2021/10/27 12:44:34 [debug] 1462058#0: *1 http write filter limit 0 > >> 2021/10/27 12:44:34 [debug] 1462058#0: *1 sendfile: @416072437 2147482891 > >> 2021/10/27 12:44:34 [debug] 1462058#0: *1 sendfile: 2147479552 of 2147482891 @416072437 > >> 2021/10/27 12:44:34 [debug] 1462058#0: *1 writev: 0 of 0 > >> 2021/10/27 12:44:34 [debug] 1462058#0: *1 http write filter 0000561528695820 > >> 2021/10/27 12:44:34 [debug] 1462058#0: *1 post event 00005615289C2310 > >> > >> Here sendfile() partially sent 2147479552, which is above its internal > >> limit NGX_SENDFILE_MAXSIZE - ngx_pagesize. On the second iteration, > >> due to this, it falls back to writev() with zero-size headers. > >> Then, with the patch applied, it posts the next write event, as designed > >> (previously, it would seemingly stuck instead, such as in ticket #1870). > > > > Interesting. > > > > Overall it looks harmless, but we may want to look further why > > sendfile() only sent 2147479552 instead of 2147482891. It seems > > that 2147479552 is in pages (524287 x 4096) despite the fact that > > the initial offset is not page-aligned. We expect sendfile() to > > send page-aligned ranges instead (416072437 + 2147482891 == 625868 x 4096). > > > > Looking into Linux sendfile() manpage suggests that 2,147,479,552 > > is a documented limit: > > > > sendfile() will transfer at most 0x7ffff000 (2,147,479,552) > > bytes, returning the number of bytes actually transferred. > > (This is true on both 32-bit and 64-bit systems.) > > > > This seems to be mostly arbitrary limitation appeared in Linux > > kernel 2.6.16 > > (https://github.com/torvalds/linux/commit/e28cc71572da38a5a12c1cfe4d7032017adccf69). 
> > > > Interesting enough, the actual limitation is not 0x7ffff000 as > > documented, but instead MAX_RW_COUNT, which is defined as > > (INT_MAX & PAGE_MASK). This suggests that the behaviour will be > > actually different on platforms with larger pages. > > > > Something as simple as: > > > > diff --git a/src/os/unix/ngx_linux_sendfile_chain.c b/src/os/unix/ngx_linux_sendfile_chain.c > > --- a/src/os/unix/ngx_linux_sendfile_chain.c > > +++ b/src/os/unix/ngx_linux_sendfile_chain.c > > @@ -216,7 +216,6 @@ ngx_linux_sendfile_chain(ngx_connection_ > > */ > > > > send = prev_send + sent; > > - continue; > > } > > > > if (send >= limit || in == NULL) { > > > > should be enough to resolve this additional 0-sized writev(). > > Untested though, as I don't have a test playground on hand where > > 2G sendfile() can be reached. It would be great if you'll test > > it. > > > > That seems to help: > > 2021/10/27 20:36:31 [debug] 1498568#0: *1 write old buf t:1 f:0 000055D8D328FDB0, > pos 000055D8D328FDB0, size: 252 file: 0, size: 0 > 2021/10/27 20:36:31 [debug] 1498568#0: *1 write new buf t:0 f:1 0000000000000000, > pos 0000000000000000, size: 0 file: 0, size: 4294967296 > 2021/10/27 20:36:31 [debug] 1498568#0: *1 http write filter: l:1 f:0 s:4294967548 > 2021/10/27 20:36:31 [debug] 1498568#0: *1 http write filter limit 0 > 2021/10/27 20:36:31 [debug] 1498568#0: *1 writev: 252 of 252 > [.. next ngx_linux_sendfile_chain() loop iteration ..] > 2021/10/27 20:36:31 [debug] 1498568#0: *1 sendfile: @0 2147479552 > 2021/10/27 20:36:31 [debug] 1498568#0: *1 sendfile: 2147479552 of 2147479552 @0 > [.. return from ngx_linux_sendfile_chain() on exceeded limit ..] > 2021/10/27 20:36:31 [debug] 1498568#0: *1 http write filter 000055D8D329C8D0 > 2021/10/27 20:36:31 [debug] 1498568#0: *1 post event 000055D8D35CC660 Thanks for testing. 
> > Full patch: > > > > # HG changeset patch > > # User Maxim Dounin > > # Date 1635361800 -10800 > > # Wed Oct 27 22:10:00 2021 +0300 > > # Node ID 859447c7b7076b676a3421597514b324b708658d > > # Parent 2a7155733855d1c2ea1c1ded8d1a4ba654b533cb > > Fixed sendfile() limit handling on Linux. > > > > On Linux starting with 2.6.16, sendfile() silently limits all operations > > to MAX_RW_COUNT, defined as (INT_MAX & PAGE_MASK). This incorrectly > > triggered the interrupt check, and resulted in 0-sized writev() on the > > next loop iteration. > > > > Fix is to make sure the limit is always checked, so we will return from > > the loop if the limit is already reached even if number of bytes sent is > > not exactly equal to the number of bytes we've tried to send. > > > > diff --git a/src/os/unix/ngx_linux_sendfile_chain.c b/src/os/unix/ngx_linux_sendfile_chain.c > > --- a/src/os/unix/ngx_linux_sendfile_chain.c > > +++ b/src/os/unix/ngx_linux_sendfile_chain.c > > @@ -216,7 +216,6 @@ ngx_linux_sendfile_chain(ngx_connection_ > > */ > > > > send = prev_send + sent; > > - continue; > > } > > > > if (send >= limit || in == NULL) { > > > > The change looks good to me. > > Btw, this should also stop exceeding the limit after several sendfile() > calls each interrupted, on Linux 4.3+ (which is rather theoretical). The limiting takes "send" into account, so I don't see how the limit can be exceeded. > It probably deserves updating comments in this file about the count > parameter constraints. The exact behaviour does not seem to be relevant to the resulting code (in particular, the patch does not change the NGX_SENDFILE_MAXSIZE limit). On the other hand, I agree that it might make sense to update the comment anyway, in particular, to make it clear that the 2G limit is still relevant to current kernels. 
I've added the following to the patch: @@ -38,6 +38,9 @@ static void ngx_linux_sendfile_thread_ha * On Linux up to 2.6.16 sendfile() does not allow to pass the count parameter * more than 2G-1 bytes even on 64-bit platforms: it returns EINVAL, * so we limit it to 2G-1 bytes. + * + * On Linux 2.6.16 and later, sendfile() silently limits the count parameter + * to 2G minus the page size, even on 64-bit platforms. */ #define NGX_SENDFILE_MAXSIZE 2147483647L Full patch: # HG changeset patch # User Maxim Dounin # Date 1635374871 -10800 # Thu Oct 28 01:47:51 2021 +0300 # Node ID 3c5679dfe561e3087a96acabe4cf73ef232acabb # Parent 2a7155733855d1c2ea1c1ded8d1a4ba654b533cb Fixed sendfile() limit handling on Linux. On Linux starting with 2.6.16, sendfile() silently limits all operations to MAX_RW_COUNT, defined as (INT_MAX & PAGE_MASK). This incorrectly triggered the interrupt check, and resulted in 0-sized writev() on the next loop iteration. Fix is to make sure the limit is always checked, so we will return from the loop if the limit is already reached even if number of bytes sent is not exactly equal to the number of bytes we've tried to send. diff --git a/src/os/unix/ngx_linux_sendfile_chain.c b/src/os/unix/ngx_linux_sendfile_chain.c --- a/src/os/unix/ngx_linux_sendfile_chain.c +++ b/src/os/unix/ngx_linux_sendfile_chain.c @@ -38,6 +38,9 @@ static void ngx_linux_sendfile_thread_ha * On Linux up to 2.6.16 sendfile() does not allow to pass the count parameter * more than 2G-1 bytes even on 64-bit platforms: it returns EINVAL, * so we limit it to 2G-1 bytes. + * + * On Linux 2.6.16 and later, sendfile() silently limits the count parameter + * to 2G minus the page size, even on 64-bit platforms. 
*/ #define NGX_SENDFILE_MAXSIZE 2147483647L @@ -216,7 +219,6 @@ ngx_linux_sendfile_chain(ngx_connection_ */ send = prev_send + sent; - continue; } if (send >= limit || in == NULL) { -- Maxim Dounin http://mdounin.ru/ From pluknet at nginx.com Thu Oct 28 09:50:05 2021 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 28 Oct 2021 12:50:05 +0300 Subject: [PATCH 2 of 4] Simplified sendfile_max_chunk handling In-Reply-To: References: <489323e194e4c3b1a793.1633978701@vm-bsd.mdounin.ru> <918E7FCB-2172-4DEC-B19A-A5B71BF97B14@nginx.com> <50C1B54C-2390-4611-9238-905066E5EC07@nginx.com> Message-ID: <751461C4-DF57-407F-BA3B-B9F03AC28D33@nginx.com> > On 28 Oct 2021, at 01:56, Maxim Dounin wrote: > > Hello! > > On Thu, Oct 28, 2021 at 12:50:25AM +0300, Sergey Kandaurov wrote: > >>> On 27 Oct 2021, at 22:19, Maxim Dounin wrote: >>> >>> On Wed, Oct 27, 2021 at 05:19:19PM +0300, Sergey Kandaurov wrote: >>> >>>>> On 11 Oct 2021, at 21:58, Maxim Dounin wrote: >>>>> >>>>> # HG changeset patch >>>>> # User Maxim Dounin >>>>> # Date 1633978587 -10800 >>>>> # Mon Oct 11 21:56:27 2021 +0300 >>>>> # Node ID 489323e194e4c3b1a7937c51bd4e1671c70f52f8 >>>>> # Parent d175cd09ac9d2bab7f7226eac3bfce196a296cc0 >>>>> Simplified sendfile_max_chunk handling. >>>>> >>>>> Previously, it was checked that sendfile_max_chunk was enabled and >>>>> almost whole sendfile_max_chunk was sent (see e67ef50c3176), to avoid >>>>> delaying connections where sendfile_max_chunk wasn't reached (for example, >>>>> when sending responses smaller than sendfile_max_chunk). Now we instead >>>>> check if there are unsent data, and the connection is still ready for writing. >>>>> Additionally we also check c->write->delayed to ignore connections already >>>>> delayed by limit_rate. >>>>> >>>>> This approach is believed to be more robust, and correctly handles >>>>> not only sendfile_max_chunk, but also internal limits of c->send_chain(), >>>>> such as sendfile() maximum supported length (ticket #1870). 
>>>>> >>>>> diff --git a/src/http/ngx_http_write_filter_module.c b/src/http/ngx_http_write_filter_module.c >>>>> --- a/src/http/ngx_http_write_filter_module.c >>>>> +++ b/src/http/ngx_http_write_filter_module.c >>>>> @@ -321,16 +321,12 @@ ngx_http_write_filter(ngx_http_request_t >>>>> delay = (ngx_msec_t) ((nsent - sent) * 1000 / r->limit_rate); >>>>> >>>>> if (delay > 0) { >>>>> - limit = 0; >>>>> c->write->delayed = 1; >>>>> ngx_add_timer(c->write, delay); >>>>> } >>>>> } >>>>> >>>>> - if (limit >>>>> - && c->write->ready >>>>> - && c->sent - sent >= limit - (off_t) (2 * ngx_pagesize)) >>>>> - { >>>>> + if (chain && c->write->ready && !c->write->delayed) { >>>>> ngx_post_event(c->write, &ngx_posted_next_events); >>>>> } >>>>> >>>> >>>> Looks good. >>>> >>>> Not strictly related to this change, so FYI. I noticed a stray writev() >>>> after Linux sendfile(), when it writes more than its internal limits. >>>> >>>> 2021/10/27 12:44:34 [debug] 1462058#0: *1 write old buf t:0 f:1 0000000000000000, >>>> pos 0000000000000000, size: 0 file: 416072437, size: 3878894859 >>>> 2021/10/27 12:44:34 [debug] 1462058#0: *1 http write filter: l:1 f:0 s:3878894859 >>>> 2021/10/27 12:44:34 [debug] 1462058#0: *1 http write filter limit 0 >>>> 2021/10/27 12:44:34 [debug] 1462058#0: *1 sendfile: @416072437 2147482891 >>>> 2021/10/27 12:44:34 [debug] 1462058#0: *1 sendfile: 2147479552 of 2147482891 @416072437 >>>> 2021/10/27 12:44:34 [debug] 1462058#0: *1 writev: 0 of 0 >>>> 2021/10/27 12:44:34 [debug] 1462058#0: *1 http write filter 0000561528695820 >>>> 2021/10/27 12:44:34 [debug] 1462058#0: *1 post event 00005615289C2310 >>>> >>>> Here sendfile() partially sent 2147479552, which is above its internal >>>> limit NGX_SENDFILE_MAXSIZE - ngx_pagesize. On the second iteration, >>>> due to this, it falls back to writev() with zero-size headers. 
>>>> Then, with the patch applied, it posts the next write event, as designed >>>> (previously, it would seemingly stuck instead, such as in ticket #1870). >>> >>> Interesting. >>> >>> Overall it looks harmless, but we may want to look further why >>> sendfile() only sent 2147479552 instead of 2147482891. It seems >>> that 2147479552 is in pages (524287 x 4096) despite the fact that >>> the initial offset is not page-aligned. We expect sendfile() to >>> send page-aligned ranges instead (416072437 + 2147482891 == 625868 x 4096). >>> >>> Looking into Linux sendfile() manpage suggests that 2,147,479,552 >>> is a documented limit: >>> >>> sendfile() will transfer at most 0x7ffff000 (2,147,479,552) >>> bytes, returning the number of bytes actually transferred. >>> (This is true on both 32-bit and 64-bit systems.) >>> >>> This seems to be mostly arbitrary limitation appeared in Linux >>> kernel 2.6.16 >>> (https://github.com/torvalds/linux/commit/e28cc71572da38a5a12c1cfe4d7032017adccf69). >>> >>> Interesting enough, the actual limitation is not 0x7ffff000 as >>> documented, but instead MAX_RW_COUNT, which is defined as >>> (INT_MAX & PAGE_MASK). This suggests that the behaviour will be >>> actually different on platforms with larger pages. >>> >>> Something as simple as: >>> >>> diff --git a/src/os/unix/ngx_linux_sendfile_chain.c b/src/os/unix/ngx_linux_sendfile_chain.c >>> --- a/src/os/unix/ngx_linux_sendfile_chain.c >>> +++ b/src/os/unix/ngx_linux_sendfile_chain.c >>> @@ -216,7 +216,6 @@ ngx_linux_sendfile_chain(ngx_connection_ >>> */ >>> >>> send = prev_send + sent; >>> - continue; >>> } >>> >>> if (send >= limit || in == NULL) { >>> >>> should be enough to resolve this additional 0-sized writev(). >>> Untested though, as I don't have a test playground on hand where >>> 2G sendfile() can be reached. It would be great if you'll test >>> it. 
>>> >> >> That seems to help: >> >> 2021/10/27 20:36:31 [debug] 1498568#0: *1 write old buf t:1 f:0 000055D8D328FDB0, >> pos 000055D8D328FDB0, size: 252 file: 0, size: 0 >> 2021/10/27 20:36:31 [debug] 1498568#0: *1 write new buf t:0 f:1 0000000000000000, >> pos 0000000000000000, size: 0 file: 0, size: 4294967296 >> 2021/10/27 20:36:31 [debug] 1498568#0: *1 http write filter: l:1 f:0 s:4294967548 >> 2021/10/27 20:36:31 [debug] 1498568#0: *1 http write filter limit 0 >> 2021/10/27 20:36:31 [debug] 1498568#0: *1 writev: 252 of 252 >> [.. next ngx_linux_sendfile_chain() loop iteration ..] >> 2021/10/27 20:36:31 [debug] 1498568#0: *1 sendfile: @0 2147479552 >> 2021/10/27 20:36:31 [debug] 1498568#0: *1 sendfile: 2147479552 of 2147479552 @0 >> [.. return from ngx_linux_sendfile_chain() on exceeded limit ..] >> 2021/10/27 20:36:31 [debug] 1498568#0: *1 http write filter 000055D8D329C8D0 >> 2021/10/27 20:36:31 [debug] 1498568#0: *1 post event 000055D8D35CC660 > > Thanks for testing. > >>> Full patch: >>> >>> # HG changeset patch >>> # User Maxim Dounin >>> # Date 1635361800 -10800 >>> # Wed Oct 27 22:10:00 2021 +0300 >>> # Node ID 859447c7b7076b676a3421597514b324b708658d >>> # Parent 2a7155733855d1c2ea1c1ded8d1a4ba654b533cb >>> Fixed sendfile() limit handling on Linux. >>> >>> On Linux starting with 2.6.16, sendfile() silently limits all operations >>> to MAX_RW_COUNT, defined as (INT_MAX & PAGE_MASK). This incorrectly >>> triggered the interrupt check, and resulted in 0-sized writev() on the >>> next loop iteration. >>> >>> Fix is to make sure the limit is always checked, so we will return from >>> the loop if the limit is already reached even if number of bytes sent is >>> not exactly equal to the number of bytes we've tried to send. 
>>> >>> diff --git a/src/os/unix/ngx_linux_sendfile_chain.c b/src/os/unix/ngx_linux_sendfile_chain.c >>> --- a/src/os/unix/ngx_linux_sendfile_chain.c >>> +++ b/src/os/unix/ngx_linux_sendfile_chain.c >>> @@ -216,7 +216,6 @@ ngx_linux_sendfile_chain(ngx_connection_ >>> */ >>> >>> send = prev_send + sent; >>> - continue; >>> } >>> >>> if (send >= limit || in == NULL) { >>> >> >> The change looks good to me. >> >> Btw, this should also stop exceeding the limit after several sendfile() >> calls each interrupted, on Linux 4.3+ (which is rather theoretical). > > The limiting takes "send" into account, so I don't see how the > limit can be exceeded. > >> It probably deserves updating comments in this file about the count >> parameter constraints. > > The exact behaviour does not seem to be relevant to the resulting > code (in particular, the patch does not change the > NGX_SENDFILE_MAXSIZE limit). On the other hand, I agree that it > might make sense to update the comment anyway, in particular, to > make it clear that the 2G limit is still relevant to current > kernels. I've added the following to the patch: > > @@ -38,6 +38,9 @@ static void ngx_linux_sendfile_thread_ha > * On Linux up to 2.6.16 sendfile() does not allow to pass the count parameter > * more than 2G-1 bytes even on 64-bit platforms: it returns EINVAL, > * so we limit it to 2G-1 bytes. > + * > + * On Linux 2.6.16 and later, sendfile() silently limits the count parameter > + * to 2G minus the page size, even on 64-bit platforms. > */ > > #define NGX_SENDFILE_MAXSIZE 2147483647L > > > Full patch: > > # HG changeset patch > # User Maxim Dounin > # Date 1635374871 -10800 > # Thu Oct 28 01:47:51 2021 +0300 > # Node ID 3c5679dfe561e3087a96acabe4cf73ef232acabb > # Parent 2a7155733855d1c2ea1c1ded8d1a4ba654b533cb > Fixed sendfile() limit handling on Linux. > > On Linux starting with 2.6.16, sendfile() silently limits all operations > to MAX_RW_COUNT, defined as (INT_MAX & PAGE_MASK). 
This incorrectly > triggered the interrupt check, and resulted in 0-sized writev() on the > next loop iteration. > > Fix is to make sure the limit is always checked, so we will return from > the loop if the limit is already reached even if number of bytes sent is > not exactly equal to the number of bytes we've tried to send. > > diff --git a/src/os/unix/ngx_linux_sendfile_chain.c b/src/os/unix/ngx_linux_sendfile_chain.c > --- a/src/os/unix/ngx_linux_sendfile_chain.c > +++ b/src/os/unix/ngx_linux_sendfile_chain.c > @@ -38,6 +38,9 @@ static void ngx_linux_sendfile_thread_ha > * On Linux up to 2.6.16 sendfile() does not allow to pass the count parameter > * more than 2G-1 bytes even on 64-bit platforms: it returns EINVAL, > * so we limit it to 2G-1 bytes. > + * > + * On Linux 2.6.16 and later, sendfile() silently limits the count parameter > + * to 2G minus the page size, even on 64-bit platforms. > */ > > #define NGX_SENDFILE_MAXSIZE 2147483647L > @@ -216,7 +219,6 @@ ngx_linux_sendfile_chain(ngx_connection_ > */ > > send = prev_send + sent; > - continue; > } > > if (send >= limit || in == NULL) { > Looks fine. -- Sergey Kandaurov From awiens at mail.uni-paderborn.de Thu Oct 28 15:03:32 2021 From: awiens at mail.uni-paderborn.de (Alex Wiens) Date: Thu, 28 Oct 2021 17:03:32 +0200 Subject: [PATCH] http_image_filter_module: Add HEIC and AVIF support / Add output format option In-Reply-To: References: Message-ID: Yes, as it turns out, there are actually two shortcuts. Here is a patch. (The parent revision is an old one, but the file was not changed in the meantime.) Thanks for finding the bug. On 30.09.21 00:24, bes wrote: > Hi, Alex. > > With your patch and this configuration: > image_filter resize 10000000 -; (something wider than input image) > image_filter convert webp; > no conversion occurs but return headers contain content-type: image/webp > > I expect that when specifying the conversion filter it will apply anyway. 
> Most likely there is a shortcut somewhere in the algorithm, where after the > condition code goes directly to the output, but I could not find it. > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- A non-text attachment was scrubbed... Name: image_filter_3.patch Type: text/x-patch Size: 1167 bytes Desc: not available URL: From xeioex at nginx.com Fri Oct 29 14:26:33 2021 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Fri, 29 Oct 2021 14:26:33 +0000 Subject: [njs] Fixed decodeURI() and decodeURIComponent() with invalid byte strings. Message-ID: details: https://hg.nginx.org/njs/rev/d2e23f936214 branches: changeset: 1731:d2e23f936214 user: Dmitry Volyntsev date: Fri Oct 29 13:57:26 2021 +0000 description: Fixed decodeURI() and decodeURIComponent() with invalid byte strings. The issue was introduced in 855edd76bdb6 (0.4.3). This closes #435 issue on Github. diffstat: src/njs_string.c | 13 +++++++------ src/test/njs_unit_test.c | 3 +++ 2 files changed, 10 insertions(+), 6 deletions(-) diffs (66 lines): diff -r 264fb92817cd -r d2e23f936214 src/njs_string.c --- a/src/njs_string.c Tue Oct 26 16:14:07 2021 +0300 +++ b/src/njs_string.c Fri Oct 29 13:57:26 2021 +0000 @@ -4496,26 +4496,27 @@ njs_string_decode_uri_cp(const int8_t *h cp = njs_utf8_decode(&ctx, start, end); if (njs_fast_path(cp != '%')) { - return expect_percent ? 0xFFFFFFFF: cp; + return expect_percent ? 
NJS_UNICODE_ERROR : cp; } p = *start; if (njs_slow_path((p + 1) >= end)) { - return 0xFFFFFFFF; + return NJS_UNICODE_ERROR; } d0 = hex[*p++]; if (njs_slow_path(d0 < 0)) { - return 0xFFFFFFFF; + return NJS_UNICODE_ERROR; } d1 = hex[*p++]; if (njs_slow_path(d1 < 0)) { - return 0xFFFFFFFF; + return NJS_UNICODE_ERROR; } *start += 2; + return (d0 << 4) + d1; } @@ -4631,7 +4632,7 @@ njs_string_decode_uri(njs_vm_t *vm, njs_ while (src < end) { percent = (src[0] == '%'); cp = njs_string_decode_uri_cp(hex, &src, end, 0); - if (njs_slow_path(cp == 0xFFFFFFFF)) { + if (njs_slow_path(cp > NJS_UNICODE_MAX_CODEPOINT)) { goto uri_error; } @@ -4677,7 +4678,7 @@ njs_string_decode_uri(njs_vm_t *vm, njs_ for (i = 1; i < n; i++) { cp = njs_string_decode_uri_cp(hex, &src, end, 1); - if (njs_slow_path(cp == 0xFFFFFFFF)) { + if (njs_slow_path(cp > NJS_UNICODE_MAX_CODEPOINT)) { goto uri_error; } diff -r 264fb92817cd -r d2e23f936214 src/test/njs_unit_test.c --- a/src/test/njs_unit_test.c Tue Oct 26 16:14:07 2021 +0300 +++ b/src/test/njs_unit_test.c Fri Oct 29 13:57:26 2021 +0000 @@ -9209,6 +9209,9 @@ static njs_unit_test_t njs_test[] = { njs_str("decodeURI('%D0%B0%D0%B1%D0%B2').length"), njs_str("3")}, + { njs_str("decodeURI(String.bytesFrom([0x80,0x80]))"), + njs_str("URIError: malformed URI")}, + { njs_str("[" " '%'," " '%0'," From arut at nginx.com Fri Oct 29 15:58:34 2021 From: arut at nginx.com (Roman Arutyunyan) Date: Fri, 29 Oct 2021 15:58:34 +0000 Subject: [nginx] Mp4: added ngx_http_mp4_update_mdhd_atom() function. Message-ID: details: https://hg.nginx.org/nginx/rev/24f7904dbfa0 branches: changeset: 7944:24f7904dbfa0 user: Roman Arutyunyan date: Thu Oct 28 13:11:31 2021 +0300 description: Mp4: added ngx_http_mp4_update_mdhd_atom() function. The function updates the duration field of mdhd atom. Previously it was updated in ngx_http_mp4_read_mdhd_atom(). The change makes it possible to alter track duration as a result of processing track frames. 
diffstat: src/http/modules/ngx_http_mp4_module.c | 40 +++++++++++++++++++++++++++------ 1 files changed, 32 insertions(+), 8 deletions(-) diffs (81 lines): diff -r 2a7155733855 -r 24f7904dbfa0 src/http/modules/ngx_http_mp4_module.c --- a/src/http/modules/ngx_http_mp4_module.c Thu Aug 19 20:51:27 2021 +0300 +++ b/src/http/modules/ngx_http_mp4_module.c Thu Oct 28 13:11:31 2021 +0300 @@ -70,6 +70,7 @@ typedef struct { ngx_uint_t end_chunk_samples; uint64_t start_chunk_samples_size; uint64_t end_chunk_samples_size; + uint64_t duration; off_t start_offset; off_t end_offset; @@ -253,6 +254,8 @@ static void ngx_http_mp4_update_mdia_ato ngx_http_mp4_trak_t *trak); static ngx_int_t ngx_http_mp4_read_mdhd_atom(ngx_http_mp4_file_t *mp4, uint64_t atom_data_size); +static void ngx_http_mp4_update_mdhd_atom(ngx_http_mp4_file_t *mp4, + ngx_http_mp4_trak_t *trak); static ngx_int_t ngx_http_mp4_read_hdlr_atom(ngx_http_mp4_file_t *mp4, uint64_t atom_data_size); static ngx_int_t ngx_http_mp4_read_minf_atom(ngx_http_mp4_file_t *mp4, @@ -822,7 +825,7 @@ ngx_http_mp4_process(ngx_http_mp4_file_t ngx_http_mp4_update_stbl_atom(mp4, &trak[i]); ngx_http_mp4_update_minf_atom(mp4, &trak[i]); - trak[i].size += trak[i].mdhd_size; + ngx_http_mp4_update_mdhd_atom(mp4, &trak[i]); trak[i].size += trak[i].hdlr_size; ngx_http_mp4_update_mdia_atom(mp4, &trak[i]); trak[i].size += trak[i].tkhd_size; @@ -1749,16 +1752,10 @@ ngx_http_mp4_read_mdhd_atom(ngx_http_mp4 trak = ngx_mp4_last_trak(mp4); trak->mdhd_size = atom_size; trak->timescale = timescale; + trak->duration = duration; ngx_mp4_set_32value(mdhd_atom->size, atom_size); - if (mdhd_atom->version[0] == 0) { - ngx_mp4_set_32value(mdhd_atom->duration, duration); - - } else { - ngx_mp4_set_64value(mdhd64_atom->duration, duration); - } - atom = &trak->mdhd_atom_buf; atom->temporary = 1; atom->pos = atom_header; @@ -1772,6 +1769,33 @@ ngx_http_mp4_read_mdhd_atom(ngx_http_mp4 } +static void +ngx_http_mp4_update_mdhd_atom(ngx_http_mp4_file_t *mp4, + 
ngx_http_mp4_trak_t *trak) +{ + ngx_buf_t *atom; + ngx_mp4_mdhd_atom_t *mdhd_atom; + ngx_mp4_mdhd64_atom_t *mdhd64_atom; + + atom = trak->out[NGX_HTTP_MP4_MDHD_ATOM].buf; + if (atom == NULL) { + return; + } + + mdhd_atom = (ngx_mp4_mdhd_atom_t *) atom->pos; + mdhd64_atom = (ngx_mp4_mdhd64_atom_t *) atom->pos; + + if (mdhd_atom->version[0] == 0) { + ngx_mp4_set_32value(mdhd_atom->duration, trak->duration); + + } else { + ngx_mp4_set_64value(mdhd64_atom->duration, trak->duration); + } + + trak->size += trak->mdhd_size; +} + + static ngx_int_t ngx_http_mp4_read_hdlr_atom(ngx_http_mp4_file_t *mp4, uint64_t atom_data_size) { From arut at nginx.com Fri Oct 29 15:58:37 2021 From: arut at nginx.com (Roman Arutyunyan) Date: Fri, 29 Oct 2021 15:58:37 +0000 Subject: [nginx] Mp4: mp4_start_key_frame directive. Message-ID: details: https://hg.nginx.org/nginx/rev/f17ba8ecaaf0 branches: changeset: 7945:f17ba8ecaaf0 user: Roman Arutyunyan date: Thu Oct 28 14:14:25 2021 +0300 description: Mp4: mp4_start_key_frame directive. The directive enables including all frames from start time to the most recent key frame in the result. Those frames are removed from presentation timeline using mp4 edit lists. Edit lists are currently supported by popular players and browsers such as Chrome, Safari, QuickTime and ffmpeg. Among those not supporting them properly is Firefox[1]. Based on a patch by Tracey Jaquith, Internet Archive. 
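[Editor's note: a hedged usage sketch of the new directive; the location name and root layout are illustrative, not from the patch. The mp4 and mp4_start_key_frame directives themselves are real.]

```nginx
location /video/ {
    root /var/www;
    mp4;
    # include frames from the preceding key frame in the response and
    # hide them from the presentation timeline via an mp4 edit list
    mp4_start_key_frame on;
}
```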
[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1735300 diffstat: src/http/modules/ngx_http_mp4_module.c | 221 ++++++++++++++++++++++++++++---- 1 files changed, 194 insertions(+), 27 deletions(-) diffs (357 lines): diff -r 24f7904dbfa0 -r f17ba8ecaaf0 src/http/modules/ngx_http_mp4_module.c --- a/src/http/modules/ngx_http_mp4_module.c Thu Oct 28 13:11:31 2021 +0300 +++ b/src/http/modules/ngx_http_mp4_module.c Thu Oct 28 14:14:25 2021 +0300 @@ -11,31 +11,33 @@ #define NGX_HTTP_MP4_TRAK_ATOM 0 #define NGX_HTTP_MP4_TKHD_ATOM 1 -#define NGX_HTTP_MP4_MDIA_ATOM 2 -#define NGX_HTTP_MP4_MDHD_ATOM 3 -#define NGX_HTTP_MP4_HDLR_ATOM 4 -#define NGX_HTTP_MP4_MINF_ATOM 5 -#define NGX_HTTP_MP4_VMHD_ATOM 6 -#define NGX_HTTP_MP4_SMHD_ATOM 7 -#define NGX_HTTP_MP4_DINF_ATOM 8 -#define NGX_HTTP_MP4_STBL_ATOM 9 -#define NGX_HTTP_MP4_STSD_ATOM 10 -#define NGX_HTTP_MP4_STTS_ATOM 11 -#define NGX_HTTP_MP4_STTS_DATA 12 -#define NGX_HTTP_MP4_STSS_ATOM 13 -#define NGX_HTTP_MP4_STSS_DATA 14 -#define NGX_HTTP_MP4_CTTS_ATOM 15 -#define NGX_HTTP_MP4_CTTS_DATA 16 -#define NGX_HTTP_MP4_STSC_ATOM 17 -#define NGX_HTTP_MP4_STSC_START 18 -#define NGX_HTTP_MP4_STSC_DATA 19 -#define NGX_HTTP_MP4_STSC_END 20 -#define NGX_HTTP_MP4_STSZ_ATOM 21 -#define NGX_HTTP_MP4_STSZ_DATA 22 -#define NGX_HTTP_MP4_STCO_ATOM 23 -#define NGX_HTTP_MP4_STCO_DATA 24 -#define NGX_HTTP_MP4_CO64_ATOM 25 -#define NGX_HTTP_MP4_CO64_DATA 26 +#define NGX_HTTP_MP4_EDTS_ATOM 2 +#define NGX_HTTP_MP4_ELST_ATOM 3 +#define NGX_HTTP_MP4_MDIA_ATOM 4 +#define NGX_HTTP_MP4_MDHD_ATOM 5 +#define NGX_HTTP_MP4_HDLR_ATOM 6 +#define NGX_HTTP_MP4_MINF_ATOM 7 +#define NGX_HTTP_MP4_VMHD_ATOM 8 +#define NGX_HTTP_MP4_SMHD_ATOM 9 +#define NGX_HTTP_MP4_DINF_ATOM 10 +#define NGX_HTTP_MP4_STBL_ATOM 11 +#define NGX_HTTP_MP4_STSD_ATOM 12 +#define NGX_HTTP_MP4_STTS_ATOM 13 +#define NGX_HTTP_MP4_STTS_DATA 14 +#define NGX_HTTP_MP4_STSS_ATOM 15 +#define NGX_HTTP_MP4_STSS_DATA 16 +#define NGX_HTTP_MP4_CTTS_ATOM 17 +#define NGX_HTTP_MP4_CTTS_DATA 18 +#define 
NGX_HTTP_MP4_STSC_ATOM 19 +#define NGX_HTTP_MP4_STSC_START 20 +#define NGX_HTTP_MP4_STSC_DATA 21 +#define NGX_HTTP_MP4_STSC_END 22 +#define NGX_HTTP_MP4_STSZ_ATOM 23 +#define NGX_HTTP_MP4_STSZ_DATA 24 +#define NGX_HTTP_MP4_STCO_ATOM 25 +#define NGX_HTTP_MP4_STCO_DATA 26 +#define NGX_HTTP_MP4_CO64_ATOM 27 +#define NGX_HTTP_MP4_CO64_DATA 28 #define NGX_HTTP_MP4_LAST_ATOM NGX_HTTP_MP4_CO64_DATA @@ -43,6 +45,7 @@ typedef struct { size_t buffer_size; size_t max_buffer_size; + ngx_flag_t start_key_frame; } ngx_http_mp4_conf_t; @@ -54,6 +57,25 @@ typedef struct { typedef struct { + u_char size[4]; + u_char name[4]; +} ngx_mp4_edts_atom_t; + + +typedef struct { + u_char size[4]; + u_char name[4]; + u_char version[1]; + u_char flags[3]; + u_char entries[4]; + u_char duration[8]; + u_char media_time[8]; + u_char media_rate[2]; + u_char reserved[2]; +} ngx_mp4_elst_atom_t; + + +typedef struct { uint32_t timescale; uint32_t time_to_sample_entries; uint32_t sample_to_chunk_entries; @@ -71,6 +93,8 @@ typedef struct { uint64_t start_chunk_samples_size; uint64_t end_chunk_samples_size; uint64_t duration; + uint64_t prefix; + uint64_t movie_duration; off_t start_offset; off_t end_offset; @@ -86,6 +110,8 @@ typedef struct { ngx_buf_t trak_atom_buf; ngx_buf_t tkhd_atom_buf; + ngx_buf_t edts_atom_buf; + ngx_buf_t elst_atom_buf; ngx_buf_t mdia_atom_buf; ngx_buf_t mdhd_atom_buf; ngx_buf_t hdlr_atom_buf; @@ -112,6 +138,8 @@ typedef struct { ngx_buf_t co64_atom_buf; ngx_buf_t co64_data_buf; + ngx_mp4_edts_atom_t edts_atom; + ngx_mp4_elst_atom_t elst_atom; ngx_mp4_stsc_entry_t stsc_start_chunk_entry; ngx_mp4_stsc_entry_t stsc_end_chunk_entry; } ngx_http_mp4_trak_t; @@ -187,6 +215,14 @@ typedef struct { ((u_char *) (p))[6] = n3; \ ((u_char *) (p))[7] = n4 +#define ngx_mp4_get_16value(p) \ + ( ((uint16_t) ((u_char *) (p))[0] << 8) \ + + ( ((u_char *) (p))[1]) ) + +#define ngx_mp4_set_16value(p, n) \ + ((u_char *) (p))[0] = (u_char) ((n) >> 8); \ + ((u_char *) (p))[1] = (u_char) (n) + #define 
ngx_mp4_get_32value(p) \ ( ((uint32_t) ((u_char *) (p))[0] << 24) \ + ( ((u_char *) (p))[1] << 16) \ @@ -270,6 +306,8 @@ static ngx_int_t ngx_http_mp4_read_smhd_ uint64_t atom_data_size); static ngx_int_t ngx_http_mp4_read_stbl_atom(ngx_http_mp4_file_t *mp4, uint64_t atom_data_size); +static void ngx_http_mp4_update_edts_atom(ngx_http_mp4_file_t *mp4, + ngx_http_mp4_trak_t *trak); static void ngx_http_mp4_update_stbl_atom(ngx_http_mp4_file_t *mp4, ngx_http_mp4_trak_t *trak); static ngx_int_t ngx_http_mp4_read_stsd_atom(ngx_http_mp4_file_t *mp4, @@ -280,6 +318,8 @@ static ngx_int_t ngx_http_mp4_update_stt ngx_http_mp4_trak_t *trak); static ngx_int_t ngx_http_mp4_crop_stts_data(ngx_http_mp4_file_t *mp4, ngx_http_mp4_trak_t *trak, ngx_uint_t start); +static uint32_t ngx_http_mp4_seek_key_frame(ngx_http_mp4_file_t *mp4, + ngx_http_mp4_trak_t *trak, uint32_t start_sample); static ngx_int_t ngx_http_mp4_read_stss_atom(ngx_http_mp4_file_t *mp4, uint64_t atom_data_size); static ngx_int_t ngx_http_mp4_update_stss_atom(ngx_http_mp4_file_t *mp4, @@ -343,6 +383,13 @@ static ngx_command_t ngx_http_mp4_comma offsetof(ngx_http_mp4_conf_t, max_buffer_size), NULL }, + { ngx_string("mp4_start_key_frame"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_mp4_conf_t, start_key_frame), + NULL }, + ngx_null_command }; @@ -829,6 +876,7 @@ ngx_http_mp4_process(ngx_http_mp4_file_t trak[i].size += trak[i].hdlr_size; ngx_http_mp4_update_mdia_atom(mp4, &trak[i]); trak[i].size += trak[i].tkhd_size; + ngx_http_mp4_update_edts_atom(mp4, &trak[i]); ngx_http_mp4_update_trak_atom(mp4, &trak[i]); mp4->moov_size += trak[i].size; @@ -1590,6 +1638,7 @@ ngx_http_mp4_read_tkhd_atom(ngx_http_mp4 trak = ngx_mp4_last_trak(mp4); trak->tkhd_size = atom_size; + trak->movie_duration = duration; ngx_mp4_set_32value(tkhd_atom->size, atom_size); @@ -1986,6 +2035,59 @@ ngx_http_mp4_read_stbl_atom(ngx_http_mp4 static 
void +ngx_http_mp4_update_edts_atom(ngx_http_mp4_file_t *mp4, + ngx_http_mp4_trak_t *trak) +{ + ngx_buf_t *atom; + ngx_mp4_elst_atom_t *elst_atom; + ngx_mp4_edts_atom_t *edts_atom; + + if (trak->prefix == 0) { + return; + } + + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, mp4->file.log, 0, + "mp4 edts atom update prefix:%uL", trak->prefix); + + edts_atom = &trak->edts_atom; + ngx_mp4_set_32value(edts_atom->size, sizeof(ngx_mp4_edts_atom_t) + + sizeof(ngx_mp4_elst_atom_t)); + ngx_mp4_set_atom_name(edts_atom, 'e', 'd', 't', 's'); + + atom = &trak->edts_atom_buf; + atom->temporary = 1; + atom->pos = (u_char *) edts_atom; + atom->last = (u_char *) edts_atom + sizeof(ngx_mp4_edts_atom_t); + + trak->out[NGX_HTTP_MP4_EDTS_ATOM].buf = atom; + + elst_atom = &trak->elst_atom; + ngx_mp4_set_32value(elst_atom->size, sizeof(ngx_mp4_elst_atom_t)); + ngx_mp4_set_atom_name(elst_atom, 'e', 'l', 's', 't'); + + elst_atom->version[0] = 1; + elst_atom->flags[0] = 0; + elst_atom->flags[1] = 0; + elst_atom->flags[2] = 0; + + ngx_mp4_set_32value(elst_atom->entries, 1); + ngx_mp4_set_64value(elst_atom->duration, trak->movie_duration); + ngx_mp4_set_64value(elst_atom->media_time, trak->prefix); + ngx_mp4_set_16value(elst_atom->media_rate, 1); + ngx_mp4_set_16value(elst_atom->reserved, 0); + + atom = &trak->elst_atom_buf; + atom->temporary = 1; + atom->pos = (u_char *) elst_atom; + atom->last = (u_char *) elst_atom + sizeof(ngx_mp4_elst_atom_t); + + trak->out[NGX_HTTP_MP4_ELST_ATOM].buf = atom; + + trak->size += sizeof(ngx_mp4_edts_atom_t) + sizeof(ngx_mp4_elst_atom_t); +} + + +static void ngx_http_mp4_update_stbl_atom(ngx_http_mp4_file_t *mp4, ngx_http_mp4_trak_t *trak) { @@ -2183,7 +2285,7 @@ static ngx_int_t ngx_http_mp4_crop_stts_data(ngx_http_mp4_file_t *mp4, ngx_http_mp4_trak_t *trak, ngx_uint_t start) { - uint32_t count, duration, rest; + uint32_t count, duration, rest, key_prefix; uint64_t start_time; ngx_buf_t *data; ngx_uint_t start_sample, entries, start_sec; @@ -2207,7 +2309,7 @@ 
ngx_http_mp4_crop_stts_data(ngx_http_mp4 data = trak->out[NGX_HTTP_MP4_STTS_DATA].buf; - start_time = (uint64_t) start_sec * trak->timescale / 1000; + start_time = (uint64_t) start_sec * trak->timescale / 1000 + trak->prefix; entries = trak->time_to_sample_entries; start_sample = 0; @@ -2253,6 +2355,26 @@ ngx_http_mp4_crop_stts_data(ngx_http_mp4 found: if (start) { + key_prefix = ngx_http_mp4_seek_key_frame(mp4, trak, start_sample); + + start_sample -= key_prefix; + + while (rest < key_prefix) { + trak->prefix += rest * duration; + key_prefix -= rest; + + entry--; + entries++; + + count = ngx_mp4_get_32value(entry->count); + duration = ngx_mp4_get_32value(entry->duration); + rest = count; + } + + trak->prefix += key_prefix * duration; + trak->duration += trak->prefix; + rest -= key_prefix; + ngx_mp4_set_32value(entry->count, count - rest); data->pos = (u_char *) entry; trak->time_to_sample_entries = entries; @@ -2277,6 +2399,49 @@ found: } +static uint32_t +ngx_http_mp4_seek_key_frame(ngx_http_mp4_file_t *mp4, ngx_http_mp4_trak_t *trak, + uint32_t start_sample) +{ + uint32_t key_prefix, sample, *entry, *end; + ngx_buf_t *data; + ngx_http_mp4_conf_t *conf; + + conf = ngx_http_get_module_loc_conf(mp4->request, ngx_http_mp4_module); + if (!conf->start_key_frame) { + return 0; + } + + data = trak->out[NGX_HTTP_MP4_STSS_DATA].buf; + if (data == NULL) { + return 0; + } + + entry = (uint32_t *) data->pos; + end = (uint32_t *) data->last; + + /* sync samples starts from 1 */ + start_sample++; + + key_prefix = 0; + + while (entry < end) { + sample = ngx_mp4_get_32value(entry); + if (sample > start_sample) { + break; + } + + key_prefix = start_sample - sample; + entry++; + } + + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, mp4->file.log, 0, + "mp4 key frame prefix:%uD", key_prefix); + + return key_prefix; +} + + typedef struct { u_char size[4]; u_char name[4]; @@ -3614,6 +3779,7 @@ ngx_http_mp4_create_conf(ngx_conf_t *cf) conf->buffer_size = NGX_CONF_UNSET_SIZE; conf->max_buffer_size 
= NGX_CONF_UNSET_SIZE; + conf->start_key_frame = NGX_CONF_UNSET; return conf; } @@ -3628,6 +3794,7 @@ ngx_http_mp4_merge_conf(ngx_conf_t *cf, ngx_conf_merge_size_value(conf->buffer_size, prev->buffer_size, 512 * 1024); ngx_conf_merge_size_value(conf->max_buffer_size, prev->max_buffer_size, 10 * 1024 * 1024); + ngx_conf_merge_value(conf->start_key_frame, prev->start_key_frame, 0); return NGX_CONF_OK; } From mdounin at mdounin.ru Fri Oct 29 20:20:16 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 29 Oct 2021 20:20:16 +0000 Subject: [nginx] Switched to using posted next events after sendfile_max_chunk. Message-ID: details: https://hg.nginx.org/nginx/rev/61e9c078ee3d branches: changeset: 7946:61e9c078ee3d user: Maxim Dounin date: Fri Oct 29 20:21:43 2021 +0300 description: Switched to using posted next events after sendfile_max_chunk. Previously, 1 millisecond delay was used instead. In certain edge cases this might result in noticeable performance degradation though, notably on Linux with typical CONFIG_HZ=250 (so 1ms delay becomes 4ms), sendfile_max_chunk 2m, and link speed above 2.5 Gbps. Using posted next events removes the artificial delay and makes processing fast in all cases. 
diffstat: src/http/ngx_http_write_filter_module.c | 3 +-- 1 files changed, 1 insertions(+), 2 deletions(-) diffs (13 lines): diff -r f17ba8ecaaf0 -r 61e9c078ee3d src/http/ngx_http_write_filter_module.c --- a/src/http/ngx_http_write_filter_module.c Thu Oct 28 14:14:25 2021 +0300 +++ b/src/http/ngx_http_write_filter_module.c Fri Oct 29 20:21:43 2021 +0300 @@ -331,8 +331,7 @@ ngx_http_write_filter(ngx_http_request_t && c->write->ready && c->sent - sent >= limit - (off_t) (2 * ngx_pagesize)) { - c->write->delayed = 1; - ngx_add_timer(c->write, 1); + ngx_post_event(c->write, &ngx_posted_next_events); } for (cl = r->out; cl && cl != chain; /* void */) { From mdounin at mdounin.ru Fri Oct 29 20:20:20 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 29 Oct 2021 20:20:20 +0000 Subject: [nginx] Simplified sendfile_max_chunk handling. Message-ID: details: https://hg.nginx.org/nginx/rev/51a260276425 branches: changeset: 7947:51a260276425 user: Maxim Dounin date: Fri Oct 29 20:21:48 2021 +0300 description: Simplified sendfile_max_chunk handling. Previously, it was checked that sendfile_max_chunk was enabled and almost whole sendfile_max_chunk was sent (see e67ef50c3176), to avoid delaying connections where sendfile_max_chunk wasn't reached (for example, when sending responses smaller than sendfile_max_chunk). Now we instead check if there are unsent data, and the connection is still ready for writing. Additionally we also check c->write->delayed to ignore connections already delayed by limit_rate. This approach is believed to be more robust, and correctly handles not only sendfile_max_chunk, but also internal limits of c->send_chain(), such as sendfile() maximum supported length (ticket #1870). 
diffstat: src/http/ngx_http_write_filter_module.c | 6 +----- 1 files changed, 1 insertions(+), 5 deletions(-) diffs (21 lines): diff -r 61e9c078ee3d -r 51a260276425 src/http/ngx_http_write_filter_module.c --- a/src/http/ngx_http_write_filter_module.c Fri Oct 29 20:21:43 2021 +0300 +++ b/src/http/ngx_http_write_filter_module.c Fri Oct 29 20:21:48 2021 +0300 @@ -321,16 +321,12 @@ ngx_http_write_filter(ngx_http_request_t delay = (ngx_msec_t) ((nsent - sent) * 1000 / r->limit_rate); if (delay > 0) { - limit = 0; c->write->delayed = 1; ngx_add_timer(c->write, delay); } } - if (limit - && c->write->ready - && c->sent - sent >= limit - (off_t) (2 * ngx_pagesize)) - { + if (chain && c->write->ready && !c->write->delayed) { ngx_post_event(c->write, &ngx_posted_next_events); } From mdounin at mdounin.ru Fri Oct 29 20:20:22 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 29 Oct 2021 20:20:22 +0000 Subject: [nginx] Fixed sendfile() limit handling on Linux. Message-ID: details: https://hg.nginx.org/nginx/rev/a2613fc1bce5 branches: changeset: 7948:a2613fc1bce5 user: Maxim Dounin date: Fri Oct 29 20:21:51 2021 +0300 description: Fixed sendfile() limit handling on Linux. On Linux starting with 2.6.16, sendfile() silently limits all operations to MAX_RW_COUNT, defined as (INT_MAX & PAGE_MASK). This incorrectly triggered the interrupt check, and resulted in 0-sized writev() on the next loop iteration. Fix is to make sure the limit is always checked, so we will return from the loop if the limit is already reached even if number of bytes sent is not exactly equal to the number of bytes we've tried to send. 
diffstat: src/os/unix/ngx_linux_sendfile_chain.c | 4 +++- 1 files changed, 3 insertions(+), 1 deletions(-) diffs (21 lines): diff -r 51a260276425 -r a2613fc1bce5 src/os/unix/ngx_linux_sendfile_chain.c --- a/src/os/unix/ngx_linux_sendfile_chain.c Fri Oct 29 20:21:48 2021 +0300 +++ b/src/os/unix/ngx_linux_sendfile_chain.c Fri Oct 29 20:21:51 2021 +0300 @@ -38,6 +38,9 @@ static void ngx_linux_sendfile_thread_ha * On Linux up to 2.6.16 sendfile() does not allow to pass the count parameter * more than 2G-1 bytes even on 64-bit platforms: it returns EINVAL, * so we limit it to 2G-1 bytes. + * + * On Linux 2.6.16 and later, sendfile() silently limits the count parameter + * to 2G minus the page size, even on 64-bit platforms. */ #define NGX_SENDFILE_MAXSIZE 2147483647L @@ -216,7 +219,6 @@ ngx_linux_sendfile_chain(ngx_connection_ */ send = prev_send + sent; - continue; } if (send >= limit || in == NULL) { From mdounin at mdounin.ru Fri Oct 29 20:20:25 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 29 Oct 2021 20:20:25 +0000 Subject: [nginx] Upstream: sendfile_max_chunk support. Message-ID: details: https://hg.nginx.org/nginx/rev/862f6130d357 branches: changeset: 7949:862f6130d357 user: Maxim Dounin date: Fri Oct 29 20:21:54 2021 +0300 description: Upstream: sendfile_max_chunk support. Previously, connections to upstream servers used sendfile() if it was enabled, but never honored sendfile_max_chunk. This might result in worker monopolization for a long time if large request bodies are allowed. 
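[Editor's note: a configuration-level illustration of the directive this series tunes; the 2m value matches the new default introduced below, but the snippet itself is a sketch, not from the patch. After this change the limit also applies to writes to upstream servers.]

```nginx
http {
    sendfile           on;
    # cap the number of bytes sent per event-loop iteration so a single
    # fast connection cannot monopolize a worker process
    sendfile_max_chunk 2m;
}
```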
diffstat: src/core/ngx_output_chain.c | 4 ++++ src/http/ngx_http_upstream.c | 9 ++++++--- 2 files changed, 10 insertions(+), 3 deletions(-) diffs (43 lines): diff -r a2613fc1bce5 -r 862f6130d357 src/core/ngx_output_chain.c --- a/src/core/ngx_output_chain.c Fri Oct 29 20:21:51 2021 +0300 +++ b/src/core/ngx_output_chain.c Fri Oct 29 20:21:54 2021 +0300 @@ -803,6 +803,10 @@ ngx_chain_writer(void *data, ngx_chain_t return NGX_ERROR; } + if (chain && c->write->ready) { + ngx_post_event(c->write, &ngx_posted_next_events); + } + for (cl = ctx->out; cl && cl != chain; /* void */) { ln = cl; cl = cl->next; diff -r a2613fc1bce5 -r 862f6130d357 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Fri Oct 29 20:21:51 2021 +0300 +++ b/src/http/ngx_http_upstream.c Fri Oct 29 20:21:54 2021 +0300 @@ -1511,8 +1511,9 @@ ngx_http_upstream_check_broken_connectio static void ngx_http_upstream_connect(ngx_http_request_t *r, ngx_http_upstream_t *u) { - ngx_int_t rc; - ngx_connection_t *c; + ngx_int_t rc; + ngx_connection_t *c; + ngx_http_core_loc_conf_t *clcf; r->connection->log->action = "connecting to upstream"; @@ -1599,10 +1600,12 @@ ngx_http_upstream_connect(ngx_http_reque /* init or reinit the ngx_output_chain() and ngx_chain_writer() contexts */ + clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module); + u->writer.out = NULL; u->writer.last = &u->writer.out; u->writer.connection = c; - u->writer.limit = 0; + u->writer.limit = clcf->sendfile_max_chunk; if (u->request_sent) { if (ngx_http_upstream_reinit(r, u) != NGX_OK) { From mdounin at mdounin.ru Fri Oct 29 20:20:28 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 29 Oct 2021 20:20:28 +0000 Subject: [nginx] Changed default value of sendfile_max_chunk to 2m. Message-ID: details: https://hg.nginx.org/nginx/rev/e3dbd9449b14 branches: changeset: 7950:e3dbd9449b14 user: Maxim Dounin date: Fri Oct 29 20:21:57 2021 +0300 description: Changed default value of sendfile_max_chunk to 2m. 
The "sendfile_max_chunk" directive is important to prevent worker monopolization by fast connections. The 2m value implies maximum 200ms delay with 100 Mbps links, 20ms delay with 1 Gbps links, and 2ms on 10 Gbps links. It also seems to be a good value for disks. diffstat: src/http/ngx_http_core_module.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff -r 862f6130d357 -r e3dbd9449b14 src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c Fri Oct 29 20:21:54 2021 +0300 +++ b/src/http/ngx_http_core_module.c Fri Oct 29 20:21:57 2021 +0300 @@ -3720,7 +3720,7 @@ ngx_http_core_merge_loc_conf(ngx_conf_t ngx_conf_merge_value(conf->internal, prev->internal, 0); ngx_conf_merge_value(conf->sendfile, prev->sendfile, 0); ngx_conf_merge_size_value(conf->sendfile_max_chunk, - prev->sendfile_max_chunk, 0); + prev->sendfile_max_chunk, 2 * 1024 * 1024); ngx_conf_merge_size_value(conf->subrequest_output_buffer_size, prev->subrequest_output_buffer_size, (size_t) ngx_pagesize); From mdounin at mdounin.ru Sat Oct 30 01:37:42 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 30 Oct 2021 01:37:42 +0000 Subject: [nginx] Changed ngx_chain_update_chains() to test tag first (ticket #2248). Message-ID: details: https://hg.nginx.org/nginx/rev/c7a8bdf5af55 branches: changeset: 7951:c7a8bdf5af55 user: Maxim Dounin date: Sat Oct 30 02:39:19 2021 +0300 description: Changed ngx_chain_update_chains() to test tag first (ticket #2248). Without this change, aio used with HTTP/2 can result in connection hang, as observed with "aio threads; aio_write on;" and proxying (ticket #2248). The problem is that HTTP/2 updates buffers outside of the output filters (notably, marks them as sent), and then posts a write event to call output filters. 
If a filter does not call the next one for some reason (for example, because of an AIO operation in progress), this might result in a state when the owner of a buffer already called ngx_chain_update_chains() and can reuse the buffer, while the same buffer is still sitting in the busy chain of some other filter. In the particular case a buffer was sitting in output chain's ctx->busy, and was reused by event pipe. Output chain's ctx->busy was permanently blocked by it, and this resulted in connection hang. Fix is to change ngx_chain_update_chains() to skip buffers from other modules unconditionally, without trying to wait for these buffers to become empty. diffstat: src/core/ngx_buf.c | 8 ++++---- 1 files changed, 4 insertions(+), 4 deletions(-) diffs (24 lines): diff -r e3dbd9449b14 -r c7a8bdf5af55 src/core/ngx_buf.c --- a/src/core/ngx_buf.c Fri Oct 29 20:21:57 2021 +0300 +++ b/src/core/ngx_buf.c Sat Oct 30 02:39:19 2021 +0300 @@ -203,16 +203,16 @@ ngx_chain_update_chains(ngx_pool_t *p, n while (*busy) { cl = *busy; - if (ngx_buf_size(cl->buf) != 0) { - break; - } - if (cl->buf->tag != tag) { *busy = cl->next; ngx_free_chain(p, cl); continue; } + if (ngx_buf_size(cl->buf) != 0) { + break; + } + cl->buf->pos = cl->buf->start; cl->buf->last = cl->buf->start; From christian at roessner.email Sat Oct 30 09:25:23 2021 From: christian at roessner.email (=?utf-8?B?Q2hyaXN0aWFuIFLDtsOfbmVy?=) Date: Sat, 30 Oct 2021 11:25:23 +0200 Subject: Feature requests Message-ID: Hello, some pre information: I started using Nginx as proxy for mail. 
Currently I have the following setup (nginx conf, mail {} block):
-----------------------------------------------------------------------------
server_name mail.roessner-net.de;

auth_http http://localhost.localdomain:8180/authmail;
proxy_pass_error_message on;

ssl_certificate /etc/ssl/letsencrypt/cert/star.roessner-net.de-fullchain.crt;
ssl_certificate_key /etc/ssl/letsencrypt/private/star.roessner-net.de.key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;

imap_capabilities "IMAP4rev1" "LITERAL+" "SASL-IR" "LOGIN-REFERRALS" "ID" "ENABLE" "IDLE" "NAMESPACE";
smtp_capabilities "SIZE 104857600" ENHANCEDSTATUSCODES 8BITMIME DSN SMTPUTF8 CHUNKING;

resolver 127.0.0.1;

server {
    listen 127.0.0.1:465 ssl;
    listen 192.168.0.2:465 ssl;
    listen 134.255.226.248:465 ssl;
    listen [::1]:465 ssl;
    listen [2a05:bec0:28:1:134:255:226:248]:465 ssl;

    protocol smtp;
    xclient on;
    smtp_auth login plain;
    error_log /var/log/nginx/smtp.log info;
    auth_http_header X-Auth-Port "465";
}

server {
    listen 127.0.0.1:587;
    listen 192.168.0.2:587;
    listen 134.255.226.248:587;
    listen [::1]:587;
    listen [2a05:bec0:28:1:134:255:226:248]:587;

    protocol smtp;
    xclient on;
    smtp_auth login plain;
    starttls on;
    error_log /var/log/nginx/smtp.log info;
    auth_http_header X-Auth-Port "587";
}

server {
    listen 127.0.0.1:143;
    listen 192.168.0.2:143;
    listen 134.255.226.248:143;
    listen [::1]:143;
    listen [2a05:bec0:28:1:134:255:226:248]:143;

    protocol imap;
    #proxy_protocol on;
    imap_auth login plain;
    starttls on;
    error_log /var/log/nginx/imap.log info;
    auth_http_header X-Auth-Port "143";
}

server {
    listen 127.0.0.1:993 ssl;
    listen 192.168.0.2:993 ssl;
    listen 134.255.226.248:993 ssl;
    listen [::1]:993 ssl;
    listen [2a05:bec0:28:1:134:255:226:248]:993 ssl;

    protocol imap;
    #proxy_protocol on;
    imap_auth login plain;
    error_log /var/log/nginx/imap.log info;
    auth_http_header X-Auth-Port "993";
}
-----------------------------------------------------------------------------
I
started an open source proof of concept auth server project here:

https://gitlab.roessner-net.de/croessner/authserv

It uses the auth header to authenticate against an OpenLDAP server and replies with the required server and port information. This works very nicely by using a stunnel.conf between Nginx and the main mail server backends:

-----------------------------------------------------------------------------
[imaps]
accept = 127.0.0.1:9931
client = yes
connect = mail.roessner-net.de:9932
cert = /etc/ssl/letsencrypt/cert/star.roessner-net.de-fullchain.crt
key = /etc/ssl/letsencrypt/private/star.roessner-net.de.key
CAfile = /etc/pki/tls/certs/ca-bundle.crt
verify = 2

[submission]
accept = 127.0.0.1:5871
client = yes
connect = 127.0.0.1:5872
cert = /etc/ssl/letsencrypt/cert/star.roessner-net.de-fullchain.crt
key = /etc/ssl/letsencrypt/private/star.roessner-net.de.key
CAfile = /etc/pki/tls/certs/ca-bundle.crt
verify = 2
-----------------------------------------------------------------------------

So the mechanism is: clients connect to Nginx, which authenticates in plain text with the help of the authserv process, retrieves the routing information, and connects in plain text again to stunnel, which in turn connects via TLS to the backends. (I know that I currently run all of this on a single system, but I am evaluating it for distributed systems in different firewall zones.)

Now come my questions :-)

1. It would really be awesome if someone could implement auth_http so that it also accepts https.

2. It would also be very nice if the server {} blocks could use a client SSL certificate towards their backends. Why is this important? If I want to speak the HAproxy protocol with Dovecot, the connection must be secured; otherwise the Dovecot server does not accept the login process from Nginx. With the current implementation, it is not possible to keep the original source address and source port from the clients outside.
For Postfix servers it works in plain text, but it requires setting the SASL security options so that plain-text auth is accepted, even when using XCLIENT here.

Summary for the request:

1. Having SSL for auth_http, example: auth_http https://some.auth.serv:443/authmail

2. Having client SSL from server {} to the backend, so that the HAproxy protocol works.

I know that there are not many people yet using Nginx as a proxy for mail, but I guess that might change if the missing security features existed. My part is to enhance the authserv project so people can use it if they want. It's really at the beginning.

Thanks a lot for reading, and I thank you in advance.

Christian

--
Rößner-Network-Solutions
Zertifizierter ITSiBe / CISO
Karl-Bröger-Str. 10, 36304 Alsfeld
Fax: +49 6631 78823409, Mobil: +49 171 9905345
USt-IdNr.: DE225643613, https://roessner.website
PGP fingerprint: 658D 1342 B762 F484 2DDF 1E88 38A5 4346 D727 94E5
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 2132 bytes
Desc: not available
URL: