From nginx-forum at forum.nginx.org Wed Jan 1 12:19:48 2020 From: nginx-forum at forum.nginx.org (ohbarye) Date: Wed, 01 Jan 2020 07:19:48 -0500 Subject: nginx removes strong etags on gzip compression Message-ID: Hi, I'm using nginx as a reverse proxy and found a behavior that I wouldn't expect. According to some references below, I assume nginx would downgrade strong etags to weak ones when it modifies response content (e.g. gzip compression). But nginx removes strong etags on gzip compression instead of a downgrade. - http://nginx.org/en/CHANGES (See "Changes with nginx 1.7.3 08 Jul 2014") - > *) Feature: weak entity tags are now preserved on response modifications, and strong ones are changed to weak. - https://github.com/nginx/nginx/commit/def16742a1ec22ece8279185eb2b798eb5ffa031 - > Entity tags: downgrade strong etags to weak ones as needed. I created a gist to reproduce the behavior with minimum requirements, please see https://gist.github.com/ohbarye/86f2d5b464f5e88821133c43a9cf4956 So my question is: Is it expected behavior that nginx removes strong etags on gzip compression? Regards Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286645,286645#msg-286645 From francis at daoine.org Thu Jan 2 10:27:16 2020 From: francis at daoine.org (Francis Daly) Date: Thu, 2 Jan 2020 10:27:16 +0000 Subject: Multiple host in request In-Reply-To: <6623e7db382f61f304e98e10029b136c.NginxMailingListEnglish@forum.nginx.org> References: <20191230130711.GQ26683@daoine.org> <6623e7db382f61f304e98e10029b136c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200102102716.GT26683@daoine.org> On Tue, Dec 31, 2019 at 06:24:18AM -0500, hmahajan21 wrote: Hi there, > Yes my question is why ngnix override the header and append double host in > host header Thanks. What is the nginx config that is used? Your test request will be handled in one server{} block, and eventually in one location{} block, that presumably uses "proxy_pass". There may also be "proxy_set_header" directives, possibly from an "include", and possibly inherited into this location{}. The config may show where the unwanted part is being introduced. Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Jan 2 11:31:10 2020 From: francis at daoine.org (Francis Daly) Date: Thu, 2 Jan 2020 11:31:10 +0000 Subject: nginx removes strong etags on gzip compression In-Reply-To: References: Message-ID: <20200102113110.GU26683@daoine.org> On Wed, Jan 01, 2020 at 07:19:48AM -0500, ohbarye wrote: Hi there, > Hi, I'm using nginx as a reverse proxy and found a behavior that I wouldn't > expect. > So my question is: Is it expected behavior that nginx removes strong etags > on gzip compression? No, but: the thing that your upstream sends is not a thing that nginx recognizes as a strong etag. The HTTP/1.1 RFC (https://tools.ietf.org/html/rfc7232#section-2.3) says that the etag header must be of the form ETag: "abc" or ETag: W/"abc" while your example sends something of the form ETag: abc and current nginx recognizes a weak etag when the first two characters are W/, and a strong etag when the first character is ". (Arguably: nginx could become more strict, and insist on W/" at the start and " at the end of a weak etag; and insist on " at the start and end of a strong etag; but I suspect that that is unnecessary.) The best fix in your case is probably to change your upstream to send valid headers. 
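(As a quick sanity check — the hostnames and path below are placeholders — you can compare what the upstream emits with what nginx forwards:

    # headers straight from the upstream
    curl -s -D- -o /dev/null http://upstream.example:8080/some/path | grep -i etag

    # headers through nginx, with gzip negotiated
    curl -s -D- -o /dev/null -H 'Accept-Encoding: gzip' https://nginx.example/some/path | grep -i etag

An unquoted value in the first output (ETag: abc) is what nginx drops when it modifies the response; a quoted one (ETag: "abc") should come back from the second command as ETag: W/"abc" on nginx 1.7.3 and later.)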
If that is not doable, then possibly you could patch your nginx to accept this invalid header; or possibly you could try some other config-based manipulation to make things work the way that you want. I suspect that either of those is likely to be more work in the long run than fixing the upstream. Cheers, f -- Francis Daly francis at daoine.org From themadbeaker at gmail.com Thu Jan 2 14:49:23 2020 From: themadbeaker at gmail.com (J.R.) Date: Thu, 2 Jan 2020 08:49:23 -0600 Subject: nginx removes strong etags on gzip compression Message-ID: > If that is not doable, then possibly you could patch your nginx to accept > this invalid header; or possibly you could try some other config-based > manipulation to make things work the way that you want. I suspect that > either of those is likely to be more work in the long run than fixing > the upstream. Depending on the order of how everything runs, *maybe* it can be captured with the "headers more" module? https://github.com/openresty/headers-more-nginx-module Just a thought... No idea if it would work. From nginx-forum at forum.nginx.org Thu Jan 2 17:04:16 2020 From: nginx-forum at forum.nginx.org (ohbarye) Date: Thu, 02 Jan 2020 12:04:16 -0500 Subject: nginx removes strong etags on gzip compression In-Reply-To: <20200102113110.GU26683@daoine.org> References: <20200102113110.GU26683@daoine.org> Message-ID: <93818b84ccb46cf6d03b77c2c1a1c5fa.NginxMailingListEnglish@forum.nginx.org> To francis Thank you for your answer. > the thing that your upstream sends is not a thing that nginx recognizes as a strong etag. > The HTTP/1.1 RFC (https://tools.ietf.org/html/rfc7232#section-2.3) says that the etag header must be of the form Oh, I wasn't aware of the thing. > The best fix in your case is probably to change your upstream to send valid headers. I tried making my upstream's header comply with the form that nginx recognize as a strong etag like below, then nginx got working to downgrade the strong etag to a weak one as I expected. ```diff location /strong_etag { - add_header Etag d41d8cd98f00b204e9800998ecf8427e; + add_header Etag '"d41d8cd98f00b204e9800998ecf8427e"'; default_type application/json; return 200 '{"message": "Hello, this is from upstream!"}'; } ``` ```shell $ curl http://localhost:80/strong_etag -i -H "Accept-Encoding: gzip" HTTP/1.1 200 OK Server: nginx/1.17.6 Date: Thu, 02 Jan 2020 16:49:06 GMT Content-Type: application/json Transfer-Encoding: chunked Connection: keep-alive Etag: W/"d41d8cd98f00b204e9800998ecf8427e" Content-Encoding: gzip ?V?M-.NLOU?RP?H????Q(??,V????\?????\E?Z5[,% ``` The riddle was resolved. --- To J.R. Thanks for introducing https://github.com/openresty/headers-more-nginx-module I'll check it. --- Again, many thanks! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286645,286650#msg-286650 From gfrankliu at gmail.com Tue Jan 7 00:32:24 2020 From: gfrankliu at gmail.com (Frank Liu) Date: Mon, 6 Jan 2020 16:32:24 -0800 Subject: pre-existing data on a connection Message-ID: Hi, When using nginx as a reverse proxy, how does it handle the pre-existing data on a keepalive connection to the backend? eg: for a request, the backend has a bug that sends 2 identical responses. I assume nginx will take the first response and send it to client. What will happen to the extra data (duplicate response)? 
Now when nginx gets a second request and re-uses the same keepalive connection to backend, will nginx take the pre-existing data (the duplicate response for the first request) on that connection and send it to second client or will it drop those and read the new response from backend to send to client? If nginx uses the pre-existing data, all the subsequent requests will get the response shifted. Thanks! Frank From nginx-forum at forum.nginx.org Tue Jan 7 16:12:37 2020 From: nginx-forum at forum.nginx.org (ak638) Date: Tue, 07 Jan 2020 11:12:37 -0500 Subject: ngx_http_discard_request_body may make keepalive connection hang in CLOSE_WAIT Message-ID: Hi, I wonder if it's a bug. It confused me. recv in ngx_http_discard_request_body return 0, but ignored(suppose to close conntection soon). So the connection will stay in keepalive timer untill timeout, while client already closed and server stay in CLOSE_WAIT. nginx version: nginx/1.17.6 built by gcc 4.4.6 20110731 (Red Hat 4.4.6-4) (GCC) built with OpenSSL 1.1.1c 28 May 2019 TLS SNI support enabled configure arguments: --with-debug --prefix=/home/qspace/nginx --user=qspace --group=users --with-http_ssl_module --with-http_v2_module --with-http_gzip_static_module --with-http_stub_status_module --with-openssl=../openssl-1.1.1c --with-pcre=../pcre-4.3 --with-stream nginx.conf: keepalive_timeout 65; keepalive_requests 2048; limit_conn_zone $binary_remote_addr zone=connperip:10m; limit_conn connperip 1; (make it redirect to /503.html) request: body not empty, i.e, content_len != 0 e.g. curl "http://127.0.0.1" -F "file=@some_file" (close conntection after curl exit) ngx_http_core_generic_phase -> ngx_http_finalize_request-> ngx_http_special_response_handler -> ngx_http_discard_request_body ------------------------------------------------------------------------------------------------------------------ nginx debug log: 125812:2020/01/07 16:04:18 [debug] 12255#0: *102094 **http run request: "/50x.html?"** 125815:2020/01/07 16:04:18 [debug] 12255#0: *102094 **http read discarded body** 125820:2020/01/07 16:04:18 [debug] 12255#0: *102094 **recv: eof:1, avail:-1** 125827:2020/01/07 16:04:18 [debug] 12255#0: *102094 **recv: fd:14 0 of 4096** 125831:2020/01/07 16:04:18 [debug] 12255#0: *102094 http finalize request: -4, "/50x.html?" a:1, c:1 125834:2020/01/07 16:04:18 [debug] 12255#0: *102094 **set http keepalive handler** 125837:2020/01/07 16:04:18 [debug] 12255#0: *102094 http close request 125841:2020/01/07 16:04:18 [debug] 12255#0: *102094 http log handler 125862:2020/01/07 16:04:18 [debug] 12255#0: *102094 run cleanup: 00000000028D3F10 125867:2020/01/07 16:04:18 [debug] 12255#0: *102094 file cleanup: fd:15 125872:2020/01/07 16:04:18 [debug] 12255#0: *102094 free: 00000000028D3190, unused: 48 125877:2020/01/07 16:04:18 [debug] 12255#0: *102094 free: 00000000028D41A0, unused: 2655 125881:2020/01/07 16:04:18 [debug] 12255#0: *102094 free: 0000000002883620 125883:2020/01/07 16:04:18 [debug] 12255#0: *102094 hc free: 0000000000000000 125887:2020/01/07 16:04:18 [debug] 12255#0: *102094 hc busy: 0000000000000000 0 125890:2020/01/07 16:04:18 [debug] 12255#0: *102094 reusable connection: 1 125896:2020/01/07 16:04:18 [debug] 12255#0: *102094 event timer del: 14: 8799496244 125900:2020/01/07 16:04:18 [debug] 12255#0: *102094 event timer add: 14: 65000:8799556448 **...... 
after 65 seconds (keepalive_timeout)** 3928328:2020/01/07 16:05:23 [debug] 12255#0: *102094 event timer del: 14: 8799556448 3928329:2020/01/07 16:05:23 [debug] 12255#0: *102094 http keepalive handler 3928330:2020/01/07 16:05:23 [debug] 12255#0: *102094 **close http connection: 14** 3928331:2020/01/07 16:05:23 [debug] 12255#0: *102094 reusable connection: 0 3928332:2020/01/07 16:05:23 [debug] 12255#0: *102094 free: 0000000000000000 3928333:2020/01/07 16:05:23 [debug] 12255#0: *102094 free: 00000000028D1250, unused: 136 Thanks! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286665,286665#msg-286665 From brendan.doyle at oracle.com Wed Jan 8 22:55:54 2020 From: brendan.doyle at oracle.com (Brendan Doyle) Date: Wed, 8 Jan 2020 22:55:54 +0000 Subject: using nginx open source to tunnel https requests to backend set Message-ID: <9b02d4f8-b338-3a82-ebe9-8ddef0d67912@oracle.com> Hi, So I want to use nginx open source as a load balancer to forward https requests to a backend set where the TLS is terminated by the application on the backend servers, i.e. I want to tunnel the TLS traffic. And I'm wondering about the best approach. What I'm thinking is that I use the stream module to load balance the TCP traffic to the backend set. But my concern is that I need session persistence, else the TLS handshake might be split across two different backend hosts. So I'm thinking that I need to use something like:

a) upstream backend_hosts {
       ip_hash;
       server host1.example.com;
       server host2.example.com;
       server host3.example.com;
   }

b) upstream backend_hosts {
       hash $remote_addr$remote_port consistent;
       server host1.example.com;
       server host2.example.com;
       server host3.example.com;
   }

To ensure session persistence, the disadvantage of a) is that all traffic from a given IP will always go to the same server, so it is not load balancing per session, per se. With b) I guess there is more chance of a unique TCP source port per TCP session, so there will be a better persistent spread. Thoughts? Thanks From nginx-forum at forum.nginx.org Thu Jan 9 02:51:07 2020 From: nginx-forum at forum.nginx.org (junly) Date: Wed, 08 Jan 2020 21:51:07 -0500 Subject: $bytes_received variable not working Message-ID: <074d2332af45ad33696ae3c05691983d.NginxMailingListEnglish@forum.nginx.org> My nginx is compiled and installed, and the version installed is Nginx version: nginx / 1.14.0 The parameters for compilation are: Configure the arguments:--prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log- HTTP - log - path = / var/log/nginx/access. The log - pid - path = / var/run/nginx pid - lock - path = / var/run/nginx.
Lock - HTTP client - body - temp - path = / var/cache/nginx/client_temp- HTTP proxy - temp - path = / var/cache/nginx/proxy_temp - HTTP - fastcgi - temp - path = / var/cache/nginx/fastcgi_temp - HTTP - uwsgi - temp - path = / var/cache/nginx/uwsgi_temp--http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module--with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module--with-http_secure_link_module --with-http_slice_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module--with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt=' -g-o2-fstack-protector -- wformat-werror =format-security-wp, -d_fortify_source = 2-fpic '--with-ld-opt=' -wl, -bsymbolic-functions-wl,-z, relro-wl,-z, now-wl,-- as-demand-pie' nginx log format? log_format main'[$time_iso8601] $remote_addr - $remote_user "$scheme $host $request $cookie_group_id" $status $body_bytes_sent "$http_referer" "$bytes_received" "$http_user_agent" "$http_x_forwarded_for"'; When I use nginx -t detection configure file, nginx emerg is prompted with unknown bytes_received variable I now want to use the bytes_received variable, how do I fix it? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286675,286675#msg-286675 From osa at freebsd.org.ru Thu Jan 9 03:23:01 2020 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Thu, 9 Jan 2020 06:23:01 +0300 Subject: $bytes_received variable not working In-Reply-To: <074d2332af45ad33696ae3c05691983d.NginxMailingListEnglish@forum.nginx.org> References: <074d2332af45ad33696ae3c05691983d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200109032301.GD70594@FreeBSD.org.ru> Hi there, hope you're doing well. The $bytes_received embedded variable is a part of ngx_stream_core_module, please see the following link for details, http://nginx.org/en/docs/stream/ngx_stream_core_module.html#var_bytes_received My guess is the mentioned log_format directive was defined outside of the stream level. -- Sergey Osokin On Wed, Jan 08, 2020 at 09:51:07PM -0500, junly wrote: > My nginx is compiled and installed, and the version installed is > Nginx version: nginx / 1.14.0 > > The parameters for compilation are: > Configure the arguments:--prefix=/etc/nginx > --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules > --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log- > HTTP - log - path = / var/log/nginx/access. The log - pid - path = / > var/run/nginx pid - lock - path = / var/run/nginx. 
Lock - HTTP client - body > - temp - path = / var/cache/nginx/client_temp- HTTP proxy - temp - path = / > var/cache/nginx/proxy_temp - HTTP - fastcgi - temp - path = / > var/cache/nginx/fastcgi_temp - HTTP - uwsgi - temp - path = / > var/cache/nginx/uwsgi_temp--http-scgi-temp-path=/var/cache/nginx/scgi_temp > --user=nginx --group=nginx --with-compat --with-file-aio --with-threads > --with-http_addition_module > --with-http_auth_request_module--with-http_dav_module --with-http_flv_module > --with-http_gunzip_module --with-http_gzip_static_module > --with-http_mp4_module --with-http_random_index_module > --with-http_realip_module--with-http_secure_link_module > --with-http_slice_module --with-http_stub_status_module > --with-http_sub_module --with-http_v2_module --with-mail > --with-mail_ssl_module--with-stream --with-stream_realip_module > --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt=' > -g-o2-fstack-protector -- wformat-werror =format-security-wp, > -d_fortify_source = 2-fpic '--with-ld-opt=' -wl, -bsymbolic-functions-wl,-z, > relro-wl,-z, now-wl,-- as-demand-pie' > > nginx log format??? > log_format main'[$time_iso8601] $remote_addr - $remote_user "$scheme > $host $request $cookie_group_id" $status $body_bytes_sent "$http_referer" > "$bytes_received" "$http_user_agent" "$http_x_forwarded_for"'; > > When I use nginx -t detection configure file, nginx emerg is prompted with > unknown bytes_received variable > > I now want to use the bytes_received variable, how do I fix it? > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286675,286675#msg-286675 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From 15521068423 at 163.com Thu Jan 9 13:49:42 2020 From: 15521068423 at 163.com (=?GBK?B?wbrOrM6w?=) Date: Thu, 9 Jan 2020 21:49:42 +0800 (CST) Subject: Nginx Load Balancing repsond 404 Message-ID: <7878d8b9.e98c.16f8a91665d.Coremail.15521068423@163.com> Hi. I have this config below And I requested upstream resource and got 404 response. However, when I modified 'listen 80' to 'listen 8081', I requested upstream resource again, I got 200 response. I wonder why. Thanks. snippet of my nginx config http { upstream backend { server IP:10000; server IP:10001; } server { listen 80; location / { proxy_pass http://backend; } } } -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Jan 9 14:56:52 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 9 Jan 2020 17:56:52 +0300 Subject: pre-existing data on a connection In-Reply-To: References: Message-ID: <20200109145652.GB12894@mdounin.ru> Hello! On Mon, Jan 06, 2020 at 04:32:24PM -0800, Frank Liu wrote: > When using nginx as a reverse proxy, how does it handle the > pre-existing data on a keepalive connection to the backend? > > eg: for a request, the backend has a bug that sends 2 identical > responses. I assume nginx will take the first response and send it to > client. What will happen to the extra data (duplicate response)? Now > when nginx gets a second request and re-uses the same keepalive > connection to backend, will nginx take the pre-existing data (the > duplicate response for the first request) on that connection and send > it to second client or will it drop those and read the new response > from backend to send to client? > If nginx uses the pre-existing data, all the subsequent requests will > get the response shifted. 
The behaviour heavily depends on the timing. As long as nginx is able to detect that there are additional data after the response has already been sent, nginx will close the connection (and will use another one for the next request to the same upstream server). It might not be able to detect the additional data, though, and will only read the duplicate response after it has sent the next request on the connection, so the duplicate response will be sent to the second client. -- Maxim Dounin http://mdounin.ru/ From me at mheard.com Fri Jan 10 02:29:33 2020 From: me at mheard.com (Mathew Heard) Date: Fri, 10 Jan 2020 13:29:33 +1100 Subject: Fwd: Google QUIC support in nginx In-Reply-To: References: <423d86fdb50880a10d4a8312ce7072c0.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hey nginx team, How does the roadmap now look given the Cloudflare Quiche "experiment" release? Is QUIC/HTTP3 still scheduled for mainline? On Fri, 31 May 2019 at 16:54, George wrote: > Roadmap suggests it is in Nginx 1.17 mainline QUIC = HTTP/3 > https://trac.nginx.org/nginx/roadmap :) > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,256352,284367#msg-284367 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pluknet at nginx.com Fri Jan 10 18:00:21 2020 From: pluknet at nginx.com (Sergey Kandaurov) Date: Fri, 10 Jan 2020 21:00:21 +0300 Subject: ngx_http_discard_request_body may make keepalive connection hang in CLOSE_WAIT In-Reply-To: References: Message-ID: > On 7 Jan 2020, at 19:12, ak638 wrote: > > Hi, > > I wonder if it's a bug. It confused me. > recv in ngx_http_discard_request_body return 0, but ignored(suppose to close > conntection soon). So the connection will stay in keepalive timer untill > timeout, while client already closed and server stay in CLOSE_WAIT. Hello. This is a known issue. It would be nice if you could try and report back if this patch helped you. # HG changeset patch # User Sergey Kandaurov # Date 1534236841 -10800 # Tue Aug 14 11:54:01 2018 +0300 # Node ID b71df78c7dd02f0adf817a5af1931e8e4e9365d0 # Parent 70c6b08973a02551612da4a4273757dc77c70ae2 Cancel keepalive and lingering close on EOF better. Unlike in 75e908236701, which added the logic to ngx_http_finalize_request(), this change moves it to a more generic routine ngx_http_finalize_connection() to cover cases when a request is finalized with NGX_DONE. In particular, this fixes unwanted connection transition into the keepalive state after receiving EOF while discarding request body.
diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c +++ b/src/http/ngx_http_request.c @@ -2497,11 +2497,6 @@ ngx_http_finalize_request(ngx_http_reque ngx_del_timer(c->write); } - if (c->read->eof) { - ngx_http_close_request(r, 0); - return; - } - ngx_http_finalize_connection(r); } @@ -2600,6 +2595,11 @@ ngx_http_finalize_connection(ngx_http_re r = r->main; + if (r->connection->read->eof) { + ngx_http_close_request(r, 0); + return; + } + if (r->reading_body) { r->keepalive = 0; r->lingering_close = 1; -- Sergey Kandaurov From nginx-forum at forum.nginx.org Fri Jan 10 20:03:18 2020 From: nginx-forum at forum.nginx.org (tconlon) Date: Fri, 10 Jan 2020 15:03:18 -0500 Subject: Force 302 to 301 redirect Message-ID: <191a59a657c28b372bad94ddf76c24ab.NginxMailingListEnglish@forum.nginx.org> Hi Team, I have a funny situation that I can't seem to get around. Here are the details. We had this naming convention to indicate a specific location: https://myurl/location/index.php?id=235 This naming convention is still out in the internet ether We changed to go to a slug based operation: https://myurl/location/newyorkcity We do the work to determine that id=235 is really the newyorkcity location in the code We return back to the client, the correct url (https://myurl/location/newyorkcity) When we review the SEO around this old url, we are getting back a 302, that it's temporary. how can I tweak the nginx engine to force a 301. thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286700,286700#msg-286700 From nginx-forum at forum.nginx.org Fri Jan 10 20:11:33 2020 From: nginx-forum at forum.nginx.org (cooley.josh@gmail.com) Date: Fri, 10 Jan 2020 15:11:33 -0500 Subject: Force 302 to 301 redirect In-Reply-To: <191a59a657c28b372bad94ddf76c24ab.NginxMailingListEnglish@forum.nginx.org> References: <191a59a657c28b372bad94ddf76c24ab.NginxMailingListEnglish@forum.nginx.org> Message-ID: Can you share the relevant parts of your config? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286700,286701#msg-286701 From nginx-forum at forum.nginx.org Fri Jan 10 20:17:21 2020 From: nginx-forum at forum.nginx.org (tconlon) Date: Fri, 10 Jan 2020 15:17:21 -0500 Subject: Force 302 to 301 redirect In-Reply-To: References: <191a59a657c28b372bad94ddf76c24ab.NginxMailingListEnglish@forum.nginx.org> Message-ID: <54d47d14b8c0ac4d67d4435d51218960.NginxMailingListEnglish@forum.nginx.org> Absolutely..and thanks Running this on Forge / Digital Ocean ############www.myurl.com # FORGE CONFIG (DO NOT REMOVE!) include forge-conf/www.myurl.com/before/*; server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name www.myurl.com; root /home/forge/www.myurl.com/current/public; # FORGE SSL (DO NOT REMOVE!) ssl_certificate /etc/nginx/ssl/www.myurl.com/676408/server.crt; ssl_certificate_key /etc/nginx/ssl/www.myurl.com/676408/server.key; ssl_protocols TLSv1.2; ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384; ssl_prefer_server_ciphers on; ssl_dhparam /etc/nginx/dhparams.pem; add_header X-Frame-Options "SAMEORIGIN"; add_header X-XSS-Protection "1; mode=block"; add_header X-Content-Type-Options "nosniff"; index index.html index.htm index.php; charset utf-8; # FORGE CONFIG (DO NOT REMOVE!) 
include forge-conf/www.myurl.com/server/*; location / { try_files $uri $uri/ /index.php?$query_string; } # Expire rules for static content - 2019-05-24 # cache.appcache, your document html and data location ~* \.(?:manifest|appcache|html?|xml|json)$ { expires -1; } # Feed location ~* \.(?:rss|atom)$ { expires 1h; add_header Pragma public; add_header Cache-Control "public"; } # Media: images, icons, video, audio, HTC location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ { expires 7d; access_log off; add_header Pragma public; add_header Cache-Control "public"; } # CSS and Javascript location ~* \.(?:css|js)$ { expires 7d; access_log off; add_header Pragma public; add_header Cache-Control "public"; } location = /favicon.ico { access_log off; log_not_found off; } location = /robots.txt { access_log off; log_not_found off; } access_log off; error_log /var/log/nginx/www.myurl.com-error.log error; error_page 404 /index.php; location ~ \.php$ { fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass unix:/var/run/php/php7.3-fpm.sock; fastcgi_index index.php; include fastcgi_params; } location ~ /\.(?!well-known).* { deny all; } } # FORGE CONFIG (DO NOT REMOVE!) include forge-conf/www.myurl.com/after/*; # Redirect every request to HTTPS... server { listen 80; listen [::]:80; server_name .myurl.com; return 301 https://$host$request_uri; } ############ssl_redirect.conf # Redirect SSL to primary domain SSL... server { listen 443 ssl http2; listen [::]:443 ssl http2; # FORGE SSL (DO NOT REMOVE!) ssl_certificate /etc/nginx/ssl/www.myurl.com/676408/server.crt; ssl_certificate_key /etc/nginx/ssl/www.myurl.com/676408/server.key; ssl_protocols TLSv1.2; ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384; ssl_prefer_server_ciphers on; ssl_dhparam /etc/nginx/dhparams.pem; server_name myurl.com; return 301 https://www.myurl.com$request_uri; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286700,286702#msg-286702 From nginx-forum at forum.nginx.org Fri Jan 10 21:29:11 2020 From: nginx-forum at forum.nginx.org (clintmiller) Date: Fri, 10 Jan 2020 16:29:11 -0500 Subject: proxy_pass in post_action location does not send any http request In-Reply-To: References: Message-ID: <4cb8854913606bc6ca2c3c30020bf5ae.NginxMailingListEnglish@forum.nginx.org> Hi, jacks. I use post_action for something similar to this for keeping track of users who download files. 
I've got a location for the /download entry point like this: location ~ /download/ { proxy_pass http://app_pool; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; post_action @finished; } # for mod_zip and x-accel-redirect requests, proxying the # request to S3 to fulfill the zip manifest or the x-accel-redirect URI location ~ "^/s3-proxy/(?.[a-z0-9][a-z0-9-.]*.s3.amazonaws.com)/(?.*)$" { internal; resolver 8.8.8.8 valid=30s; # Google DNS resolver_timeout 10s; proxy_http_version 1.1; proxy_set_header Host $s3_bucket; proxy_set_header Authorization ''; # remove amazon headers proxy_hide_header x-amz-id-2; proxy_hide_header x-amz-request-id; proxy_hide_header Set-Cookie; proxy_ignore_headers "Set-Cookie"; # bubble errors up proxy_intercept_errors on; proxy_pass https://$s3_bucket/$path?$args; } location @finished { internal; rewrite ^ /download/finish/$sent_http_x_download_log_id?bytes=$body_bytes_sent&status=$request_completion; } location ^~ /download/finish { proxy_pass http://$download_postback_hostname; # variable map declared elsewhere } This does work for sending the post_action response after the /download request is served- with one notable caveat! It does not work for X-Accel-Redirect responses from my app server. As far as I can tell, the post_action is either (1) never called in that case, or (2) has some other issue I have been able to figure out. I've dug around in the C source for Nginx, but it gets to a spot pretty quick where I'm in over my head. Although I've been living with this since 2017, here's my mailing list post regarding the issue from 2018: https://forum.nginx.org/read.php?2,278529 I've considered trying to engage Nginx for commercial support on this one issue, but I'm not sure what kind of appetite they may have for these types of issues. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286384,286703#msg-286703 From francis at daoine.org Fri Jan 10 23:02:07 2020 From: francis at daoine.org (Francis Daly) Date: Fri, 10 Jan 2020 23:02:07 +0000 Subject: Force 302 to 301 redirect In-Reply-To: <54d47d14b8c0ac4d67d4435d51218960.NginxMailingListEnglish@forum.nginx.org> References: <191a59a657c28b372bad94ddf76c24ab.NginxMailingListEnglish@forum.nginx.org> <54d47d14b8c0ac4d67d4435d51218960.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200110230207.GB26683@daoine.org> On Fri, Jan 10, 2020 at 03:17:21PM -0500, tconlon wrote: Hi there, > Absolutely..and thanks > > Running this on Forge / Digital Ocean I think you indicated that your index.php takes id=235 and decides that it will return a redirect to https://myurl/location/newyorkcity Does *that* php code say "send a 301" or "send a 302"? Can you change it to say "send a 301", if you want a 301? f -- Francis Daly francis at daoine.org From roger at netskrt.io Sat Jan 11 00:57:32 2020 From: roger at netskrt.io (Roger Fischer) Date: Fri, 10 Jan 2020 16:57:32 -0800 Subject: filter to modify upstream response before it is cached Message-ID: <943F6CC3-70CB-4FCE-9CBB-A18CE4D1AB59@netskrt.io> Hello, is there a hook into the nginx processing to modify the response body (and headers) before they are cached when using with proxy_pass? I am aware of the body filters (http://nginx.org/en/docs/dev/development_guide.html#http_body_filters ), running before the response is delivered to the client. But I would prefer to run the filter just once, when the upstream response is cached. 
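(For context, a body filter of the kind linked above follows the skeleton from the development guide — roughly the sketch below, where "foo" is a made-up module name and the usual ngx_module_t / ngx_http_module_t boilerplate is omitted; the point is that it runs each time a response body is sent to a client, not once when the cache entry is written:

#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>

static ngx_http_output_body_filter_pt  ngx_http_next_body_filter;

static ngx_int_t
ngx_http_foo_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
{
    /* inspect or rewrite the buffers in "in" here */

    return ngx_http_next_body_filter(r, in);
}

static ngx_int_t
ngx_http_foo_init(ngx_conf_t *cf)
{
    /* called from postconfiguration: hook this filter into the chain */
    ngx_http_next_body_filter = ngx_http_top_body_filter;
    ngx_http_top_body_filter = ngx_http_foo_body_filter;

    return NGX_OK;
}

)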
I am assuming, if nginx does not offer such a hook, OpenResty will not be of any help either. Thanks? Roger -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sat Jan 11 13:29:24 2020 From: nginx-forum at forum.nginx.org (tconlon) Date: Sat, 11 Jan 2020 08:29:24 -0500 Subject: Force 302 to 301 redirect In-Reply-To: <20200110230207.GB26683@daoine.org> References: <20200110230207.GB26683@daoine.org> Message-ID: <4199304f59d1caf3ab6d3cd208ddb9e3.NginxMailingListEnglish@forum.nginx.org> Hi, Digging into the code will get back to you Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286700,286707#msg-286707 From nginx-forum at forum.nginx.org Sat Jan 11 13:35:02 2020 From: nginx-forum at forum.nginx.org (tconlon) Date: Sat, 11 Jan 2020 08:35:02 -0500 Subject: Force 302 to 301 redirect In-Reply-To: <4199304f59d1caf3ab6d3cd208ddb9e3.NginxMailingListEnglish@forum.nginx.org> References: <20200110230207.GB26683@daoine.org> <4199304f59d1caf3ab6d3cd208ddb9e3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <95bdc95bd0d7602f97d690be38fd4b8a.NginxMailingListEnglish@forum.nginx.org> Hi, Found it, if ($page == 'index.php') { header("Location: ". $toUrl); probably need something like this References: <20200110230207.GB26683@daoine.org> <4199304f59d1caf3ab6d3cd208ddb9e3.NginxMailingListEnglish@forum.nginx.org> <95bdc95bd0d7602f97d690be38fd4b8a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200111142006.GC26683@daoine.org> On Sat, Jan 11, 2020 at 08:35:02AM -0500, tconlon wrote: Hi there, > Found it, > > if ($page == 'index.php') { > header("Location: ". $toUrl); > > probably need something like this > > // 301 Moved Permanently > header("Location: ",TRUE,301); Good that you have found a straightforward solution. An alternative, which would involve different changes, and would depend on the actual urls that have been advertised, could be to make a list of id/city pairs once, and use nginx's "map" to do the translation without touching the index.php. Something like, in the "http" block: map $request_uri $slug_city { /location/index.php?id=235 newyorkcity; # more lines like that } and then inside the location that normally handles that request (which I think is "location ~ \.php$ {" add if ($slug_city) { return 301 /location/$slug_city; } That may or may not be clearer to whoever is going to maintain the system in the future. Cheers, f -- Francis Daly francis at daoine.org From db388696 at gmail.com Mon Jan 13 04:31:39 2020 From: db388696 at gmail.com (David Breeding) Date: Sun, 12 Jan 2020 22:31:39 -0600 Subject: php-fpm - Cannot display web page in user directory Message-ID: I don't believe this is an nginx problem, (I think it's somewhere in the php end) but don't know where to turn. Hopefully since nginx seems to be tied closely with php-fpm, someone her might be able to point me in the right direction. I'm experimenting with nginx on an Arch system. I'm able to get pages from document-Root for both static and php pages. I can get static page from User directory, but cannot get php page from there. Here is the location block for this part. --------------------- Begin Code ----------------------- location ~ ^/~(.+)(/.+\.php)(.*)? 
{ alias /home/$1/public_html; fastcgi_split_path_info (/[^/]+\.php)(.*)?$; # Shows /home/user/public_html add_header X-Debug-Document-Root $document_root; #next two = /~user/public_html add_header X-Debug-Document-Uri $document_uri; add_header X-Debug-Request-uri $request_uri; # Shows /hello.php add_header X-Debug-script-name $fastcgi_script_name; # Does not display add_header X-Debug-path-info $fastcgi_path_info; # This displays exactly correct path. I can copy and # paste this path after "ls -l" on command line and list the file add_header X-Debug-Script_Filename $document_root$fastcgi_script_name; # Displays /~user/hello.php add_header X-Debug-uri $uri; # This doesn't return error error if (!-f $document_root$fastcgi_script_name) { return 404; } # This line causes Error 404 "File Not Found". # Without it - ReturnsStatus code 200 with blank page (expected result) # Hard-coding path here still produces Error 404 # Hard-coding path to file in Document Root and it displays correctly. fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_pass unix:/run/php-fpm/php-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; #include fastcgi_params; return 200; # To cause headers to show } ------------------------------------ End code -------------------------- It almost seems to be some kind of permission problem, but everything from /home down to the files in 'public_html' have read permissions for all entitities set, and again, static files in this folder are read and rendered. I tried to find out if php needed any settings to allow reading outside document_root, but could not find anything. Probably something stupidly simple, but if anyone can tell me what I'm doing wrong or missing, I'd appreciate it greatly. -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon Jan 13 10:03:09 2020 From: francis at daoine.org (Francis Daly) Date: Mon, 13 Jan 2020 10:03:09 +0000 Subject: filter to modify upstream response before it is cached In-Reply-To: <943F6CC3-70CB-4FCE-9CBB-A18CE4D1AB59@netskrt.io> References: <943F6CC3-70CB-4FCE-9CBB-A18CE4D1AB59@netskrt.io> Message-ID: <20200113100309.GD26683@daoine.org> On Fri, Jan 10, 2020 at 04:57:32PM -0800, Roger Fischer wrote: Hi there, > is there a hook into the nginx processing to modify the response body (and headers) before they are cached when using with proxy_pass? > I don't know if there is such a hook; but I do know that you can introduce another nginx-step to achieve the same end result. That is: right now you have something like: nginx: proxy_pass with cache -> upstream Instead you can have nginx: proxy_pass with cache -> nginx: proxy_pass with body filter -> upstream so that the output of the one run of your body filter is cached by the first nginx. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue Jan 14 00:08:49 2020 From: nginx-forum at forum.nginx.org (bengalih) Date: Mon, 13 Jan 2020 19:08:49 -0500 Subject: NGINX stripping websocket "Upgrade" headers Message-ID: I have the following in my site.conf file: ------------------------------------------------------------ #Websockets proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "Upgrade"; ------------------------------------------------------------ My understanding is that this will set the proper headers for websocket communications. 
Specifically, it will add a "Connection" header with value "Upgrade" and add an "Upgrade" header with whatever value the client passes to it in the "Upgrade" header (i.e. add the header on for hop-to-hop to upstream server). In the case of websockets, the Upgrade header will be "websocket." I am executing the following curl command to my server directly on my LAN (no NGINX involved) ------------------------------------------------------------ curl -k --include --no-buffer --header "Connection: Upgrade" --header "Upgrade: websocket" --header "Sec-WebSocket-Key: SGVsbG8sIHdvcmxkIQ==" --header "Sec-WebSocket-Version: 13" --header "X-Plex-Token: 87LJshwQwRxT24jaY8Be" https://10.10.10.102:32400/:/websockets/notifications ------------------------------------------------------------ This results in the following headers being passed to the server (via packet capture on the server): ------------------------------------------------------------ Frame 1268: 307 bytes on wire (2456 bits), 307 bytes captured (2456 bits) on interface 0 Ethernet II, Src: IntelCor_2f:f6:e4 (24:77:03:2f:f6:e4), Dst: AsrockIn_7c:fe:cf (70:85:c2:7c:fe:cf) Internet Protocol Version 4, Src: 10.10.10.201, Dst: 10.10.10.102 Transmission Control Protocol, Src Port: 53019, Dst Port: 32400, Seq: 1, Ack: 1, Len: 253 Hypertext Transfer Protocol GET /:/websockets/notifications HTTP/1.1\r\n Host: 10.10.10.102:32400\r\n User-Agent: curl/7.68.0\r\n Accept: */*\r\n Connection: Upgrade\r\n Upgrade: websocket\r\n Sec-WebSocket-Key: SGVsbG8sIHdvcmxkIQ==\r\n Sec-WebSocket-Version: 13\r\n X-Plex-Token: 87LJshwQwRxT24jaY8Be\r\n \r\n [Full request URI: http://10.10.10.102:32400/:/websockets/notifications] [HTTP request 1/1] [Response in frame: 1269] ------------------------------------------------------------ This appears as normal, and is met with a proper "Switching Protocols" result. However, when I send this request out to my public IP through NGINX, my server sees this in the headers: ------------------------------------------------------------ Frame 2494: 381 bytes on wire (3048 bits), 381 bytes captured (3048 bits) on interface 0 Ethernet II, Src: AsustekC_c6:4c:30 (1c:b7:2c:c6:4c:30), Dst: AsrockIn_7c:fe:cf (70:85:c2:7c:fe:cf) Internet Protocol Version 4, Src: 10.10.10.1, Dst: 10.10.10.102 Transmission Control Protocol, Src Port: 40461, Dst Port: 32400, Seq: 1, Ack: 1, Len: 315 Hypertext Transfer Protocol GET /:/websockets/notifications HTTP/1.1\r\n Host: plex.mydomain.com\r\n X-Real-IP: 10.10.10.201\r\n X-Forwarded-For: 10.10.10.201\r\n X-Forwarded-Proto: https\r\n Sec-WebSocket-Key: SGVsbG8sIHdvcmxkIQ==\r\n Sec-WebSocket-Version: 13\r\n Connection: Upgrade\r\n user-agent: curl/7.68.0\r\n accept: */*\r\n x-plex-token: 87LJshwQwRxT24jaY8Be\r\n \r\n [Full request URI: http://plex.mydomain.com/:/websockets/notifications] [HTTP request 1/1] ------------------------------------------------------------ Note that while the "Connection" header has been added (presumably because it is added specifically in the conf file via text and not passed header variable), the "Upgrade" header is missing. If I change my /conf file to look like this: ------------------------------------------------------------ #Websockets proxy_http_version 1.1; proxy_set_header Upgrade websocket; proxy_set_header Connection "Upgrade"; ------------------------------------------------------------ Then the header is properly added. My understanding is NGINX strips/doesn't pass along any empty headers. 
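(For reference, the WebSocket proxying example in the nginx documentation deals with exactly this empty-header case by using a map, so that "Connection: upgrade" is only sent upstream when the client actually supplied an Upgrade header; the location and upstream name below are illustrative:

    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        ...

        location /ws/ {
            proxy_pass http://backend;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
    }

With the map in place, an ordinary request on the same location is proxied with "Connection: close", while a genuine Upgrade request keeps both headers intact.)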
Based on my my local IP test, it is clear that the appropriate header is set (you can see it in my sent curl statement and in the packet that the server receives). However, due to the fact that it is not passed on to the upstream server via NGINX, it would appear that NGINX thinks the $http_upgrade is empty from the client and therefore not passing it on. Can anyone explain this? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286719,286719#msg-286719 From nginx-forum at forum.nginx.org Tue Jan 14 02:29:15 2020 From: nginx-forum at forum.nginx.org (q1548) Date: Mon, 13 Jan 2020 21:29:15 -0500 Subject: what happy when nginx cannot request certificate status using ssl_stapling_verify Message-ID: Hello, I enable "ssl_stapling" and "ssl_stapling_verify", it can work fine. But sometime, I can find a few error messages in error.log, ".....Operation timed out) while requesting certificate status....", it seem the OCSP server of my SSL provider cannot be connected at that time. I want to know, what happy when nginx cannot request certificate status? the user can visit website correctly? thank you so much. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286720,286720#msg-286720 From nginx-forum at forum.nginx.org Tue Jan 14 06:20:41 2020 From: nginx-forum at forum.nginx.org (bengalih) Date: Tue, 14 Jan 2020 01:20:41 -0500 Subject: NGINX stripping websocket "Upgrade" headers In-Reply-To: References: Message-ID: <99e2fc17c11c99eb192cb93e92a25d3e.NginxMailingListEnglish@forum.nginx.org> Figured it out! https://www.reddit.com/r/nginx/comments/eodrjc/nginx_stripping_websocket_upgrade_headers/ fireye quote: ---------------------------------------- Got it figured out, this is a quirk of HTTP/2.0 vs 1.1. Per RFC-2616: The Upgrade header field is intended to provide a simple mechanism for transition from HTTP/1.1 to some other, incompatible protocol. It looks like nginx discards the Upgrade header, when presented by a client, if the client communicates via HTTP2.0 already. You can confirm this by using the --http1.1 flag with curl and looking at the headers being transferred. ----------------------------------------- Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286719,286721#msg-286721 From chateau.xiao at gmail.com Tue Jan 14 12:06:35 2020 From: chateau.xiao at gmail.com (chateau Xiao) Date: Tue, 14 Jan 2020 20:06:35 +0800 Subject: Nginx Load Balancing repsond 404 In-Reply-To: <7878d8b9.e98c.16f8a91665d.Coremail.15521068423@163.com> References: <7878d8b9.e98c.16f8a91665d.Coremail.15521068423@163.com> Message-ID: <7700031C-DD4D-4586-ABB7-9DA179A89A76@gmail.com> What?s the result of curl when make you request direct to the backend port 10000 and 10001? > ? 2020?1?9????9:49???? <15521068423 at 163.com> ??? > > Hi. > > I have this config below And I requested upstream resource and got 404 response. However, when I modified 'listen 80' to 'listen 8081', I requested upstream resource again, I got 200 response. I wonder why. > > Thanks. > > > snippet of my nginx config > > http { > > > upstream backend { > > > server IP:10000; > > > server IP:10001; > > } > > > server { > > listen 80; > > > location / { > > proxy_pass http://backend; > > } > > > } > > } > > > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From themadbeaker at gmail.com Tue Jan 14 13:43:42 2020 From: themadbeaker at gmail.com (J.R.) 
Date: Tue, 14 Jan 2020 07:43:42 -0600 Subject: what happy when nginx cannot request certificate status using ssl_stapling_verify Message-ID: > I enable "ssl_stapling" and "ssl_stapling_verify", it can work fine. But > sometime, I can find a few error messages in error.log, ".....Operation > timed out) while requesting certificate status....", it seem the OCSP server > of my SSL provider cannot be connected at that time. > > I want to know, what happy when nginx cannot request certificate status? the > user can visit website correctly? thank you so much. 1. The OCSP certificate is valid for much longer than the intervals your server renews it at, so even if you can't connect for a while it should still be valid. 2. The client will contact the certificate's OCSP server directly if you don't send the OCSP cert (or it's expired) for verification. 3. The above #2 statement assumes your SSL Cert was NOT generated with "Must Staple". If it is, then you would definitely need a valid ocsp cert copy to send to clients, otherwise they will get an error. I see several failed attempts in my error log every day, it happens... Unless you have dozens & dozens of them from the same IP, then I wouldn't worry about it. From themadbeaker at gmail.com Tue Jan 14 13:53:16 2020 From: themadbeaker at gmail.com (J.R.) Date: Tue, 14 Jan 2020 07:53:16 -0600 Subject: NGINX stripping websocket "Upgrade" headers Message-ID: > Got it figured out, this is a quirk of HTTP/2.0 vs 1.1. Per RFC-2616: I tried to follow all your comments on reddit & plex, but I'm not really sure if you resolved this issue or just decided it was impossible... Have you tried using the nginx stream module? http://nginx.org/en/docs/stream/ngx_stream_core_module.html As for your issue about needing to include both the LE cert and your own in one file... Most people figure that out when they enable OCSP stapling... hehe... But yeah, I was scratching my head too for a bit at first then I found some posts mentioning what to do... From nginx-forum at forum.nginx.org Tue Jan 14 15:10:13 2020 From: nginx-forum at forum.nginx.org (bengalih) Date: Tue, 14 Jan 2020 10:10:13 -0500 Subject: NGINX stripping websocket "Upgrade" headers In-Reply-To: References: Message-ID: <1ff85401231edd5fba14632846a698d5.NginxMailingListEnglish@forum.nginx.org> Yes, it is solved with the proper cert configuration. I hadn't fully validated some advanced SSL options like OCSP stapling as I was first trying to get all the basics working. Since my browsers all validated the certificate chain without issue, I had assumed they were installed properly. I've had the issue with including the full chain (or at least the intermediate) in the cert file before with some other web server products, but in those cases none of the clients would properly connect without it. I've looked at the link you send on the ngx_stream_core_module and googled a bit more about it. While I see some documentation talking about configuring, I don't really see much in the way of exactly explaining its use cases. Best I can tell it is used for load balancing options - but I'm not sure how they would apply in this case? 
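(A rough sketch of what a stream-module setup would look like here, reusing the LAN address from the earlier packet captures and assuming nginx was built --with-stream: nginx then forwards raw TCP/TLS bytes and never parses the HTTP request, so header handling does not come into play, at the cost of losing per-location routing and TLS termination at the proxy.

    stream {
        upstream plex_backend {
            server 10.10.10.102:32400;
        }

        server {
            listen 32400;
            proxy_pass plex_backend;
        }
    }

)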
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286719,286730#msg-286730 From nginx-forum at forum.nginx.org Wed Jan 15 01:41:02 2020 From: nginx-forum at forum.nginx.org (q1548) Date: Tue, 14 Jan 2020 20:41:02 -0500 Subject: what happy when nginx cannot request certificate status using ssl_stapling_verify In-Reply-To: References: Message-ID: <41e6b499b9603750f5ad6e2a1a542fd1.NginxMailingListEnglish@forum.nginx.org> Hello J.R., thank you, thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286720,286736#msg-286736 From nginx-forum at forum.nginx.org Thu Jan 16 16:47:17 2020 From: nginx-forum at forum.nginx.org (xrd) Date: Thu, 16 Jan 2020 11:47:17 -0500 Subject: rewrite rule with consistent document structure Message-ID: I have a directory containing several directories with the exact same structure. I'll be adding more and more directories with the exact same structure. I want to make an NGINX rewrite rule that makes the URL as simple as possible. For example, here are two of those directories. Notice each has a directory called "public" and then has a bunch of files and other directories.

* directoryA (dir)
*** public (dir)
***** index.html (file)
***** images (dir)
******* image.png (file)
******* image.jpg (file)
* directoryB (dir)
*** public (dir)
***** index.html (file)
***** readme.html (file)
******* a (dir)
********* b (dir)
*********** c (dir)
************* somefile.html
************* nested.jpg

I want to make it so that NGINX serves up the HTML and images from the parent directory and omits the public directory from the URI. More explicitly, if the server is https://example.com I would like someone to be able to get https://example.com/directoryB/readme.html. And, if readme.html has an image tag inside it, that that gets loaded correctly as well. I've been trying various combinations of rewrite plus try_files but I'm getting confused by "last", "break", etc. And, I think a key distinction is that I don't want to "rewrite" the URI. If I understand things correctly, I don't want to have http://example.com/directoryB/public/index.html to rewrite to https://example.com/directoryB/index.html (I would prefer not to have the public directory available as a possible URI, but I suppose it wouldn't hurt). I'm struggling with this because I don't know how to "test" my rules without tweaking the configuration, then restarting, tailing the log files, etc. It's been very slow and I would love a better way, so if someone has a suggestion on how to do this better, I'm happy to hear it. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286755,286755#msg-286755 From themadbeaker at gmail.com Thu Jan 16 18:16:27 2020 From: themadbeaker at gmail.com (J.R.) Date: Thu, 16 Jan 2020 12:16:27 -0600 Subject: rewrite rule with consistent document structure Message-ID: > I want to make it so that NGINX serves up the HTML and images from the > parent directory and omits the public directory from the URI. In your case, using "alias" would be the way to go... http://nginx.org/en/docs/http/ngx_http_core_module.html#alias From mdounin at mdounin.ru Tue Jan 21 13:54:46 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 21 Jan 2020 16:54:46 +0300 Subject: nginx-1.17.8 Message-ID: <20200121135446.GX12894@mdounin.ru> Changes with nginx 1.17.8 21 Jan 2020

*) Feature: variables support in the "grpc_pass" directive.

*) Bugfix: a timeout might occur while handling pipelined requests in an SSL connection; the bug had appeared in 1.17.5.

*) Bugfix: in the "debug_points" directive when using HTTP/2.
Thanks to Daniil Bondarev. -- Maxim Dounin http://nginx.org/ From kworthington at gmail.com Tue Jan 21 15:51:50 2020 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 21 Jan 2020 10:51:50 -0500 Subject: [nginx-announce] nginx-1.17.8 In-Reply-To: <20200121135453.GY12894@mdounin.ru> References: <20200121135453.GY12894@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.17.8 for Windows https:// kevinworthington.com/nginxwin1178 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) https://kevinworthington.com/ https://twitter.com/kworthington On Tue, Jan 21, 2020 at 8:55 AM Maxim Dounin wrote: > Changes with nginx 1.17.8 21 Jan > 2020 > > *) Feature: variables support in the "grpc_pass" directive. > > *) Bugfix: a timeout might occur while handling pipelined requests in > an > SSL connection; the bug had appeared in 1.17.5. > > *) Bugfix: in the "debug_points" directive when using HTTP/2. > Thanks to Daniil Bondarev. > > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xeioex at nginx.com Tue Jan 21 16:49:10 2020 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 21 Jan 2020 19:49:10 +0300 Subject: njs-0.3.8 Message-ID: <877a6c70-7f09-fdc2-2657-ab9d36686b3f@nginx.com> Hello, I'm glad to announce a new release of NGINX JavaScript module (njs). This release proceeds to extend the coverage of ECMAScript specifications. This release adds Promise object support and typed-arrays from ES6. Notable new features: - Promise support in r.subrequest(): : r.subrequest(r, '/auth') : .then(reply => JSON.parse(reply.responseBody)) : .then(response => { : if (!response['token']) { : throw new Error("token is not available"); : } : return token; : }) : .then(token => { : r.subrequest('/backend', `token=${token}`) : .then(reply => r.return(reply.status, reply.responseBody)); : }) : .catch(_ => r.return(500)); You can learn more about njs: - Overview and introduction: http://nginx.org/en/docs/njs/ - Presentation: https://youtu.be/Jc_L6UffFOs - Using node modules with njs: http://nginx.org/en/docs/njs/node_modules.html Feel free to try it and give us feedback on: - Github: https://github.com/nginx/njs/issues - Mailing list: http://mailman.nginx.org/mailman/listinfo/nginx-devel Changes with njs 0.3.8 21 Jan 2020 nginx modules: *) Feature: added Promise support for r.subrequest(). If callback is not provided r.subrequest() returns an ordinary Promise object that resolves to subrequest response object. *) Change: r.parent property handler now returns "undefined" instead of throwing exception if parent object is not available. Core: *) Feature: added Promise support. Implemented according to the specification without: Promise.all(), Promise.allSettled(), Promise.race(). *) Feature: added initial Typed-arrays support. Thanks to Tiago Natel de Moura. *) Feature: added ArrayBuffer support. Thanks to Tiago Natel de Moura. *) Feature: added initial Symbol support. Thanks to Artem S. Povalyukhin. *) Feature: added externals supopor for JSON.stringify(). 
*) Feature: added Object.is(). Thanks to Artem S. Povalyukhin. *) Feature: added Object.setPrototypeOf(). Thanks to Artem S. Povalyukhin. *) Feature: introduced nullish coalescing operator. Thanks to Valentin Bartenev. *) Bugfix: fixed Object.getPrototypeOf() according to the specification. *) Bugfix: fixed Object.prototype.valueOf() according to the specification. *) Bugfix: fixed JSON.stringify() with unprintable values and replacer function. *) Bugfix: fixed operator "in" according to the specification. *) Bugfix: fixed Object.defineProperties() according to the specification. *) Bugfix: fixed Object.create() according to the specification. Thanks to Artem S. Povalyukhin. *) Bugfix: fixed Number.prototype.toString(radix) when fast-math is enabled. *) Bugfix: fixed RegExp() instance properties. *) Bugfix: fixed import segfault. Thanks to ??? (Hong Zhi Dao). From nginx-forum at forum.nginx.org Wed Jan 22 15:38:42 2020 From: nginx-forum at forum.nginx.org (xt3627216) Date: Wed, 22 Jan 2020 10:38:42 -0500 Subject: http2 request log in accurate $request_time ? Message-ID: nginx version: nginx-1.9.5 hi?nginx compute $request_time in log phase, which is in ngx_http_free_request (r) -> ngx_http_log_request(r) -> log_handler(r) but in http2 world, I see every request closed through ngx_http_v2_close_stream(stream, rc) in ngx_http_v2_close_stream have code below? ``` void ngx_http_v2_close_stream(ngx_http_v2_stream_t *stream, ngx_int_t rc) { ngx_event_t *ev; ngx_connection_t *fc; ngx_http_v2_node_t *node; ngx_http_v2_connection_t *h2c; h2c = stream->connection; node = stream->node; ngx_log_debug3(NGX_LOG_DEBUG_HTTP, h2c->connection->log, 0, "http2 close stream %ui, queued %ui, processing %ui", node->id, stream->queued, h2c->processing); fc = stream->request->connection; if (stream->queued) { fc->write->handler = ngx_http_v2_close_stream_handler; return; } if (!stream->out_closed) { if (ngx_http_v2_send_rst_stream(h2c, node->id, NGX_HTTP_V2_INTERNAL_ERROR) != NGX_OK) { h2c->connection->error = 1; } } node->stream = NULL; ngx_queue_insert_tail(&h2c->closed, &node->reuse); h2c->closed_nodes++; ngx_http_free_request(stream->request, rc); xxx } ``` if stream->queued is true, then ngx_http_free_request will not be called immediately, which will result $request_time larger then real request time? any body can affirm this ? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286797,286797#msg-286797 From nginx-forum at forum.nginx.org Wed Jan 22 15:41:00 2020 From: nginx-forum at forum.nginx.org (xt3627216) Date: Wed, 22 Jan 2020 10:41:00 -0500 Subject: http2 request log in accurate $request_time ? In-Reply-To: References: Message-ID: <3436cde5740579c6f581739f169c1b7a.NginxMailingListEnglish@forum.nginx.org> In our production env? I found that $request_time is very large under http2 protocol. by the way, some requests, not every one. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286797,286798#msg-286798 From themadbeaker at gmail.com Wed Jan 22 15:57:53 2020 From: themadbeaker at gmail.com (J.R.) Date: Wed, 22 Jan 2020 09:57:53 -0600 Subject: http2 request log in accurate $request_time ? Message-ID: > nginx version: nginx-1.9.5 Have you tried updating to a newer version of nginx? The 1.9 branch is probably 5 years old... It looks like the code you mention has changed somewhat, though I don't know if it has any effect on $request_time. 
https://github.com/nginx/nginx/blob/60f648f035fa05667b9ccbbea1b3a60d83534d9a/src/http/v2/ngx_http_v2.c From nginx-forum at forum.nginx.org Wed Jan 22 16:38:52 2020 From: nginx-forum at forum.nginx.org (yousufcse06) Date: Wed, 22 Jan 2020 11:38:52 -0500 Subject: Server Message-ID: <26f1f22c7e897fe664fa1ba50090a55a.NginxMailingListEnglish@forum.nginx.org> 2020/01/22 16:24:38 [error] 15197#15197: *17 connect() failed (111: Connection refused) while connecting to upstream, client: 202.133.89.135, server: coinscheckout.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3000/", host: "coinscheckout.com" Why i am getting this error in react js application Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286800,286800#msg-286800 From mdounin at mdounin.ru Wed Jan 22 17:45:32 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 22 Jan 2020 20:45:32 +0300 Subject: http2 request log in accurate $request_time ? In-Reply-To: References: Message-ID: <20200122174532.GF12894@mdounin.ru> Hello! On Wed, Jan 22, 2020 at 10:38:42AM -0500, xt3627216 wrote: [...] > if stream->queued is true, then ngx_http_free_request will not be called > immediately, which will result $request_time larger then real request time? If stream->queued is true, this means that there are unsent frames in the stream, and the request is not yet complete. As such, queue "larger" $request_time is correct, as it is expected to account sending the response to client as well. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Thu Jan 23 06:37:42 2020 From: nginx-forum at forum.nginx.org (xt3627216) Date: Thu, 23 Jan 2020 01:37:42 -0500 Subject: http2 request log in accurate $request_time ? In-Reply-To: <20200122174532.GF12894@mdounin.ru> References: <20200122174532.GF12894@mdounin.ru> Message-ID: <12978a580f778631f3395e3759fb3477.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Wed, Jan 22, 2020 at 10:38:42AM -0500, xt3627216 wrote: > > [...] > > > if stream->queued is true, then ngx_http_free_request will not > be called > > immediately, which will result $request_time larger then real > request time? > > If stream->queued is true, this means that there are unsent frames > in the stream, and the request is not yet complete. As such, > queue "larger" $request_time is correct, as it is expected to > account sending the response to client as well. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx thanks Maxim, I see your point. Anyway, I found some http2 requests have large $request_time in my logs( which is unreasonable large), will you be sure that the $request_time in http2 protocol compute correctly. ? the value of $request_time denote the actual time of RT. Will the multiplexing mechanism affect ?mis-computing" the value of RT. thanks many. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286797,286806#msg-286806 From mdounin at mdounin.ru Thu Jan 23 12:50:53 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 23 Jan 2020 15:50:53 +0300 Subject: http2 request log in accurate $request_time ? In-Reply-To: <12978a580f778631f3395e3759fb3477.NginxMailingListEnglish@forum.nginx.org> References: <20200122174532.GF12894@mdounin.ru> <12978a580f778631f3395e3759fb3477.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200123125053.GG12894@mdounin.ru> Hello! 
On Thu, Jan 23, 2020 at 01:37:42AM -0500, xt3627216 wrote: [...] > > If stream->queued is true, this means that there are unsent frames > > in the stream, and the request is not yet complete. As such, > > queue "larger" $request_time is correct, as it is expected to > > account sending the response to client as well. > > thanks Maxim, I see your point. > > Anyway, I found some http2 requests have large $request_time in my logs( > which is unreasonable large), will you be sure that the $request_time in > http2 protocol > compute correctly. ? the value of $request_time denote the actual time of > RT. > > Will the multiplexing mechanism affect ?mis-computing" the value of RT. The $request_time variable is defined to show time of the request, since the first byte received till the actual logging. That is, it is certainly correct. The question is why you are seeing "unreasonable large" values in logs. This may be due to client behaviour, as HTTP/2 allows clients to easily delay individual streams using the flow control mechanism, or due to a bug in nginx which might fail to properly send the stream for some reason. Either way, given that nginx 1.9.5 is the first nginx version where expirimental support for HTTP/2 was introduced, it is a really bad idea to use nginx 1.9.5 with HTTP/2 enabled. There were a lot of bug fixes since then, including security ones. If you want to use HTTP/2, consider upgrading to a modern nginx version, such as nginx 1.17.8 or 1.16.1. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Fri Jan 24 18:02:57 2020 From: nginx-forum at forum.nginx.org (andreios) Date: Fri, 24 Jan 2020 13:02:57 -0500 Subject: client sent header field with too long length value while processing HTTP/2 connection Message-ID: "client sent header field with too long length value while processing HTTP/2 connection" I get this error message in conjunction with nextcloud client. The client displays this: "error transferring /path/to/file/". Couldn't find much information about this or a valid answer with google. Mostly it is suggested to raise one or more of: fastcgi_buffer_size, fastcgi_buffers, fastcgi_busy_buffers_size, http2_max_header_size, http2_max_field_size. I have tried all of that. Nothing changed, even with very high values. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286817,286817#msg-286817 From pluknet at nginx.com Fri Jan 24 21:39:04 2020 From: pluknet at nginx.com (Sergey Kandaurov) Date: Sat, 25 Jan 2020 00:39:04 +0300 Subject: client sent header field with too long length value while processing HTTP/2 connection In-Reply-To: References: Message-ID: <8C40A403-2C5E-4922-8917-C4DDF92D3EF8@nginx.com> > On 24 Jan 2020, at 21:02, andreios wrote: > > "client sent header field with too long length value while processing HTTP/2 > connection" This means that a header field name or value was detected as if it was represented as a string literal with length encoded as an integer using of more than 4 bytes (allows to represent values above 2097278). Although the integer representation used by HPACK allows for values of indefinite size, this is not supported by nginx. See RFC 7541, section 5.2 for some details. You could try to see what's actually gets sent, for debugging purpose (you might want to decrypt it first). 
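If your nginx binary was built with debug support, one way to see the header block as nginx decodes it is to enable debug-level logging for a short test run. This is only a sketch: it assumes "nginx -V" shows --with-debug among the configure arguments, and the log path is just an example -- debug logging is very verbose, so keep it limited to a test server:

# requires an nginx binary compiled with --with-debug
error_log /var/log/nginx/h2-debug.log debug;

The resulting log should show the HTTP/2 header fields as they are parsed, which ought to point at the field the client sends with the oversized length representation. A decrypted packet capture of the client's connection would show the same thing at the frame level.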
-- Sergey Kandaurov From nginx-forum at forum.nginx.org Sat Jan 25 16:08:48 2020 From: nginx-forum at forum.nginx.org (andreios) Date: Sat, 25 Jan 2020 11:08:48 -0500 Subject: client sent header field with too long length value while processing HTTP/2 connection In-Reply-To: <8C40A403-2C5E-4922-8917-C4DDF92D3EF8@nginx.com> References: <8C40A403-2C5E-4922-8917-C4DDF92D3EF8@nginx.com> Message-ID: Sergey Kandaurov Wrote: ------------------------------------------------------- > > On 24 Jan 2020, at 21:02, andreios > wrote: > > > > "client sent header field with too long length value while > processing HTTP/2 > > connection" > > This means that a header field name or value was detected as if it > was represented as a string literal with length encoded as an integer > using of more than 4 bytes (allows to represent values above 2097278). > Although the integer representation used by HPACK allows for values > of indefinite size, this is not supported by nginx. > > See RFC 7541, section 5.2 for some details. > > You could try to see what's actually gets sent, for debugging purpose > (you might want to decrypt it first). > > -- > Sergey Kandaurov > Is there any workaround possible? Is there any howto on how to decrypt an see what actually gets send? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286817,286820#msg-286820 From francis at daoine.org Sat Jan 25 23:19:32 2020 From: francis at daoine.org (Francis Daly) Date: Sat, 25 Jan 2020 23:19:32 +0000 Subject: Server In-Reply-To: <26f1f22c7e897fe664fa1ba50090a55a.NginxMailingListEnglish@forum.nginx.org> References: <26f1f22c7e897fe664fa1ba50090a55a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200125231932.GI26683@daoine.org> On Wed, Jan 22, 2020 at 11:38:52AM -0500, yousufcse06 wrote: Hi there, > 2020/01/22 16:24:38 [error] 15197#15197: *17 connect() failed (111: > Connection refused) while connecting to upstream, client: 202.133.89.135, > server: coinscheckout.com, request: "GET / HTTP/1.1", upstream: > "http://127.0.0.1:3000/", host: "coinscheckout.com" > > Why i am getting this error in react js application Some part of your nginx config does proxy_pass to 127.0.0.1:3000. At the time of this request, nginx was not able to connect to that service. Perhaps the service was not running; perhaps it was blocked by a firewall; perhaps it was overloaded and protected itself by blocking the connect attempt. The logs for the port-3000 service might give more a hint of why nginx failed to connect then. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Mon Jan 27 11:51:33 2020 From: nginx-forum at forum.nginx.org (janakackv) Date: Mon, 27 Jan 2020 06:51:33 -0500 Subject: Two internal ports on same host in Single Web App. Message-ID: <31fe40dca9be2856c31e54f441b6fd1b.NginxMailingListEnglish@forum.nginx.org> I have an application runs on port 8080. Ex: 192.168.1.10:8080/Index.html. This landing page has basic username and password authentication to access it. After login, it changes the port automatically to 8088. Ex: 192.168.1.10:8088/#/monitor. I need external users to access this application using only port 80 and Nginx to rewrite between both ports. Ex. app.domain.com How can I configure this. please advice. Regards Janaka Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286825,286825#msg-286825 From themadbeaker at gmail.com Mon Jan 27 13:00:39 2020 From: themadbeaker at gmail.com (J.R.) 
Date: Mon, 27 Jan 2020 07:00:39 -0600 Subject: Two internal ports on same host in Single Web App. Message-ID: > I have an application runs on port 8080. > Ex: 192.168.1.10:8080/Index.html. > > This landing page has basic username and password authentication to access > it. After login, it changes the port automatically to 8088. > Ex: 192.168.1.10:8088/#/monitor. > > I need external users to access this application using only port 80 and > Nginx to rewrite between both ports. > Ex. app.domain.com > > How can I configure this. please advice. As it looks like your example they are different directories, you just set a different proxy for each location... Just make sure you pay attention to the order of operations when specifying locations. From jamesread5737 at gmail.com Tue Jan 28 00:41:21 2020 From: jamesread5737 at gmail.com (James Read) Date: Tue, 28 Jan 2020 00:41:21 +0000 Subject: stress testing nginx server Message-ID: Hi, does anyone know of a way to stress test a nginx server? For example an epoll based web crawler that can make c10k connections with the web server? Thanks, James Read -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpaprocki at fearnothingproductions.net Tue Jan 28 00:48:43 2020 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Mon, 27 Jan 2020 16:48:43 -0800 Subject: stress testing nginx server In-Reply-To: References: Message-ID: <4C6ED553-C0AF-4F30-9F4E-C78739E65BD9@fearnothingproductions.net> wrk is our go-to: https://github.com/wg/wrk Really any http load tester (ab, httperf, etc) should suffice > On Jan 27, 2020, at 4:41 PM, James Read wrote: > > ? > Hi, > > does anyone know of a way to stress test a nginx server? For example an epoll based web crawler that can make c10k connections with the web server? > > Thanks, > James Read > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mailinglist at unix-solution.de Tue Jan 28 08:11:50 2020 From: mailinglist at unix-solution.de (basti) Date: Tue, 28 Jan 2020 09:11:50 +0100 Subject: stress testing nginx server In-Reply-To: References: Message-ID: <433dc4f4-a0dc-e663-6420-05362d8f29f0@unix-solution.de> In the past I have used "siege". I have grep the access.log for 200 Status code and create a list. This list I used for input in siege to have a very close realistic stress test. Best Regards On 28.01.20 01:41, James Read wrote: > Hi, > > does anyone know of a way to stress test a nginx server? For example an > epoll based web crawler that can make c10k connections with the web server? 
> > Thanks, > James Read > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From jmedina at mardom.com Tue Jan 28 12:33:19 2020 From: jmedina at mardom.com (Johan Gabriel Medina Capois) Date: Tue, 28 Jan 2020 12:33:19 +0000 Subject: Help please Message-ID: Morning We are new using nginx as reverse proxy, we are having trouble with a site in IIS getting this logs Access logs "GET /wfc HTTP/1.1" 404 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:72.0) Gecko/20100101 Firefox/72.0" "GET /wfc/logon HTTP/1.1" 200 7496 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:72.0) Gecko/20100101 Firefox/72.0" "GET /wfcstatic/applications/wpk/html/scripts/cookie.js?version=8.1.6.2032 HTTP/1.1" 200 2534 "http://kronos.mardom.com/wfc/logon" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:72.0) Gecko/20100101 Firefox/72.0" Config server { listen 80; server_name kronos.mardom.com; location / { proxy_pass http://10.228.20.97; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; } } Can you help us please? Regard Johan Medina Administrador de Sistemas e Infraestructura [Logo] Departamento: TECNOLOGIA Central Tel: 809-539-600 Ext: 8139 Flota: (809) 974-4954 Directo: 809 974-4954 Email: jmedina at mardom.com Web:www.mardom.com [Facebook icon] [Instagram icon] [Linkedin icon] [Youtube icon] [Banner] Sea amable con el medio ambiente: no imprima este correo a menos que sea completamente necesario. -------------- next part -------------- An HTML attachment was scrubbed... URL: From themadbeaker at gmail.com Tue Jan 28 13:33:33 2020 From: themadbeaker at gmail.com (J.R.) Date: Tue, 28 Jan 2020 07:33:33 -0600 Subject: Help please Message-ID: > Can you help us please? You're going to have to be a *bit* more specific what your problem is... From nginx-forum at forum.nginx.org Tue Jan 28 13:52:53 2020 From: nginx-forum at forum.nginx.org (arigatox) Date: Tue, 28 Jan 2020 08:52:53 -0500 Subject: UDP Load balancing Message-ID: <425399703f6dbaf1e6a16beef6349e2c.NginxMailingListEnglish@forum.nginx.org> Hi, I am testing nginx as a reverse proxy/load balancer for UDP. I have configured the upstream servers, and it is working fine, except one issue that is driving me crazy. It seems that nginx does not keep the udp source ports between requests. It changes the source port on every request. So I can't use it to load balance udp protocols that needs several packets (for example, wireguard). I am using version 1.14.0 on Ubuntu and this is my config file: stream { upstream udp_backend { hash $remote_addr; server 10.0.0.3:5180; server 10.0.0.4:5180; } server { listen 5180 udp; proxy_pass udp_backend; proxy_bind 10.0.0.2:5181; } } Is there any parameter that allows me to preserve udp ports? Thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286837,286837#msg-286837 From arut at nginx.com Tue Jan 28 14:02:27 2020 From: arut at nginx.com (Roman Arutyunyan) Date: Tue, 28 Jan 2020 17:02:27 +0300 Subject: UDP Load balancing In-Reply-To: <425399703f6dbaf1e6a16beef6349e2c.NginxMailingListEnglish@forum.nginx.org> References: <425399703f6dbaf1e6a16beef6349e2c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200128140227.7gavlqio4wanoy73@Romans-MacBook-Pro.local> Hi, On Tue, Jan 28, 2020 at 08:52:53AM -0500, arigatox wrote: > Hi, I am testing nginx as a reverse proxy/load balancer for UDP. 
I have > configured the upstream servers, and it is working fine, except one issue > that is driving me crazy. > > It seems that nginx does not keep the udp source ports between requests. It > changes the source port on every request. So I can't use it to load balance > udp protocols that needs several packets (for example, wireguard). > > I am using version 1.14.0 on Ubuntu and this is my config file: Your nginx is too old. UDP session persistence has been introduced in mainline version 1.15.0 and is available in stable version 1.16.0. > stream { > upstream udp_backend { > hash $remote_addr; > server 10.0.0.3:5180; > server 10.0.0.4:5180; > } > server { > listen 5180 udp; > proxy_pass udp_backend; > proxy_bind 10.0.0.2:5181; > } > } > > Is there any parameter that allows me to preserve udp ports? > Thanks. > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286837,286837#msg-286837 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Roman Arutyunyan From jmedina at mardom.com Tue Jan 28 14:03:00 2020 From: jmedina at mardom.com (Johan Gabriel Medina Capois) Date: Tue, 28 Jan 2020 14:03:00 +0000 Subject: Help please In-Reply-To: References: Message-ID: Sure. The problem is that we have an backend application running in HTML5, when we navigate to http://kronos.mardom.com/wfc/htmlnavigator/logon and try to login, it redirect to http://kronos.mardom.com/wfc/ and deploy error message "you have no access" , but when navigate from localhost no problem. And the nginx log "GET /wfc HTTP/1.1" 404 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:72.0) Gecko/20100101 Firefox/72.0" "GET /wfc/logon HTTP/1.1" 200 7496 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:72.0) Gecko/20100101 Firefox/72.0" "GET /wfcstatic/applications/wpk/html/scripts/cookie.js?version=8.1.6.2032 HTTP/1.1" 200 2534 "http://kronos.mardom.com/wfc/logon" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:72.0) Gecko/20100101 Firefox/72.0" Configuration is server { listen 80; server_name kronos.mardom.com; location / { proxy_pass http://10.228.20.97; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; } } Regards -----Original Message----- From: nginx On Behalf Of J.R. Sent: Tuesday, January 28, 2020 9:34 AM To: nginx at nginx.org Subject: Re: Help please > Can you help us please? You're going to have to be a *bit* more specific what your problem is... _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx Johan Medina Administrador de Sistemas e Infraestructura [Logo] Departamento: TECNOLOGIA Central Tel: 809-539-600 Ext: 8139 Flota: (809) 974-4954 Directo: 809 974-4954 Email: jmedina at mardom.com Web:www.mardom.com [Facebook icon] [Instagram icon] [Linkedin icon] [Youtube icon] [Banner] Sea amable con el medio ambiente: no imprima este correo a menos que sea completamente necesario. 
From Richard at primarysite.net Tue Jan 28 14:11:10 2020 From: Richard at primarysite.net (Richard Paul) Date: Tue, 28 Jan 2020 14:11:10 +0000 Subject: Help please In-Reply-To: References: Message-ID: <54867d6835059d36953428375ed05ec0c139e237.camel@primarysite.net> It doesn't actually redirect to /wfc/ though, or rather your log lines show a 404 at /wfc Also, your log line says /wfc/logon not /wfc/htmlnavigator/logon GET /wfc GET /wfc/logon GET /wfcstatic/applications/wpk/html/scripts/cookie.js?version=8.1.6.2032 On Tue, 2020-01-28 at 14:03 +0000, Johan Gabriel Medina Capois wrote: Sure. The problem is that we have an backend application running in HTML5, when we navigate to http://kronos.mardom.com/wfc/htmlnavigator/logon and try to login, it redirect to http://kronos.mardom.com/wfc/ and deploy error message "you have no access" , but when navigate from localhost no problem. And the nginx log "GET /wfc HTTP/1.1" 404 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:72.0) Gecko/20100101 Firefox/72.0" "GET /wfc/logon HTTP/1.1" 200 7496 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:72.0) Gecko/20100101 Firefox/72.0" "GET /wfcstatic/applications/wpk/html/scripts/cookie.js?version=8.1.6.2032 HTTP/1.1" 200 2534 " http://kronos.mardom.com/wfc/logon " "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:72.0) Gecko/20100101 Firefox/72.0" Configuration is server { listen 80; server_name kronos.mardom.com; location / { proxy_pass http://10.228.20.97 ; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; } } Regards -----Original Message----- From: nginx < nginx-bounces at nginx.org > On Behalf Of J.R. Sent: Tuesday, January 28, 2020 9:34 AM To: nginx at nginx.org Subject: Re: Help please Can you help us please? You're going to have to be a *bit* more specific what your problem is... _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx Johan Medina Administrador de Sistemas e Infraestructura [Logo] Departamento: TECNOLOGIA Central Tel: 809-539-600 Ext: 8139 Flota: (809) 974-4954 Directo: 809 974-4954 Email: jmedina at mardom.com Web:www.mardom.com [Facebook icon] < https://www.facebook.com/maritimadelcaribe > [Instagram icon] < https://www.instagram.com/maritimadelcaribe > [Linkedin icon] < https://www.linkedin.com/company/maritima-dominicana-sas/?viewAsMember=true > [Youtube icon] [Banner] Sea amable con el medio ambiente: no imprima este correo a menos que sea completamente necesario. _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Tue Jan 28 14:17:19 2020 From: nginx-forum at forum.nginx.org (arigatox) Date: Tue, 28 Jan 2020 09:17:19 -0500 Subject: UDP Load balancing In-Reply-To: <20200128140227.7gavlqio4wanoy73@Romans-MacBook-Pro.local> References: <20200128140227.7gavlqio4wanoy73@Romans-MacBook-Pro.local> Message-ID: Thanks Roman, I will try a newer version and let you know Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286837,286841#msg-286841 From nginx-forum at forum.nginx.org Tue Jan 28 14:29:20 2020 From: nginx-forum at forum.nginx.org (arigatox) Date: Tue, 28 Jan 2020 09:29:20 -0500 Subject: UDP Load balancing - [Solved] In-Reply-To: References: <20200128140227.7gavlqio4wanoy73@Romans-MacBook-Pro.local> Message-ID: <1b04611e510e3e3cd69fa13c756204ac.NginxMailingListEnglish@forum.nginx.org> Excellent! I upgraded to 1.16.1 and udp load balancing is working as expected. Thank you again Roman. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286837,286842#msg-286842 From Richard at primarysite.net Tue Jan 28 15:00:18 2020 From: Richard at primarysite.net (Richard Paul) Date: Tue, 28 Jan 2020 15:00:18 +0000 Subject: Help please In-Reply-To: <54867d6835059d36953428375ed05ec0c139e237.camel@primarysite.net> References: <54867d6835059d36953428375ed05ec0c139e237.camel@primarysite.net> Message-ID: <67f638cd71fe8c1c451a3680f04dd2ee9af299b1.camel@primarysite.net> By the looks of things, if the application is redirecting to /wfc that's not working, your application doesn't seem to accept that as a valid. The Squid cache is returning a miss and so it is hitting the backend and getting a 404 from there it seems. /wfc/ with a trailing slash does work however, so this looks like an issue with the IIS configuration to me. Also, this is a login form, I'd recommend that you get TLS set up on this (Let's Encrypt's certbot is free afterall). On Tue, 2020-01-28 at 14:11 +0000, Richard Paul wrote: It doesn't actually redirect to /wfc/ though, or rather your log lines show a 404 at /wfc Also, your log line says /wfc/logon not /wfc/htmlnavigator/logon GET /wfc GET /wfc/logon GET /wfcstatic/applications/wpk/html/scripts/cookie.js?version=8.1.6.2032 On Tue, 2020-01-28 at 14:03 +0000, Johan Gabriel Medina Capois wrote: Sure. The problem is that we have an backend application running in HTML5, when we navigate to http://kronos.mardom.com/wfc/htmlnavigator/logon and try to login, it redirect to http://kronos.mardom.com/wfc/ and deploy error message "you have no access" , but when navigate from localhost no problem. And the nginx log "GET /wfc HTTP/1.1" 404 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:72.0) Gecko/20100101 Firefox/72.0" "GET /wfc/logon HTTP/1.1" 200 7496 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:72.0) Gecko/20100101 Firefox/72.0" "GET /wfcstatic/applications/wpk/html/scripts/cookie.js?version=8.1.6.2032 HTTP/1.1" 200 2534 " http://kronos.mardom.com/wfc/logon " "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:72.0) Gecko/20100101 Firefox/72.0" Configuration is server { listen 80; server_name kronos.mardom.com; location / { proxy_pass http://10.228.20.97 ; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; } } Regards -----Original Message----- From: nginx < nginx-bounces at nginx.org > On Behalf Of J.R. Sent: Tuesday, January 28, 2020 9:34 AM To: nginx at nginx.org Subject: Re: Help please Can you help us please? 
You're going to have to be a *bit* more specific what your problem is... _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx Johan Medina Administrador de Sistemas e Infraestructura [Logo] Departamento: TECNOLOGIA Central Tel: 809-539-600 Ext: 8139 Flota: (809) 974-4954 Directo: 809 974-4954 Email: jmedina at mardom.com Web:www.mardom.com [Facebook icon] < https://www.facebook.com/maritimadelcaribe > [Instagram icon] < https://www.instagram.com/maritimadelcaribe > [Linkedin icon] < https://www.linkedin.com/company/maritima-dominicana-sas/?viewAsMember=true > [Youtube icon] [Banner] Sea amable con el medio ambiente: no imprima este correo a menos que sea completamente necesario. _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Jan 29 13:20:33 2020 From: nginx-forum at forum.nginx.org (MarcoI) Date: Wed, 29 Jan 2020 08:20:33 -0500 Subject: net::ERR_CONNECTION_REFUSED . How to correctly configure Nginx with Socket.io? Message-ID: <7a32af3b544ea0877a9a8f9dd5584c56.NginxMailingListEnglish@forum.nginx.org> I created a brand new tiny webapp with vue cli, so without adding anything, apart from what the empty vue-cli scaffolding brings: (base) marco at pc:~/vueMatters/testproject$ npm run serve > testproject at 0.1.0 serve /home/marco/vueMatters/testproject > vue-cli-service serve INFO Starting development server... 98% after emitting CopyPlugin DONE Compiled successfully in 1409ms 8:14:46 PM App running at: - Local: localhost:8080 - Network: 192.168.1.7:8080 Note that the development build is not optimized. To create a production build, run npm run build. 
And got this error message : https://drive.google.com/open?id=10GcVFmqNVGRjox3wklJtcrAkIWM3kOp8 "GET https://localhost/sockjs-node/info?t=1580228998416 net::ERR_CONNECTION_REFUSED" node --version v12.10.0 npm -v 6.13.6 webpack-cli at 3.3.10 Ubuntu 18.04.03 Server Edition This is the /etc/nginx/conf.d/default.conf : server { listen 443 ssl http2 default_server; server_name ggc.world; ssl_certificate /etc/ssl/certs/chained.pem; ssl_certificate_key /etc/ssl/private/domain.key; ssl_session_timeout 5m; ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; ssl_ciphers EECDH+CHACHA20:EECDH+CHACHA20-draft:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5; ssl_prefer_server_ciphers on; ssl_session_cache shared:SSL:50m; ssl_dhparam /etc/ssl/certs/dhparam.pem; #ssl_stapling on; #ssl_stapling_verify on; access_log /var/log/nginx/ggcworld-access.log combined; add_header Strict-Transport-Security "max-age=31536000"; location = /favicon.ico { access_log off; log_not_found off; } location / { proxy_pass http://127.0.0.1:8080; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; } } server { listen 80 default_server; listen [::]:80 default_server; error_page 497 https://$host:$server_port$request_uri; server_name www.ggc.world; return 301 https://$server_name$request_uri; access_log /var/log/nginx/ggcworld-access.log combined; add_header Strict-Transport-Security "max-age=31536000"; location = /favicon.ico { access_log off; log_not_found off; } location / { proxy_pass http://127.0.0.1:8080; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; } } # https://www.nginx.com/blog/nginx-nodejs-websockets-socketio/ # https://gist.github.com/uorat/10b15a32f3ffa3f240662b9b0fefe706 # http://nginx.org/en/docs/stream/ngx_stream_core_module.html upstream websocket { ip_hash; server localhost:3000; } server { listen 81; server_name ggc.world www.ggc.world; location / { proxy_pass http://websocket; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; } #location /socket.io/socket.io.js { # proxy_pass http://websocket; #} } How to solve the problem? How to correctly configure Nginx with socket.io? Marco Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286850,286850#msg-286850 From nginx-forum at forum.nginx.org Wed Jan 29 13:39:16 2020 From: nginx-forum at forum.nginx.org (MarcoI) Date: Wed, 29 Jan 2020 08:39:16 -0500 Subject: net::ERR_CONNECTION_REFUSED . How to correctly configure Nginx with Socket.io? In-Reply-To: <7a32af3b544ea0877a9a8f9dd5584c56.NginxMailingListEnglish@forum.nginx.org> References: <7a32af3b544ea0877a9a8f9dd5584c56.NginxMailingListEnglish@forum.nginx.org> Message-ID: Add-on to the previous email: using firefox as web browser, I get this error message: https://drive.google.com/open?id=1l6USIHrbHl6kBcQtormXplOgx0J653ko "Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://localhost/sockjs-node/info?t=1580304400023. (Reason: CORS request did not succeed)." Looking at Mozilla Developer explanation: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS/Errors/CORSDidNotSucceed?utm_source=devtools&utm_medium=firefox-cors-errors&utm_campaign=default "What went wrong? The HTTP request which makes use of CORS failed because the HTTP connection failed at either the network or protocol level. 
The error is not directly related to CORS, but is a fundamental network error of some kind. In many cases, it is caused by a browser plugin (e.g. an ad blocker or privacy protector) blocking the request. Other possible causes include: Trying to access an https resource that has an invalid certificate will cause this error. Trying to access an http resource from a page with an https origin will also cause this error. As of Firefox 68, https pages are not permitted to access http://localhost, although this may be changed by Bug 1488740. The server did not respond to the actual request (even if it responded to the Preflight request). One scenario might be an HTTP service being developed that panicked without returning any data. " Checked the TLS Certificates with https://www.digicert.com/help/ : and the result is: " TLS Certificate has not been revoked. TLS Certificate expires soon. The primary TLS Certificate expires on February 28, 2020 (30 days remaining) Certificate Name matches ggc.world TLS Certificate is correctly installed " So may be my nginx configuration has to be improved. Looking forward to your kind help. Marco Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286850,286851#msg-286851 From nginx-forum at forum.nginx.org Wed Jan 29 15:20:44 2020 From: nginx-forum at forum.nginx.org (gagandeep) Date: Wed, 29 Jan 2020 10:20:44 -0500 Subject: Certificate Chain Validation Message-ID: Does nginx validates all the Cerificates in the certificate chain? Like the expiry date of the all the intermediate certificates. If one the intermediate certificate has expired will nginx still proceed or will it break the connection? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286852,286852#msg-286852 From jmedina at mardom.com Wed Jan 29 19:12:24 2020 From: jmedina at mardom.com (Johan Gabriel Medina Capois) Date: Wed, 29 Jan 2020 19:12:24 +0000 Subject: Help please In-Reply-To: <67f638cd71fe8c1c451a3680f04dd2ee9af299b1.camel@primarysite.net> References: <54867d6835059d36953428375ed05ec0c139e237.camel@primarysite.net> <67f638cd71fe8c1c451a3680f04dd2ee9af299b1.camel@primarysite.net> Message-ID: The issues is that nginx is not allowing authentication through, any application cant?s authenticate through nginx, is this case the backend is running in IIS, any idea? if you need more information i can send what ever you need, but please a need your help. Regards From: nginx On Behalf Of Richard Paul Sent: Tuesday, January 28, 2020 11:00 AM To: nginx at nginx.org Subject: Re: Help please By the looks of things, if the application is redirecting to /wfc that's not working, your application doesn't seem to accept that as a valid. The Squid cache is returning a miss and so it is hitting the backend and getting a 404 from there it seems. /wfc/ with a trailing slash does work however, so this looks like an issue with the IIS configuration to me. Also, this is a login form, I'd recommend that you get TLS set up on this (Let's Encrypt's certbot is free afterall). On Tue, 2020-01-28 at 14:11 +0000, Richard Paul wrote: It doesn't actually redirect to /wfc/ though, or rather your log lines show a 404 at /wfc Also, your log line says /wfc/logon not /wfc/htmlnavigator/logon GET /wfc GET /wfc/logon GET /wfcstatic/applications/wpk/html/scripts/cookie.js?version=8.1.6.2032 On Tue, 2020-01-28 at 14:03 +0000, Johan Gabriel Medina Capois wrote: Sure. 
The problem is that we have an backend application running in HTML5, when we navigate to http://kronos.mardom.com/wfc/htmlnavigator/logon and try to login, it redirect to http://kronos.mardom.com/wfc/ and deploy error message "you have no access" , but when navigate from localhost no problem. And the nginx log "GET /wfc HTTP/1.1" 404 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:72.0) Gecko/20100101 Firefox/72.0" "GET /wfc/logon HTTP/1.1" 200 7496 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:72.0) Gecko/20100101 Firefox/72.0" "GET /wfcstatic/applications/wpk/html/scripts/cookie.js?version=8.1.6.2032 HTTP/1.1" 200 2534 " http://kronos.mardom.com/wfc/logon " "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:72.0) Gecko/20100101 Firefox/72.0" Configuration is server { listen 80; server_name kronos.mardom.com; location / { proxy_pass http://10.228.20.97 ; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; } } Regards -----Original Message----- From: nginx < nginx-bounces at nginx.org > On Behalf Of J.R. Sent: Tuesday, January 28, 2020 9:34 AM To: nginx at nginx.org Subject: Re: Help please Can you help us please? You're going to have to be a *bit* more specific what your problem is... _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx Johan Medina Administrador de Sistemas e Infraestructura [Logo] Departamento: TECNOLOGIA Central Tel: 809-539-600 Ext: 8139 Flota: (809) 974-4954 Directo: 809 974-4954 Email: jmedina at mardom.com Web:www.mardom.com [Facebook icon] < https://www.facebook.com/maritimadelcaribe > [Instagram icon] < https://www.instagram.com/maritimadelcaribe > [Linkedin icon] < https://www.linkedin.com/company/maritima-dominicana-sas/?viewAsMember=true > [Youtube icon] [Banner] Sea amable con el medio ambiente: no imprima este correo a menos que sea completamente necesario. _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx Johan Medina Administrador de Sistemas e Infraestructura [Logo] Departamento: TECNOLOGIA Central Tel: 809-539-600 Ext: 8139 Flota: (809) 974-4954 Directo: 809 974-4954 Email: jmedina at mardom.com Web:www.mardom.com [Facebook icon] [Instagram icon] [Linkedin icon] [Youtube icon] [Banner] Sea amable con el medio ambiente: no imprima este correo a menos que sea completamente necesario. -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Jan 29 20:09:48 2020 From: francis at daoine.org (Francis Daly) Date: Wed, 29 Jan 2020 20:09:48 +0000 Subject: Help please In-Reply-To: References: <54867d6835059d36953428375ed05ec0c139e237.camel@primarysite.net> <67f638cd71fe8c1c451a3680f04dd2ee9af299b1.camel@primarysite.net> Message-ID: <20200129200948.GN26683@daoine.org> On Wed, Jan 29, 2020 at 07:12:24PM +0000, Johan Gabriel Medina Capois wrote: Hi there, > The issues is that nginx is not allowing authentication through, any application cant?s authenticate through nginx, is this case the backend is running in IIS, any idea? if you need more information i can send what ever you need, but please a need your help. 
> I suspect that it will become clearer where the problem might be, if you can show one request that you make that works when you avoid nginx; and show the same request through nginx and show the corresponding failure response. If you can use "curl -v" to send the request, with whatever user/pass credentials you use obviously marked, then it may help to copy-paste the request and response. If you are using http basic authentication on IIS, then it should Just Work. If you are using ntlm authentication on IIS, then it will not work through any proxy or reverse proxy (unless it has specific ntlm support). Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Jan 29 20:14:48 2020 From: francis at daoine.org (Francis Daly) Date: Wed, 29 Jan 2020 20:14:48 +0000 Subject: net::ERR_CONNECTION_REFUSED . How to correctly configure Nginx with Socket.io? In-Reply-To: References: <7a32af3b544ea0877a9a8f9dd5584c56.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200129201448.GO26683@daoine.org> On Wed, Jan 29, 2020 at 08:39:16AM -0500, MarcoI wrote: Hi there, > So may be my nginx configuration has to be improved. What request do you make? What response do you get? What response do you want to get, instead? If you can use something like "curl -v" to show one specific request that gets a response that you do not want, that may help make it clear where the problem is. Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Thu Jan 30 02:16:58 2020 From: nginx-forum at forum.nginx.org (slowgary) Date: Wed, 29 Jan 2020 21:16:58 -0500 Subject: Certificate Chain Validation In-Reply-To: References: Message-ID: <8eed437c84947da67762c7b3bfb00911.NginxMailingListEnglish@forum.nginx.org> Nginx does not validate the expiration date of certificates. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286852,286857#msg-286857 From nginx-forum at forum.nginx.org Thu Jan 30 02:32:21 2020 From: nginx-forum at forum.nginx.org (slowgary) Date: Wed, 29 Jan 2020 21:32:21 -0500 Subject: Documentation for alias directive Message-ID: I hope this is the appropriate place to report this. I struggled with using the alias directive because I (incorrectly) assumed that it was relative to root since all other parts of my nginx configs are. This is not mentioned in the documentation, it'd be nice to see it there. If I'm completely off base, please correct me. Below are examples of what did and didn't work for me. #THIS IS WRONG server { root /var/www; location /i/ { alias /images/; } } #THIS IS RIGHT server { root /var/www; location /i/ { alias /var/www/images/; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286858,286858#msg-286858 From r at roze.lv Thu Jan 30 08:05:28 2020 From: r at roze.lv (Reinis Rozitis) Date: Thu, 30 Jan 2020 10:05:28 +0200 Subject: Documentation for alias directive In-Reply-To: References: Message-ID: <000301d5d744$08e1c760$1aa55620$@roze.lv> > I struggled with using the alias directive because I (incorrectly) assumed that it was relative to root since all other parts of my nginx configs are. This is not mentioned in the documentation, it'd be nice to see it there. Well it's not directly worded but you can (should) see from the example here http://nginx.org/en/docs/http/ngx_http_core_module.html#alias that it doesn't use the root (even with a notice that you can't use the $document_root variables): location /i/ { alias /data/w3/images/; } on request of ?/i/top.gif?, the file /data/w3/images/top.gif will be sent. 
rr From mdounin at mdounin.ru Thu Jan 30 12:13:22 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 30 Jan 2020 15:13:22 +0300 Subject: Certificate Chain Validation In-Reply-To: <8eed437c84947da67762c7b3bfb00911.NginxMailingListEnglish@forum.nginx.org> References: <8eed437c84947da67762c7b3bfb00911.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200130121322.GP12894@mdounin.ru> Hello! On Wed, Jan 29, 2020 at 09:16:58PM -0500, slowgary wrote: > Nginx does not validate the expiration date of certificates. This statement is not true. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Thu Jan 30 13:11:15 2020 From: nginx-forum at forum.nginx.org (MarcoI) Date: Thu, 30 Jan 2020 08:11:15 -0500 Subject: net::ERR_CONNECTION_REFUSED . How to correctly configure Nginx with Socket.io? In-Reply-To: <20200129201448.GO26683@daoine.org> References: <20200129201448.GO26683@daoine.org> Message-ID: Hi Francis, thanks for helping. curl on PC-Server (Ubuntu 18.04.03 Server Edition): (base) marco at pc:~/vueMatters/testproject$ curl -Iki http://localhost:8080/ HTTP/1.1 200 OK X-Powered-By: Express Accept-Ranges: bytes Content-Type: text/html; charset=UTF-8 Content-Length: 774 ETag: W/"306-TZR5skx9okrXHMJbxwuiUem3Jkk" Date: Thu, 30 Jan 2020 09:32:30 GMT Connection: keep-alive But from a laptop (Ubuntu 18.04.03 Desktop): - https://drive.google.com/open?id=1r56ZApxg3gQLRakKGCwI7CriQbbmfrLh - https://drive.google.com/open?id=1Dm-PC85pjGfqIeMOS45k3hvV9PANgOH5 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286850,286862#msg-286862 From anoopalias01 at gmail.com Thu Jan 30 13:26:43 2020 From: anoopalias01 at gmail.com (Anoop Alias) Date: Thu, 30 Jan 2020 18:56:43 +0530 Subject: net::ERR_CONNECTION_REFUSED . How to correctly configure Nginx with Socket.io? In-Reply-To: References: <20200129201448.GO26683@daoine.org> Message-ID: GET https://localhost/sockjs-node/info?t=1580228998416 net::ERR_CONNECTION_REFUSED" means it is connecting to localhost:443 ( default https port) and not port 8080 On Thu, Jan 30, 2020 at 6:41 PM MarcoI wrote: > Hi Francis, > thanks for helping. > > curl on PC-Server (Ubuntu 18.04.03 Server Edition): > > (base) marco at pc:~/vueMatters/testproject$ curl -Iki > http://localhost:8080/ > HTTP/1.1 200 OK > X-Powered-By: Express > Accept-Ranges: bytes > Content-Type: text/html; charset=UTF-8 > Content-Length: 774 > ETag: W/"306-TZR5skx9okrXHMJbxwuiUem3Jkk" > Date: Thu, 30 Jan 2020 09:32:30 GMT > Connection: keep-alive > > But from a laptop (Ubuntu 18.04.03 Desktop): > - https://drive.google.com/open?id=1r56ZApxg3gQLRakKGCwI7CriQbbmfrLh > - https://drive.google.com/open?id=1Dm-PC85pjGfqIeMOS45k3hvV9PANgOH5 > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,286850,286862#msg-286862 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Jan 30 15:01:13 2020 From: nginx-forum at forum.nginx.org (MarcoI) Date: Thu, 30 Jan 2020 10:01:13 -0500 Subject: net::ERR_CONNECTION_REFUSED . How to correctly configure Nginx with Socket.io? In-Reply-To: References: Message-ID: <55acbb691506929c04f2cf12facabd1c.NginxMailingListEnglish@forum.nginx.org> Sorry for my ignorance... how to practically modify the /etc/nginx/conf.d/default.conf ? 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286850,286864#msg-286864 From nginx-forum at forum.nginx.org Thu Jan 30 15:55:03 2020 From: nginx-forum at forum.nginx.org (slowgary) Date: Thu, 30 Jan 2020 10:55:03 -0500 Subject: Certificate Chain Validation In-Reply-To: <20200130121322.GP12894@mdounin.ru> References: <20200130121322.GP12894@mdounin.ru> Message-ID: <80fdb6b374039a63428691af118d22a4.NginxMailingListEnglish@forum.nginx.org> Thanks for the correction Maxim. I tested this before posting by using an old certificate. Nginx did not throw an error but the browser did notify that the connection was insecure. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286852,286865#msg-286865 From nginx-forum at forum.nginx.org Thu Jan 30 16:08:04 2020 From: nginx-forum at forum.nginx.org (MarcoI) Date: Thu, 30 Jan 2020 11:08:04 -0500 Subject: net::ERR_CONNECTION_REFUSED . How to correctly configure Nginx with Socket.io? In-Reply-To: <7a32af3b544ea0877a9a8f9dd5584c56.NginxMailingListEnglish@forum.nginx.org> References: <7a32af3b544ea0877a9a8f9dd5584c56.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9538114a55d529556c9faeb901554ec5.NginxMailingListEnglish@forum.nginx.org> With this /etc/nginx/conf.d/default.conf : server { listen 443 ssl http2 default_server; server_name ggc.world; ssl_certificate /etc/ssl/certs/chained.pem; ssl_certificate_key /etc/ssl/private/domain.key; ssl_session_timeout 5m; ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; ssl_ciphers EECDH+CHACHA20:EECDH+CHACHA20-draft:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5; ssl_prefer_server_ciphers on; ssl_session_cache shared:SSL:50m; ssl_dhparam /etc/ssl/certs/dhparam.pem; #ssl_stapling on; #ssl_stapling_verify on; access_log /var/log/nginx/ggcworld-access.log combined; add_header Strict-Transport-Security "max-age=31536000"; location = /favicon.ico { access_log off; log_not_found off; } location / { proxy_pass http://127.0.0.1:8080; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; } } server { listen 80 default_server; listen [::]:80 default_server; error_page 497 https://$host:$server_port$request_uri; server_name www.ggc.world; return 301 https://$server_name$request_uri; access_log /var/log/nginx/ggcworld-access.log combined; add_header Strict-Transport-Security "max-age=31536000"; location = /favicon.ico { access_log off; log_not_found off; } location / { proxy_pass http://127.0.0.1:8080; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; } } # https://www.nginx.com/blog/nginx-nodejs-websockets-socketio/ # https://gist.github.com/uorat/10b15a32f3ffa3f240662b9b0fefe706 # http://nginx.org/en/docs/stream/ngx_stream_core_module.html upstream websocket { ip_hash; server localhost:3000; } server { listen 81; server_name ggc.world www.ggc.world; #location / { location ~ ^/(websocket|websocket\/socket-io) { proxy_pass http://127.0.0.1:4201; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_set_header X-Forwared-For $remote_addr; proxy_set_header Host $host; proxy_redirect off; proxy_set_header X-Real-IP $remote_addr; } } # https://stackoverflow.com/questions/40516288/webpack-dev-server-with-nginx-proxy-pass with vue.config.js : module.exports = { // options... 
publicPath: '', devServer: { host: 'localhost', } } and with this webpack.config.js : { "mode": "development", "entry": [ "src/index.js", "webpack-dev-server/client?http://" + require("os").hostname() + ":3000/" ], "output": { "path": __dirname+'/static', "filename": "[name].[chunkhash:8].js" }, "module": { "rules": [ { "test": /\.vue$/, "exclude": /node_modules/, "use": "vue-loader" }, { "test": /\.pem$/, "use": "file-loader" } ] }, plugins: [ new BrowserSyncPlugin( { host: 'localhost', port: 3000, proxy: 'http://localhost:8080' }, { reload: false } ), ], node: { __dirname: false, __filename: false }, resolve: { extension: ['*', '.pem'] }, devServer: { watchOptions: { aggregateTimeout: 300, poll: 1000 } } } And still get this error message: GET https://localhost/sockjs-node/info?t=1580397983088 net::ERR_CONNECTION_REFUSED : https://drive.google.com/open?id=1Dm-PC85pjGfqIeMOS45k3hvV9PANgOH5 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286850,286866#msg-286866 From jmedina at mardom.com Thu Jan 30 16:46:27 2020 From: jmedina at mardom.com (Johan Gabriel Medina Capois) Date: Thu, 30 Jan 2020 16:46:27 +0000 Subject: Help please In-Reply-To: <20200129200948.GN26683@daoine.org> References: <54867d6835059d36953428375ed05ec0c139e237.camel@primarysite.net> <67f638cd71fe8c1c451a3680f04dd2ee9af299b1.camel@primarysite.net> <20200129200948.GN26683@daoine.org> Message-ID: Good afternoon Here are two attached with required information, sorry for the time, anything else I'm available for send. Regards -----Original Message----- From: nginx On Behalf Of Francis Daly Sent: Wednesday, January 29, 2020 4:10 PM To: nginx at nginx.org Subject: Re: Help please On Wed, Jan 29, 2020 at 07:12:24PM +0000, Johan Gabriel Medina Capois wrote: Hi there, > The issues is that nginx is not allowing authentication through, any application cant?s authenticate through nginx, is this case the backend is running in IIS, any idea? if you need more information i can send what ever you need, but please a need your help. > I suspect that it will become clearer where the problem might be, if you can show one request that you make that works when you avoid nginx; and show the same request through nginx and show the corresponding failure response. If you can use "curl -v" to send the request, with whatever user/pass credentials you use obviously marked, then it may help to copy-paste the request and response. If you are using http basic authentication on IIS, then it should Just Work. If you are using ntlm authentication on IIS, then it will not work through any proxy or reverse proxy (unless it has specific ntlm support). Good luck with it, f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx Johan Medina Administrador de Sistemas e Infraestructura [Logo] Departamento: TECNOLOGIA Central Tel: 809-539-600 Ext: 8139 Flota: (809) 974-4954 Directo: 809 974-4954 Email: jmedina at mardom.com Web:www.mardom.com [Facebook icon] [Instagram icon] [Linkedin icon] [Youtube icon] [Banner] Sea amable con el medio ambiente: no imprima este correo a menos que sea completamente necesario. -------------- next part -------------- A non-text attachment was scrubbed... Name: Error Interboro.docx Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document Size: 484065 bytes Desc: Error Interboro.docx URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: curl -v kronos.txt URL: From francis at daoine.org Thu Jan 30 18:45:14 2020 From: francis at daoine.org (Francis Daly) Date: Thu, 30 Jan 2020 18:45:14 +0000 Subject: net::ERR_CONNECTION_REFUSED . How to correctly configure Nginx with Socket.io? In-Reply-To: References: <20200129201448.GO26683@daoine.org> Message-ID: <20200130184514.GP26683@daoine.org> On Thu, Jan 30, 2020 at 08:11:15AM -0500, MarcoI wrote: Hi there, > curl on PC-Server (Ubuntu 18.04.03 Server Edition): > > (base) marco at pc:~/vueMatters/testproject$ curl -Iki > http://localhost:8080/ > HTTP/1.1 200 OK So from the nginx-and-vue server, you can access vue. > But from a laptop (Ubuntu 18.04.03 Desktop): > - https://drive.google.com/open?id=1r56ZApxg3gQLRakKGCwI7CriQbbmfrLh > - https://drive.google.com/open?id=1Dm-PC85pjGfqIeMOS45k3hvV9PANgOH5 That seems to show that from a different machine, you can access nginx, which reverse-proxies to vue; and the content from vue includes links or redirects to localhost (and to localhost:8080). And those links will fail. I cannot tell from these pictures what one http request was made and what response was received -- maybe the output of "curl -vk https://ggc.world" from this machine will show something? If the issue is that vue is returning a http 301 or 302 redirect to something below localhost or localhost:8080, then either changing that in vue, or adding proxy_redirect in nginx, may be best. If the issue is that vue is returning a http 200 with content that links to localhost, then that should be changed in vue. I suspect that almost anything in the vue config that mentions localhost, should be removed. But vue people may be a better source of information there. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Jan 30 19:21:22 2020 From: francis at daoine.org (Francis Daly) Date: Thu, 30 Jan 2020 19:21:22 +0000 Subject: Help please In-Reply-To: References: <54867d6835059d36953428375ed05ec0c139e237.camel@primarysite.net> <67f638cd71fe8c1c451a3680f04dd2ee9af299b1.camel@primarysite.net> <20200129200948.GN26683@daoine.org> Message-ID: <20200130192122.GQ26683@daoine.org> On Thu, Jan 30, 2020 at 04:46:27PM +0000, Johan Gabriel Medina Capois wrote: Hi there, > Here are two attached with required information, sorry for the time, anything else I'm available for send. > >From that, I do not see any evidence of a problem involving nginx. You say that authentication fails, but the only nginx logs you show all show http 200. And for "direct" access that works, you show logs with lots of (java?) error messages Can you show the actual login request? It looks like it should be a POST to /wfc/portal including a username= and a password= in the POST request body content. Presumably that is a request that gives a "success" indication when made directly, and a "failure" when made through nginx. A comparison of the response bodies, and maybe the back-end server logs, will probably be instructive. That comes from the following (lines removed): >
> > Thanks, f -- Francis Daly francis at daoine.org From lists-nginx at swsystem.co.uk Fri Jan 31 01:13:30 2020 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Fri, 31 Jan 2020 01:13:30 +0000 Subject: rewriting $arg into request. Message-ID: <0d6c8381-9b73-7b8b-e498-64dacb2451ac@swsystem.co.uk> I'm currently in the process of transitioning from wordpress to hugo. For anyone not familiar with these, wordpress is php based and hugo outputs static content (keeping it simple) Currently wordpress is using ugly urls for posts, so "/?p=1234" in wordpress might be "/this_nice_title" in hugo. Now hugo allows me to specify aliases too which I'd like to leverage to maintain links, but this is where I seem to be struggling with rewrite/map etc. Am I missing a way to access the arguments? What I'm currently wanting to do is rewrite "/?p=1234" to "/1234/", "/p=1234/" or even "/p1234/" but can't figure it out. Anyone got an easy way to do this, or a better way? Regards Steve. From nginx-forum at forum.nginx.org Fri Jan 31 09:40:45 2020 From: nginx-forum at forum.nginx.org (MarcoI) Date: Fri, 31 Jan 2020 04:40:45 -0500 Subject: net::ERR_CONNECTION_REFUSED . How to correctly configure Nginx with Socket.io? In-Reply-To: <9538114a55d529556c9faeb901554ec5.NginxMailingListEnglish@forum.nginx.org> References: <7a32af3b544ea0877a9a8f9dd5584c56.NginxMailingListEnglish@forum.nginx.org> <9538114a55d529556c9faeb901554ec5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <88fdbed2b9e2d4cd04f226be7d478aab.NginxMailingListEnglish@forum.nginx.org> I add more information and a question: >From within the PC-Server: (base) marco at pc:~$ curl -Iki https://localhost/sockjs-node/info?t=1580397983088 HTTP/2 405 server: nginx/1.14.0 (Ubuntu) date: Fri, 31 Jan 2020 08:19:02 GMT allow: OPTIONS, GET >From the laptop: (base) marco at marco-U36SG:~$ curl -Iki https://ggc.world/sockjs-node/info?t=1580397983088 HTTP/1.1 405 Method Not Allowed Server: nginx/1.14.0 (Ubuntu) Date: Fri, 31 Jan 2020 09:34:59 GMT Connection: keep-alive Allow: OPTIONS, GET What does it mean "HTTP/1.1 405 Method Not Allowed" ? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286850,286872#msg-286872 From francis at daoine.org Fri Jan 31 11:09:45 2020 From: francis at daoine.org (Francis Daly) Date: Fri, 31 Jan 2020 11:09:45 +0000 Subject: net::ERR_CONNECTION_REFUSED . How to correctly configure Nginx with Socket.io? In-Reply-To: <88fdbed2b9e2d4cd04f226be7d478aab.NginxMailingListEnglish@forum.nginx.org> References: <7a32af3b544ea0877a9a8f9dd5584c56.NginxMailingListEnglish@forum.nginx.org> <9538114a55d529556c9faeb901554ec5.NginxMailingListEnglish@forum.nginx.org> <88fdbed2b9e2d4cd04f226be7d478aab.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200131110945.GR26683@daoine.org> On Fri, Jan 31, 2020 at 04:40:45AM -0500, MarcoI wrote: Hi there, > I add more information and a question: > (base) marco at marco-U36SG:~$ curl -Iki > https://ggc.world/sockjs-node/info?t=1580397983088 > HTTP/1.1 405 Method Not Allowed > Server: nginx/1.14.0 (Ubuntu) > Date: Fri, 31 Jan 2020 09:34:59 GMT > Connection: keep-alive > Allow: OPTIONS, GET > > What does it mean "HTTP/1.1 405 Method Not Allowed" ? Exactly what is says. "curl -I" does HEAD not GET. Some part of your system does not want to allow HEAD requests. What does "curl -vk" show? That will make a GET request. 
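For example, using the same URL as in your test -- the only difference between the two commands is the method that curl ends up sending:

# HEAD request; this is what "-I" sends, and what got the 405 above
curl -vk -I "https://ggc.world/sockjs-node/info?t=1580397983088"

# GET request; closer to what the browser actually issues
curl -vk "https://ggc.world/sockjs-node/info?t=1580397983088"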
f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Jan 31 11:28:43 2020 From: francis at daoine.org (Francis Daly) Date: Fri, 31 Jan 2020 11:28:43 +0000 Subject: rewriting $arg into request. In-Reply-To: <0d6c8381-9b73-7b8b-e498-64dacb2451ac@swsystem.co.uk> References: <0d6c8381-9b73-7b8b-e498-64dacb2451ac@swsystem.co.uk> Message-ID: <20200131112843.GS26683@daoine.org> On Fri, Jan 31, 2020 at 01:13:30AM +0000, Steve Wilson wrote: Hi there, > Currently wordpress is using ugly urls for posts, so "/?p=1234" in wordpress > might be "/this_nice_title" in hugo. > Now hugo allows me to specify aliases too which I'd like to leverage to > maintain links, but this is where I seem to be struggling with rewrite/map > etc. > > Am I missing a way to access the arguments? Without knowing how hugo works, I would suggest ignoring its "alias" feature for this, and just letting nginx invite the client that requests "old", to instead request "new". Assuming that you have the list of old-and-new urls that you care about, and that the old urls are unique case-insensitively, then using a "map" reading "$request_uri" (old) and writing, say, "$hugo_url" (new), would probably be the simplest.

map $request_uri $hugo_url {
    /?p=1234 /this_nice_title;
}

in http{} (add more lines as wanted), along with something like

if ($hugo_url) { return 301 $hugo_url; }

in the correct server{}, should work, I think. (Untested by me!) (Maybe change the "return" line to include "https://this-server$hugo_url", if you want that.) (If all of your "old" requests have the same content from the first / to the ?, then you could choose to isolate the "if" within the matching "location = /" block for efficiency; there may be extra config needed in that case.) http://nginx.org/r/map http://nginx.org/r/$request_uri Good luck with it, f -- Francis Daly francis at daoine.org From lists-nginx at swsystem.co.uk Fri Jan 31 12:11:54 2020 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Fri, 31 Jan 2020 12:11:54 +0000 Subject: rewriting $arg into request. In-Reply-To: <20200131112843.GS26683@daoine.org> References: <0d6c8381-9b73-7b8b-e498-64dacb2451ac@swsystem.co.uk> <20200131112843.GS26683@daoine.org> Message-ID: <97fb1986-41e2-2b91-032d-3e6b988950af@swsystem.co.uk> Hugo's alias basically creates a /alias/index.html file which contains a meta refresh. I managed to find something using an if which does the job; however, using the map solution presented is much more elegant as it would reduce the redirects.

                if ($args ~ "^p=(\d+)") {
                        set $page $1;
                        set $args "";
                        rewrite ^.*$ /p/$page last;
                        break;
                }

I knew there'd be a simpler way, but due to the time of night I was struggling. Steve On 31/01/2020 11:28, Francis Daly wrote: > On Fri, Jan 31, 2020 at 01:13:30AM +0000, Steve Wilson wrote: > > Hi there, > >> Currently wordpress is using ugly urls for posts, so "/?p=1234" in wordpress >> might be "/this_nice_title" in hugo. >> Now hugo allows me to specify aliases too which I'd like to leverage to >> maintain links, but this is where I seem to be struggling with rewrite/map >> etc. >> >> Am I missing a way to access the arguments? > Without knowing how hugo works, I would suggest ignoring its "alias" > feature for this, and just letting nginx invite the client that requests > "old", to instead request "new".
> > Assuming that you have the list of old-and-new urls that you care about, > and that the old urls are unique case-insensitively, then using a "map" > reading "$request_uri" (old) and writing, say, "$hugo_url" (new), would > probably be the simplest. > > map $request_uri $hugo_url { > /?p=1234 /this_nice_title; > } > > in http{} (add more lines as wanted), along with something like > > if ($hugo_url) { return 301 $hugo_url; } > > in the correct server{}, should work, I think. (Untested by me!) > > (Maybe change the "return" line to include "https://this-server$hugo_url", > if you want that.) > > (If all of your "old" requests have the same content from the first / > to the ?, then you could choose to isolate the "if" within the matching > "location = /" block for efficiency; there may be extra config needed > in that case.) > > http://nginx.org/r/map > http://nginx.org/r/$request_uri > > Good luck with it, > > f From r at roze.lv Fri Jan 31 12:37:43 2020 From: r at roze.lv (Reinis Rozitis) Date: Fri, 31 Jan 2020 14:37:43 +0200 Subject: rewriting $arg into request. In-Reply-To: <97fb1986-41e2-2b91-032d-3e6b988950af@swsystem.co.uk> References: <0d6c8381-9b73-7b8b-e498-64dacb2451ac@swsystem.co.uk> <20200131112843.GS26683@daoine.org> <97fb1986-41e2-2b91-032d-3e6b988950af@swsystem.co.uk> Message-ID: <000001d5d833$3bd9f850$b38de8f0$@roze.lv> > > if ($args ~ "^p=(\d+)") { > set $page $1; > set $args ""; > rewrite ^.*$ /p/$page last; > break; > } > > I knew there'd be a simpler way and I due to the time of night I was > struggling. To add to this (and the map variant by Francis) if the parameter is always 'p' you can just use $arg_p rather than regex on $args or whole $request_uri: if ($arg_p) { return 301 http://yoursite/p/$arg_p; } or map $arg_p $hugo_url { 1234 /this_nice_title; } ... rr From francis at daoine.org Fri Jan 31 12:59:58 2020 From: francis at daoine.org (Francis Daly) Date: Fri, 31 Jan 2020 12:59:58 +0000 Subject: rewriting $arg into request. In-Reply-To: <97fb1986-41e2-2b91-032d-3e6b988950af@swsystem.co.uk> References: <0d6c8381-9b73-7b8b-e498-64dacb2451ac@swsystem.co.uk> <20200131112843.GS26683@daoine.org> <97fb1986-41e2-2b91-032d-3e6b988950af@swsystem.co.uk> Message-ID: <20200131125958.GT26683@daoine.org> On Fri, Jan 31, 2020 at 12:11:54PM +0000, Steve Wilson wrote: Hi there, > Hugo's alias basically creates a /alias/index.html file which contains a > meta refresh. Ah, ok. Presumably there would have to be some web server config needed to make the incoming request actually serve that file -- I think that the request "/?p=1234" serving the content of the file "/usr/local/nginx/html/?p=1234" is unlikely to be an initial default in many web servers. Hopefully you now have a system that works for you and needs no ongoing maintenance. Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Fri Jan 31 14:05:08 2020 From: nginx-forum at forum.nginx.org (MarcoI) Date: Fri, 31 Jan 2020 09:05:08 -0500 Subject: net::ERR_CONNECTION_REFUSED . How to correctly configure Nginx with Socket.io? In-Reply-To: <20200131110945.GR26683@daoine.org> References: <20200131110945.GR26683@daoine.org> Message-ID: <31f727d7a1def09162c6345c5be3fe94.NginxMailingListEnglish@forum.nginx.org> >From within the PC-Server: (base) marco at pc:~/vueMatters/testproject$ curl -vk https://localhost/sockjs-node/info?t=1580397983088 * Trying ::1... * TCP_NODELAY set * connect to ::1 port 443 failed: Connection refused * Trying 127.0.0.1... 
* TCP_NODELAY set * Connected to localhost (127.0.0.1) port 443 (#0) * ALPN, offering h2 * ALPN, offering http/1.1 * successfully set certificate verify locations: * CAfile: /etc/ssl/certs/ca-certificates.crt CApath: /etc/ssl/certs * TLSv1.3 (OUT), TLS handshake, Client hello (1): * TLSv1.3 (IN), TLS handshake, Server hello (2): * TLSv1.3 (IN), TLS Unknown, Certificate Status (22): * TLSv1.3 (IN), TLS handshake, Unknown (8): * TLSv1.3 (IN), TLS Unknown, Certificate Status (22): * TLSv1.3 (IN), TLS handshake, Certificate (11): * TLSv1.3 (IN), TLS Unknown, Certificate Status (22): * TLSv1.3 (IN), TLS handshake, CERT verify (15): * TLSv1.3 (IN), TLS Unknown, Certificate Status (22): * TLSv1.3 (IN), TLS handshake, Finished (20): * TLSv1.3 (OUT), TLS change cipher, Client hello (1): * TLSv1.3 (OUT), TLS Unknown, Certificate Status (22): * TLSv1.3 (OUT), TLS handshake, Finished (20): * SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 * ALPN, server accepted to use h2 * Server certificate: * subject: CN=ggc.world * start date: Nov 30 11:22:10 2019 GMT * expire date: Feb 28 11:22:10 2020 GMT * issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3 * SSL certificate verify ok. * Using HTTP2, server supports multi-use * Connection state changed (HTTP/2 confirmed) * Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0 * TLSv1.3 (OUT), TLS Unknown, Unknown (23): * TLSv1.3 (OUT), TLS Unknown, Unknown (23): * TLSv1.3 (OUT), TLS Unknown, Unknown (23): * Using Stream ID: 1 (easy handle 0x559bc64c5580) * TLSv1.3 (OUT), TLS Unknown, Unknown (23): > GET /sockjs-node/info?t=1580397983088 HTTP/2 > Host: localhost > User-Agent: curl/7.58.0 > Accept: */* > * TLSv1.3 (IN), TLS Unknown, Certificate Status (22): * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): * TLSv1.3 (IN), TLS Unknown, Certificate Status (22): * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): * TLSv1.3 (IN), TLS Unknown, Unknown (23): * Connection state changed (MAX_CONCURRENT_STREAMS updated)! * TLSv1.3 (OUT), TLS Unknown, Unknown (23): * TLSv1.3 (IN), TLS Unknown, Unknown (23): < HTTP/2 200 < server: nginx/1.14.0 (Ubuntu) < date: Fri, 31 Jan 2020 14:00:47 GMT < content-type: application/json; charset=UTF-8 < access-control-allow-origin: * < vary: Origin < cache-control: no-store, no-cache, no-transform, must-revalidate, max-age=0 < strict-transport-security: max-age=31536000 < >From the laptop: (base) marco at marco-U36SG:~$ curl -vk https://ggc.world/sockjs-node/info?t=1580397983088 * Trying 2.36.58.214:443... * TCP_NODELAY set * Connected to ggc.world (2.36.58.214) port 443 (#0) * ALPN, offering http/1.1 * successfully set certificate verify locations: * CAfile: /home/marco/anaconda3/ssl/cacert.pem CApath: none * TLSv1.3 (OUT), TLS handshake, Client hello (1): * TLSv1.3 (IN), TLS handshake, Server hello (2): * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8): * TLSv1.3 (IN), TLS handshake, Certificate (11): * TLSv1.3 (IN), TLS handshake, CERT verify (15): * TLSv1.3 (IN), TLS handshake, Finished (20): * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1): * TLSv1.3 (OUT), TLS handshake, Finished (20): * SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 * ALPN, server accepted to use http/1.1 * Server certificate: * subject: CN=ggc.world * start date: Nov 30 11:22:10 2019 GMT * expire date: Feb 28 11:22:10 2020 GMT * issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3 * SSL certificate verify ok. 
> GET /sockjs-node/info?t=1580397983088 HTTP/1.1 > Host: ggc.world > User-Agent: curl/7.65.2 > Accept: */* > * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): * old SSL session ID is stale, removing * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK < Server: nginx/1.14.0 (Ubuntu) < Date: Fri, 31 Jan 2020 14:04:11 GMT < Content-Type: application/json; charset=UTF-8 < Transfer-Encoding: chunked < Connection: keep-alive < Access-Control-Allow-Origin: * < Vary: Origin < Cache-Control: no-store, no-cache, no-transf Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286850,286879#msg-286879 From nginx-forum at forum.nginx.org Fri Jan 31 14:21:39 2020 From: nginx-forum at forum.nginx.org (MarcoI) Date: Fri, 31 Jan 2020 09:21:39 -0500 Subject: net::ERR_CONNECTION_REFUSED . How to correctly configure Nginx with Socket.io? In-Reply-To: <31f727d7a1def09162c6345c5be3fe94.NginxMailingListEnglish@forum.nginx.org> References: <20200131110945.GR26683@daoine.org> <31f727d7a1def09162c6345c5be3fe94.NginxMailingListEnglish@forum.nginx.org> Message-ID: <12c25354b29eed8dd045dcb9ef349ef0.NginxMailingListEnglish@forum.nginx.org> Sorry I have to complete the last answer: >From the laptop: (base) marco at marco-U36SG:~$ curl -vk https://ggc.world/sockjs-node/info?t=1580397983088 * Trying 2.36.58.214:443... * TCP_NODELAY set * Connected to ggc.world (2.36.58.214) port 443 (#0) * ALPN, offering http/1.1 * successfully set certificate verify locations: * CAfile: /home/marco/anaconda3/ssl/cacert.pem CApath: none * TLSv1.3 (OUT), TLS handshake, Client hello (1): * TLSv1.3 (IN), TLS handshake, Server hello (2): * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8): * TLSv1.3 (IN), TLS handshake, Certificate (11): * TLSv1.3 (IN), TLS handshake, CERT verify (15): * TLSv1.3 (IN), TLS handshake, Finished (20): * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1): * TLSv1.3 (OUT), TLS handshake, Finished (20): * SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 * ALPN, server accepted to use http/1.1 * Server certificate: * subject: CN=ggc.world * start date: Nov 30 11:22:10 2019 GMT * expire date: Feb 28 11:22:10 2020 GMT * issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3 * SSL certificate verify ok. > GET /sockjs-node/info?t=1580397983088 HTTP/1.1 > Host: ggc.world > User-Agent: curl/7.65.2 > Accept: */* > * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): * old SSL session ID is stale, removing * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK < Server: nginx/1.14.0 (Ubuntu) < Date: Fri, 31 Jan 2020 14:20:19 GMT < Content-Type: application/json; charset=UTF-8 < Transfer-Encoding: chunked < Connection: keep-alive < Access-Control-Allow-Origin: * < Vary: Origin < Cache-Control: no-store, no-cache, no-transform, must-revalidate, max-age=0 < Strict-Transport-Security: max-age=31536000 < * Connection #0 to host ggc.world left intact {"websocket":true,"origins":["*:*"],"cookie_needed":false,"entropy":1587194190} Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286850,286880#msg-286880 From francis at daoine.org Fri Jan 31 14:51:44 2020 From: francis at daoine.org (Francis Daly) Date: Fri, 31 Jan 2020 14:51:44 +0000 Subject: net::ERR_CONNECTION_REFUSED . How to correctly configure Nginx with Socket.io? 
In-Reply-To: <31f727d7a1def09162c6345c5be3fe94.NginxMailingListEnglish@forum.nginx.org> References: <20200131110945.GR26683@daoine.org> <31f727d7a1def09162c6345c5be3fe94.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200131145144.GU26683@daoine.org> On Fri, Jan 31, 2020 at 09:05:08AM -0500, MarcoI wrote: Hi there, Thanks for that info. Sadly, it looks like the "curl" output is not immediately-obviously useful for determining why your browser tries to access "localhost" when you tell it to access ggc.world. If you repeat the initial test in the browser, that gave you the sockejsError08.jpg picture, but look at the "Network" tab -- where does the word "localhost" first appear? You are pointing your browser at ggc.world. Either a http redirect response header, or some response body content, invites the browser to try to access localhost. *That* is the thing that needs to be changed. If you can see where it is, then maybe it will be clear how to change it. Good luck with it! f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Fri Jan 31 15:49:26 2020 From: nginx-forum at forum.nginx.org (MarcoI) Date: Fri, 31 Jan 2020 10:49:26 -0500 Subject: net::ERR_CONNECTION_REFUSED . How to correctly configure Nginx with Socket.io? In-Reply-To: <20200131145144.GU26683@daoine.org> References: <20200131145144.GU26683@daoine.org> Message-ID: <139fc4a41c8961f1e3b0f50ed3034689.NginxMailingListEnglish@forum.nginx.org> This is the output of the "Network" tab : https://drive.google.com/open?id=1QJMe8FEBrEuWacHWeJ_TQegMkF0v68AY " Either a http redirect response header, or some response body content, invites the browser to try to access localhost" : as far as I see and understand, the requested URL, or the URL to which the initial request is redirected, from the initial https://ggc.world , is: Request URL: https://localhost/sockjs-node/info?t=1580484448072 So... I ask you... am I right or wrong in thinking that this proxy_pass address has to be changed? server { listen 443 ssl http2 default_server; server_name ggc.world; ssl_certificate /etc/ssl/certs/chained.pem; ssl_certificate_key /etc/ssl/private/domain.key; ssl_session_timeout 5m; ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; ssl_ciphers EECDH+CHACHA20:EECDH+CHACHA20-draft:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RS$ ssl_prefer_server_ciphers on; ssl_session_cache shared:SSL:50m; ssl_dhparam /etc/ssl/certs/dhparam.pem; #ssl_stapling on; #ssl_stapling_verify on; access_log /var/log/nginx/ggcworld-access.log combined; add_header Strict-Transport-Security "max-age=31536000"; location = /favicon.ico { access_log off; log_not_found off; } location / { proxy_pass http://127.0.0.1:8080; // <------------------------------------------------- !!!! proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; } } And how to change it? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286850,286883#msg-286883 From francis at daoine.org Fri Jan 31 20:07:09 2020 From: francis at daoine.org (Francis Daly) Date: Fri, 31 Jan 2020 20:07:09 +0000 Subject: net::ERR_CONNECTION_REFUSED . How to correctly configure Nginx with Socket.io? 
In-Reply-To: <139fc4a41c8961f1e3b0f50ed3034689.NginxMailingListEnglish@forum.nginx.org> References: <20200131145144.GU26683@daoine.org> <139fc4a41c8961f1e3b0f50ed3034689.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200131200709.GV26683@daoine.org> On Fri, Jan 31, 2020 at 10:49:26AM -0500, MarcoI wrote: Hi there, > This is the output of the "Network" tab : > https://drive.google.com/open?id=1QJMe8FEBrEuWacHWeJ_TQegMkF0v68AY That picture looks like the right-hand side is showing the "request details" of the fifth request down, the red "info" one. Look at the first few successful requests instead. Or maybe "view source" of the main page, and look at the html that was returned. What puts the word "localhost" into that html? > " Either a http redirect response header, or some response body content, > invites the browser to try to access localhost" : > as far as I see and understand, the requested URL, or the URL to which the > initial request is redirected, from the initial https://ggc.world , is: > Request URL: https://localhost/sockjs-node/info?t=1580484448072 I don't see that. Can you see or show the complete response to the initial request? > I ask you... am I right or wrong in thinking that this proxy_pass address > has to be changed? I think it probably does not need to be changed. I think that either you need to add some nginx proxy_redirect lines; or you need to change the vue setup to never use the word "localhost". Or maybe both. If you can show where the word "localhost" appears in the response to the request for ggc.world, it may be clearer where the change should be made. f -- Francis Daly francis at daoine.org From bagagerek at hotmail.com Fri Jan 31 21:33:31 2020 From: bagagerek at hotmail.com (bagagerek) Date: Fri, 31 Jan 2020 22:33:31 +0100 Subject: right config for letsencrypt Message-ID: Hi y'all, I want Nginx to run as a reverse proxy on my Raspberry Pi with Motioneye. I followed the manual but I can't seem to get it right. I've forwarded port 8081 on my router. My "sites-enabled" file looks like this:

server {
    listen 80;
    server_name mydomain.com;

    location /cams/ {
        proxy_pass http://192.168.178.244:8765/;
        proxy_read_timeout 120s;
        access_log off;
    }
}

When I put my local IP in the browser, I see the default nginx page, but when I put my external IP address in the browser, I can't connect. Also, if I go to mydomain.com I get an error saying the IP can't be found. However, I can see my cam on mydomain.com:8081, but letsencrypt won't let me get a certificate, so I need this solved. I've tried to put "listen 8081" instead of "listen 80" in the sites-enabled file, but when I reload nginx, I still can't connect. Can anybody help me out here? tx in advance From lists-nginx at swsystem.co.uk Fri Jan 31 21:57:58 2020 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Fri, 31 Jan 2020 21:57:58 +0000 Subject: rewriting $arg into request. In-Reply-To: <000001d5d833$3bd9f850$b38de8f0$@roze.lv> References: <0d6c8381-9b73-7b8b-e498-64dacb2451ac@swsystem.co.uk> <20200131112843.GS26683@daoine.org> <97fb1986-41e2-2b91-032d-3e6b988950af@swsystem.co.uk> <000001d5d833$3bd9f850$b38de8f0$@roze.lv> Message-ID: On 31/01/2020 12:37, Reinis Rozitis wrote:

> if ($arg_p) {
>     return 301 http://yoursite/p/$arg_p;
> }

This is what I was originally looking for; however, as I've only 20 pages to manage, the individual redirects via the map directive will, I believe, work better as it will remove an additional redirect. Steve.
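Putting the pieces of the "rewriting $arg into request." thread together, a minimal sketch of the map-based setup Steve describes might look like the following. Only the /?p=1234 to /this_nice_title pair comes from the thread; the other values (server_name, root path, any further IDs) are placeholders. The map goes in http{}, the redirect in the relevant server{}.

map $arg_p $hugo_url {
    default  "";
    1234     /this_nice_title;
    # ...one line per old post id; roughly 20 entries in this case
}

server {
    listen 80;
    server_name example.com;    # placeholder

    # Redirect old WordPress-style "/?p=NNNN" links straight to the new Hugo URLs.
    if ($hugo_url) {
        return 301 $hugo_url;
    }

    root /var/www/hugo;         # wherever the generated site lives (assumption)
}

Keying the map on $arg_p, as Reinis suggests, rather than on the whole $request_uri also keeps the lookup working if an old link carries extra query arguments; and, as Francis notes, the return target can be made absolute (https://this-server$hugo_url) if wanted.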
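On the Socket.io / ggc.world thread, Francis's suggestion to "add some nginx proxy_redirect lines" would sit inside the location / block that MarcoI already posted, roughly as sketched below. This is only a sketch, not a confirmed fix: proxy_redirect rewrites Location and Refresh headers coming back from the upstream, so it helps only if the "localhost" URL arrives as a redirect header; if the URL is generated inside the HTML or JavaScript that the Vue dev server returns, the change has to be made on the Vue side, as discussed in the thread.

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Rewrite upstream redirects that point at localhost back to the public name.
        proxy_redirect http://localhost:8080/ https://ggc.world/;
        proxy_redirect https://localhost/ https://ggc.world/;
    }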
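For the "right config for letsencrypt" question: Let's Encrypt's HTTP-01 validation connects to the domain on port 80, so forwarding only port 8081 on the router is not enough unless DNS-based validation is used instead. A minimal sketch of a port-80 server that can both answer the challenge and proxy the camera follows; it assumes certbot's webroot method, and /var/www/letsencrypt is an illustrative path that must match the --webroot-path given to certbot.

server {
    listen 80;
    server_name mydomain.com;

    # Serve ACME HTTP-01 challenge files written by certbot.
    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;
    }

    location /cams/ {
        proxy_pass http://192.168.178.244:8765/;
        proxy_read_timeout 120s;
        access_log off;
    }
}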