From baalchina at qq.com Sat Jun 1 01:47:17 2019
From: baalchina at qq.com (baalchina)
Date: Sat, 1 Jun 2019 09:47:17 +0800
Subject: Does nginx support time-based ACLs?
Message-ID: <452E1AA4-8462-46EE-9193-1790003B5179@qq.com>

Hi all, I have an Nginx server on which I want to set up a time-based ACL. For example, from 8am to 5pm Nginx accepts all connections, and from 5pm to 8am the next day Nginx denies all connections.
Different ACLs may be deployed on different sites.

Is this possible? I looked at ngx_http_access_module, but it seems it cannot do this.

Thanks!

From al-nginx at none.at Sat Jun 1 09:18:45 2019
From: al-nginx at none.at (Aleksandar Lazic)
Date: Sat, 1 Jun 2019 11:18:45 +0200
Subject: Re: Does nginx support time-based ACLs?
In-Reply-To: <452E1AA4-8462-46EE-9193-1790003B5179@qq.com>
References: <452E1AA4-8462-46EE-9193-1790003B5179@qq.com>
Message-ID: <38cfbcf1-50a3-1d96-55d0-8e5caf142a2e@none.at>

Hi.

On 01.06.2019 at 03:47, baalchina wrote:
> Hi all, I have an Nginx server on which I want to set up a time-based ACL. For example, from 8am to 5pm Nginx accepts all connections, and from 5pm to 8am the next day Nginx denies all connections.
> Different ACLs may be deployed on different sites.
>
> Is this possible? I looked at ngx_http_access_module, but it seems it cannot do this.

I would use https://nginx.org/en/docs/http/ngx_http_auth_request_module.html for that.
You will need a web service which checks the time and allows or denies access.

> Thanks!

Regards
Aleks

From r at roze.lv Sat Jun 1 11:50:12 2019
From: r at roze.lv (Reinis Rozitis)
Date: Sat, 1 Jun 2019 14:50:12 +0300
Subject: RE: Does nginx support time-based ACLs?
In-Reply-To: <452E1AA4-8462-46EE-9193-1790003B5179@qq.com>
References: <452E1AA4-8462-46EE-9193-1790003B5179@qq.com>
Message-ID: <000801d51870$2bc481c0$834d8540$@roze.lv>

> Hi all, I have an Nginx server on which I want to set up a time-based ACL. For example,
> from 8am to 5pm Nginx accepts all connections, and from 5pm to 8am the next day
> Nginx denies all connections.
> Different ACLs may be deployed on different sites.
>
> Is this possible? I looked at ngx_http_access_module, but it seems it cannot do this.

Besides auth_request or some Lua module, depending on the complexity of the ACL you could just use crontab for that:

- in nginx.conf, in the particular site/server block, add: include deny.conf; (each site can have its own config)
- in crontab at 5pm run: echo 'deny all;' > deny.conf; nginx -s reload
- at 8am: echo '' > deny.conf; nginx -s reload

(or, if the rules are more complex, you can write them down in a pre-made file and have the crontab just swap the files and do a reload)

rr

From mat999 at gmail.com Sat Jun 1 12:53:41 2019
From: mat999 at gmail.com (Mathew Heard)
Date: Sat, 1 Jun 2019 22:53:41 +1000
Subject: Google QUIC support in nginx
In-Reply-To: <423d86fdb50880a10d4a8312ce7072c0.NginxMailingListEnglish@forum.nginx.org>
References: <423d86fdb50880a10d4a8312ce7072c0.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

It is nice to see that confirmation :)

On Fri, May 31, 2019 at 4:54 PM George wrote:

> Roadmap suggests it is in Nginx 1.17 mainline QUIC = HTTP/3
> https://trac.nginx.org/nginx/roadmap :)
>
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,256352,284367#msg-284367
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
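A note on the time-based ACL thread above: besides the auth_request and cron approaches already suggested, a config-only variant is possible because $time_iso8601 exposes the server's local time. A minimal sketch, assuming an allow window of 08:00-16:59 that can simply be hard-coded per server block:

    # sketch only: allow 08:00-16:59 server local time, deny otherwise;
    # $time_iso8601 looks like "2019-06-01T09:47:17+08:00", so the regex
    # below only inspects the hour field
    map $time_iso8601 $deny_by_time {
        default              1;
        "~T(0[89]|1[0-6]):"  0;
    }

    server {
        ...
        # "0" evaluates as false in an "if", so requests inside the window pass
        if ($deny_by_time) {
            return 403;
        }
    }

Each site can use its own map and window; the cron-plus-include approach Reinis describes avoids the "if" entirely and may be easier to audit.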
URL: From mdounin at mdounin.ru Sat Jun 1 14:47:57 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 1 Jun 2019 17:47:57 +0300 Subject: duplicate listen options for backlog directive for ip:80 and ip:443 pairs ? In-Reply-To: <7fe678b6b9d47ed9f25f6eed9969c44a.NginxMailingListEnglish@forum.nginx.org> References: <7fe678b6b9d47ed9f25f6eed9969c44a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190601144756.GS1877@mdounin.ru> Hello! On Fri, May 31, 2019 at 03:15:18AM -0400, George wrote: > I am trying to troubleshoot a duplicate listen options error that only > happens on one server and not the other. > > From docs at http://nginx.org/en/docs/http/ngx_http_core_module.html backlog > listen directive works for each ip:port pair so I should be able to set > backlog directive on listen directive once on port 80 and once on port 443. > But on one server I am not able to and can't see where the problem is coming > from ? How shall I debug this ? [...] > --- not working --- > Now on another Nginx 1.17.0 server I have 3 nginx vhosts but nginx restarts > complain of duplicate listen options once I add vhost 3 and the error is > related for vhost 2's listen directive > > nginx: [emerg] duplicate listen options for 0.0.0.0:443 in > /path/to/vhost2/vhost > > vhost 1 > listen 80 default_server backlog=4095 reuseport fastopen=256; > > vhost 2 > listen 443 ssl http2 reuseport; > > vhost 3 > listen 443 ssl http2 backlog=4095; > > if i remove vhost 3 backlog=4095 directive there's no error though ? Both listen 443 ssl http2 reuseport; and listen 443 ssl http2 backlog=4095; specify listening socket options, "reuseport" and "backlog=4095". Socket options are only allowed to be specified once for a given listening socket, and nginx complains as you try to specify time twice. You have to specify both on a single "listen" directive. > Now if I reverse it so backlog=4095 is set in vhost 2 and not vhost 3, then > it works and nginx doesn't complain of errors ? No idea why that is the case > or if it's a bug ? > > vhost 2 > listen 443 ssl http2 reuseport backlog=4095; > > vhost 3 > listen 443 ssl http2; That's exactly how it is expected to work. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Sat Jun 1 19:10:52 2019 From: nginx-forum at forum.nginx.org (George) Date: Sat, 01 Jun 2019 15:10:52 -0400 Subject: duplicate listen options for backlog directive for ip:80 and ip:443 pairs ? In-Reply-To: <20190601144756.GS1877@mdounin.ru> References: <20190601144756.GS1877@mdounin.ru> Message-ID: <140b31fe4674a49ba8f4efef8a376a9b.NginxMailingListEnglish@forum.nginx.org> I see. Thanks Maxim for the clarification. Much appreciated :) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284368,284402#msg-284402 From mat999 at gmail.com Sat Jun 1 22:28:17 2019 From: mat999 at gmail.com (Mathew Heard) Date: Sun, 2 Jun 2019 08:28:17 +1000 Subject: Google QUIC support in nginx In-Reply-To: References: <423d86fdb50880a10d4a8312ce7072c0.NginxMailingListEnglish@forum.nginx.org> Message-ID: Heres probably the best confirmation I could find. Development has also started on support for QUIC and HTTP/3 ? the next significant update to the transport protocols that will deliver websites, applications, and APIs. 
This is a significant undertaking, but likely to arrive during the NGINX 1.17 development cycle From: https://www.nginx.com/blog/nginx-1-16-1-17-released/ On Sat, Jun 1, 2019 at 10:53 PM Mathew Heard wrote: > It is nice to see that confirmation :) > > On Fri, May 31, 2019 at 4:54 PM George > wrote: > >> Roadmap suggests it is in Nginx 1.17 mainline QUIC = HTTP/3 >> https://trac.nginx.org/nginx/roadmap :) >> >> Posted at Nginx Forum: >> https://forum.nginx.org/read.php?2,256352,284367#msg-284367 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From al-nginx at none.at Sun Jun 2 01:31:14 2019 From: al-nginx at none.at (Aleksandar Lazic) Date: Sun, 2 Jun 2019 03:31:14 +0200 Subject: Google QUIC support in nginx In-Reply-To: References: <423d86fdb50880a10d4a8312ce7072c0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7c305bdd-9df5-460b-8b8c-7241e2b4149c@none.at> Am 02.06.2019 um 00:28 schrieb Mathew Heard: > Heres probably the best confirmation I could find. > > Development has also started on support for?QUIC > ?and?HTTP/3 > ?? the next > significant update to the transport protocols that will deliver websites, > applications, and APIs. This is a significant undertaking, but likely to arrive > during the NGINX?1.17 development cycle? > > From: https://www.nginx.com/blog/nginx-1-16-1-17-released/ I think it will be added when the drafts are released, IMHO. https://datatracker.ietf.org/wg/quic/charter/ Regards Aleks > On Sat, Jun 1, 2019 at 10:53 PM Mathew Heard > wrote: > > It is nice to see that confirmation :) > > On Fri, May 31, 2019 at 4:54 PM George > wrote: > > Roadmap suggests it is in Nginx 1.17 mainline QUIC = HTTP/3 > https://trac.nginx.org/nginx/roadmap :) > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,256352,284367#msg-284367 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From nginx-forum at forum.nginx.org Sun Jun 2 13:21:08 2019 From: nginx-forum at forum.nginx.org (babaz) Date: Sun, 02 Jun 2019 09:21:08 -0400 Subject: reverse proxy nextcloud / owncloud Message-ID: <77f434a4e5428f27248f16d65e312594.NginxMailingListEnglish@forum.nginx.org> Hi guys, sorry to bother you with this topic but I've tried for two days without finding solution. Basically I have a letsencrypt installation and a nextcloud in a docker. I'm able to make the reverse proxy working loading the pages but I cannot upload any kind of file is always giving "not enough space message". This is my configuration on nginx location /nextcloud/ { include /config/nginx/proxy.conf; proxy_pass http://172.17.0.2:80/; # proxy_max_temp_file_size 2048m; client_max_body_size 0; # proxy_http_version 1.1; proxy_request_buffering off; # proxy_set_header Host $host; # proxy_set_header X-Real-IP $remote_addr; # proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # proxy_set_header X-Forwarded-Proto $scheme; # add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"; } Please help me I-m getting crazy. 
Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284407,284407#msg-284407 From anoopalias01 at gmail.com Sun Jun 2 13:24:39 2019 From: anoopalias01 at gmail.com (Anoop Alias) Date: Sun, 2 Jun 2019 18:54:39 +0530 Subject: reverse proxy nextcloud / owncloud In-Reply-To: <77f434a4e5428f27248f16d65e312594.NginxMailingListEnglish@forum.nginx.org> References: <77f434a4e5428f27248f16d65e312594.NginxMailingListEnglish@forum.nginx.org> Message-ID: Not enough space? -- doesn't seem to be a standard Nginx error message It might be something to do with the application itself ( nextcloud)..and since you say docker..make sure the container can store the files ( data dir for nextcloud) On Sun, Jun 2, 2019 at 6:51 PM babaz wrote: > Hi guys, > sorry to bother you with this topic but I've tried for two days without > finding solution. > Basically I have a letsencrypt installation and a nextcloud in a docker. > I'm able to make the reverse proxy working loading the pages but I cannot > upload any kind of file is always giving "not enough space message". > This is my configuration on nginx > > location /nextcloud/ { > include /config/nginx/proxy.conf; > proxy_pass http://172.17.0.2:80/; > # proxy_max_temp_file_size 2048m; > client_max_body_size 0; > # proxy_http_version 1.1; > proxy_request_buffering off; > # proxy_set_header Host $host; > # proxy_set_header X-Real-IP $remote_addr; > # proxy_set_header X-Forwarded-For > $proxy_add_x_forwarded_for; > # proxy_set_header X-Forwarded-Proto $scheme; > # add_header Strict-Transport-Security "max-age=31536000; > includeSubDomains; preload"; > } > > Please help me I-m getting crazy. > Thanks > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,284407,284407#msg-284407 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Jun 3 09:42:22 2019 From: nginx-forum at forum.nginx.org (devCU) Date: Mon, 03 Jun 2019 05:42:22 -0400 Subject: ssl_trusted_certificate doesn't accept @server_name variable Message-ID: The following works as advertised in my vhost server block ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem; ssl_trusted_certificate /etc/letsencrypt/live/mydomain.com/chain.pem; To better automate vhosts en mass I tried using the $server_name variable server_name mydomain.com; ssl_certificate /etc/letsencrypt/live/$server_name/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/$server_name/privkey.pem; ssl_trusted_certificate /etc/letsencrypt/live/$server_name/chain.pem; Nginx failed but this works server_name mydomain.com; ssl_certificate /etc/letsencrypt/live/$server_name/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/$server_name/privkey.pem; ssl_trusted_certificate /etc/letsencrypt/live/mydomain.com/chain.pem; If ssl_certificate and ssl_certificate accept the $server_name variable then how come ssl_trusted_certificate doesn't? Heres the error on Ubuntu 18.04.2 running Nginx 1.17.0 source compiled with OpenSSL 1.1.1c Jun 03 05:34:22 cloud systemd[1]: Starting The NGINX HTTP and reverse proxy server... 
Jun 03 05:34:22 cloud nginx[12646]: nginx: [emerg] SSL_CTX_load_verify_locations("/etc/letsencrypt/live/$server_name/chain.pem") failed (SSL: error:02001002:system library: Jun 03 05:34:22 cloud nginx[12646]: nginx: configuration file /etc/nginx/nginx.conf test failed Jun 03 05:34:22 cloud systemd[1]: nginx.service: Control process exited, code=exited status=1 Jun 03 05:34:22 cloud systemd[1]: nginx.service: Failed with result 'exit-code'. Jun 03 05:34:22 cloud systemd[1]: Failed to start The NGINX HTTP and reverse proxy server. ssl_certificate and ssl_certificate_key parse the variable $server_name and the correct path to the domain's SSL certs are validated. Seems odd to me. Thanks for any explanation ~Gary Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284410,284410#msg-284410 From mdounin at mdounin.ru Mon Jun 3 12:46:08 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 3 Jun 2019 15:46:08 +0300 Subject: ssl_trusted_certificate doesn't accept @server_name variable In-Reply-To: References: Message-ID: <20190603124608.GT1877@mdounin.ru> Hello! On Mon, Jun 03, 2019 at 05:42:22AM -0400, devCU wrote: > The following works as advertised in my vhost server block > > ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem; > ssl_certificate_key > /etc/letsencrypt/live/mydomain.com/privkey.pem; > ssl_trusted_certificate > /etc/letsencrypt/live/mydomain.com/chain.pem; > > To better automate vhosts en mass I tried using the $server_name variable > > server_name mydomain.com; > > ssl_certificate /etc/letsencrypt/live/$server_name/fullchain.pem; > ssl_certificate_key > /etc/letsencrypt/live/$server_name/privkey.pem; This is generally a bad change. You shouldn't use variables just to save you from writing the same name in the appropriate directives. See here for a detailed explanation and suggestions: http://nginx.org/en/docs/faq/variables_in_config.html > ssl_trusted_certificate > /etc/letsencrypt/live/$server_name/chain.pem; This is not goint to work, as the ssl_trusted_certificate directive does not support variables. [...] > If ssl_certificate and ssl_certificate accept the $server_name variable then > how come ssl_trusted_certificate doesn't? Variables support in ssl_certificate and ssl_certificate_key directives address a specific use case when one cannot write a static configuration with pre-existing certificates - e.g., when certificates are added on a regular basis, and it is not possible to reload nginx configuration with such a rate. Such use case is unlikely to be applicable to ssl_trusted_certificate, and hence there are no plans to add variables support to the ssl_trusted_certificate directive. -- Maxim Dounin http://mdounin.ru/ From xeioex at nginx.com Mon Jun 3 17:42:31 2019 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Mon, 3 Jun 2019 20:42:31 +0300 Subject: njs: how to define a global variable In-Reply-To: <654f3d8357ff11150eae90c432622598.NginxMailingListEnglish@forum.nginx.org> References: <654f3d8357ff11150eae90c432622598.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 29.05.2019 20:02, guy1976 wrote: > hi > > is it possible to define a global variable that will be persist for > different requests? Hi Guy, Currently it is not possible as all njs VM are short-lived (a VM per request). If performance is not a serious issue you can use FS API (see http://nginx.org/en/docs/njs/reference.html#njs_api_fs) > > thank you, > Guy. 
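To illustrate the fs workaround Dmitry points to: since every request gets a fresh VM, any state that must survive between requests has to live outside the VM, for example in a file. A rough sketch only -- the file path, function name and location are invented, error handling is minimal, and concurrent workers can race on the file, so this is only suitable for non-critical state:

    # nginx.conf, http{} context
    js_include counter.js;

    server {
        location = /count {
            js_content count;
        }
    }

    // counter.js -- each request runs in a fresh VM, so the only thing
    // that survives between requests is what we write to disk
    var fs = require('fs');
    var FILE = '/tmp/njs_counter';

    function count(r) {
        var n = 0;
        try {
            n = parseInt(fs.readFileSync(FILE), 10) || 0;
        } catch (e) {
            // first request: the file does not exist yet
        }
        fs.writeFileSync(FILE, String(n + 1));
        r.return(200, 'requests seen: ' + (n + 1) + '\n');
    }

Anything heavier is probably better kept in an external store queried via a subrequest.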
> > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284350,284350#msg-284350 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From nginx-forum at forum.nginx.org Mon Jun 3 17:55:53 2019 From: nginx-forum at forum.nginx.org (devCU) Date: Mon, 03 Jun 2019 13:55:53 -0400 Subject: ssl_trusted_certificate doesn't accept @server_name variable In-Reply-To: <20190603124608.GT1877@mdounin.ru> References: <20190603124608.GT1877@mdounin.ru> Message-ID: <541df661ce86bb4c53ae078b03534674.NginxMailingListEnglish@forum.nginx.org> Thanks for the info, I am using SED though a bash script to replace the proper domain during account user and web setup and it works fine, just thought I could take it another step. Read the link on variables and it makes sense, will continue to stick with my current setup. Thanks! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284410,284420#msg-284420 From theivend at ca.ibm.com Mon Jun 3 18:55:52 2019 From: theivend at ca.ibm.com (Leonard Theivendra) Date: Mon, 3 Jun 2019 14:55:52 -0400 Subject: Reverse proxy'ing of socket.io events not working Message-ID: Hi, I've been trying to get nginx reverse proxy'ing of socket.io events working but not having any luck so would greatly appreciate any help/insight. Basically my scenario is: nginx running in a docker container (based on the nginx:stable-alpine image), a node server app serving up rest APIs as well as responses via socket.io when those APIs are called. On the client side, the aim is to call these APIs via the proxy, and also listen to socket.io events return back through the proxy. And this is all ssl secured. My nginx conf file: worker_processes 1; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; access_log /var/log/nginx/access.log main; sendfile on; keepalive_timeout 65; server { listen 8080 ssl; server_name localhost; ssl_certificate /etc/nginx/conf.d/cert.pem; ssl_certificate_key /etc/nginx/conf.d/key.pem; location / { proxy_pass https://apiandsocketservername:9091; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; } } } My simple socket.io client node app: socket = require('socket.io-client')(`https://localhost:8080`, {timeout: 5000, rejectUnauthorized: false}); // Connect to the socket socket.on('connect', () => { console.log('Connected to server'); }); // Wait on event named 'projectValidated' socket.on('projectValidated', (args) => { console.log('projectValidated'); console.log(args); }); So the scenario would be to start up the node app above, then hit the rest API endpoint on https://localhost:8080 via curl, and listen for the socket event on https://localhost:8080 originating from https://apiandsocketservername:9091 via the proxy. 
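A small note on the proxy settings above: the pattern usually documented for WebSocket proxying derives the Connection header from the client's Upgrade header instead of hard-coding it, and raises the read timeout so an idle tunnel is not torn down after the default 60s. A sketch against the same upstream (TLS and logging details omitted):

    # http{} context
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        listen 8080 ssl;
        ...
        location / {
            proxy_pass https://apiandsocketservername:9091;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_read_timeout 1h;   # long-lived websockets need more than the 60s default
        }
    }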
When I run the nginx container and socket.io client locally on my laptop everything works, and the nginx container logs has this added line when the socket.io node app client connects, and no extra logging when the socket.io events are successfully received: 172.17.0.1 - - [31/May/2019:16:31:07 +0000] "GET /socket.io/?EIO=3&transport=polling&t=MiEkmaT&b64=1&sid=viHdGa9b26g9ejGYAAAD HTTP/1.1" 200 3 "-" "nodeXMLHttpRequest" "-" When I kill the socket.io client, nginx emits this single line in the log: 172.17.0.1 - - [31/May/2019:16:34:18 +0000] "GET /socket.io/?EIO=3&transport=websocket&sid=viHdGa9b26g9ejGYAAAD HTTP/1.1" 101 379 "-" "-" "-" But when I run the same scenario on a kubernetes cluster where the nginx proxy container and the socket.io client is on one pod and the rest api and socket.io server is on a different pod: all the rest api proxy'ing works for the both the https api calls and responses, and also the node app is able to do the socket connection properly, but the socket.io event is never received successfully. Nginx also continuously emits these two lines in the log every 25 seconds: 9.37.248.25 - - [31/May/2019:17:37:26 +0000] "GET /socket.io/?EIO=3&transport=polling&t=MiEzrxs&b64=1&sid=UJnUsScsSyJSmrOiAAAA HTTP/1.1" 200 3 "-" "node-XMLHttpRequest" "-" 9.37.248.25 - - [31/May/2019:17:37:26 +0000] "POST /socket.io/?EIO=3&transport=polling&t=MiEzy2V&b64=1&sid=UJnUsScsSyJSmrOiAAAA HTTP/1.1" 200 2 "-" "node-XMLHttpRequest" "-" Would anyone be able to shed light on what could be going wrong and if I need to change anything in my nginx conf file? And what the above two lines repeatedly emitted in the nginx log mean? Thanks in advance, Len. --------------------------------------------- Len Theivendra IBM Canada Software Lab email: theivend at ca.ibm.com tel: (905) 413-3777 tie: 969-3777 -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx at mattern.org Mon Jun 3 20:16:20 2019 From: nginx at mattern.org (Marcus) Date: Mon, 3 Jun 2019 22:16:20 +0200 Subject: SMTP proxy with "STARTTLS only" accepts unencrypted mail Message-ID: Hello Nginx users, I try to use NGiNX 1.10.3-1+deb9u2 (Debian 9 version) as SMTP proxy in front of a postfix server. I defined one server that should accept encrypted connections only. Therefore I set "starttls only". But this server accepts plaintext mails also. If I use telnet to test the proxy it provides STARTTLS but I can relay a mail without using it. Please see my config: --- proxy_pass_error_message on; ssl_certificate /etc/ssl/private/cert.pem; ssl_certificate_key /etc/ssl/private/key.pem; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; ssl_session_cache shared:SSL:10m;ssl_session_timeout 10m; resolver 127.0.0.1 valid=30s; smtp_capabilities "SIZE 51200000" ENHANCEDSTATUSCODES 8BITMIME DSN VRFY ETRN PIPELINING; server { server_name test.myserver.com; auth_http localhost:10080/10.1.0.1-25; listen 10.1.0.1:25; protocol smtp; smtp_auth none; starttls only; } --- What can I do to enforce STARTTLS? Or did I miss something? Greetings Marcus From nginx-forum at forum.nginx.org Mon Jun 3 21:35:05 2019 From: nginx-forum at forum.nginx.org (walt) Date: Mon, 03 Jun 2019 17:35:05 -0400 Subject: auth_request with grpc Message-ID: Hi all, I'm attempting to use ngx_http_auth_request_module with grpc. When I pass a valid Authorization header with the grpc request everything works fine. Unfortunately when auth_request fails, the grpc client doesn't seem to register the 401 response from nginx and hangs. 
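One workaround sometimes used for this class of problem (mirroring the error-mapping technique in nginx's published gRPC gateway examples) is to convert the bare 401 into a response the gRPC client can parse, so it fails immediately instead of hanging. A sketch only -- the service path, port and auth endpoint are invented, and whether the client stops hanging depends on it accepting a trailers-only style response:

    location /helloworld.Greeter/ {
        auth_request /_auth;
        grpc_pass grpc://127.0.0.1:50051;
        error_page 401 = @grpc_unauthenticated;
    }

    location @grpc_unauthenticated {
        internal;
        default_type application/grpc;
        add_header grpc-status 16 always;            # 16 = UNAUTHENTICATED
        add_header grpc-message "unauthenticated" always;
        return 204;
    }

    location = /_auth {
        internal;
        proxy_pass http://127.0.0.1:8081/check;      # whatever validates the token
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
    }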
Has anyone had success in getting auth_request to work with grpc requests? Thanks, Walt Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284427,284427#msg-284427 From nginx-forum at forum.nginx.org Tue Jun 4 06:25:39 2019 From: nginx-forum at forum.nginx.org (arashad) Date: Tue, 04 Jun 2019 02:25:39 -0400 Subject: broken header without HAProxy Message-ID: Hello, I'm trying to remove HAProxy from my setup but i keep getting "broken header while reading proxy protocol" and the website won't open at all. If i remove the "send-proxy" from HAProxy the website won't open also. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284428,284428#msg-284428 From al-nginx at none.at Tue Jun 4 08:21:34 2019 From: al-nginx at none.at (Aleksandar Lazic) Date: Tue, 4 Jun 2019 10:21:34 +0200 Subject: broken header without HAProxy In-Reply-To: References: Message-ID: <0f247274-053d-eb15-119c-52319943971c@none.at> Hi. Am 04.06.2019 um 08:25 schrieb arashad: > Hello, > > I'm trying to remove HAProxy from my setup but i keep getting "broken header > while reading proxy protocol" and the website won't open at all. > If i remove the "send-proxy" from HAProxy the website won't open also. It would help a lot when you tell us which versions you use and which configs you have. haproxy -vv nginx -v cat $HAPROXY-CONF nginx -T Just for my curiosity why do you want to replace haproxy? > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284428,284428#msg-284428 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From liuyujun at fingera.cn Tue Jun 4 08:25:29 2019 From: liuyujun at fingera.cn (=?utf-8?B?bGl1eXVqdW4=?=) Date: Tue, 4 Jun 2019 16:25:29 +0800 Subject: make ngx_pool clear Message-ID: patch? -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: diff.patch Type: application/octet-stream Size: 4301 bytes Desc: not available URL: From nginx-forum at forum.nginx.org Tue Jun 4 08:32:18 2019 From: nginx-forum at forum.nginx.org (arashad) Date: Tue, 04 Jun 2019 04:32:18 -0400 Subject: broken header without HAProxy In-Reply-To: <0f247274-053d-eb15-119c-52319943971c@none.at> References: <0f247274-053d-eb15-119c-52319943971c@none.at> Message-ID: <47b3cd3af95ef01497849382e7992f28.NginxMailingListEnglish@forum.nginx.org> haproxy -vv HA-Proxy version 1.5.18 2016/05/10 Copyright 2000-2016 Willy Tarreau Build options : TARGET = linux2628 CPU = generic CC = gcc CFLAGS = -O2 -g -fno-strict-aliasing -DTCP_USER_TIMEOUT=18 OPTIONS = USE_LINUX_TPROXY=1 USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_PCRE=1 Default settings : maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200 Encrypted password support via crypt(3): yes Built with zlib version : 1.2.7 Compression algorithms supported : identity, deflate, gzip Built with OpenSSL version : OpenSSL 1.0.2k-fips 26 Jan 2017 Running on OpenSSL version : OpenSSL 1.0.2k-fips 26 Jan 2017 OpenSSL library supports TLS extensions : yes OpenSSL library supports SNI : yes OpenSSL library supports prefer-server-ciphers : yes Built with PCRE version : 8.32 2012-11-30 PCRE library supports JIT : no (USE_PCRE_JIT not set) Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND Available polling systems : epoll : pref=300, test result OK poll : pref=200, test result OK select : pref=150, test result OK Total: 3 (3 usable), will use epoll. ############################## nginx -v nginx version: nginx/1.16.0 ############################## cat $HAPROXY-CONF produces an error seems there's no variable with this name As for nginx -T produces too much to be put here as the server has several vhosts. I need to remove the HAProxy as we're getting a new server that can handle all the requests that were split over the nodes. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284428,284431#msg-284431 From liuyujun at fingera.cn Tue Jun 4 08:38:21 2019 From: liuyujun at fingera.cn (=?utf-8?B?bGl1eXVqdW4=?=) Date: Tue, 4 Jun 2019 16:38:21 +0800 Subject: make ngx_pool clear2 Message-ID: hello. everybody here? 
example before: for (p = pool; p; p = p->d.next) { p->d.last = (u_char *) p + sizeof(ngx_pool_t); p->d.failed = 0; } after: for (p = &pool->d; p; p = p->next) { if (p == &pool->d) { p->last = (u_char *) p + sizeof(ngx_pool_t); } else { p->last = (u_char *) p + sizeof(ngx_pool_data_t); } p->failed = 0; } example2 before: static void * ngx_palloc_block(ngx_pool_t *pool, size_t size) { u_char *m; size_t psize; ngx_pool_t *p, *new; psize = (size_t) (pool->d.end - (u_char *) pool); m = ngx_memalign(NGX_POOL_ALIGNMENT, psize, pool->log); if (m == NULL) { return NULL; } new = (ngx_pool_t *) m; new->d.end = m + psize; new->d.next = NULL; new->d.failed = 0; m += sizeof(ngx_pool_data_t); m = ngx_align_ptr(m, NGX_ALIGNMENT); new->d.last = m + size; for (p = pool->current; p->d.next; p = p->d.next) { if (p->d.failed++ > 4) { pool->current = p->d.next; } } p->d.next = new; return m; } after: static void * ngx_palloc_block(ngx_pool_t *pool, size_t size) { u_char *m; size_t psize; ngx_pool_data_t *p, *new; psize = (size_t) (pool->d.end - (u_char *) pool); m = ngx_memalign(NGX_POOL_ALIGNMENT, psize, pool->log); if (m == NULL) { return NULL; } new = (ngx_pool_data_t *) m; new->end = m + psize; new->next = NULL; new->failed = 0; m += sizeof(ngx_pool_data_t); m = ngx_align_ptr(m, NGX_ALIGNMENT); new->last = m + size; for (p = pool->current; p->next; p = p->next) { if (p->failed++ > 4) { pool->current = p->next; } } p->next = new; return m; } -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: diff.patch Type: application/octet-stream Size: 4301 bytes Desc: not available URL: From al-nginx at none.at Tue Jun 4 09:23:33 2019 From: al-nginx at none.at (Aleksandar Lazic) Date: Tue, 4 Jun 2019 11:23:33 +0200 Subject: broken header without HAProxy In-Reply-To: <47b3cd3af95ef01497849382e7992f28.NginxMailingListEnglish@forum.nginx.org> References: <0f247274-053d-eb15-119c-52319943971c@none.at> <47b3cd3af95ef01497849382e7992f28.NginxMailingListEnglish@forum.nginx.org> Message-ID: Am 04.06.2019 um 10:32 schrieb arashad: > haproxy -vv > HA-Proxy version 1.5.18 2016/05/10 > Copyright 2000-2016 Willy Tarreau [snip] > ############################## > nginx -v > nginx version: nginx/1.16.0 > > ############################## > > cat $HAPROXY-CONF > produces an error seems there's no variable with this name of course. it was a suggestion that you show us the haproxy config. > As for nginx -T produces too much to be put here as the server has several > vhosts. Well, please strip it to a minimum where we can see what the problem could be and the error logs from nginx when the error happen. > I need to remove the HAProxy as we're getting a new server that can handle > all the requests that were split over the nodes. Ah okay. > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284428,284431#msg-284431 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From mdounin at mdounin.ru Tue Jun 4 10:18:32 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 4 Jun 2019 13:18:32 +0300 Subject: make ngx_pool clear2 In-Reply-To: References: Message-ID: <20190604101832.GY1877@mdounin.ru> Hello! On Tue, Jun 04, 2019 at 04:38:21PM +0800, liuyujun wrote: > hello. everybody here? 
> > example before: > for (p = pool; p; p = p->d.next) { > p->d.last = (u_char *) p + sizeof(ngx_pool_t); > p->d.failed = 0; > } > > > > > after: > for (p = &pool->d; p; p = p->next) { > if (p == &pool->d) { > p->last = (u_char *) p + sizeof(ngx_pool_t); > } else { > p->last = (u_char *) p + sizeof(ngx_pool_data_t); > } > p->failed = 0; > } https://trac.nginx.org/nginx/ticket/1594 [...] -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue Jun 4 10:28:03 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 4 Jun 2019 13:28:03 +0300 Subject: broken header without HAProxy In-Reply-To: References: Message-ID: <20190604102803.GZ1877@mdounin.ru> Hello! On Tue, Jun 04, 2019 at 02:25:39AM -0400, arashad wrote: > I'm trying to remove HAProxy from my setup but i keep getting "broken header > while reading proxy protocol" and the website won't open at all. > If i remove the "send-proxy" from HAProxy the website won't open also. Likely the reason is that you have "listen ... proxy_protocol;" in your config, and removing HAProxy, as well as removing "send-proxy" option in HAProxy, makes connections invalid as they no longer have required PROXY protocol header. To accept connections without PROXY protocol header you have to remove the "proxy_protocol" flag from the listen directives. Alternatively, you can configure different listening socket without the "proxy_protocol" flag, and use this socket for connections without PROXY protocol header. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue Jun 4 13:49:34 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 4 Jun 2019 16:49:34 +0300 Subject: SMTP proxy with "STARTTLS only" accepts unencrypted mail In-Reply-To: References: Message-ID: <20190604134934.GA1877@mdounin.ru> Hello! On Mon, Jun 03, 2019 at 10:16:20PM +0200, Marcus wrote: > I try to use NGiNX 1.10.3-1+deb9u2 (Debian 9 version) as SMTP proxy in > front of a postfix server. I defined one server that should accept > encrypted connections only. Therefore I set "starttls only". > > But this server accepts plaintext mails also. If I use telnet to test > the proxy it provides STARTTLS but I can relay a mail without using it. Currently, "starttls" is only enforced for authentication, but not for non-authenticated mail delivery, as with "smtp_auth none" in your config. If you want to reject all non-encrypted non-authenticated mail delivery, you can do so using an auth_http script. Recently discussed here: http://mailman.nginx.org/pipermail/nginx-devel/2019-March/011970.html -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Wed Jun 5 00:34:10 2019 From: nginx-forum at forum.nginx.org (merlit64) Date: Tue, 04 Jun 2019 20:34:10 -0400 Subject: getting POST data Message-ID: <301fe0526ae66455fc8a96147939ee10.NginxMailingListEnglish@forum.nginx.org> I have a header/body filter module that I have created that successfully modifies HTML as it passes through while being proxied by proxy_pass. client ---- NGINX proxy with filter --- html server I need to use information in an http POST to make decisions on how to modify the body. I see how to get parameters for a GET request. That information is available to me in r->args.data. How do I access the arguments (or parameters) for a POST? I know they are encoding in the HTTP body for a POST, so it makes sense that they are not in r->args->data. How do I access POST parameters in a body/header filter module? Thanks in advance! 
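For what it's worth, when the request is being proxied the client body has normally already been read by the time the header and body filters run for the response, and it is reachable through r->request_body. A rough sketch of collecting it into one string (function name invented; this assumes the body stayed in memory -- with large bodies, client_body_in_file_only, or proxy_request_buffering off it may sit in a temp file or not be available at all):

    static ngx_int_t
    ngx_http_example_collect_body(ngx_http_request_t *r, ngx_str_t *out)
    {
        size_t        len;
        u_char       *p;
        ngx_chain_t  *cl;

        if (!(r->method & NGX_HTTP_POST)
            || r->request_body == NULL
            || r->request_body->bufs == NULL)
        {
            return NGX_DECLINED;
        }

        len = 0;
        for (cl = r->request_body->bufs; cl; cl = cl->next) {
            if (cl->buf->in_file) {
                return NGX_DECLINED;    /* body was spilled to a temp file */
            }
            len += cl->buf->last - cl->buf->pos;
        }

        p = ngx_pnalloc(r->pool, len);
        if (p == NULL) {
            return NGX_ERROR;
        }

        out->data = p;
        out->len = len;

        for (cl = r->request_body->bufs; cl; cl = cl->next) {
            p = ngx_cpymem(p, cl->buf->pos, cl->buf->last - cl->buf->pos);
        }

        return NGX_OK;
    }

For an application/x-www-form-urlencoded POST the collected string has the same name=value&name2=value2 shape that r->args holds for a GET, so the same parsing can be reused on it.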
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284456,284456#msg-284456 From satcse88 at gmail.com Wed Jun 5 12:54:06 2019 From: satcse88 at gmail.com (Sathish Kumar) Date: Wed, 5 Jun 2019 20:54:06 +0800 Subject: HTTPS Pinning Message-ID: Hi Team, We would like to fix the HTTPS pinning vulnerability on our Nginx and Mobile application Android/iOS. If I enable on Nginx, do we need to add the pinning keys on our application and have to rotate the pinning keys everytime when the SSL cert is renewed. Please advise. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sca at andreasschulze.de Wed Jun 5 16:56:14 2019 From: sca at andreasschulze.de (A. Schulze) Date: Wed, 5 Jun 2019 18:56:14 +0200 Subject: HTTPS Pinning In-Reply-To: References: Message-ID: <295ff55c-086c-9f29-d0e1-6dca0050dfa6@andreasschulze.de> Am 05.06.19 um 14:54 schrieb Sathish Kumar: > Hi Team, > > We would like to fix the HTTPS pinning vulnerability on our Nginx and Mobile application Android/iOS. If I enable on Nginx, do we need to add the pinning keys on our application and have to rotate the pinning keys everytime when the SSL cert is renewed. > > Please advise. HPKP is more or less deprecated. I suggest to no use it anymore. Use HSTS, try to understand the implication of "includeSubDomains" and https://hstspreload.org/ Andreas From nginx at mattern.org Wed Jun 5 19:06:02 2019 From: nginx at mattern.org (Marcus) Date: Wed, 5 Jun 2019 21:06:02 +0200 Subject: SMTP proxy with "STARTTLS only" accepts unencrypted mail In-Reply-To: <20190604134934.GA1877@mdounin.ru> References: <20190604134934.GA1877@mdounin.ru> Message-ID: Thank you very much. I didn't find it. Am 04.06.19 um 15:49 schrieb Maxim Dounin: > Hello! > > On Mon, Jun 03, 2019 at 10:16:20PM +0200, Marcus wrote: > >> I try to use NGiNX 1.10.3-1+deb9u2 (Debian 9 version) as SMTP proxy in >> front of a postfix server. I defined one server that should accept >> encrypted connections only. Therefore I set "starttls only". >> >> But this server accepts plaintext mails also. If I use telnet to test >> the proxy it provides STARTTLS but I can relay a mail without using it. > Currently, "starttls" is only enforced for authentication, but not > for non-authenticated mail delivery, as with "smtp_auth none" in > your config. If you want to reject all non-encrypted > non-authenticated mail delivery, you can do so using an auth_http > script. > > Recently discussed here: > > http://mailman.nginx.org/pipermail/nginx-devel/2019-March/011970.html > From nginx-forum at forum.nginx.org Wed Jun 5 20:37:43 2019 From: nginx-forum at forum.nginx.org (itpp2012) Date: Wed, 05 Jun 2019 16:37:43 -0400 Subject: SMTP proxy with "STARTTLS only" accepts unencrypted mail In-Reply-To: References: Message-ID: You might be better of with nginx stream to offload (ssl/tls), all of it is then encrypted. stream { upstream backendsmtp { server 192.168.3.32:25; } server { listen 1234 ssl; ssl_certificate /nginx/crts/global1.cert; ssl_certificate_key /nginx/crts/global1.key; include /nginx/conf/sslciphers.conf; proxy_pass backendsmtp; .................... Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284426,284468#msg-284468 From andre8525 at hotmail.com Thu Jun 6 23:00:13 2019 From: andre8525 at hotmail.com (Andrew Andonopoulos) Date: Thu, 6 Jun 2019 23:00:13 +0000 Subject: Securing URLs with the Secure Link Module in NGINX Message-ID: Hello, I have a project to build a caching server for HLS with nginx which is using S3 as an origin. 
I completed this task and everything is working as expected. Now the next task is to use secure link to secure m3u8 and ts files. I used the instructions from this URL but i was getting 403 for all files: https://www.nginx.com/blog/securing-urls-secure-link-module-nginx-plus/ However i changed it little bit and i used the following (without map) and i was able to get 200 for only the URI, so i presume that the secure link with NGINX is working: secure_link $arg_md5,$arg_expires; secure_link_md5 "enigma$uri$secure_link_expires"; if ($secure_link = "") { return 403; } if ($secure_link = "0") { return 410; } When i try with map i always getting 403. So i am wondering if the free version of nginx doesn't support it and if i need to purchase the commercial version. Do you have a working example for HLS and map? This is the one that i was using: map $uri $hls_uri { ~^(?.*).m3u8$ $base_uri; ~^(?.*).ts$ $base_uri; default $uri; } Nginx version on FreeBSD 11.x # nginx -V nginx version: nginx/1.17.0 built by clang 6.0.0 (tags/RELEASE_600/final 326565) (based on LLVM 6.0.0) built with OpenSSL 1.0.2s 28 May 2019 TLS SNI support enabled configure arguments: --prefix=/usr/local/etc/nginx --with-cc-opt='-I /usr/local/include' --with-ld-opt='-L /usr/local/lib' --conf-path=/usr/local/etc/nginx/nginx.conf --sbin-path=/usr/local/sbin/nginx --pid-path=/var/run/nginx.pid --error-log-path=/var/log/nginx/error.log --user=www --group=www --modules-path=/usr/local/libexec/nginx --with-file-aio --http-client-body-temp-path=/var/tmp/nginx/client_body_temp --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi_temp --http-proxy-temp-path=/var/tmp/nginx/proxy_temp --http-scgi-temp-path=/var/tmp/nginx/scgi_temp --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi_temp --http-log-path=/var/log/nginx/access.log --with-http_v2_module --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-pcre --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --without-mail_imap_module --without-mail_pop3_module --without-mail_smtp_module --with-mail_ssl_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-threads --with-mail=dynamic --with-stream=dynamic Thank you in advance Andrew -------------- next part -------------- An HTML attachment was scrubbed... URL: From 201904-nginx at jslf.app Fri Jun 7 00:29:09 2019 From: 201904-nginx at jslf.app (Patrick) Date: Fri, 7 Jun 2019 08:29:09 +0800 Subject: Securing URLs with the Secure Link Module in NGINX In-Reply-To: References: Message-ID: <20190607002909.GA29354@haller.ws> On 2019-06-06 23:00, Andrew Andonopoulos wrote: > However i changed it little bit and i used the following (without map) and i was able to get 200 for only the URI, so i presume that the secure link with NGINX is working: Can you post a redacted version of the config file? Secure Link should work -- however it's not great because unless the m3u8 playlist is generated on the fly, the media assets will not be protected by the Secure Link setup. 
Patrick From r1ch+nginx at teamliquid.net Fri Jun 7 13:45:08 2019 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Fri, 7 Jun 2019 15:45:08 +0200 Subject: HTTPS Pinning In-Reply-To: <295ff55c-086c-9f29-d0e1-6dca0050dfa6@andreasschulze.de> References: <295ff55c-086c-9f29-d0e1-6dca0050dfa6@andreasschulze.de> Message-ID: In the context of a mobile app, pinning usually means checking the public key of the server in your app matches what is expected. There is nothing to configure server-side. If you change the private key used by your SSL certificate, then your app will break. Renewing an SSL certificate doesn't usually change the private key, but check your renewal process to be sure. I would also suggest adding several backup public key hashes in the app in the event that you need to rotate your private key so you can do this without having to wait for an app store update. That said, pinning offers little benefit, as if your app is already verifying the certificate the most this protects you from is a root cert MITM, eg from a corporate network SSL interception product, which is quite rare. -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter_booth at me.com Fri Jun 7 14:22:32 2019 From: peter_booth at me.com (Peter Booth) Date: Fri, 7 Jun 2019 10:22:32 -0400 Subject: HTTPS Pinning In-Reply-To: <295ff55c-086c-9f29-d0e1-6dca0050dfa6@andreasschulze.de> References: <295ff55c-086c-9f29-d0e1-6dca0050dfa6@andreasschulze.de> Message-ID: <10732DD3-E342-4A53-9AEA-8B7A3FC38E13@me.com> Andreas, Do you know of any large, high traffic sites that are using HSTS today? Peter > On Jun 5, 2019, at 12:56 PM, A. Schulze wrote: > > > > Am 05.06.19 um 14:54 schrieb Sathish Kumar: >> Hi Team, >> >> We would like to fix the HTTPS pinning vulnerability on our Nginx and Mobile application Android/iOS. If I enable on Nginx, do we need to add the pinning keys on our application and have to rotate the pinning keys everytime when the SSL cert is renewed. >> >> Please advise. > > HPKP is more or less deprecated. I suggest to no use it anymore. > Use HSTS, try to understand the implication of "includeSubDomains" and https://hstspreload.org/ > > Andreas > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From sca at andreasschulze.de Fri Jun 7 15:53:42 2019 From: sca at andreasschulze.de (A. Schulze) Date: Fri, 7 Jun 2019 17:53:42 +0200 Subject: HTTPS Pinning In-Reply-To: <10732DD3-E342-4A53-9AEA-8B7A3FC38E13@me.com> References: <295ff55c-086c-9f29-d0e1-6dca0050dfa6@andreasschulze.de> <10732DD3-E342-4A53-9AEA-8B7A3FC38E13@me.com> Message-ID: Am 07.06.19 um 16:22 schrieb Peter Booth via nginx: > Do you know of any large, high traffic sites that are using HSTS today? echo "debian.org ietf.org web.de gmx.net posteo.de mailbox.org andreasschulze.de paypal.com" \ | while read -r high_traffic_site; do curl -I -s -k https://$high_traffic_site | grep -i ^strict-transport-security: done one dosn't meet the criteria "high traffic site" :-) Amdreas From r at roze.lv Fri Jun 7 16:29:55 2019 From: r at roze.lv (Reinis Rozitis) Date: Fri, 7 Jun 2019 19:29:55 +0300 Subject: HTTPS Pinning In-Reply-To: <10732DD3-E342-4A53-9AEA-8B7A3FC38E13@me.com> References: <295ff55c-086c-9f29-d0e1-6dca0050dfa6@andreasschulze.de> <10732DD3-E342-4A53-9AEA-8B7A3FC38E13@me.com> Message-ID: <001301d51d4e$3dda9f10$b98fdd30$@roze.lv> > Andreas, > > Do you know of any large, high traffic sites that are using HSTS today? 
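For reference, the header those curl commands are grepping for is emitted by nginx with a single directive; a minimal sketch (choose max-age, includeSubDomains and preload deliberately, per the caveats above):

    server {
        listen 443 ssl;
        ...
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    }

The "always" parameter (nginx 1.7.5+) makes the header appear on error responses as well.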
> > Peter > For Chrome (Chromium) you can view the preload HSTS list here: https://chromium.googlesource.com/chromium/src/net/+/master/http/transport_security_state_static.json google / twitter / paypal to name a few high traffic domains. rr From andre8525 at hotmail.com Fri Jun 7 18:47:54 2019 From: andre8525 at hotmail.com (Andrew Andonopoulos) Date: Fri, 7 Jun 2019 18:47:54 +0000 Subject: Securing URLs with the Secure Link Module in NGINX In-Reply-To: <20190607002909.GA29354@haller.ws> References: , <20190607002909.GA29354@haller.ws> Message-ID: Hi Patrick, This is the nginx config, do you think that i should use another method? like auth? user www; worker_processes auto; pid /var/run/nginx.pid; worker_rlimit_nofile 1048576; events { worker_connections 1024; } http { include mime.types; default_type text/html; log_format custom_cache_log '$remote_addr - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent ' '"$http_referer" "$http_user_agent"'; sendfile on; keepalive_timeout 65; proxy_cache_path /tmp/nginx/cache levels=1:2 keys_zone=s3_cache:10m max_size=4G inactive=60m use_temp_path=off; map $uri $hls_uri { ~^(?.*).m3u8$ $base_uri; ~^(?.*).ts$ $base_uri; default $uri; } server { listen 80; access_log /var/log/nginx/lotuscdn.com.access.log custom_cache_log; error_log /var/log/nginx/lotuscdn.com.error.log warn; location / { proxy_cache s3_cache; proxy_http_version 1.1; proxy_set_header Connection ""; proxy_set_header Authorization ''; proxy_set_header Host 's3test.s3.amazonaws.com'; proxy_hide_header x-amz-id-2; proxy_hide_header x-amz-request-id; proxy_hide_header x-amz-meta-server-side-encryption; proxy_hide_header x-amz-server-side-encryption; proxy_hide_header Set-Cookie; proxy_hide_header x-amz-storage-class; proxy_ignore_headers Set-Cookie; proxy_cache_revalidate on; proxy_intercept_errors on; proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504; proxy_cache_lock on; proxy_cache_background_update on; proxy_cache_valid 200 60m; add_header Cache-Control max-age=31536000; add_header X-Cache-Status $upstream_cache_status; proxy_pass http://s3test.s3.amazonaws.com/; add_header 'Access-Control-Allow-Origin' '*'; add_header 'Access-Control-Allow-Credentials' 'true'; add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Origin,X-Auth-Token,Authorization,Accept,Client-Security-Token'; add_header 'Access-Control-Allow-Methods' 'OPTIONS, GET'; secure_link $arg_md5,$arg_expires; secure_link_md5 "enigma$uri$secure_link_expires"; if ($secure_link = "") { return 403; } if ($secure_link = "0") { return 410; } } # redirect server error pages to the static page /50x.html error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/local/www/nginx-dist; } } } Thanks Andrew ________________________________ From: nginx on behalf of Patrick <201904-nginx at jslf.app> Sent: Friday, June 7, 2019 12:29 AM To: nginx at nginx.org Subject: Re: Securing URLs with the Secure Link Module in NGINX On 2019-06-06 23:00, Andrew Andonopoulos wrote: > However i changed it little bit and i used the following (without map) and i was able to get 200 for only the URI, so i presume that the secure link with NGINX is working: Can you post a redacted version of the config file? Secure Link should work -- however it's not great because unless the m3u8 playlist is generated on the fly, the media assets will not be protected by the Secure Link setup. 
Patrick _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Fri Jun 7 20:59:02 2019 From: francis at daoine.org (Francis Daly) Date: Fri, 7 Jun 2019 21:59:02 +0100 Subject: Securing URLs with the Secure Link Module in NGINX In-Reply-To: References: <20190607002909.GA29354@haller.ws> Message-ID: <20190607205902.i22pmnhszxym7w3s@daoine.org> On Fri, Jun 07, 2019 at 06:47:54PM +0000, Andrew Andonopoulos wrote: Hi there, > This is the nginx config, do you think that i should use another method? like auth? It looks to me like you could try using exactly the method in the document you mentioned previously. https://www.nginx.com/blog/securing-urls-secure-link-module-nginx-plus/ > map $uri $hls_uri { > ~^(?.*).m3u8$ $base_uri; > ~^(?.*).ts$ $base_uri; > default $uri; > } You create a variable $hls_uri which is "the uri without the .ts or .m3u8", like that document does. > secure_link $arg_md5,$arg_expires; > secure_link_md5 "enigma$uri$secure_link_expires"; But your secure_link_md5 directive does not use that variable. Unlike what that document does. If there is still a problem after you fix that, can you show one request that you make that does not give the response that you want? Perhaps there is something unexpected in the way that the md5sum in the link is generated or calculated. f -- Francis Daly francis at daoine.org From andre8525 at hotmail.com Fri Jun 7 21:51:49 2019 From: andre8525 at hotmail.com (Andrew Andonopoulos) Date: Fri, 7 Jun 2019 21:51:49 +0000 Subject: Securing URLs with the Secure Link Module in NGINX In-Reply-To: <20190607205902.i22pmnhszxym7w3s@daoine.org> References: <20190607002909.GA29354@haller.ws> , <20190607205902.i22pmnhszxym7w3s@daoine.org> Message-ID: Hello, I was trying a couple of things and forgot to switch it back. 
I tried again and this is the current map and secure link config: map $uri $hls_uri { ~^(?.*).m3u8$ $base_uri; ~^(?.*).ts$ $base_uri; default $uri; } secure_link $arg_md5,$arg_expires; secure_link_md5 "enigma$hls_uri$secure_link_expires"; if ($secure_link = "") { return 403; } if ($secure_link = "0") { return 410; } Then i used this command to generate the expire date/time: date -d "2019-06-08 23:30" +%s 1560033000 and this command to generate the md5: echo -n 'enigma/hls/justin-timberlake/playlist1560033000' | openssl md5 -binary | openssl base64 | tr '+/' '-_' | tr -d '=' DWHdyTKR5vTqw10wNtnlIg The request for the main manifest was ok: Request URL: http:///hls/justin-timberlake/playlist.m3u8?md5=DWHdyTKR5vTqw10wNtnlIg&expires=1560033000 Request Method: GET Status Code: 200 OK But the content of the manifest doesn't have the md5 : #EXTM3U #EXT-X-VERSION:3 #EXT-X-STREAM-INF:BANDWIDTH=200000,RESOLUTION=416x234 Justin_Timberlake_416_234_200.m3u8 #EXT-X-STREAM-INF:BANDWIDTH=300000,RESOLUTION=480x270 Justin_Timberlake_480_270_300.m3u8 #EXT-X-STREAM-INF:BANDWIDTH=600000,RESOLUTION=640x360 Justin_Timberlake_640_360_600.m3u8 #EXT-X-STREAM-INF:BANDWIDTH=900000,RESOLUTION=960x540 Justin_Timberlake_960_540_900.m3u8 #EXT-X-STREAM-INF:BANDWIDTH=1300000,RESOLUTION=1280x720 Justin_Timberlake_1280_720_1300.m3u8 As well as the other m3u8 manifest, so only the playlist have the md5 and expire: Request URL: http://86.180.184.242/hls/justin-timberlake/Justin_Timberlake_640_360_600.m3u8 Request Method: GET Status Code: 403 Forbidden Thanks Andrew ________________________________ From: nginx on behalf of Francis Daly Sent: Friday, June 7, 2019 8:59 PM To: nginx at nginx.org Subject: Re: Securing URLs with the Secure Link Module in NGINX On Fri, Jun 07, 2019 at 06:47:54PM +0000, Andrew Andonopoulos wrote: Hi there, > This is the nginx config, do you think that i should use another method? like auth? It looks to me like you could try using exactly the method in the document you mentioned previously. https://www.nginx.com/blog/securing-urls-secure-link-module-nginx-plus/ > map $uri $hls_uri { > ~^(?.*).m3u8$ $base_uri; > ~^(?.*).ts$ $base_uri; > default $uri; > } You create a variable $hls_uri which is "the uri without the .ts or .m3u8", like that document does. > secure_link $arg_md5,$arg_expires; > secure_link_md5 "enigma$uri$secure_link_expires"; But your secure_link_md5 directive does not use that variable. Unlike what that document does. If there is still a problem after you fix that, can you show one request that you make that does not give the response that you want? Perhaps there is something unexpected in the way that the md5sum in the link is generated or calculated. f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Fri Jun 7 22:34:06 2019 From: francis at daoine.org (Francis Daly) Date: Fri, 7 Jun 2019 23:34:06 +0100 Subject: Securing URLs with the Secure Link Module in NGINX In-Reply-To: References: <20190607002909.GA29354@haller.ws> <20190607205902.i22pmnhszxym7w3s@daoine.org> Message-ID: <20190607223406.ejetxgjcm5xsn3nu@daoine.org> On Fri, Jun 07, 2019 at 09:51:49PM +0000, Andrew Andonopoulos wrote: Hi there, thanks for the fuller details. I think it makes it clear what is happening. 
> and this command to generate the md5: > > echo -n 'enigma/hls/justin-timberlake/playlist1560033000' | openssl md5 -binary | openssl base64 | tr '+/' '-_' | tr -d '=' > DWHdyTKR5vTqw10wNtnlIg > > > The request for the main manifest was ok: > > Request URL: http:///hls/justin-timberlake/playlist.m3u8?md5=DWHdyTKR5vTqw10wNtnlIg&expires=1560033000 > Request Method: GET > Status Code: 200 OK > > > But the content of the manifest doesn't have the md5 The content of the manifest file must be, in this case, "the relative urls for the individual pieces". > #EXTM3U > #EXT-X-VERSION:3 > #EXT-X-STREAM-INF:BANDWIDTH=200000,RESOLUTION=416x234 > Justin_Timberlake_416_234_200.m3u8 > #EXT-X-STREAM-INF:BANDWIDTH=300000,RESOLUTION=480x270 > Justin_Timberlake_480_270_300.m3u8 Justin_Timberlake_416_234_200.m3u8 is probably the filename; but you have configured your nginx such that Justin_Timberlake_416_234_200.m3u8 is not a valid url for that file. The url with your current nginx configuration is something more like Justin_Timberlake_416_234_200.m3u8?md5=CvlIb8kRVaCrpjqyJERUtQ&expires=1560033000 (from: $ echo -n 'enigma/hls/justin-timberlake/Justin_Timberlake_416_234_2001560033000' | openssl md5 -binary | openssl base64 | tr '+/' '-_' | tr -d '=' CvlIb8kRVaCrpjqyJERUtQ ) so *that* is the string that must appear in the playlist.m3u8 file. And the file Justin_Timberlake_480_270_300.m3u8 will have a different "md5" part of the url, because your nginx config ignores the .m3u8 but uses everything before it when checking the md5sum. Whatever creates the playlist.m3u8 file that ends up being served by your nginx, will need to be modified to create the correct urls for the files, if they are to be served by your nginx. You could, if you chose, change your nginx config (the map) to ignore the final digits-and-underscores as well as the .m3u8 part; if you did that, then the query-string part of all of these entries in the manifest would be the same (and you would only need to calculate it once). > As well as the other m3u8 manifest, so only the playlist have the md5 and expire: You must decide how you want your files to be accessed, and then configure things appropriately. If you want every .m3u8 and .ts file below /hls/ to only be accessed via the secure_link, then you must make sure that you advertise the correct secure_link urls for those files. If you want only the playlist.m3u8 files to be accessed via the secure_link, while the other .m38u and .ts files are not restricted and expiring, then you must configure your nginx to do the secure_link check on playlist.m3u8 and not on the others. > Request URL: http://86.180.184.242/hls/justin-timberlake/Justin_Timberlake_640_360_600.m3u8 > Request Method: GET > Status Code: 403 Forbidden That is what you configured your nginx to do, so it looks like it is worked as implemented -- but presumably not as desired. Good luck with it, f -- Francis Daly francis at daoine.org From andre8525 at hotmail.com Sat Jun 8 14:44:22 2019 From: andre8525 at hotmail.com (Andrew Andonopoulos) Date: Sat, 8 Jun 2019 14:44:22 +0000 Subject: Securing URLs with the Secure Link Module in NGINX In-Reply-To: <20190607223406.ejetxgjcm5xsn3nu@daoine.org> References: <20190607002909.GA29354@haller.ws> <20190607205902.i22pmnhszxym7w3s@daoine.org> , <20190607223406.ejetxgjcm5xsn3nu@daoine.org> Message-ID: Hi Francis, Thanks for the clarification, so all requests will be like this: http:///hls// can i include in the map the domain http://example.com, the folder /hls/ and ignore all the rest? 
any guidance/help with the map will be very helpfull because i am not very familiar with regex map $uri $hls_uri { ~^(?.*).m3u8$ $base_uri; ~^(?.*).ts$ $base_uri; default $uri; } Thanks Andrew ________________________________ From: nginx on behalf of Francis Daly Sent: Friday, June 7, 2019 10:34 PM To: nginx at nginx.org Subject: Re: Securing URLs with the Secure Link Module in NGINX On Fri, Jun 07, 2019 at 09:51:49PM +0000, Andrew Andonopoulos wrote: Hi there, thanks for the fuller details. I think it makes it clear what is happening. > and this command to generate the md5: > > echo -n 'enigma/hls/justin-timberlake/playlist1560033000' | openssl md5 -binary | openssl base64 | tr '+/' '-_' | tr -d '=' > DWHdyTKR5vTqw10wNtnlIg > > > The request for the main manifest was ok: > > Request URL: http:///hls/justin-timberlake/playlist.m3u8?md5=DWHdyTKR5vTqw10wNtnlIg&expires=1560033000 > Request Method: GET > Status Code: 200 OK > > > But the content of the manifest doesn't have the md5 The content of the manifest file must be, in this case, "the relative urls for the individual pieces". > #EXTM3U > #EXT-X-VERSION:3 > #EXT-X-STREAM-INF:BANDWIDTH=200000,RESOLUTION=416x234 > Justin_Timberlake_416_234_200.m3u8 > #EXT-X-STREAM-INF:BANDWIDTH=300000,RESOLUTION=480x270 > Justin_Timberlake_480_270_300.m3u8 Justin_Timberlake_416_234_200.m3u8 is probably the filename; but you have configured your nginx such that Justin_Timberlake_416_234_200.m3u8 is not a valid url for that file. The url with your current nginx configuration is something more like Justin_Timberlake_416_234_200.m3u8?md5=CvlIb8kRVaCrpjqyJERUtQ&expires=1560033000 (from: $ echo -n 'enigma/hls/justin-timberlake/Justin_Timberlake_416_234_2001560033000' | openssl md5 -binary | openssl base64 | tr '+/' '-_' | tr -d '=' CvlIb8kRVaCrpjqyJERUtQ ) so *that* is the string that must appear in the playlist.m3u8 file. And the file Justin_Timberlake_480_270_300.m3u8 will have a different "md5" part of the url, because your nginx config ignores the .m3u8 but uses everything before it when checking the md5sum. Whatever creates the playlist.m3u8 file that ends up being served by your nginx, will need to be modified to create the correct urls for the files, if they are to be served by your nginx. You could, if you chose, change your nginx config (the map) to ignore the final digits-and-underscores as well as the .m3u8 part; if you did that, then the query-string part of all of these entries in the manifest would be the same (and you would only need to calculate it once). > As well as the other m3u8 manifest, so only the playlist have the md5 and expire: You must decide how you want your files to be accessed, and then configure things appropriately. If you want every .m3u8 and .ts file below /hls/ to only be accessed via the secure_link, then you must make sure that you advertise the correct secure_link urls for those files. If you want only the playlist.m3u8 files to be accessed via the secure_link, while the other .m38u and .ts files are not restricted and expiring, then you must configure your nginx to do the secure_link check on playlist.m3u8 and not on the others. > Request URL: http://86.180.184.242/hls/justin-timberlake/Justin_Timberlake_640_360_600.m3u8 > Request Method: GET > Status Code: 403 Forbidden That is what you configured your nginx to do, so it looks like it is worked as implemented -- but presumably not as desired. 
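
[As an illustration of the suggestion above to ignore everything after the directory name: a map that keys the hash on the directory part alone -- so a single md5/expires pair covers every rendition playlist and segment under it -- might look like the sketch below. This is untested; the capture name and the assumption that all protected content lives under /hls/<directory>/<file> are editorial, not from the thread.]

map $uri $hls_dir {
    # keep only "/hls/<directory>/", so every .m3u8 and .ts in that
    # directory hashes to the same value
    ~^(?<dir_uri>/hls/[^/]+/)    $dir_uri;
    default                      $uri;
}

# and then, in the server block, hash the directory instead of the full path:
# secure_link_md5 "enigma$hls_dir$secure_link_expires";

[With that variant, the query string advertised in playlist.m3u8 would also be valid for the other files in the same directory, so the manifest contents would not need to be rewritten.]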
Good luck with it, f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sun Jun 9 08:15:13 2019 From: francis at daoine.org (Francis Daly) Date: Sun, 9 Jun 2019 09:15:13 +0100 Subject: Securing URLs with the Secure Link Module in NGINX In-Reply-To: References: <20190607002909.GA29354@haller.ws> <20190607205902.i22pmnhszxym7w3s@daoine.org> <20190607223406.ejetxgjcm5xsn3nu@daoine.org> Message-ID: <20190609081513.gbkwdsu2ukychgt2@daoine.org> On Sat, Jun 08, 2019 at 02:44:22PM +0000, Andrew Andonopoulos wrote: Hi there, > Thanks for the clarification, so all requests will be like this: > > http:///hls// > > can i include in the map the domain http://example.com, the folder /hls/ and ignore all the rest? You can. I'm not sure why you would. The "map" is only a way to create a variable. The important part is what you do with that variable - for example, in one of the secure_link* directives. You said that the task was "to use secure link to secure m3u8 and ts files". What do you understand by the phrase "to secure", there? It is possible that the secure link module does not do what you want to have done. Presumably you want to allow some access and disallow some other access. Possibly you only care about time-limited access? I suspect that the details will matter. >From a "secret url" point of view: telling someone to access http://example.com/dir/file.m3u is exactly the same as telling them to access http://example.com/dir/file.m3u?secret or http://example.com/dir/secret/file.m3u -- you give them a url, and you configure your nginx such that anyone who accesses that url gets the file contents. The "secret" part might stop them guessing how to get file.ts in the same directory; but only if it is not the same secret for all file names. (You *could* issue different secret urls for different users; but I don't think that that is what you are doing here.) >From a "time-limited" point of view, you could tell someone to access http://example.com/dir/file.m3u?time or http://example.com/dir/file.m3u?secret&time or http://example.com/dir/secret/time/file.m3u, and configure your nginx to send the file contents only until "time". The secret/secure_link part is to stop someone adding a week to "time" and getting access for longer than they should. Or you could just "rm dir/file.m3u" when you no longer want it accessible. There are good use-cases for the secure_link module. But you should probably start with what you want to achieve; and then see whether secure_link is the right answer. And then the mechanics of configuring nginx to do what you want can be sorted out afterwards. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sun Jun 9 18:51:54 2019 From: nginx-forum at forum.nginx.org (arashad) Date: Sun, 09 Jun 2019 14:51:54 -0400 Subject: broken header without HAProxy In-Reply-To: <20190604102803.GZ1877@mdounin.ru> References: <20190604102803.GZ1877@mdounin.ru> Message-ID: <44cca93bc73892e1f730d4ae0781df1a.NginxMailingListEnglish@forum.nginx.org> Everything worked after removing the proxy_protocol. 
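
[For reference on the "broken header" report above: the proxy_protocol parameter on a listen socket is only appropriate when a PROXY-protocol-speaking balancer (HAProxy, or an AWS NLB/ELB with proxy protocol enabled) sits in front of nginx; clients connecting directly will produce broken-header errors. A minimal sketch of the case where it does belong, with placeholder addresses, could look like this.]

server {
    listen 80 proxy_protocol;

    # trust the PROXY header only from the balancer's address range (placeholder value)
    set_real_ip_from 10.0.0.0/8;
    real_ip_header   proxy_protocol;

    # ... rest of the usual server configuration
}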
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284428,284491#msg-284491 From hellot19952003 at gmail.com Sun Jun 9 20:20:06 2019 From: hellot19952003 at gmail.com (Taim T) Date: Sun, 9 Jun 2019 20:20:06 +0000 Subject: NGINX Phase Order Message-ID: Hello I am writing a full-page cache module for NGINX. But I ran into some issues, I searched online for a possible solution, but no success. What happens is that I need to hook my module in NGX_HTTP_PRECONTENT_PHASE, so that module can make cache related decisions. Everything is good, so far caching is working as expected. But the problem starts to happen when I use try_files module to make WordPress permalinks works. I checked the source code of try_files module, and it seems it also hooks into NGX_HTTP_PRECONTENT_PHASE, I can't seem to find a way so that this module runs first and do its rewriting, and then my module gets fired. Any pointers are appreciated. Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From satcse88 at gmail.com Sun Jun 9 22:39:20 2019 From: satcse88 at gmail.com (Sathish Kumar) Date: Mon, 10 Jun 2019 06:39:20 +0800 Subject: Content Security Policy - Nginx Message-ID: Hi, I would like to enable Content Security Policy header on Nginx for our website to protect from data injection attacks and XSS. Can I add like the below config?. If anybody hit our URL they will know the allowed domains in the header. Is there any other bettery way to do this? add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval' https://ssl.google-analytics.com https://assets.zendesk.com https://connect.facebook.net; img-src 'self' https://ssl.google-analytics.com https://s-static.ak.facebook.com https://assets.zendesk.com; style-src 'self' 'unsafe-inline' https://fonts.googleapis.com https://assets.zendesk.com; font-src 'self' https://themes.googleusercontent.com; frame-src https://assets.zendesk.com https://www.facebook.com https://s-static.ak.facebook.com https://tautt.zendesk.com; object-src 'none'"; -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Jun 10 13:38:47 2019 From: nginx-forum at forum.nginx.org (yashgt) Date: Mon, 10 Jun 2019 09:38:47 -0400 Subject: Does nginx manage two sessions per user? Message-ID: Does nginx reverse proxy keep track of 2 sessions, one from client to nginx and another from nginx to the upstream server? I am asking as I need to set up a reverse proxy using spring-cloud zuul and want to know how it is done in other reverse proxies. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284501,284501#msg-284501 From vivek.solanki at einfochips.com Mon Jun 10 14:15:03 2019 From: vivek.solanki at einfochips.com (Vivek Solanki) Date: Mon, 10 Jun 2019 14:15:03 +0000 Subject: Getting upstream error [connect() failed (110: Connection timed out) while connecting to upstream] Message-ID: Hi Team, I have setup a nginx server through which requests are forwarding to AWS ELB by nginx proxy_pass (configured on /etc/nginx/default.d/my.conf) on. Behind AWS ELB tomcat application running on instances. 
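
[A proxy setup of that shape is often written with an explicit upstream block, which is also where the max_fails and fail_timeout parameters discussed later in this thread are tuned. The sketch below is illustrative only; the ELB hostname, port and numbers are placeholders, not taken from the reporter's configuration.]

upstream elb_backend {
    # the ELB DNS name is resolved when nginx starts; placeholder value
    server my-app-elb.eu-west-1.elb.amazonaws.com:443 max_fails=5 fail_timeout=30s;

    # reuse upstream connections (needs HTTP/1.1 and an empty Connection header)
    keepalive 32;
}

server {
    listen 80;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;

        # may be needed if the ELB serves its certificate via SNI
        proxy_ssl_server_name on;

        proxy_pass https://elb_backend;
    }
}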
After performing load test (15k-20k throughput) using Jmeter it is working fine, but after raising the throughput value it gives below upstream errors: 2019/06/06 15:54:32 [error] 31483#0: *924684 connect() failed (110: Connection timed out) while connecting to upstream, client: 172.29.XX.XX, server: _, request: "GET /example1/model/v2/4SBWFWMMFWKWDWFS178S94DFDE/imageurl HTTP/1.1", upstream: "https://XX.XX.XX.XX:443/example1/model/v2/4SBWFWMMFWKWDWFS178S94DFDE/imageurl", host: "myapp.example.com" 2019/06/10 13:55:20 [error] 28423#0: *492252 no live upstreams while connecting to upstream, client: 172.29.XX.XX, server: _, request: "GET /example1/model/v2/4SBWFWMMFWKWDWFS178S94DFDE/imageurl HTTP/1.1", upstream: "https://XX.XX.XX.XX:443/example1/model/v2/4SBWFWMMFWKWDWFS178S94DFDE/active", host: " myapp.example.com " I have already tried to made many changes in nginx.conf file, below is showing complete conf file details. On restart or reload nginx service, issue is resolved. But this is not the permanent solution. Nginx.conf : # * Official Russian Documentation: http://nginx.org/ru/docs/ user nginx; worker_processes 8; error_log /var/log/nginx/error.log debug; pid /run/nginx.pid; # Load dynamic modules. See /usr/share/nginx/README.dynamic. include /usr/share/nginx/modules/*.conf; events { worker_connections 50000; use epoll; } worker_rlimit_nofile 30000; http { log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 30; types_hash_max_size 2048; include /etc/nginx/mime.types; default_type application/octet-stream; # Default is HTTP/1, keepalive is only enabled in HTTP/1.1 proxy_http_version 1.1; # Remove the Connection header if the client sends it, # it could be "close" to close a keepalive connection proxy_set_header Connection ""; # Load modular configuration files from the /etc/nginx/conf.d directory. # See http://nginx.org/en/docs/ngx_core_module.html#include # for more information. include /etc/nginx/conf.d/*.conf; server { listen 80 default_server; listen [::]:80 default_server; server_name _; root /usr/share/nginx/html; # Load configuration files for the default server block. include /etc/nginx/default.d/*.conf; error_page 404 /404.html; location = /40x.html { } error_page 500 502 503 504 /50x.html; location = /50x.html { } } Thanks in advance Regards, Vivek Solanki ************************************************************************************************************************************************************* eInfochips Business Disclaimer: This e-mail message and all attachments transmitted with it are intended solely for the use of the addressee and may contain legally privileged and confidential information. If the reader of this message is not the intended recipient, or an employee or agent responsible for delivering this message to the intended recipient, you are hereby notified that any dissemination, distribution, copying, or other use of this message or its attachments is strictly prohibited. If you have received this message in error, please notify the sender immediately by replying to this message and please delete it from your computer. Any views expressed in this message are those of the individual sender unless otherwise stated. Company has taken enough precautions to prevent the spread of viruses. 
However the company accepts no liability for any damage caused by any virus transmitted by this email. ************************************************************************************************************************************************************* -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Jun 10 15:03:36 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 10 Jun 2019 18:03:36 +0300 Subject: Getting upstream error [connect() failed (110: Connection timed out) while connecting to upstream] In-Reply-To: References: Message-ID: <20190610150335.GW1877@mdounin.ru> Hello! On Mon, Jun 10, 2019 at 02:15:03PM +0000, Vivek Solanki wrote: > I have setup a nginx server through which requests are forwarding to AWS ELB by nginx proxy_pass (configured on /etc/nginx/default.d/my.conf) on. Behind AWS ELB tomcat application running on instances. After performing load test (15k-20k throughput) using Jmeter it is working fine, but after raising the throughput value it gives below upstream errors: > > 2019/06/06 15:54:32 [error] 31483#0: *924684 connect() failed (110: Connection timed out) while connecting to upstream, client: 172.29.XX.XX, server: _, request: "GET /example1/model/v2/4SBWFWMMFWKWDWFS178S94DFDE/imageurl HTTP/1.1", upstream: "https://XX.XX.XX.XX:443/example1/model/v2/4SBWFWMMFWKWDWFS178S94DFDE/imageurl", host: "myapp.example.com" > > 2019/06/10 13:55:20 [error] 28423#0: *492252 no live upstreams while connecting to upstream, client: 172.29.XX.XX, server: _, request: "GET /example1/model/v2/4SBWFWMMFWKWDWFS178S94DFDE/imageurl HTTP/1.1", upstream: "https://XX.XX.XX.XX:443/example1/model/v2/4SBWFWMMFWKWDWFS178S94DFDE/active", host: " myapp.example.com " > > I have already tried to made many changes in nginx.conf file, below is showing complete conf file details. On restart or reload nginx service, issue is resolved. But this is not the permanent solution. The "connect() failed" errors indicate that your backend can't cope with load, and hence nginx disables it for fail_timeout time, 10 seconds by default. Generally this indicate you have to add more backend servers to cope with the load. If you want to configure nginx to be more tolerant to upstream errors, you can do so using the max_fails and fail_timeout parameters, see here for details: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#max_fails http://nginx.org/en/docs/http/ngx_http_upstream_module.html#fail_timeout [...] > eInfochips Business Disclaimer: This e-mail message and all > attachments transmitted with it are intended solely for the use > of the addressee and may contain legally privileged and > confidential information. If the reader of this message is not > the intended recipient, or an employee or agent responsible for You may want to avoid such disclaimers when posting to to a public mailing list. Thank you. -- Maxim Dounin http://mdounin.ru/ From vgrinshp at akamai.com Mon Jun 10 19:59:46 2019 From: vgrinshp at akamai.com (Vadim Grinshpun) Date: Mon, 10 Jun 2019 15:59:46 -0400 Subject: nginx use of UDP ports? Message-ID: <3944a77b-7279-b81e-9c7e-f498ecc2d77b@akamai.com> Hello, After setting up nginx to run, I've noticed that 'lsof' shows all nginx processes (master + workers) listening on an ephemeral UDP port. nginx 25142?? vgrinshp??? 4u? IPv4 422450236????? 0t0? UDP 127.0.0.1:33226 nginx 25144?? vgrinshp??? 4u? IPv4 422450236????? 0t0? UDP 127.0.0.1:33226 nginx 25145?? vgrinshp??? 4u? IPv4 422450236????? 0t0? 
UDP 127.0.0.1:33226 nginx 25146?? vgrinshp??? 4u? IPv4 422450236????? 0t0? UDP 127.0.0.1:33226 nginx 25147?? vgrinshp??? 4u? IPv4 422450236????? 0t0? UDP 127.0.0.1:33226 I did not explicitly configure anything that (AFAIK) uses UDP, and I could not find anything in the doc that mentions any use of UDP ports by nginx. The ports seem to be used by nginx even with the most minimal nginx.conf. Does anyone here know how/why these ports are used? Thanks for any info! -Vadim -------------- next part -------------- An HTML attachment was scrubbed... URL: From cello86 at gmail.com Tue Jun 11 08:25:32 2019 From: cello86 at gmail.com (Marcello Lorenzi) Date: Tue, 11 Jun 2019 10:25:32 +0200 Subject: Nginx and 400 SSL error handling Message-ID: Hi All, We?re trying to configure a client authentication on an Nginx 1.15.12 and we noticed a ?400 Bad Request - SSL Certificate Error? because a certificate CA isn?t present into the certificates listed into ?ssl_client_certificate?. This is the configuration for the SSL authentication. ssl_verify_client optional; ssl_client_certificate /usr/local/nginx/ca-test.pem; Actually we would return a 401 error page instead a 400 error page but we aren?t able to customize the HTTP code but only the message reported with this configuration. error_page 495 @error_ssl_495; location @error_ssl_495{ return 401 'certificate invalid'; } Is it possible to adjust also the http error code? Thanks in advance, Marcello -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Jun 11 08:46:13 2019 From: francis at daoine.org (Francis Daly) Date: Tue, 11 Jun 2019 09:46:13 +0100 Subject: Nginx and 400 SSL error handling In-Reply-To: References: Message-ID: <20190611084613.l6cyxtwucuenywhd@daoine.org> On Tue, Jun 11, 2019 at 10:25:32AM +0200, Marcello Lorenzi wrote: Hi there, > Actually we would return a 401 error page instead a 400 error page but we > aren?t able to customize the HTTP code but only the message reported with > this configuration. > > error_page 495 @error_ssl_495; Untested by me here, but http://nginx.org/r/error_page shows that you can add another argument with "=" to set the response code, or to change to the response code that the uri returns. Cheers, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Wed Jun 12 08:31:48 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 12 Jun 2019 11:31:48 +0300 Subject: nginx use of UDP ports? In-Reply-To: <3944a77b-7279-b81e-9c7e-f498ecc2d77b@akamai.com> References: <3944a77b-7279-b81e-9c7e-f498ecc2d77b@akamai.com> Message-ID: <20190612083147.GX1877@mdounin.ru> Hello! On Mon, Jun 10, 2019 at 03:59:46PM -0400, Vadim Grinshpun via nginx wrote: > After setting up nginx to run, I've noticed that 'lsof' shows all nginx > processes (master + workers) listening on an ephemeral UDP port. > > nginx 25142?? vgrinshp??? 4u? IPv4 422450236????? 0t0? UDP > 127.0.0.1:33226 > nginx 25144?? vgrinshp??? 4u? IPv4 422450236????? 0t0? UDP > 127.0.0.1:33226 > nginx 25145?? vgrinshp??? 4u? IPv4 422450236????? 0t0? UDP > 127.0.0.1:33226 > nginx 25146?? vgrinshp??? 4u? IPv4 422450236????? 0t0? UDP > 127.0.0.1:33226 > nginx 25147?? vgrinshp??? 4u? IPv4 422450236????? 0t0? UDP > 127.0.0.1:33226 > > > I did not explicitly configure anything that (AFAIK) uses UDP, and I > could not find anything in the doc that mentions any use of UDP ports by > nginx. > The ports seem to be used by nginx even with the most minimal nginx.conf. 
> > Does anyone here know how/why these ports are used? By default nginx doesn't use any UDP ports. What's in your config? -- Maxim Dounin http://mdounin.ru/ From cello86 at gmail.com Wed Jun 12 14:28:54 2019 From: cello86 at gmail.com (Marcello Lorenzi) Date: Wed, 12 Jun 2019 16:28:54 +0200 Subject: Nginx and 400 SSL error handling In-Reply-To: <20190611084613.l6cyxtwucuenywhd@daoine.org> References: <20190611084613.l6cyxtwucuenywhd@daoine.org> Message-ID: Hi, It works correctly. Thanks for the tips. Marcello On Tue, Jun 11, 2019 at 10:46 AM Francis Daly wrote: > On Tue, Jun 11, 2019 at 10:25:32AM +0200, Marcello Lorenzi wrote: > > Hi there, > > > Actually we would return a 401 error page instead a 400 error page but we > > aren?t able to customize the HTTP code but only the message reported with > > this configuration. > > > > error_page 495 @error_ssl_495; > > Untested by me here, but > > http://nginx.org/r/error_page > > shows that you can add another argument with "=" to set the response code, > or to change to the response code that the uri returns. > > Cheers, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Jun 12 14:41:42 2019 From: nginx-forum at forum.nginx.org (ayma) Date: Wed, 12 Jun 2019 10:41:42 -0400 Subject: nginx ingress controller question about use of informers Message-ID: <0a70b941b9ab702f306c7e847527f870.NginxMailingListEnglish@forum.nginx.org> Looking at the nginx ingress controller code had a question about the design. See that there is a function, getPodsForIngressBackend, ( https://github.com/nginxinc/kubernetes-ingress/blob/master/internal/k8s/controller.go#L15860 ). It looks like in this function a call is made to kube api to grab all the backend pods for a service. I was wondering why not use the cached information from the informer? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284517,284517#msg-284517 From niyazi.toros at gmail.com Thu Jun 13 10:33:36 2019 From: niyazi.toros at gmail.com (niyazi.toros at gmail.com) Date: Thu, 13 Jun 2019 13:33:36 +0300 Subject: How can I use proxy_pass and how can I redirect rest of it to default index . html Message-ID: <000001d521d3$76087720$62196560$@gmail.com> Hi, I have a small projects. I have a domain as mob.ntms.com. I install nginx using https://nginx.org/en/linux_packages.html stable ubuntu commands. When I type 127.0.0.1 or http:// mob.ntms.com I can see nginx default index.html. First I change the default index.html and place my own. My html uses some asstes and images so in /user/shared/nginx/html folder I move this 2 folder (asstest and images). Till this everything works as I expected. Now I am in difficult part. Let me try to explain before I can paste my code: I have a tcp socket in my local network which I connect remotly like if I type: * http:// mob.ntms.com/myrestapi/******** The /myrestapi/ is the where my api is reside. The ******* its dynamic. I need to redirect * http:// mob.ntms.com/myrestapi/ to 127.0.0.1:1024 127.0.0.1:1024 its a dart server and it is in same machine as nginx. If I type other than this (http:// mob.ntms.com/myrestapi/ ) all the request including 404 must be redirect them to nginx default index.html. 
* http:// mob.ntms.com redirect to default index.html * http:// mob.ntms.com/ redirect to default index.html Only; * http:// mob.ntms.com/myrestapi redirect to 127.0.0.1:1024 Currently my conf.d/default.conf look like this: server { charset UTF-8; listen 80 ; listen [::]:80 ; server_name mob.ntms.com; access_log /var/log/nginx/host.access.log main; location / { root /usr/share/nginx/html; index index.html index.htm; } # define error page error_page 404 = @notfound; # error page location redirect 301 location @notfound { return 302 /; } # error_page 404 =200 /index.html; # error_page 404 /usr/share/nginx/html/index.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /index.html; location = / { root /usr/share/nginx/html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:1024 location /myrestapiA { proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header Host $http_host; proxy_pass "http://127.0.0.1:1024/"; } location / myrestapiB{ proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header Host $http_host; proxy_pass "http://127.0.0.1:1024/"; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one location ~ /\.ht { deny all; } location = /nginx.conf { deny all; } # Deny access to hidden files (beginning with a period) location ~ /\. { deny all; } } Thanks Niyazi Toros -------------- next part -------------- An HTML attachment was scrubbed... URL: From suleman.butt at bayer.com Thu Jun 13 16:25:53 2019 From: suleman.butt at bayer.com (Suleman Butt) Date: Thu, 13 Jun 2019 16:25:53 +0000 Subject: Nginx base Image: open-source (free) vs commercial (paid version)? Message-ID: <61c850e755134131840e0a5b9218f494@BYEX08.de.bayer.cnb> Dear NGinx Community User, I am building my Angular app where inside my dockerfile I am using Nginx base image for publishing the content: E.g. FROM nginxinc/nginx-unprivileged:1.16-alpine or FROM nginx:alpine Everything works fine so far, but here is a list of my questions: * My understanding is that it is legal for me (my company) to use the above NGinx open-source image as base for building my commercial application? * In what circumstances should I consider going for a commercial (licensed) paid version of Nginx base image? Let me elaborate a bit on this point: I am not an Nginx expert and I am not sure what additional benefit I can get out of buying a commercial version of Nginx base image. If my application's front-end (hosted inside Nginx) works fine and the base image (open-source) fulfills my application's requirement, what additional benefit can I get by going with the commercial one? * Is the only reason in my case then that I can get an official support/SLA from Nginx if I go for their commercial Nginx base image offering? How often one could require an official support from Nginx for their Angular application hosted inside Nginx? I know it's a bit vague question, but I am just looking for some general trend. * What is generally a common industry trend when it comes to hosting a standard Angular 7 app inside Nginx: Is it common that companies opt for an open-source one over a commercial Ngixn base image or vice versa? * What else could be a general criteria for accessing a risk that is whether to go for an open-source one or a commercial Nginx base image? * Any other tip? Thanks. -- Regards Suleman -------------- next part -------------- An HTML attachment was scrubbed... 
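
[Returning to the mob.ntms.com question above -- send /myrestapi/... to the Dart server on 127.0.0.1:1024 and everything else, including would-be 404s, to the default index.html -- one common way to express that is sketched here. The paths and the upstream port are taken from the question; everything else is illustrative and untested.]

server {
    listen 80;
    server_name mob.ntms.com;

    root  /usr/share/nginx/html;
    index index.html;

    # requests under /myrestapi/ go to the local Dart server;
    # the trailing slash on proxy_pass strips the /myrestapi/ prefix
    location /myrestapi/ {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:1024/;
    }

    # anything else falls back to the static index.html instead of returning 404
    location / {
        try_files $uri $uri/ /index.html;
    }
}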
URL: From nginx-forum at forum.nginx.org Fri Jun 14 10:09:22 2019 From: nginx-forum at forum.nginx.org (niegus) Date: Fri, 14 Jun 2019 06:09:22 -0400 Subject: Nginx ssl_trusted_certificate directive problem Message-ID: <8e2d6863756dd25c765c834f6254182c.NginxMailingListEnglish@forum.nginx.org> Hi, I have my nginx configured with client_certificate authentication: ssl_client_certificate /etc/nginx/ssl/cas.pem; ssl_verify_client optional; ssl_verify_depth 2; And is working fine, but I need to NOT send the CAs to the client during the handshake. I've seen http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_trusted_certificate in the documentation. So, I've changed it to: ssl_trusted_certificate /etc/nginx/ssl/cas.pem; ssl_verify_depth 2; But now ssl_client_verify is always to NONE, and actually I saw in wireshark that the client is not sending the certificate. What am I doing wrong? Regards. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284531,284531#msg-284531 From nginx-forum at forum.nginx.org Fri Jun 14 11:45:45 2019 From: nginx-forum at forum.nginx.org (godwin.oduware) Date: Fri, 14 Jun 2019 07:45:45 -0400 Subject: NGINX HTTPS Configuration Not Working Message-ID: <10c9fd4a2fc9037c1ae5caf3282fff29.NginxMailingListEnglish@forum.nginx.org> Hello All, I am using a Centos 7 OS and I am using nginx for an Angular application. It was easy configuring nginx to work with http, but when I obtained SSL certificate, key, etc from Cloudflare and tried to configure nginx to work with https it didn't work even after trying several solutions provided online. I get "The page isn?t redirecting properly" error with the settings below: /etc/nginx/nginx.conf ---------------------------------- user nginx; worker_processes auto; pid /run/nginx.pid; # Load dynamic modules. See /usr/share/nginx/README.dynamic. include /usr/share/nginx/modules/*.conf; events { worker_connections 1024; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; server { listen 80; server_name hero.com; return 301 https://$server_name$request_uri; } ## # Gzip Settings ## #gzip on; #gzip_disable "msie6"; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; #gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } /etc/nginx/sites-available/hero.com.conf -------------------------------------------------------- server { listen 443 ssl; include /etc/nginx/snippets/ssl-nohaso.com.conf; include /etc/nginx/snippets/ssl-params.conf; server_name nohaso.com; location / { root /var/www/html/cadastral.nohaso.com; index index.html index.htm; try_files $uri $uri/ /index.html =404; } } Earlier on when I used the setting below, it goes to the default nginx page instead of my own page in /var/www/html/hero.com with this message: "This is the default index.html page that is distributed with nginx on Fedora. It is located in /usr/share/nginx/html. 
You should now put your content in a location of your choice and edit the root configuration directive in the nginx configuration file /etc/nginx/nginx.conf." /etc/nginx/nginx.conf -------------------------------------------------- server { listen 80 default_server; listen [::]:80 default_server; server_name _; return 301 https://$host$request_uri; } /etc/nginx/sites-available/hero.com.conf ---------------------------------------------------- server { listen 80; server_name nohaso.com; location / { try_files $uri $uri/ /index.html; } } server { listen [::]:443 ssl ipv6only=on; listen 443 ssl; server_name nohaso.com; root /var/www/html/nohaso.com; include /etc/nginx/snippets/ssl-nohaso.com.conf; include /etc/nginx/snippets/ssl-params.conf; # other vhost configuration } -------------------------------------------------------------------- Please, could someone point me to what I am doing wrong. I want https pages displayed for the domain and subdomain. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284532,284532#msg-284532 From nginx-forum at forum.nginx.org Fri Jun 14 18:26:49 2019 From: nginx-forum at forum.nginx.org (tlemons) Date: Fri, 14 Jun 2019 14:26:49 -0400 Subject: FIPS support in nginx? Message-ID: <67e0e7d9a178b18fbd5bf9bc99326bb4.NginxMailingListEnglish@forum.nginx.org> Hi Does nginx have a 'FIPS mode'? If so, where can I find this documented? Thanks! tl Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284539,284539#msg-284539 From vgrinshp at akamai.com Fri Jun 14 20:46:40 2019 From: vgrinshp at akamai.com (Vadim Grinshpun) Date: Fri, 14 Jun 2019 16:46:40 -0400 Subject: nginx use of UDP ports? In-Reply-To: <20190612083147.GX1877@mdounin.ru> References: <3944a77b-7279-b81e-9c7e-f498ecc2d77b@akamai.com> <20190612083147.GX1877@mdounin.ru> Message-ID: <79f76a04-ba4a-a57d-0ca3-9f6a334263c4@akamai.com> On 6/12/19 4:31 AM, Maxim Dounin wrote: > Hello! Hi! Thanks for responding. > On Mon, Jun 10, 2019 at 03:59:46PM -0400, Vadim Grinshpun via nginx wrote: > >> After setting up nginx to run, I've noticed that 'lsof' shows all nginx >> processes (master + workers) listening on an ephemeral UDP port. >> >> nginx 25142?? vgrinshp??? 4u? IPv4 422450236????? 0t0? UDP >> 127.0.0.1:33226 >> nginx 25144?? vgrinshp??? 4u? IPv4 422450236????? 0t0? UDP >> 127.0.0.1:33226 >> nginx 25145?? vgrinshp??? 4u? IPv4 422450236????? 0t0? UDP >> 127.0.0.1:33226 >> nginx 25146?? vgrinshp??? 4u? IPv4 422450236????? 0t0? UDP >> 127.0.0.1:33226 >> nginx 25147?? vgrinshp??? 4u? IPv4 422450236????? 0t0? UDP >> 127.0.0.1:33226 >> >> >> I did not explicitly configure anything that (AFAIK) uses UDP, and I >> could not find anything in the doc that mentions any use of UDP ports by >> nginx. >> The ports seem to be used by nginx even with the most minimal nginx.conf. >> >> Does anyone here know how/why these ports are used? > By default nginx doesn't use any UDP ports. What's in your > config? > Thanks for confirming this. I was able to find the culprit on my end (a custom module that uses a library that, as it turns out, listens on a UDP port for tracing purposes). -Vadim From lists at lazygranch.com Fri Jun 14 21:24:54 2019 From: lists at lazygranch.com (lists) Date: Fri, 14 Jun 2019 14:24:54 -0700 Subject: nginx use of UDP ports? In-Reply-To: <79f76a04-ba4a-a57d-0ca3-9f6a334263c4@akamai.com> Message-ID: <39ogfkk6h6f9su79mkc7aspq.1560547494626@lazygranch.com> Tracing or interprocess communication? ? Original Message ? 
From: nginx at nginx.org Sent: June 14, 2019 2:17 PM To: nginx at nginx.org; mdounin at mdounin.ru Reply-to: nginx at nginx.org Cc: vgrinshp at akamai.com Subject: Re: nginx use of UDP ports? On 6/12/19 4:31 AM, Maxim Dounin wrote: > Hello! Hi! Thanks for responding. > On Mon, Jun 10, 2019 at 03:59:46PM -0400, Vadim Grinshpun via nginx wrote: > >> After setting up nginx to run, I've noticed that 'lsof' shows all nginx >> processes (master + workers) listening on an ephemeral UDP port. >> >>????? nginx 25142?? vgrinshp??? 4u? IPv4 422450236????? 0t0? UDP >>????? 127.0.0.1:33226 >>????? nginx 25144?? vgrinshp??? 4u? IPv4 422450236????? 0t0? UDP >>????? 127.0.0.1:33226 >>????? nginx 25145?? vgrinshp??? 4u? IPv4 422450236????? 0t0? UDP >>????? 127.0.0.1:33226 >>????? nginx 25146?? vgrinshp??? 4u? IPv4 422450236????? 0t0? UDP >>????? 127.0.0.1:33226 >>????? nginx 25147?? vgrinshp??? 4u? IPv4 422450236????? 0t0? UDP >>????? 127.0.0.1:33226 >> >> >> I did not explicitly configure anything that (AFAIK) uses UDP, and I >> could not find anything in the doc that mentions any use of UDP ports by >> nginx. >> The ports seem to be used by nginx even with the most minimal nginx.conf. >> >> Does anyone here know how/why these ports are used? > By default nginx doesn't use any UDP ports.? What's in your > config? > Thanks for confirming this. I was able to find the culprit on my end (a custom module that uses a library that, as it turns out, listens on a UDP port for tracing purposes). -Vadim _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From andre8525 at hotmail.com Sat Jun 15 18:08:07 2019 From: andre8525 at hotmail.com (Andrew Andonopoulos) Date: Sat, 15 Jun 2019 18:08:07 +0000 Subject: Securing URLs with the Secure Link Module in NGINX In-Reply-To: <20190609081513.gbkwdsu2ukychgt2@daoine.org> References: <20190607002909.GA29354@haller.ws> <20190607205902.i22pmnhszxym7w3s@daoine.org> <20190607223406.ejetxgjcm5xsn3nu@daoine.org> , <20190609081513.gbkwdsu2ukychgt2@daoine.org> Message-ID: Hello Francis and thank you for the response. In my case the player will request the m3u8 URL: https:///hls/justin-timberlake-encrypted/playlist.m3u8?md5=u808mTXsFSpZt7b8wLvlIw&expires=1560706367 The response from the server will be: #EXTM3U #EXT-X-VERSION:3 #EXT-X-STREAM-INF:BANDWIDTH=200000,RESOLUTION=416x234 Justin_Timberlake_416_234_200.m3u8 #EXT-X-STREAM-INF:BANDWIDTH=300000,RESOLUTION=480x270 Justin_Timberlake_480_270_300.m3u8 #EXT-X-STREAM-INF:BANDWIDTH=600000,RESOLUTION=640x360 Justin_Timberlake_640_360_600.m3u8 #EXT-X-STREAM-INF:BANDWIDTH=900000,RESOLUTION=960x540 Justin_Timberlake_960_540_900.m3u8 #EXT-X-STREAM-INF:BANDWIDTH=1300000,RESOLUTION=1280x720 Justin_Timberlake_1280_720_1300.m3u8 After that the player will request the bitrate m3u8 files (as per the main manifest) which includes the ts files. for example: https:///hls/justin-timberlake-encrypted/Justin_Timberlake_416_234_200.m3u8 Can I instruct Nginx to use secure link only for the playlist.m3u8 and not for the other m3u8 and ts files? 
The map config that i am using now is: #map $uri $hls_uri { ~^(?.*).m3u8$ "base_uri"; ~^(?.*).ts$ "base_uri"; default $uri; } Thanks Andrew ________________________________ From: nginx on behalf of Francis Daly Sent: Sunday, June 9, 2019 8:15 AM To: nginx at nginx.org Subject: Re: Securing URLs with the Secure Link Module in NGINX On Sat, Jun 08, 2019 at 02:44:22PM +0000, Andrew Andonopoulos wrote: Hi there, > Thanks for the clarification, so all requests will be like this: > > http:///hls// > > can i include in the map the domain http://example.com, the folder /hls/ and ignore all the rest? You can. I'm not sure why you would. The "map" is only a way to create a variable. The important part is what you do with that variable - for example, in one of the secure_link* directives. You said that the task was "to use secure link to secure m3u8 and ts files". What do you understand by the phrase "to secure", there? It is possible that the secure link module does not do what you want to have done. Presumably you want to allow some access and disallow some other access. Possibly you only care about time-limited access? I suspect that the details will matter. >From a "secret url" point of view: telling someone to access http://example.com/dir/file.m3u is exactly the same as telling them to access http://example.com/dir/file.m3u?secret or http://example.com/dir/secret/file.m3u -- you give them a url, and you configure your nginx such that anyone who accesses that url gets the file contents. The "secret" part might stop them guessing how to get file.ts in the same directory; but only if it is not the same secret for all file names. (You *could* issue different secret urls for different users; but I don't think that that is what you are doing here.) >From a "time-limited" point of view, you could tell someone to access http://example.com/dir/file.m3u?time or http://example.com/dir/file.m3u?secret&time or http://example.com/dir/secret/time/file.m3u, and configure your nginx to send the file contents only until "time". The secret/secure_link part is to stop someone adding a week to "time" and getting access for longer than they should. Or you could just "rm dir/file.m3u" when you no longer want it accessible. There are good use-cases for the secure_link module. But you should probably start with what you want to achieve; and then see whether secure_link is the right answer. And then the mechanics of configuring nginx to do what you want can be sorted out afterwards. f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Mon Jun 17 07:40:04 2019 From: francis at daoine.org (Francis Daly) Date: Mon, 17 Jun 2019 08:40:04 +0100 Subject: Securing URLs with the Secure Link Module in NGINX In-Reply-To: References: <20190607002909.GA29354@haller.ws> <20190607205902.i22pmnhszxym7w3s@daoine.org> <20190607223406.ejetxgjcm5xsn3nu@daoine.org> <20190609081513.gbkwdsu2ukychgt2@daoine.org> Message-ID: <20190617074004.x2ag7c53ucpx5jnu@daoine.org> On Sat, Jun 15, 2019 at 06:08:07PM +0000, Andrew Andonopoulos wrote: Hi there, > In my case the player will request the m3u8 URL: > > https:///hls/justin-timberlake-encrypted/playlist.m3u8?md5=u808mTXsFSpZt7b8wLvlIw&expires=1560706367 > > The response from the server will be: > > #EXTM3U > #EXT-X-VERSION:3 > #EXT-X-STREAM-INF:BANDWIDTH=200000,RESOLUTION=416x234 > Justin_Timberlake_416_234_200.m3u8 > #EXT-X-STREAM-INF:BANDWIDTH=300000,RESOLUTION=480x270 > Justin_Timberlake_480_270_300.m3u8 > Can I instruct Nginx to use secure link only for the playlist.m3u8 and not for the other m3u8 and ts files? Yes. I am not sure why you would do that; or what benefit it will give you; but that's ok. I do not need to understand that part. In nginx, a request in handled in a location. So you want one location that will handle playlist.m3u8 requests and does the secure_link thing; and a separate location that will handle all of the other /hls/ requests. I think you want to proxy_pass all of the requests, so you need proxy_pass in both locations. I think you want lots of common config -- add_header, proxy_hide_header -- so it is probably simplest to use nested locations to allow inheritance rather than duplication. For example (untested): location /hls/ { # all of the common config goes here proxy_pass http://s3test.s3.amazonaws.com; location ~ /playlist\.m3u8$ { secure_link $arg_md5,$arg_expires; secure_link_md5 "enigma$hls_uri$secure_link_expires"; if ($secure_link = "") { return 403; } if ($secure_link = "0") { return 410; } proxy_pass http://s3test.s3.amazonaws.com; } } Adjust to fit the rest of your requirements. Good luck with it, f -- Francis Daly francis at daoine.org From andre8525 at hotmail.com Mon Jun 17 08:17:51 2019 From: andre8525 at hotmail.com (Andrew Andonopoulos) Date: Mon, 17 Jun 2019 08:17:51 +0000 Subject: Securing URLs with the Secure Link Module in NGINX In-Reply-To: <20190617074004.x2ag7c53ucpx5jnu@daoine.org> References: <20190607002909.GA29354@haller.ws> <20190607205902.i22pmnhszxym7w3s@daoine.org> <20190607223406.ejetxgjcm5xsn3nu@daoine.org> <20190609081513.gbkwdsu2ukychgt2@daoine.org> , <20190617074004.x2ag7c53ucpx5jnu@daoine.org> Message-ID: Hi Francis and thank you for your quick response / support. Now is more clear how locations and secure link works. I would like to add the secure link in each m3u8 and ts file but can't modify the files on the fly with the free nginx version, i think nginx plus have this capability ? (receive fmp4 and deliver manifests on the fly) https://www.nginx.com/products/nginx/streaming-media/ What you would suggest in case i want to use secure link for all the files? 
Thanks Andrew ________________________________ From: nginx on behalf of Francis Daly Sent: Monday, June 17, 2019 7:40 AM To: nginx at nginx.org Subject: Re: Securing URLs with the Secure Link Module in NGINX On Sat, Jun 15, 2019 at 06:08:07PM +0000, Andrew Andonopoulos wrote: Hi there, > In my case the player will request the m3u8 URL: > > https:///hls/justin-timberlake-encrypted/playlist.m3u8?md5=u808mTXsFSpZt7b8wLvlIw&expires=1560706367 > > The response from the server will be: > > #EXTM3U > #EXT-X-VERSION:3 > #EXT-X-STREAM-INF:BANDWIDTH=200000,RESOLUTION=416x234 > Justin_Timberlake_416_234_200.m3u8 > #EXT-X-STREAM-INF:BANDWIDTH=300000,RESOLUTION=480x270 > Justin_Timberlake_480_270_300.m3u8 > Can I instruct Nginx to use secure link only for the playlist.m3u8 and not for the other m3u8 and ts files? Yes. I am not sure why you would do that; or what benefit it will give you; but that's ok. I do not need to understand that part. In nginx, a request in handled in a location. So you want one location that will handle playlist.m3u8 requests and does the secure_link thing; and a separate location that will handle all of the other /hls/ requests. I think you want to proxy_pass all of the requests, so you need proxy_pass in both locations. I think you want lots of common config -- add_header, proxy_hide_header -- so it is probably simplest to use nested locations to allow inheritance rather than duplication. For example (untested): location /hls/ { # all of the common config goes here proxy_pass http://s3test.s3.amazonaws.com; location ~ /playlist\.m3u8$ { secure_link $arg_md5,$arg_expires; secure_link_md5 "enigma$hls_uri$secure_link_expires"; if ($secure_link = "") { return 403; } if ($secure_link = "0") { return 410; } proxy_pass http://s3test.s3.amazonaws.com; } } Adjust to fit the rest of your requirements. Good luck with it, f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From vl at nginx.com Mon Jun 17 09:00:20 2019 From: vl at nginx.com (Vladimir Homutov) Date: Mon, 17 Jun 2019 12:00:20 +0300 Subject: FIPS support in nginx? In-Reply-To: <67e0e7d9a178b18fbd5bf9bc99326bb4.NginxMailingListEnglish@forum.nginx.org> References: <67e0e7d9a178b18fbd5bf9bc99326bb4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190617090020.GA11414@vlpc> On Fri, Jun 14, 2019 at 02:26:49PM -0400, tlemons wrote: > Hi > > Does nginx have a 'FIPS mode'? If so, where can I find this documented? > > Thanks! > tl > nginx uses openSSL library for all cryptographic operations. Thus it is enough to turn on FIPS mode in the library. For example, here [1] are instructions for RHEL. Other distributions have similar methods of enabling it system-wide. Note that RHEL implementation of FIPS in OpenSSL depends on kernel components (random number generation), that's why they require to turn FIPS system-wide. 
Note also that FIPS module from openssl.org is another implementation than RHEL's and it is not available for latest openssl version 1.1.1 [1] https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/considerations_in_adopting_rhel_8/security_considerations-in-adopting-rhel-8#fips-mode_security From francis at daoine.org Mon Jun 17 11:39:54 2019 From: francis at daoine.org (Francis Daly) Date: Mon, 17 Jun 2019 12:39:54 +0100 Subject: Securing URLs with the Secure Link Module in NGINX In-Reply-To: References: <20190607002909.GA29354@haller.ws> <20190607205902.i22pmnhszxym7w3s@daoine.org> <20190607223406.ejetxgjcm5xsn3nu@daoine.org> <20190609081513.gbkwdsu2ukychgt2@daoine.org> <20190617074004.x2ag7c53ucpx5jnu@daoine.org> Message-ID: <20190617113954.w4sv2hg33gi3zgpn@daoine.org> On Mon, Jun 17, 2019 at 08:17:51AM +0000, Andrew Andonopoulos wrote: Hi there, > I would like to add the secure link in each m3u8 and ts file but can't modify the files on the fly with the free nginx version, i think nginx plus have this capability ? (receive fmp4 and deliver manifests on the fly) > https://www.nginx.com/products/nginx/streaming-media/ If nginx plus resolves all of your issues quickly and easily, then "use nginx plus" would seem to be the straightforward option. I think that only you can assess whether nginx plus does do what you want, because you have not actually said what exactly you want, as far as I can see, in any of these mails. > What you would suggest in case i want to use secure link for all the files? I would probably redesign the urls that I was going to advertise, so that people request things like /play/MD5/TIME/directory/file (From your example - "directory" is "justin-timberlake", and "file" is "playlist.m3u8" or "Justin_Timberlake_640_360_600.m3u8" or, presumably, "something.ts". "/play" is a mostly-arbitrary prefix so that I can do other things on the same server -- it can probably be empty if this server is dedicated to these streams.) Then I would use "map" to set variables $the_md5, $the_time, $the_directory, and $the_file from the incoming request. I would want the MD5 calculation to include the directory name, but not the file name, so that a single MD5 value will "cover" all files in the directory, but will not cover other directories. Then if everything is ok, rewrite to an internal location that does the proxy_pass to get the real content. So - in "location ^~ /play/" (which handles all /play/* requests), I would use something like secure_link $the_md5,$the_time; secure_link_md5 "some-secret $the_directory $secure_link_expires"; if ($secure_link = "") { return 403; } if ($secure_link = "0") { return 410; } rewrite ^ /hls/$the_directory/$the_file; And in "location ^~ /hls/" (which handles all /hls/* requests), I would use internal; proxy_pass http://s3test.s3.amazonaws.com; Then you decide what TIME you want, and calculate the suitable MD5 for each directory that you care about. (You can include user-specific things in the calculation too -- just make sure that your secure_link_md5 directive and your external link-creating utility use the same patterns.) Then try to use those links, and see what works and what fails. For every new directory, or new expiry time, you calculate the new link to the playlist.m3u8 and advertise that. Note that I have not tested any of this; so if you do get a confirmed-working config, I'm sure the list will be happy to see it for future reference. 
Good luck with it, f -- Francis Daly francis at daoine.org From hungnv at opensource.com.vn Mon Jun 17 12:01:32 2019 From: hungnv at opensource.com.vn (Hung Nguyen) Date: Mon, 17 Jun 2019 19:01:32 +0700 Subject: Securing URLs with the Secure Link Module in NGINX In-Reply-To: References: <20190607002909.GA29354@haller.ws> <20190607205902.i22pmnhszxym7w3s@daoine.org> <20190607223406.ejetxgjcm5xsn3nu@daoine.org> <20190609081513.gbkwdsu2ukychgt2@daoine.org> <20190617074004.x2ag7c53ucpx5jnu@daoine.org> Message-ID: <87856919-1886-4A15-8682-0980E9284771@opensource.com.vn> Hi, Actually you can use a module developed by Kaltura call secure token module (1). This module can examine your response to see its content-type, if it matches configured parameter, it will automatically inject secure params into hls playlist. Use this module, please note you dont use anything relate to uri in secure link (ie: dont use $uri to calculate secure link) (1): https://github.com/kaltura/nginx-secure-token-module > On Jun 17, 2019, at 3:17 PM, Andrew Andonopoulos wrote: > > Hi Francis and thank you for your quick response / support. > > Now is more clear how locations and secure link works. > > I would like to add the secure link in each m3u8 and ts file but can't modify the files on the fly with the free nginx version, i think nginx plus have this capability ? (receive fmp4 and deliver manifests on the fly) > https://www.nginx.com/products/nginx/streaming-media/ > > What you would suggest in case i want to use secure link for all the files? > > > Thanks > Andrew > > > > > > From: nginx on behalf of Francis Daly > Sent: Monday, June 17, 2019 7:40 AM > To: nginx at nginx.org > Subject: Re: Securing URLs with the Secure Link Module in NGINX > > On Sat, Jun 15, 2019 at 06:08:07PM +0000, Andrew Andonopoulos wrote: > > Hi there, > > > In my case the player will request the m3u8 URL: > > > > https:///hls/justin-timberlake-encrypted/playlist.m3u8?md5=u808mTXsFSpZt7b8wLvlIw&expires=1560706367 > > > > The response from the server will be: > > > > #EXTM3U > > #EXT-X-VERSION:3 > > #EXT-X-STREAM-INF:BANDWIDTH=200000,RESOLUTION=416x234 > > Justin_Timberlake_416_234_200.m3u8 > > #EXT-X-STREAM-INF:BANDWIDTH=300000,RESOLUTION=480x270 > > Justin_Timberlake_480_270_300.m3u8 > > > Can I instruct Nginx to use secure link only for the playlist.m3u8 and not for the other m3u8 and ts files? > > Yes. > > I am not sure why you would do that; or what benefit it will give you; > but that's ok. I do not need to understand that part. > > > In nginx, a request in handled in a location. > > So you want one location that will handle playlist.m3u8 requests and > does the secure_link thing; and a separate location that will handle > all of the other /hls/ requests. > > I think you want to proxy_pass all of the requests, so you need proxy_pass > in both locations. > > I think you want lots of common config -- add_header, proxy_hide_header -- > so it is probably simplest to use nested locations to allow inheritance > rather than duplication. > > For example (untested): > > location /hls/ { > > # all of the common config goes here > > proxy_pass http://s3test.s3.amazonaws.com ; > > location ~ /playlist\.m3u8$ { > secure_link $arg_md5,$arg_expires; > secure_link_md5 "enigma$hls_uri$secure_link_expires"; > > if ($secure_link = "") { return 403; } > if ($secure_link = "0") { return 410; } > proxy_pass http://s3test.s3.amazonaws.com ; > } > > } > > Adjust to fit the rest of your requirements. 
> > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From satcse88 at gmail.com Mon Jun 17 12:03:32 2019 From: satcse88 at gmail.com (Sathish Kumar) Date: Mon, 17 Jun 2019 20:03:32 +0800 Subject: Content Security Policy - Nginx In-Reply-To: References: Message-ID: Hi, I tried using inline script by allowing unsafe-inline in Content Security Policy header but am getting below error. Refused to execute inline event handler because it violates the following Content Security Policy directive: "script-src 'self'. Either the 'unsafe-inline' keyword, a hash ('sha256-...'), or a nonce ('nonce-...') is required to enable inline execution. I am able to generate sha256/nonce from code but how to validate and set in response header in Nginx. On Mon, Jun 10, 2019, 6:39 AM Sathish Kumar wrote: > Hi, > > I would like to enable Content Security Policy header on Nginx for our > website to protect from data injection attacks and XSS. Can I add like the > below config?. If anybody hit our URL they will know the allowed domains in > the header. > > Is there any other bettery way to do this? > > add_header Content-Security-Policy "default-src 'self'; script-src 'self' > 'unsafe-inline' 'unsafe-eval' https://ssl.google-analytics.com > https://assets.zendesk.com https://connect.facebook.net; img-src 'self' > https://ssl.google-analytics.com https://s-static.ak.facebook.com > https://assets.zendesk.com; style-src 'self' 'unsafe-inline' > https://fonts.googleapis.com https://assets.zendesk.com; font-src 'self' > https://themes.googleusercontent.com; frame-src https://assets.zendesk.com > https://www.facebook.com https://s-static.ak.facebook.com > https://tautt.zendesk.com; object-src 'none'"; > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andre8525 at hotmail.com Mon Jun 17 12:25:22 2019 From: andre8525 at hotmail.com (Andrew Andonopoulos) Date: Mon, 17 Jun 2019 12:25:22 +0000 Subject: Securing URLs with the Secure Link Module in NGINX In-Reply-To: <87856919-1886-4A15-8682-0980E9284771@opensource.com.vn> References: <20190607002909.GA29354@haller.ws> <20190607205902.i22pmnhszxym7w3s@daoine.org> <20190607223406.ejetxgjcm5xsn3nu@daoine.org> <20190609081513.gbkwdsu2ukychgt2@daoine.org> <20190617074004.x2ag7c53ucpx5jnu@daoine.org> , <87856919-1886-4A15-8682-0980E9284771@opensource.com.vn> Message-ID: Hi Hung, I presume i need to re-compile nginx. I never installed a module before so i think i need to follow these steps: 1) get the module in the server, in the folder /tmp/ 2) compile nginx with this command: ./configure --add-module=/tmp/nginx-secure-token-module (this will be the module folder? so i just point it to the folder in tmp? Thanks Andrew ________________________________ From: nginx on behalf of Hung Nguyen Sent: Monday, June 17, 2019 12:01 PM To: nginx at nginx.org Subject: Re: Securing URLs with the Secure Link Module in NGINX Hi, Actually you can use a module developed by Kaltura call secure token module (1). This module can examine your response to see its content-type, if it matches configured parameter, it will automatically inject secure params into hls playlist. 
Use this module, please note you dont use anything relate to uri in secure link (ie: dont use $uri to calculate secure link) (1): https://github.com/kaltura/nginx-secure-token-module On Jun 17, 2019, at 3:17 PM, Andrew Andonopoulos > wrote: Hi Francis and thank you for your quick response / support. Now is more clear how locations and secure link works. I would like to add the secure link in each m3u8 and ts file but can't modify the files on the fly with the free nginx version, i think nginx plus have this capability ? (receive fmp4 and deliver manifests on the fly) https://www.nginx.com/products/nginx/streaming-media/ What you would suggest in case i want to use secure link for all the files? Thanks Andrew ________________________________ From: nginx > on behalf of Francis Daly > Sent: Monday, June 17, 2019 7:40 AM To: nginx at nginx.org Subject: Re: Securing URLs with the Secure Link Module in NGINX On Sat, Jun 15, 2019 at 06:08:07PM +0000, Andrew Andonopoulos wrote: Hi there, > In my case the player will request the m3u8 URL: > > https:///hls/justin-timberlake-encrypted/playlist.m3u8?md5=u808mTXsFSpZt7b8wLvlIw&expires=1560706367 > > The response from the server will be: > > #EXTM3U > #EXT-X-VERSION:3 > #EXT-X-STREAM-INF:BANDWIDTH=200000,RESOLUTION=416x234 > Justin_Timberlake_416_234_200.m3u8 > #EXT-X-STREAM-INF:BANDWIDTH=300000,RESOLUTION=480x270 > Justin_Timberlake_480_270_300.m3u8 > Can I instruct Nginx to use secure link only for the playlist.m3u8 and not for the other m3u8 and ts files? Yes. I am not sure why you would do that; or what benefit it will give you; but that's ok. I do not need to understand that part. In nginx, a request in handled in a location. So you want one location that will handle playlist.m3u8 requests and does the secure_link thing; and a separate location that will handle all of the other /hls/ requests. I think you want to proxy_pass all of the requests, so you need proxy_pass in both locations. I think you want lots of common config -- add_header, proxy_hide_header -- so it is probably simplest to use nested locations to allow inheritance rather than duplication. For example (untested): location /hls/ { # all of the common config goes here proxy_pass http://s3test.s3.amazonaws.com; location ~ /playlist\.m3u8$ { secure_link $arg_md5,$arg_expires; secure_link_md5 "enigma$hls_uri$secure_link_expires"; if ($secure_link = "") { return 403; } if ($secure_link = "0") { return 410; } proxy_pass http://s3test.s3.amazonaws.com; } } Adjust to fit the rest of your requirements. Good luck with it, f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From andre8525 at hotmail.com Mon Jun 17 12:58:52 2019 From: andre8525 at hotmail.com (Andrew Andonopoulos) Date: Mon, 17 Jun 2019 12:58:52 +0000 Subject: Securing URLs with the Secure Link Module in NGINX In-Reply-To: References: <20190607002909.GA29354@haller.ws> <20190607205902.i22pmnhszxym7w3s@daoine.org> <20190607223406.ejetxgjcm5xsn3nu@daoine.org> <20190609081513.gbkwdsu2ukychgt2@daoine.org> <20190617074004.x2ag7c53ucpx5jnu@daoine.org> , <87856919-1886-4A15-8682-0980E9284771@opensource.com.vn>, Message-ID: also i don't have Akamai CDN behind nginx. 
Can i use this module without using other CDN ? Thanks Andrew ________________________________ From: Andrew Andonopoulos Sent: Monday, June 17, 2019 12:25 PM To: nginx at nginx.org Subject: Re: Securing URLs with the Secure Link Module in NGINX Hi Hung, I presume i need to re-compile nginx. I never installed a module before so i think i need to follow these steps: 1) get the module in the server, in the folder /tmp/ 2) compile nginx with this command: ./configure --add-module=/tmp/nginx-secure-token-module (this will be the module folder? so i just point it to the folder in tmp? Thanks Andrew ________________________________ From: nginx on behalf of Hung Nguyen Sent: Monday, June 17, 2019 12:01 PM To: nginx at nginx.org Subject: Re: Securing URLs with the Secure Link Module in NGINX Hi, Actually you can use a module developed by Kaltura call secure token module (1). This module can examine your response to see its content-type, if it matches configured parameter, it will automatically inject secure params into hls playlist. Use this module, please note you dont use anything relate to uri in secure link (ie: dont use $uri to calculate secure link) (1): https://github.com/kaltura/nginx-secure-token-module On Jun 17, 2019, at 3:17 PM, Andrew Andonopoulos > wrote: Hi Francis and thank you for your quick response / support. Now is more clear how locations and secure link works. I would like to add the secure link in each m3u8 and ts file but can't modify the files on the fly with the free nginx version, i think nginx plus have this capability ? (receive fmp4 and deliver manifests on the fly) https://www.nginx.com/products/nginx/streaming-media/ What you would suggest in case i want to use secure link for all the files? Thanks Andrew ________________________________ From: nginx > on behalf of Francis Daly > Sent: Monday, June 17, 2019 7:40 AM To: nginx at nginx.org Subject: Re: Securing URLs with the Secure Link Module in NGINX On Sat, Jun 15, 2019 at 06:08:07PM +0000, Andrew Andonopoulos wrote: Hi there, > In my case the player will request the m3u8 URL: > > https:///hls/justin-timberlake-encrypted/playlist.m3u8?md5=u808mTXsFSpZt7b8wLvlIw&expires=1560706367 > > The response from the server will be: > > #EXTM3U > #EXT-X-VERSION:3 > #EXT-X-STREAM-INF:BANDWIDTH=200000,RESOLUTION=416x234 > Justin_Timberlake_416_234_200.m3u8 > #EXT-X-STREAM-INF:BANDWIDTH=300000,RESOLUTION=480x270 > Justin_Timberlake_480_270_300.m3u8 > Can I instruct Nginx to use secure link only for the playlist.m3u8 and not for the other m3u8 and ts files? Yes. I am not sure why you would do that; or what benefit it will give you; but that's ok. I do not need to understand that part. In nginx, a request in handled in a location. So you want one location that will handle playlist.m3u8 requests and does the secure_link thing; and a separate location that will handle all of the other /hls/ requests. I think you want to proxy_pass all of the requests, so you need proxy_pass in both locations. I think you want lots of common config -- add_header, proxy_hide_header -- so it is probably simplest to use nested locations to allow inheritance rather than duplication. 
For example (untested): location /hls/ { # all of the common config goes here proxy_pass http://s3test.s3.amazonaws.com; location ~ /playlist\.m3u8$ { secure_link $arg_md5,$arg_expires; secure_link_md5 "enigma$hls_uri$secure_link_expires"; if ($secure_link = "") { return 403; } if ($secure_link = "0") { return 410; } proxy_pass http://s3test.s3.amazonaws.com; } } Adjust to fit the rest of your requirements. Good luck with it, f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From andre8525 at hotmail.com Mon Jun 17 13:34:33 2019 From: andre8525 at hotmail.com (Andrew Andonopoulos) Date: Mon, 17 Jun 2019 13:34:33 +0000 Subject: Securing URLs with the Secure Link Module in NGINX In-Reply-To: <20190617113954.w4sv2hg33gi3zgpn@daoine.org> References: <20190607002909.GA29354@haller.ws> <20190607205902.i22pmnhszxym7w3s@daoine.org> <20190607223406.ejetxgjcm5xsn3nu@daoine.org> <20190609081513.gbkwdsu2ukychgt2@daoine.org> <20190617074004.x2ag7c53ucpx5jnu@daoine.org> , <20190617113954.w4sv2hg33gi3zgpn@daoine.org> Message-ID: Hi Francis, The idea of moving MD5 and Time after the first directory is good. So with this option i will not have to worry modifying the manifests. If i use this URL: /vod/MD5/TIME/hls/directory/files The locations will be like this? location ^~ /vod/" secure_link $the_md5,$the_time; secure_link_md5 "some-secret $the_directory $secure_link_expires"; if ($secure_link = "") { return 403; } if ($secure_link = "0") { return 410; } rewrite ^ /hls/$the_directory/$the_file; location ^~ /hls/" internal; proxy_pass http://s3test.s3.amazonaws.com; Also i will need to capture the variables: Then I would use "map" to set variables $the_md5, $the_time, $the_directory, and $the_file from the incoming request. Do you have an example how to write the map with the appropriate regex? Thanks Andrew ________________________________ From: nginx on behalf of Francis Daly Sent: Monday, June 17, 2019 11:39 AM To: nginx at nginx.org Subject: Re: Securing URLs with the Secure Link Module in NGINX On Mon, Jun 17, 2019 at 08:17:51AM +0000, Andrew Andonopoulos wrote: Hi there, > I would like to add the secure link in each m3u8 and ts file but can't modify the files on the fly with the free nginx version, i think nginx plus have this capability ? (receive fmp4 and deliver manifests on the fly) > https://www.nginx.com/products/nginx/streaming-media/ If nginx plus resolves all of your issues quickly and easily, then "use nginx plus" would seem to be the straightforward option. I think that only you can assess whether nginx plus does do what you want, because you have not actually said what exactly you want, as far as I can see, in any of these mails. > What you would suggest in case i want to use secure link for all the files? I would probably redesign the urls that I was going to advertise, so that people request things like /play/MD5/TIME/directory/file (From your example - "directory" is "justin-timberlake", and "file" is "playlist.m3u8" or "Justin_Timberlake_640_360_600.m3u8" or, presumably, "something.ts". "/play" is a mostly-arbitrary prefix so that I can do other things on the same server -- it can probably be empty if this server is dedicated to these streams.) 
Then I would use "map" to set variables $the_md5, $the_time, $the_directory, and $the_file from the incoming request. I would want the MD5 calculation to include the directory name, but not the file name, so that a single MD5 value will "cover" all files in the directory, but will not cover other directories. Then if everything is ok, rewrite to an internal location that does the proxy_pass to get the real content. So - in "location ^~ /play/" (which handles all /play/* requests), I would use something like secure_link $the_md5,$the_time; secure_link_md5 "some-secret $the_directory $secure_link_expires"; if ($secure_link = "") { return 403; } if ($secure_link = "0") { return 410; } rewrite ^ /hls/$the_directory/$the_file; And in "location ^~ /hls/" (which handles all /hls/* requests), I would use internal; proxy_pass http://s3test.s3.amazonaws.com; Then you decide what TIME you want, and calculate the suitable MD5 for each directory that you care about. (You can include user-specific things in the calculation too -- just make sure that your secure_link_md5 directive and your external link-creating utility use the same patterns.) Then try to use those links, and see what works and what fails. For every new directory, or new expiry time, you calculate the new link to the playlist.m3u8 and advertise that. Note that I have not tested any of this; so if you do get a confirmed-working config, I'm sure the list will be happy to see it for future reference. Good luck with it, f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Jun 17 14:07:21 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 17 Jun 2019 17:07:21 +0300 Subject: Nginx ssl_trusted_certificate directive problem In-Reply-To: <8e2d6863756dd25c765c834f6254182c.NginxMailingListEnglish@forum.nginx.org> References: <8e2d6863756dd25c765c834f6254182c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190617140721.GC1877@mdounin.ru> Hello! On Fri, Jun 14, 2019 at 06:09:22AM -0400, niegus wrote: > Hi, > > I have my nginx configured with client_certificate authentication: > > ssl_client_certificate /etc/nginx/ssl/cas.pem; > ssl_verify_client optional; > ssl_verify_depth 2; > And is working fine, but I need to NOT send the CAs to the client during the > handshake. > > I've seen > http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_trusted_certificate > in the documentation. So, I've changed it to: > > ssl_trusted_certificate /etc/nginx/ssl/cas.pem; > ssl_verify_depth 2; > > But now ssl_client_verify is always to NONE, and actually I saw in wireshark > that the client is not sending the certificate. > > What am I doing wrong? For the client certificate authentication to work, you have to configure ssl_verify_client, and ssl_client_certificate with at least one certificate (unless you are using "ssl_verify_client optional_no_ca;"). Any certificates specified in ssl_trusted_certificate will be additionally trusted, but you still have to specify at least one certificate in ssl_client_certificate. 
-- Maxim Dounin http://mdounin.ru/ From francis at daoine.org Mon Jun 17 15:05:11 2019 From: francis at daoine.org (Francis Daly) Date: Mon, 17 Jun 2019 16:05:11 +0100 Subject: Securing URLs with the Secure Link Module in NGINX In-Reply-To: References: <20190607205902.i22pmnhszxym7w3s@daoine.org> <20190607223406.ejetxgjcm5xsn3nu@daoine.org> <20190609081513.gbkwdsu2ukychgt2@daoine.org> <20190617074004.x2ag7c53ucpx5jnu@daoine.org> <20190617113954.w4sv2hg33gi3zgpn@daoine.org> Message-ID: <20190617150511.uvsvztdvj2btyluc@daoine.org> On Mon, Jun 17, 2019 at 01:34:33PM +0000, Andrew Andonopoulos wrote: Hi there, > The idea of moving MD5 and Time after the first directory is good. > So with this option i will not have to worry modifying the manifests. Correct. Since the manifests refer to "other files in the same directory", the same md5sum value will apply to them all, and the client should just ask for the correct thing each time. > If i use this URL: /vod/MD5/TIME/hls/directory/files > > The locations will be like this? > > location ^~ /vod/" More or less, yes. The first line there would probably be: location ^~ /vod/ { but the rest looks right. You'll want to change the secure_link_md5 line to match what you want, of course. > Also i will need to capture the variables: > Then I would use "map" to set variables $the_md5, $the_time, > $the_directory, and $the_file from the incoming request. > > Do you have an example how to write the map with the appropriate regex? One way to set all of the variables at once (assuming the request is well-formed) would be something like: map $request_uri $the_md5 { default ""; ~^/vod/(?P[^/]+)/(?P[0-9]+)(?P.*)/(?P[^/]+) $one; } where "$the_md5" becomes "all of the non-slashes after /vod/", $the_time becomes "all of the numbers after that", $the_directory becomes "everything else up to the last slash", and $the_file is "everything after the last slash". You will probably want to change things such that "/hls" is either excluded from $the_directory, or excluded from the rewrite directive. You can check the debug log, or temporarily do things like return 200 "md5 = $the_md5; file=$the_file;\n"; to see what values the variables have when you are testing. You can use "curl" to make a test request and see whether the response is what you expect. Good luck with it, f -- Francis Daly francis at daoine.org From andre8525 at hotmail.com Mon Jun 17 15:17:46 2019 From: andre8525 at hotmail.com (Andrew Andonopoulos) Date: Mon, 17 Jun 2019 15:17:46 +0000 Subject: Securing URLs with the Secure Link Module in NGINX In-Reply-To: <20190617150511.uvsvztdvj2btyluc@daoine.org> References: <20190607205902.i22pmnhszxym7w3s@daoine.org> <20190607223406.ejetxgjcm5xsn3nu@daoine.org> <20190609081513.gbkwdsu2ukychgt2@daoine.org> <20190617074004.x2ag7c53ucpx5jnu@daoine.org> <20190617113954.w4sv2hg33gi3zgpn@daoine.org> , <20190617150511.uvsvztdvj2btyluc@daoine.org> Message-ID: Hi Francis, Regarding the map, can you please explain which values the variables $one and $the_md5 will have? My understanding of map directive is, request_uri will have the whole URI and will try to match it as per the regex. If there is a match then will pass the value to $one which will pass it to $the_md5? Is this correct? 
map $request_uri $the_md5 { default ""; ~^/vod/(?P[^/]+)/(?P[0-9]+)(?P.*)/(?P[^/]+) $one; } where "$the_md5" becomes "all of the non-slashes after /vod/", $the_time becomes "all of the numbers after that", $the_directory becomes "everything else up to the last slash", and $the_file is "everything after the last slash". ________________________________ From: nginx on behalf of Francis Daly Sent: Monday, June 17, 2019 3:05 PM To: nginx at nginx.org Subject: Re: Securing URLs with the Secure Link Module in NGINX On Mon, Jun 17, 2019 at 01:34:33PM +0000, Andrew Andonopoulos wrote: Hi there, > The idea of moving MD5 and Time after the first directory is good. > So with this option i will not have to worry modifying the manifests. Correct. Since the manifests refer to "other files in the same directory", the same md5sum value will apply to them all, and the client should just ask for the correct thing each time. > If i use this URL: /vod/MD5/TIME/hls/directory/files > > The locations will be like this? > > location ^~ /vod/" More or less, yes. The first line there would probably be: location ^~ /vod/ { but the rest looks right. You'll want to change the secure_link_md5 line to match what you want, of course. > Also i will need to capture the variables: > Then I would use "map" to set variables $the_md5, $the_time, > $the_directory, and $the_file from the incoming request. > > Do you have an example how to write the map with the appropriate regex? One way to set all of the variables at once (assuming the request is well-formed) would be something like: map $request_uri $the_md5 { default ""; ~^/vod/(?P[^/]+)/(?P[0-9]+)(?P.*)/(?P[^/]+) $one; } where "$the_md5" becomes "all of the non-slashes after /vod/", $the_time becomes "all of the numbers after that", $the_directory becomes "everything else up to the last slash", and $the_file is "everything after the last slash". You will probably want to change things such that "/hls" is either excluded from $the_directory, or excluded from the rewrite directive. You can check the debug log, or temporarily do things like return 200 "md5 = $the_md5; file=$the_file;\n"; to see what values the variables have when you are testing. You can use "curl" to make a test request and see whether the response is what you expect. Good luck with it, f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From hungnv at opensource.com.vn Mon Jun 17 16:14:33 2019 From: hungnv at opensource.com.vn (Hung Nguyen) Date: Mon, 17 Jun 2019 23:14:33 +0700 Subject: Securing URLs with the Secure Link Module in NGINX In-Reply-To: References: <20190607002909.GA29354@haller.ws> <20190607205902.i22pmnhszxym7w3s@daoine.org> <20190607223406.ejetxgjcm5xsn3nu@daoine.org> <20190609081513.gbkwdsu2ukychgt2@daoine.org> <20190617074004.x2ag7c53ucpx5jnu@daoine.org> <87856919-1886-4A15-8682-0980E9284771@opensource.com.vn> Message-ID: <4D4C9BE7-BFDE-45D2-8AD1-4CC222A8A12E@opensource.com.vn> How to compile this module into nginx is provided in readme, also on Github. You can also compile it dynamically and load into nginx if you are using recent version of nginx, how to do it also included in readme, I guess. Regarding Akamai, this module was made to work with various cdn provider, but without them it also work. 
All you need is: secure_token $args; Then arguments in m3u8 url will be insert into ts segment url in playlist content. -- H?ng > On Jun 17, 2019, at 19:58, Andrew Andonopoulos wrote: > > also i don't have Akamai CDN behind nginx. Can i use this module without using other CDN ? > > Thanks > Andrew > > From: Andrew Andonopoulos > Sent: Monday, June 17, 2019 12:25 PM > To: nginx at nginx.org > Subject: Re: Securing URLs with the Secure Link Module in NGINX > > Hi Hung, > > I presume i need to re-compile nginx. I never installed a module before so i think i need to follow these steps: > > 1) get the module in the server, in the folder /tmp/ > 2) compile nginx with this command: ./configure --add-module=/tmp/nginx-secure-token-module (this will be the module folder? so i just point it to the folder in tmp? > > Thanks > Andrew > > From: nginx on behalf of Hung Nguyen > Sent: Monday, June 17, 2019 12:01 PM > To: nginx at nginx.org > Subject: Re: Securing URLs with the Secure Link Module in NGINX > > Hi, > > Actually you can use a module developed by Kaltura call secure token module (1). This module can examine your response to see its content-type, if it matches configured parameter, it will automatically inject secure params into hls playlist. Use this module, please note you dont use anything relate to uri in secure link (ie: dont use $uri to calculate secure link) > > (1): https://github.com/kaltura/nginx-secure-token-module > > > > >> On Jun 17, 2019, at 3:17 PM, Andrew Andonopoulos wrote: >> >> Hi Francis and thank you for your quick response / support. >> >> Now is more clear how locations and secure link works. >> >> I would like to add the secure link in each m3u8 and ts file but can't modify the files on the fly with the free nginx version, i think nginx plus have this capability ? (receive fmp4 and deliver manifests on the fly) >> https://www.nginx.com/products/nginx/streaming-media/ >> >> What you would suggest in case i want to use secure link for all the files? >> >> >> Thanks >> Andrew >> >> >> >> >> >> From: nginx on behalf of Francis Daly >> Sent: Monday, June 17, 2019 7:40 AM >> To: nginx at nginx.org >> Subject: Re: Securing URLs with the Secure Link Module in NGINX >> >> On Sat, Jun 15, 2019 at 06:08:07PM +0000, Andrew Andonopoulos wrote: >> >> Hi there, >> >> > In my case the player will request the m3u8 URL: >> > >> > https:///hls/justin-timberlake-encrypted/playlist.m3u8?md5=u808mTXsFSpZt7b8wLvlIw&expires=1560706367 >> > >> > The response from the server will be: >> > >> > #EXTM3U >> > #EXT-X-VERSION:3 >> > #EXT-X-STREAM-INF:BANDWIDTH=200000,RESOLUTION=416x234 >> > Justin_Timberlake_416_234_200.m3u8 >> > #EXT-X-STREAM-INF:BANDWIDTH=300000,RESOLUTION=480x270 >> > Justin_Timberlake_480_270_300.m3u8 >> >> > Can I instruct Nginx to use secure link only for the playlist.m3u8 and not for the other m3u8 and ts files? >> >> Yes. >> >> I am not sure why you would do that; or what benefit it will give you; >> but that's ok. I do not need to understand that part. >> >> >> In nginx, a request in handled in a location. >> >> So you want one location that will handle playlist.m3u8 requests and >> does the secure_link thing; and a separate location that will handle >> all of the other /hls/ requests. >> >> I think you want to proxy_pass all of the requests, so you need proxy_pass >> in both locations. >> >> I think you want lots of common config -- add_header, proxy_hide_header -- >> so it is probably simplest to use nested locations to allow inheritance >> rather than duplication. 
>> >> For example (untested): >> >> location /hls/ { >> >> # all of the common config goes here >> >> proxy_pass http://s3test.s3.amazonaws.com; >> >> location ~ /playlist\.m3u8$ { >> secure_link $arg_md5,$arg_expires; >> secure_link_md5 "enigma$hls_uri$secure_link_expires"; >> >> if ($secure_link = "") { return 403; } >> if ($secure_link = "0") { return 410; } >> proxy_pass http://s3test.s3.amazonaws.com; >> } >> >> } >> >> Adjust to fit the rest of your requirements. >> >> Good luck with it, >> >> f >> -- >> Francis Daly francis at daoine.org >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon Jun 17 16:43:01 2019 From: francis at daoine.org (Francis Daly) Date: Mon, 17 Jun 2019 17:43:01 +0100 Subject: Securing URLs with the Secure Link Module in NGINX In-Reply-To: References: <20190607223406.ejetxgjcm5xsn3nu@daoine.org> <20190609081513.gbkwdsu2ukychgt2@daoine.org> <20190617074004.x2ag7c53ucpx5jnu@daoine.org> <20190617113954.w4sv2hg33gi3zgpn@daoine.org> <20190617150511.uvsvztdvj2btyluc@daoine.org> Message-ID: <20190617164301.obyvy4opb4k2vkwb@daoine.org> On Mon, Jun 17, 2019 at 03:17:46PM +0000, Andrew Andonopoulos wrote: Hi there, > Regarding the map, can you please explain which values the variables $one and $the_md5 will have? Yes, but: what happens when you try it? On a test system: location /vod/ { return 200 "one is $one; the_md5 is $the_md5;\n"; } and request something like /vod/abc/123/hls/def/ghi.html I think that the best way to learn and understand nginx, is to use nginx. > My understanding of map directive is, request_uri will have the whole URI and will try to match it as per the regex. If there is a match then will pass the value to $one which will pass it to $the_md5? Mostly correct. "map" will compare its first argument (in this case, the expansion of the variable $request_uri) with all of the "key" entries in its third argument (the map) -- in this case, just one regex and one default. If the regex matches, all of the (?P<>) named variables are populated (because that's what a regex does). Then "map" will set the variable named in its second argument ($the_md5) to the matching value in the map -- which is the value of $one, in this case. So - it is the regex engine that sets the value of $one (and $the_time, etc); and then "map" sets the value of $the_md5 to the current value of $one. f -- Francis Daly francis at daoine.org From vivek.solanki at einfochips.com Mon Jun 17 23:24:17 2019 From: vivek.solanki at einfochips.com (Vivek Solanki) Date: Mon, 17 Jun 2019 23:24:17 +0000 Subject: Getting 302 Response Message-ID: Hi Team, I have a nginx configuration file in /etc/nginx/default.d directory. I am using dynamic upstream, but I am getting 302 response on my nginx server. 
Below is the upstream and rewrite rule details: ===================================================== resolver 172.29.92.2 valid=60s; set $upstream_endpoint https://abc.example.com/; location /media { rewrite ?/media(.*) /$1 break; proxy_pass $upstream_endpoint/media; } ===================================================== Requests will come like https://abc.example.com/media/movie/bollywood/action/wallpapar.... Please help me out in setting up proper rewrite rule. Vivek Solanki ************************************************************************************************************************************************************* eInfochips Business Disclaimer: This e-mail message and all attachments transmitted with it are intended solely for the use of the addressee and may contain legally privileged and confidential information. If the reader of this message is not the intended recipient, or an employee or agent responsible for delivering this message to the intended recipient, you are hereby notified that any dissemination, distribution, copying, or other use of this message or its attachments is strictly prohibited. If you have received this message in error, please notify the sender immediately by replying to this message and please delete it from your computer. Any views expressed in this message are those of the individual sender unless otherwise stated. Company has taken enough precautions to prevent the spread of viruses. However the company accepts no liability for any damage caused by any virus transmitted by this email. ************************************************************************************************************************************************************* From matthias_mueller at tu-dresden.de Tue Jun 18 14:41:51 2019 From: matthias_mueller at tu-dresden.de (Matthias =?ISO-8859-1?Q?M=FCller?=) Date: Tue, 18 Jun 2019 16:41:51 +0200 Subject: limit_except - require trusted ip AND auth vs. ip OR auth Message-ID: I would like to constrain HTTP access (PUT, POST) to an NGINX server for specific locations. There are two cases: 1) Permit POST, PUT if the request matches a trusted IP address OR Basic auth credentials (either-or) 2) Permit POST, PUT if the request matches a trusted IP address AND Basic auth credentials (must match both) The configuration for (2) is appended. But how can I achieve (1)? It seems that "satisfy any" cannot be included with "limit_except". -Matthias Config example case (2): location / { ... } location /a { # deny everything but GET/HEAD and OPTIONS limit_except GET HEAD OPTIONS { allow 127.0.0.1; allow 172.0.0.0/8; allow 141.30.27.36; auth_basic 'Restricted'; auth_basic_user_file /etc/nginx/.htpasswd; deny all; } ... } location /b { ... } From nginx-forum at forum.nginx.org Tue Jun 18 15:19:26 2019 From: nginx-forum at forum.nginx.org (alweiss) Date: Tue, 18 Jun 2019 11:19:26 -0400 Subject: Efficient CRL checking at Nginx In-Reply-To: <20170307133617.GI23126@mdounin.ru> References: <20170307133617.GI23126@mdounin.ru> Message-ID: <16098ad419229be1dd1e693812e8aedb.NginxMailingListEnglish@forum.nginx.org> Hi NGINX team, do we have sample script or somebody that already did the auto retrieval script of the CRL on a regular basis ? Before re-inventing the Wheel, i was wondering if something exists. My idea was to wget the file, swap the file, run nginx -t If 0 = we reload nginx if >0 = we swap back the crl file, nginx -t etc ? 
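Something along these lines, as an untested sketch (the CRL URL and file paths are made up, and the downloaded file is assumed to already be PEM encoded):

    #!/bin/sh
    CRL_URL="https://ca.example.com/ca.crl.pem"   # placeholder
    CRL="/etc/nginx/ssl/ca.crl"                   # the file referenced by ssl_crl
    TMP="$(mktemp)" || exit 1

    # fetch the new CRL; keep the current one if the download fails
    wget -q -O "$TMP" "$CRL_URL" || { rm -f "$TMP"; exit 1; }

    cp -p "$CRL" "$CRL.bak"
    mv "$TMP" "$CRL"

    if nginx -t; then
        nginx -s reload
    else
        # the new CRL breaks the configuration, roll back
        mv "$CRL.bak" "$CRL"
        nginx -t && nginx -s reload
    fi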
This could be croned Alex Posted at Nginx Forum: https://forum.nginx.org/read.php?2,255509,284587#msg-284587 From francis at daoine.org Tue Jun 18 15:26:22 2019 From: francis at daoine.org (Francis Daly) Date: Tue, 18 Jun 2019 16:26:22 +0100 Subject: Getting 302 Response In-Reply-To: References: Message-ID: <20190618152622.vynhdfdcyn7ok7lx@daoine.org> On Mon, Jun 17, 2019 at 11:24:17PM +0000, Vivek Solanki wrote: Hi there, > location /media { > rewrite ?/media(.*) /$1 break; > proxy_pass $upstream_endpoint/media; > } > Requests will come like https://abc.example.com/media/movie/bollywood/action/wallpapar.... > > Please help me out in setting up proper rewrite rule. Why do you want a rewrite rule? It is not clear to me what you are trying to achieve. Can you give one example of a request that you will make of nginx, and the matching request that you want nginx to make of the upstream server? Perhaps that will make it clearer what configuration is appropriate. Cheers, f -- Francis Daly francis at daoine.org From suleman.butt at bayer.com Tue Jun 18 15:33:32 2019 From: suleman.butt at bayer.com (Suleman Butt) Date: Tue, 18 Jun 2019 15:33:32 +0000 Subject: Node app inside nginx on K8s does not work Message-ID: <4b92ba1b8800434094fb3c6150c02098@BYEX08.de.bayer.cnb> Hi All, I have setup this node app in my K8s cluster: https://github.com/dbcls/sparql-proxy It works fine there, but without nginx. When i put the container app inside Nginx, the application just does not start. Here is how my docker file looks like: ################# # Dockerfile for https://github.com/dbcls/sparql-proxy # # Usage example: # # $ docker run -e PORT=3000 -e SPARQL_BACKEND=https://integbio.jp/rdf/ddbj/sparql -e ADMIN_USER=admin -e ADMIN_PASSWORD=password -e CACHE_STORE=file -e CACHE_STORE_PATH=/opt/cache -e COMPRESSOR=snappy -e MAX_LIMIT=10000 -e JOB_TIMEOUT=300000 -e MAX_CONCURRENCY=1 -v `pwd`/files:/app/files -d -p 80:3000 -t sparql-proxy FROM node:10.15 as build-phase RUN useradd --create-home app RUN install --owner app --group app --directory /app USER app WORKDIR /app RUN git clone https://github.com/dbcls/sparql-proxy.git . RUN npm install RUN npm audit fix --force #COPY . . CMD npm start FROM nginxinc/nginx-unprivileged:1.16-alpine WORKDIR /usr/share/nginx/html COPY ./nginx.conf /etc/nginx/ COPY --from=build-phase /app . EXPOSE 8080 and here is how my nginx.conf: ########### ##1 Change to existing user "nginx" #user nginx; worker_processes 1; events { worker_connections 1024; } http { client_body_temp_path /var/cache/nginx/ 1 2; proxy_temp_path /var/cache/nginx/ 1 2; fastcgi_temp_path /var/cache/nginx/ 1 2; uwsgi_temp_path /var/cache/nginx/ 1 2; scgi_temp_path /var/cache/nginx/ 1 2; # Turn off the bloody buffering to temp files proxy_buffering off; server { listen 8080; # server_name _; location ~ ^/proxy/(.*)$ { proxy_pass http://localhost:3000/$1$is_args$args; proxy_redirect / /proxy/; proxy_cookie_path / /proxy/; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; } location /health { return 200; } location / { try_files $uri $uri/ /index.html; } } } In my deployment.yaml, I am using containerPort: 8080. This is how inside the container file structure: [cid:image001.png at 01D525FB.F2988B00] I just get this in the browser: [cid:image002.png at 01D525FB.F2988B00] Any suggestion what is wrong in my docker file or nginx.conf? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 69274 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 30267 bytes Desc: image002.png URL: From gfrankliu at gmail.com Tue Jun 18 17:54:06 2019 From: gfrankliu at gmail.com (Frank Liu) Date: Tue, 18 Jun 2019 10:54:06 -0700 Subject: error_page not honored Message-ID: I setup my own error_page for 400 but it doesn't seem to be honored. The default page still is returned when client failed to provide certificate. Any ideas? < HTTP/1.1 400 Bad Request < Date: Tue, 18 Jun 2019 17:50:04 GMT < Content-Type: text/html < Content-Length: 230 < Connection: close < 400 No required SSL certificate was sent

400 Bad Request

No required SSL certificate was sent

nginx
-------------- next part -------------- An HTML attachment was scrubbed... URL: From zeev at initech.co.il Tue Jun 18 18:17:29 2019 From: zeev at initech.co.il (Zeev Tarantov) Date: Tue, 18 Jun 2019 21:17:29 +0300 Subject: packages built for Ubuntu 18.04 Message-ID: The openssl package for Ubuntu 18.04 (bionic) was recently upgraded to openssl 1.1.1 with TLS 1.3 support, but the nginx binary provided in the apt package repository http://nginx.org/packages/ubuntu was compiled with openssl 1.1.0 and does not support TLS 1.3 even when system openssl is 1.1.1. (The above is my understanding of why it doesn't support TLS 1.3, for example from this post https://mailman.nginx.org/pipermail/nginx/2019-January/057402.html) Can the 1.6.0 package in the repo for Ubuntu 18.04 be rebuilt with TLS 1.3 support? Or at least, can we make sure 1.6.1 support TLS 1.3, when it is released? -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Jun 18 19:49:22 2019 From: francis at daoine.org (Francis Daly) Date: Tue, 18 Jun 2019 20:49:22 +0100 Subject: Node app inside nginx on K8s does not work In-Reply-To: <4b92ba1b8800434094fb3c6150c02098@BYEX08.de.bayer.cnb> References: <4b92ba1b8800434094fb3c6150c02098@BYEX08.de.bayer.cnb> Message-ID: <20190618194922.e2iuh3w226wkajxe@daoine.org> On Tue, Jun 18, 2019 at 03:33:32PM +0000, Suleman Butt wrote: Hi there, > location ~ ^/proxy/(.*)$ { > proxy_pass http://localhost:3000/$1$is_args$args; > I just get this in the browser: > > [cid:image002.png at 01D525FB.F2988B00] > > Any suggestion what is wrong in my docker file or nginx.conf? The picture suggests that you requested the url "/proxy". The location will only handle urls that start with "/proxy/". Perhaps add "location = /proxy { return 301 /proxy/; }" Or perhaps go to the url that ends with the /. f -- Francis Daly francis at daoine.org From jeff.dyke at gmail.com Tue Jun 18 19:54:51 2019 From: jeff.dyke at gmail.com (Jeff Dyke) Date: Tue, 18 Jun 2019 15:54:51 -0400 Subject: packages built for Ubuntu 18.04 In-Reply-To: References: Message-ID: Given what that post states and since openssl 1.1.1 hit 18.04 the other day, i'd assume the next build would be based off of 1.1.1? While i use nginx, i terminate SSL at HAProxy, and that is what occurred last week. On Tue, Jun 18, 2019 at 2:17 PM Zeev Tarantov wrote: > The openssl package for Ubuntu 18.04 (bionic) was recently upgraded to > openssl 1.1.1 with TLS 1.3 support, but the nginx binary provided in the > apt package repository http://nginx.org/packages/ubuntu was compiled with > openssl 1.1.0 and does not support TLS 1.3 even when system openssl is > 1.1.1. > > (The above is my understanding of why it doesn't support TLS 1.3, for > example from this post > https://mailman.nginx.org/pipermail/nginx/2019-January/057402.html) > > Can the 1.6.0 package in the repo for Ubuntu 18.04 be rebuilt with TLS 1.3 > support? Or at least, can we make sure 1.6.1 support TLS 1.3, when it is > released? > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From 201904-nginx at jslf.app Wed Jun 19 03:02:53 2019 From: 201904-nginx at jslf.app (Patrick) Date: Wed, 19 Jun 2019 11:02:53 +0800 Subject: limit_except - require trusted ip AND auth vs. 
ip OR auth In-Reply-To: References: Message-ID: <20190619030253.GA569@haller.ws> On 2019-06-18 16:41, Matthias M?ller wrote: > 1) Permit POST, PUT if the request matches a trusted IP address OR > Basic auth credentials (either-or) Something like this will work: map $remote_addr $is_admin { 1.2.3.4 1; default 0; } map $is_admin$request_method $admin_required { "GET" 0; "HEAD" 0; "OPTIONS" 0; "~1.*" 0; default 1; } server { listen 80; server_name localhost; access_log /var/log/nginx/access.log combined; location @loc_A { root /srv/www; try_files $uri =404; } location @loc_A_auth { auth_basic 'Restricted'; auth_basic_user_file /etc/nginx/htpasswd; try_files /NO-SUCH-FILE @loc_A; } location /a { recursive_error_pages on; error_page 598 = @loc_A; error_page 599 = @loc_A_auth; if ( $admin_required ) { return 599; } return 598; } } From 201904-nginx at jslf.app Wed Jun 19 03:12:57 2019 From: 201904-nginx at jslf.app (Patrick) Date: Wed, 19 Jun 2019 11:12:57 +0800 Subject: limit_except - require trusted ip AND auth vs. ip OR auth In-Reply-To: <20190619030253.GA569@haller.ws> References: <20190619030253.GA569@haller.ws> Message-ID: <20190619031257.GA8407@haller.ws> Forgot to update the second map; it should be: map $is_admin$request_method $admin_required { "0GET" 0; "0HEAD" 0; "0OPTIONS" 0; "~1.*" 0; default 1; } Patrick On 2019-06-19 11:02, Patrick wrote: > On 2019-06-18 16:41, Matthias M?ller wrote: > > 1) Permit POST, PUT if the request matches a trusted IP address OR > > Basic auth credentials (either-or) > > Something like this will work: > > map $remote_addr $is_admin { > 1.2.3.4 1; > default 0; > } > > map $is_admin$request_method $admin_required { > "GET" 0; > "HEAD" 0; > "OPTIONS" 0; > "~1.*" 0; > default 1; > } > > server { > listen 80; > server_name localhost; > access_log /var/log/nginx/access.log combined; > > location @loc_A { > root /srv/www; > try_files $uri =404; > } > > location @loc_A_auth { > auth_basic 'Restricted'; > auth_basic_user_file /etc/nginx/htpasswd; > try_files /NO-SUCH-FILE @loc_A; > } > > location /a { > recursive_error_pages on; > error_page 598 = @loc_A; > error_page 599 = @loc_A_auth; > if ( $admin_required ) { > return 599; > } > > return 598; > } > } > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Wed Jun 19 12:17:08 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 19 Jun 2019 15:17:08 +0300 Subject: error_page not honored In-Reply-To: References: Message-ID: <20190619121708.GJ1877@mdounin.ru> Hello! On Tue, Jun 18, 2019 at 10:54:06AM -0700, Frank Liu wrote: > I setup my own error_page for 400 but it doesn't seem to be honored. The > default page still is returned when client failed to provide certificate. > Any ideas? Quoting docs (http://nginx.org/en/docs/http/ngx_http_ssl_module.html#errors): : The ngx_http_ssl_module module supports several non-standard : error codes that can be used for redirects using the error_page : directive: : : 495 : an error has occurred during the client certificate verification; : 496 : a client has not presented the required certificate; : 497 : a regular request has been sent to the HTTPS port. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Wed Jun 19 12:33:28 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 19 Jun 2019 15:33:28 +0300 Subject: limit_except - require trusted ip AND auth vs. 
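For the original question that means pointing error_page at those codes rather than at 400. An illustrative sketch (file names and the returned status are arbitrary):

    server {
        listen 443 ssl;

        ssl_verify_client on;
        ssl_client_certificate /etc/nginx/ssl/ca.pem;

        # serve our own page when the client certificate is missing
        # or fails verification
        error_page 495 496 = 400 /cert-error.html;

        location = /cert-error.html {
            internal;
            root /usr/share/nginx/html;
        }
    }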
ip OR auth In-Reply-To: References: Message-ID: <20190619123328.GK1877@mdounin.ru> Hello! On Tue, Jun 18, 2019 at 04:41:51PM +0200, Matthias M?ller wrote: > I would like to constrain HTTP access (PUT, POST) to an NGINX server > for specific locations. > > There are two cases: > > 1) Permit POST, PUT if the request matches a trusted IP address OR > Basic auth credentials (either-or) > 2) Permit POST, PUT if the request matches a trusted IP address AND > Basic auth credentials (must match both) > > > The configuration for (2) is appended. But how can I achieve (1)? It > seems that "satisfy any" cannot be included with "limit_except". While the "satisfy" directive cannot be used in limit_except blocks, the value set in the enclosing location still applies. So, you can do something like this: location /b { satisfy any; limit_except GET { allow 127.0.0.0/8; auth_basic "closed"; auth_basic_user_file .htpasswd; } ... } This will allow request from specified IP addresses or with appropriate authentication. -- Maxim Dounin http://mdounin.ru/ From francesco.giacomini at cnaf.infn.it Wed Jun 19 13:00:29 2019 From: francesco.giacomini at cnaf.infn.it (Francesco Giacomini) Date: Wed, 19 Jun 2019 15:00:29 +0200 Subject: Efficient CRL checking at Nginx In-Reply-To: <16098ad419229be1dd1e693812e8aedb.NginxMailingListEnglish@forum.nginx.org> References: <20170307133617.GI23126@mdounin.ru> <16098ad419229be1dd1e693812e8aedb.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190619130029.GA23072@imap.cnaf.infn.it> On Tue, Jun 18, 2019 at 11:19:26AM -0400, alweiss wrote: > Hi NGINX team, do we have sample script or somebody that already did the > auto retrieval script of the CRL on a regular basis ? > Before re-inventing the Wheel, i was wondering if something exists. Something like https://wiki.nikhef.nl/grid/FetchCRL3? F. From roger at netskrt.io Wed Jun 19 17:39:45 2019 From: roger at netskrt.io (Roger Fischer) Date: Wed, 19 Jun 2019 10:39:45 -0700 Subject: proxy_ignore_client_abort with cache Message-ID: <25472082-290A-4279-A4DF-AA56F6BC55AA@netskrt.io> Hello, I am using NGINX (1.17.0) as a reverse proxy with cache. I want the cache to be updated even when the client closes the connection before the response is delivered to the client. Will setting proxy_ignore_client_abort to on do this? Details: The client makes a HTTP range request on a large resource. NGINX determines that the resource is not in the cache and forwards the request upstream. Upstream starts delivering the resource, and NGINX starts caching the resource (in temp file). Client times out and closes the connection to NGINX. Questions: with proxy_ignore_client_abort on; Will nginx continue to download the rest of the resource from the upstream server? Will nginx move the resource from the temp file to the cache file? The discussion referenced below implies that the upstream connection is still closed when nginx fails to send the response to the client. In the case of a range request, nginx will send the response once the requested range is available, and thus before the resource is completely downloaded. Therefore, this would imply that the resource will not be cached, regardless of the value of the proxy_ignore_client_abort directive. https://forum.nginx.org/read.php?2,253026,253029#msg-253029 Thanks? Roger -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Wed Jun 19 18:08:21 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 19 Jun 2019 21:08:21 +0300 Subject: proxy_ignore_client_abort with cache In-Reply-To: <25472082-290A-4279-A4DF-AA56F6BC55AA@netskrt.io> References: <25472082-290A-4279-A4DF-AA56F6BC55AA@netskrt.io> Message-ID: <20190619180821.GL1877@mdounin.ru> Hello! On Wed, Jun 19, 2019 at 10:39:45AM -0700, Roger Fischer wrote: > I am using NGINX (1.17.0) as a reverse proxy with cache. I want > the cache to be updated even when the client closes the > connection before the response is delivered to the client. > > Will setting proxy_ignore_client_abort to on do this? When caching is enabled, nginx will ignore connection close by the client regardless of the proxy_ignore_client_abort value, much like with "proxy_ignore_client_abort on". Additionally, it will ignore errors when sending the response to the client. [...] > The discussion referenced below implies that the upstream > connection is still closed when nginx fails to send the response > to the client. Yes, with "proxy_ignore_client_abort on;" (and assuming proxy_store/proxy_cache is not enabled). > In the case of a range request, nginx will send the response > once the requested range is available, and thus before the > resource is completely downloaded. > Therefore, this would imply that the resource will not be > cached, regardless of the value of the proxy_ignore_client_abort > directive. > https://forum.nginx.org/read.php?2,253026,253029#msg-253029 You seems to assume that sending the range once it is available implies that nginx will fail to send the response. This is not true - the range filter simply skips unneeded data, and does not generate any errors. Either way, this is completely unrelated to caching - which explicitly ignores any errors even if they happen. -- Maxim Dounin http://mdounin.ru/ From sebastian.quintero at thalesgroup.com Thu Jun 20 00:25:52 2019 From: sebastian.quintero at thalesgroup.com (Quintero Sebastian) Date: Thu, 20 Jun 2019 00:25:52 +0000 Subject: Problem loading ssl engine In-Reply-To: References: Message-ID: Hi All. I am trying to load an ssl engine in windows but for some reason it looks like it is trying to load it from some weird path. I don't even have a Z drive. My engine lib is in C:\cygwin\usr\local\ssl\lib\engines-1_1\gem.dll When I execute nginx I get: C:\nginx>nginx.exe nginx: [emerg] ENGINE_by_id("gem") failed (SSL: error:25078067:DSO support routines:win32_load:could not load the shared library:filename(Z:\nginx\nginx\objs.msvc8\lib\openssl-1.1.1b\openssl\lib\engines-1_1\gem.dll) error:25070067:DSO support routines:DSO_load:could not load the shared library error:260B6084:engine routines:dynamic_load:dso not found error:2 606A074:engine routines:ENGINE_by_id:no such engine:id=gem) ________________________________ This message and any attachments are intended solely for the addressees and may contain confidential information. Any unauthorized use or disclosure, either whole or partial, is prohibited. E-mails are susceptible to alteration. Our company shall not be liable for the message if altered, changed or falsified. If you are not the intended recipient of this message, please delete it and notify the sender. Although all reasonable efforts have been made to keep this transmission free from viruses, the sender will not be liable for damages caused by a transmitted virus. -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From softwareinfojam at gmail.com Thu Jun 20 01:05:13 2019
From: softwareinfojam at gmail.com (Peter Fraser)
Date: Wed, 19 Jun 2019 20:05:13 -0500
Subject: Using GeoIP2
Message-ID: <5d0adbc8.1c69fb81.18dbf.8c24@mx.google.com>

Hi All,
I had GeoIP working on nginx 1.14.x. I upgraded to nginx 1.16.x and the whole thing broke, so I decided to just upgrade to GeoIP2. I have the following below in nginx.conf, which I saw on the nginx page.

load_module "/usr/local/libexec/nginx/ngx_http_geoip2_module.so";
load_module "/usr/local/libexec/nginx/ngx_http_headers_more_filter_module.so";

http {
    geoip2 /usr/local/etc/nginx/GeoIP2/GeoIP2-Country.mmdb {
        auto_reload 5m;
        $geoip2_metadata_country_build metadata build_epoch;
        $geoip2_data_country_code default=US source=$variable_with_ip country iso_code;
        $geoip2_data_country_name country names en;
    }
. . .

I am realizing I don't fully understand what all this does, in particular the part source=$variable_with_ip country iso_code. I am trying to understand what should go there. Thanks for any help.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From aquilinux at gmail.com Thu Jun 20 15:05:57 2019
From: aquilinux at gmail.com (aquilinux)
Date: Thu, 20 Jun 2019 17:05:57 +0200
Subject: Cannot strip QS in rewrite
Message-ID:

Hi guys, I've always used ? to strip QS in rewrites but I cannot get past this odd issue I'm having:

URL SOURCE:
> https://www.example.co.uk/ambassadors?test=1

REWRITE:
> rewrite (?i)^/ambassadors$
> https://www.example.com/uk-en/experience/ambassadors/? permanent;

OR EVEN:
> location ~* ^/ambassadors$ {
> rewrite (.*) https://www.example.com/uk-en/experience/ambassadors/?
> permanent; > } RESULT WITH REWRITE: > [~]> curl -kIL https://www.example.co.uk/ambassadors?test=1 > HTTP/2 301 > date: Thu, 20 Jun 2019 14:44:21 GMT > content-type: text/html > location: https://www.example.com/uk-en/experience/ambassadors/?test=1 > x-who: SVAORMG2V01 RESULT WITHOUT REWRITE: > [~]> curl -kIL https://www.example.co.uk/ambassadors?test=1 > HTTP/2 404 > date: Thu, 20 Jun 2019 14:55:45 GMT > content-type: text/html Am i missing something i cannot see? Thanks for helping! -- "Madness, like small fish, runs in hosts, in vast numbers of instances." Nessuno mi pettina bene come il vento. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Jun 20 15:21:41 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 20 Jun 2019 18:21:41 +0300 Subject: Cannot strip QS in rewrite In-Reply-To: References: Message-ID: <20190620152141.GM1877@mdounin.ru> Hello! On Thu, Jun 20, 2019 at 05:05:57PM +0200, aquilinux wrote: > Hi guys, i've always used ? to strip QS in rewrites but i cannot get past > this odd issue i'm having: > > URL SOURCE: > > > https://www.example.co.uk/ambassadors?test=1 > > > REWRITE: > > > rewrite (?i)^/ambassadors$ > > https://www.example.com/uk-en/experience/ambassadors/? permanent; > > > OR EVEN: > > > location ~* ^/ambassadors$ { > > rewrite (.*) https://www.example.com/uk-en/experience/ambassadors/? > > permanent; > > } > > > RESULT WITH REWRITE: > > > [~]> curl -kIL https://www.example.co.uk/ambassadors?test=1 > > HTTP/2 301 > > date: Thu, 20 Jun 2019 14:44:21 GMT > > content-type: text/html > > location: https://www.example.com/uk-en/experience/ambassadors/?test=1 > > x-who: SVAORMG2V01 An nginx response is expected to contain "server: nginx/", for example: $ curl -kI https://127.0.0.1:8443/ambassadors?test=1 HTTP/2 301 server: nginx/1.17.1 date: Thu, 20 Jun 2019 15:11:51 GMT content-type: text/html content-length: 169 location: https://www.example.com/uk-en/experience/ambassadors/ Are you sure the response you've provided is from nginx? (Also, as you can see from the above example response, the rewrite is working fine.) You may also want to drop "-L" from your curl options to make sure you are looking at a particular response, and not a chain of redirects. If the above doesn't help, you may consider using "rewrite_log on;" to find out how your rewrites are processed (see http://nginx.org/r/rewrite_log for details). -- Maxim Dounin http://mdounin.ru/ From matthias_mueller at tu-dresden.de Fri Jun 21 07:28:48 2019 From: matthias_mueller at tu-dresden.de (Matthias =?ISO-8859-1?Q?M=FCller?=) Date: Fri, 21 Jun 2019 09:28:48 +0200 Subject: limit_except - require trusted ip AND auth vs. ip OR auth In-Reply-To: <20190619123328.GK1877@mdounin.ru> References: <20190619123328.GK1877@mdounin.ru> Message-ID: Hi Maxim, works like a charm! Thanks, Matthias Am Mittwoch, den 19.06.2019, 15:33 +0300 schrieb Maxim Dounin: > Hello! > > On Tue, Jun 18, 2019 at 04:41:51PM +0200, Matthias M?ller wrote: > > > I would like to constrain HTTP access (PUT, POST) to an NGINX > > server > > for specific locations. > > > > There are two cases: > > > > 1) Permit POST, PUT if the request matches a trusted IP address OR > > Basic auth credentials (either-or) > > 2) Permit POST, PUT if the request matches a trusted IP address AND > > Basic auth credentials (must match both) > > > > > > The configuration for (2) is appended. But how can I achieve (1)? > > It > > seems that "satisfy any" cannot be included with "limit_except". 
> > While the "satisfy" directive cannot be used in limit_except > blocks, the value set in the enclosing location still applies. > So, you can do something like this: > > location /b { > satisfy any; > > limit_except GET { > allow 127.0.0.0/8; > auth_basic "closed"; > auth_basic_user_file .htpasswd; > } > > ... > } > > This will allow request from specified IP addresses or with > appropriate authentication. > From francis at daoine.org Fri Jun 21 07:46:04 2019 From: francis at daoine.org (Francis Daly) Date: Fri, 21 Jun 2019 08:46:04 +0100 Subject: Using GeoIP2 In-Reply-To: <5d0adbc8.1c69fb81.18dbf.8c24@mx.google.com> References: <5d0adbc8.1c69fb81.18dbf.8c24@mx.google.com> Message-ID: <20190621074604.kugb72sgkvokupzr@daoine.org> On Wed, Jun 19, 2019 at 08:05:13PM -0500, Peter Fraser wrote: Hi there, > geoip2 /usr/local/etc/nginx/GeoIP2/GeoIP2-Country.mmdb { > auto_reload 5m; > $geoip2_metadata_country_build metadata build_epoch; > $geoip2_data_country_code default=US source=$variable_with_ip country iso_code; > $geoip2_data_country_name country names en; > } > I am realizing I don?t fully understand what all this does. The part source=$variable_with_ip country iso_code. I am trying to understand, what should go there. A web search for "nginx geoip2 module" pointed me to https://www.nginx.com/products/nginx/modules/geoip2/ and to https://github.com/leev/ngx_http_geoip2_module; and the first does also point to the second. The documentation there says "If source is not specified, $remote_addr will be used to perform the lookup" (the surrounding context is probably helpful for those docs). My reading of that is that, assuming the (common) case where the IP address that you want to geo-locate is in the nginx variable $remote_addr, you can either write source=$remote_addr, or omit source= entirely. The rest of that config line is described in those docs -- very roughly as "like what the mmdblookup tool does". I hope this helps, f -- Francis Daly francis at daoine.org From aquilinux at gmail.com Fri Jun 21 10:50:44 2019 From: aquilinux at gmail.com (aquilinux) Date: Fri, 21 Jun 2019 12:50:44 +0200 Subject: Cannot strip QS in rewrite In-Reply-To: <20190620152141.GM1877@mdounin.ru> References: <20190620152141.GM1877@mdounin.ru> Message-ID: Hi Maxim, thanks for the rewrite_log hint. Actually it does rewrite as expected: 2019/06/21 12:35:37 [notice] 7495#7495: *4693835 "(?i)^/ambassadors$" > matches "/ambassadors", client: 1.1.1.1, server: www.example.co.uk, > request: "GET /ambassadors?test=1 HTTP/1.1", host: "www.example.co.uk" > 2019/06/21 12:35:37 [notice] 7495#7495: *4693835 rewritten redirect: " > https://www.example.com/uk-en/experience/ambassadors/", client: > 93.46.189.23, server: www.example.co.uk, request: "GET > /ambassadors?test=1 HTTP/1.1", host: "www.example.co.uk" but i still get the response as above. So i suspect the culprit may be varnish, saving the QS in be.req and re-adding the QS in be.resp . I see no other explanation. Thanks! On Thu, Jun 20, 2019 at 5:21 PM Maxim Dounin wrote: > Hello! > > On Thu, Jun 20, 2019 at 05:05:57PM +0200, aquilinux wrote: > > > Hi guys, i've always used ? to strip QS in rewrites but i cannot get past > > this odd issue i'm having: > > > > URL SOURCE: > > > > > https://www.example.co.uk/ambassadors?test=1 > > > > > > REWRITE: > > > > > rewrite (?i)^/ambassadors$ > > > https://www.example.com/uk-en/experience/ambassadors/? 
permanent; > > > > > > OR EVEN: > > > > > location ~* ^/ambassadors$ { > > > rewrite (.*) https://www.example.com/uk-en/experience/ambassadors/? > > > permanent; > > > } > > > > > > RESULT WITH REWRITE: > > > > > [~]> curl -kIL https://www.example.co.uk/ambassadors?test=1 > > > HTTP/2 301 > > > date: Thu, 20 Jun 2019 14:44:21 GMT > > > content-type: text/html > > > location: https://www.example.com/uk-en/experience/ambassadors/?test=1 > > > x-who: SVAORMG2V01 > > An nginx response is expected to contain "server: > nginx/", for example: > > $ curl -kI https://127.0.0.1:8443/ambassadors?test=1 > HTTP/2 301 > server: nginx/1.17.1 > date: Thu, 20 Jun 2019 15:11:51 GMT > content-type: text/html > content-length: 169 > location: https://www.example.com/uk-en/experience/ambassadors/ > > Are you sure the response you've provided is from nginx? > > (Also, as you can see from the above example response, the rewrite > is working fine.) > > You may also want to drop "-L" from your curl options to make sure > you are looking at a particular response, and not a chain of > redirects. > > If the above doesn't help, you may consider using "rewrite_log on;" > to find out how your rewrites are processed (see > http://nginx.org/r/rewrite_log for details). > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- "Madness, like small fish, runs in hosts, in vast numbers of instances." Nessuno mi pettina bene come il vento. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mailinglist at unix-solution.de Fri Jun 21 13:46:53 2019 From: mailinglist at unix-solution.de (basti) Date: Fri, 21 Jun 2019 15:46:53 +0200 Subject: Enable proxy_protocol on https Message-ID: <0f1317fc-197a-405b-e563-743e6795f35a@unix-solution.de> Hello, I have nginx 1.14.2 on debian buster and need to enable proxy_protocol. (https://docs.nginx.com/nginx/admin-guide/load-balancer/using-proxy-protocol/#listen) When I enable it on http all is fine. When i try to enable it on https no connection can be established. No syntax error and no log entry. listen 80 proxy_protocol; <-- work listen 443 proxy_protocol; <-- does not work best regards From mdounin at mdounin.ru Fri Jun 21 14:05:03 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 21 Jun 2019 17:05:03 +0300 Subject: Enable proxy_protocol on https In-Reply-To: <0f1317fc-197a-405b-e563-743e6795f35a@unix-solution.de> References: <0f1317fc-197a-405b-e563-743e6795f35a@unix-solution.de> Message-ID: <20190621140503.GQ1877@mdounin.ru> Hello! On Fri, Jun 21, 2019 at 03:46:53PM +0200, basti wrote: > I have nginx 1.14.2 on debian buster and need to enable proxy_protocol. > (https://docs.nginx.com/nginx/admin-guide/load-balancer/using-proxy-protocol/#listen) > > When I enable it on http all is fine. When i try to enable it on https > no connection can be established. No syntax error and no log entry. > > listen 80 proxy_protocol; <-- work > > listen 443 proxy_protocol; <-- does not work The PROXY protocol impies that there should be a PROXY protocol header before the actual SSL handshake. As such, it might not be trivial to establish an SSL connection to such a socket unless you are doing so via a proxy which adds PROXY protocol headers. 
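For example, with another nginx instance in front (using the stream module) the setup could look roughly like this; the addresses are placeholders:

    # front proxy -- this is the side that actually speaks the PROXY protocol
    stream {
        server {
            listen 8443;
            proxy_pass 192.0.2.10:443;
            proxy_protocol on;   # prepend the PROXY header before the TLS bytes
        }
    }

    # backend from your question
    server {
        listen 443 ssl proxy_protocol;
        ...
    }

Connecting to port 443 directly with a browser or openssl s_client will still fail, because nothing inserts the header in that case.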
-- Maxim Dounin http://mdounin.ru/ From mailinglist at unix-solution.de Fri Jun 21 14:23:01 2019 From: mailinglist at unix-solution.de (basti) Date: Fri, 21 Jun 2019 16:23:01 +0200 Subject: Enable proxy_protocol on https In-Reply-To: <20190621140503.GQ1877@mdounin.ru> References: <0f1317fc-197a-405b-e563-743e6795f35a@unix-solution.de> <20190621140503.GQ1877@mdounin.ru> Message-ID: Hello Maxim, thanks a lot, the proxy send the header only on http, my mistake sorry. On 21.06.19 16:05, Maxim Dounin wrote: > Hello! > > On Fri, Jun 21, 2019 at 03:46:53PM +0200, basti wrote: > >> I have nginx 1.14.2 on debian buster and need to enable proxy_protocol. >> (https://docs.nginx.com/nginx/admin-guide/load-balancer/using-proxy-protocol/#listen) >> >> When I enable it on http all is fine. When i try to enable it on https >> no connection can be established. No syntax error and no log entry. >> >> listen 80 proxy_protocol; <-- work >> >> listen 443 proxy_protocol; <-- does not work > > The PROXY protocol impies that there should be a PROXY protocol > header before the actual SSL handshake. As such, it might not be > trivial to establish an SSL connection to such a socket unless > you are doing so via a proxy which adds PROXY protocol headers. > From nginx-forum at forum.nginx.org Sat Jun 22 20:01:40 2019 From: nginx-forum at forum.nginx.org (BeyondEvil) Date: Sat, 22 Jun 2019 16:01:40 -0400 Subject: SSL_ERROR_BAD_CERT_DOMAIN with multiple domains Message-ID: <2c617f69e39787921ac547ff8a646e8e.NginxMailingListEnglish@forum.nginx.org> I have two domains: (1) myvery.owndomain.com (2) domain.synology.me (1) is under my control (I own the domain) and I manage the certs (Let's Encrypt). If I visit "https://myvery.owndomain.com" I'm greeted by the "Welcome to Nginx!" landing page. (I use nginx as a reverse proxy only.) (2) is a DDNS that Synology manages and it also has certs by LE (managed by Synology). I have a Mac Mini running the "main" Nginx server and a bunch of other services. (1) points to theses services on the Mini. The IP of the mini is 192.168.13.10. (2) points to a NAS that has it's own Nginx to handle, among other things, the LE certs. This machine runs on IP 192.168.11.10. Without any settings in the "main" nginx, I can't use (2) because in my router (EdgeRouter X) both :80 and :443 point to the Mini (192.168.13.10). So I need to add two new server blocks in my config so that: If I visit "http://domain.synology.me" (port 80) that redirects me to "http://domain.synology.me:5000" and If I visit "https://domain.synology.me" (port 443) that redirects me to "https://domain.synology.me:5001" I've managed to get part of the way. But I'm getting SSL errors like for instance: "SSL_read() failed (SSL: error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad certificate:SSL alert number 42) while waiting for request, client: 192.168.13.1, server: 0.0.0.0:443" What am I doing wrong? Here's my current config: https://gist.github.com/BeyondEvil/e246d1725438989815272ac96fd1a767 Thanks! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284630,284630#msg-284630 From nginx-forum at forum.nginx.org Sun Jun 23 07:09:21 2019 From: nginx-forum at forum.nginx.org (Bartek) Date: Sun, 23 Jun 2019 03:09:21 -0400 Subject: Nginx/Unit sendfile failed for upstream Message-ID: <8bffec3f4e1a9b6f0ae3a8c41ace10ee.NginxMailingListEnglish@forum.nginx.org> Hello I have nginx (1.15.9) + unit(1.9) + php(7.2.15). 
When uploading files larger then 8Mb Nginx was failing with 413 Payload too large, so in nginx.conf, http {} section I added "client_max_body_size=0" - it worked for nginx, but now Unit is rejecting uploads. Everything works fine for uploads smaller then ~8MB. When I go above that value, browser will fail with 502 Bad Gateway, Unit will log nothing and nginx/error.log will have a following line: 2019/06/23 08:38:14 [error] 20682#20682: *3 sendfile() failed (32: Broken pipe) while sending request to upstream, client: 127.0.0.1, server: wp1.wrf, request: "POST /uploader.php HTTP/1.1", upstream: "http://127.0.0.1:8301/uploader.php", host: "wp1.wrf", referrer: "http://wp1.wrf/zamowienie,dodaj-zdjecia.html" PHP limits: post_max_size=40M, upload_max_filesize=20M verified with phpinfo - but uploader.php is never executed in that case (it logs every request in separate file, no log files are created (not even empty)). In Unit docs (http://unit.nginx.org/configuration/#settings) there is something called "max_body_size", however it is impossilbe to adjust that value. I've made PUT / POST requests to /config/max_body_size, /config/settings/http/max_body_size and many other variations, all without success, e.g.: curl -X PUT -d '"30M"' --unix-socket /var/run/control.unit.sock http://localhost/config/max_body_size { "error": "Invalid configuration.", "detail": "Unknown parameter \"max_body_size\"." } Any help would be appreciated. Cheers, Bartek Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284631,284631#msg-284631 From osa at freebsd.org.ru Sun Jun 23 13:13:12 2019 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Sun, 23 Jun 2019 16:13:12 +0300 Subject: Nginx/Unit sendfile failed for upstream In-Reply-To: <8bffec3f4e1a9b6f0ae3a8c41ace10ee.NginxMailingListEnglish@forum.nginx.org> References: <8bffec3f4e1a9b6f0ae3a8c41ace10ee.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190623131312.GA26257@FreeBSD.org.ru> Hi Bartek, hope you're doing well. Next command creates max_body_size setting on http level: % sudo curl -X PUT -d '{"http": { "max_body_size": 8388608} }' --unix-socket \ /path/to/control.unit.sock http://localhost/config/settings Also, it's possible to reconfigure that setting by sending next request: % sudo curl -X PUT -d '8388609' --unix-socket \ /path/to/control.unit.sock http://localhost/config/settings/http/max_body_size -- Sergey Osokin On Sun, Jun 23, 2019 at 03:09:21AM -0400, Bartek wrote: > Hello > > I have nginx (1.15.9) + unit(1.9) + php(7.2.15). > > When uploading files larger then 8Mb Nginx was failing with 413 Payload too > large, so in nginx.conf, http {} section I added "client_max_body_size=0" - > it worked for nginx, but now Unit is rejecting uploads. > > Everything works fine for uploads smaller then ~8MB. When I go above that > value, browser will fail with 502 Bad Gateway, Unit will log nothing and > nginx/error.log will have a following line: > > 2019/06/23 08:38:14 [error] 20682#20682: *3 sendfile() failed (32: Broken > pipe) while sending request to upstream, client: 127.0.0.1, server: wp1.wrf, > request: "POST /uploader.php HTTP/1.1", upstream: > "http://127.0.0.1:8301/uploader.php", host: "wp1.wrf", referrer: > "http://wp1.wrf/zamowienie,dodaj-zdjecia.html" > > PHP limits: post_max_size=40M, upload_max_filesize=20M verified with phpinfo > - but uploader.php is never executed in that case (it logs every request in > separate file, no log files are created (not even empty)). 
> > In Unit docs (http://unit.nginx.org/configuration/#settings) there is > something called "max_body_size", however it is impossilbe to adjust that > value. I've made PUT / POST requests to /config/max_body_size, > /config/settings/http/max_body_size and many other variations, all without > success, e.g.: > > curl -X PUT -d '"30M"' --unix-socket /var/run/control.unit.sock > http://localhost/config/max_body_size > { > "error": "Invalid configuration.", > "detail": "Unknown parameter \"max_body_size\"." > } > > Any help would be appreciated. > Cheers, > Bartek > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284631,284631#msg-284631 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Mon Jun 24 13:48:46 2019 From: nginx-forum at forum.nginx.org (Bartek) Date: Mon, 24 Jun 2019 09:48:46 -0400 Subject: Nginx/Unit sendfile failed for upstream In-Reply-To: <20190623131312.GA26257@FreeBSD.org.ru> References: <20190623131312.GA26257@FreeBSD.org.ru> Message-ID: Sergey A. Osokin Wrote: ------------------------------------------------------- > % sudo curl -X PUT -d '{"http": { "max_body_size": 8388608} }' > --unix-socket \ > /path/to/control.unit.sock http://localhost/config/settings > OMG, it can't be that simple. So my doom was caused by "20M" instead of "20000000"... Thanks a bunch Sergey and have a nice day. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284631,284636#msg-284636 From johannes.gehrs at moia.io Mon Jun 24 14:58:48 2019 From: johannes.gehrs at moia.io (Johannes Gehrs) Date: Mon, 24 Jun 2019 16:58:48 +0200 Subject: Accepting Multiple TLS Client Certificates Message-ID: Hi, as per our understanding one can provide a file with multiple certificates as "ssl_client_certificate". Nginx would then accept any one of the certificates. However, when we actually provided multiple certificates we found that only the first one in the list was accepted. In our test case we provided a chain of two certificates, a root cert and the client certs signed by this CA. We tried both, concatenating the files like this: "user1 user2 ca" and like this "user1 ca user2 ca". In all cases just the first certificate was accepted. Are we misunderstanding the expected behaviour of nginx, or is this a bug, or are we maybe doing something wrong? I will mention that we are using nginx in the nginx-ingress Kubernetes package. We have tested with a version which uses nginx 1.15.10. Thank you! Johannes Gehrs -------------- next part -------------- An HTML attachment was scrubbed... URL: From osa at freebsd.org.ru Mon Jun 24 23:42:43 2019 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Tue, 25 Jun 2019 02:42:43 +0300 Subject: Nginx/Unit sendfile failed for upstream In-Reply-To: References: <20190623131312.GA26257@FreeBSD.org.ru> Message-ID: <20190624234242.GB26257@FreeBSD.org.ru> On Mon, Jun 24, 2019 at 09:48:46AM -0400, Bartek wrote: > Sergey A. Osokin Wrote: > ------------------------------------------------------- > > % sudo curl -X PUT -d '{"http": { "max_body_size": 8388608} }' > > --unix-socket \ > > /path/to/control.unit.sock http://localhost/config/settings > > > > OMG, it can't be that simple. So my doom was caused by "20M" instead of > "20000000"... 
In case you want to think in megabytes it's a bit tricky, but also works: % echo "20*1024*1024" | bc | xargs -J % sudo curl -X PUT -d % \ --unix-socket /var/run/unit/control.unit.sock http://localhost/config/settings/http/max_body_size { "success": "Reconfiguration done." } > Thanks a bunch Sergey and have a nice day. Enjoy! -- Sergey Osokin From mdounin at mdounin.ru Tue Jun 25 12:34:34 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 25 Jun 2019 15:34:34 +0300 Subject: nginx-1.17.1 Message-ID: <20190625123433.GU1877@mdounin.ru> Changes with nginx 1.17.1 25 Jun 2019 *) Feature: the "limit_req_dry_run" directive. *) Feature: when using the "hash" directive inside the "upstream" block an empty hash key now triggers round-robin balancing. Thanks to Niklas Keller. *) Bugfix: a segmentation fault might occur in a worker process if caching was used along with the "image_filter" directive, and errors with code 415 were redirected with the "error_page" directive; the bug had appeared in 1.11.10. *) Bugfix: a segmentation fault might occur in a worker process if embedded perl was used; the bug had appeared in 1.7.3. -- Maxim Dounin http://nginx.org/ From kworthington at gmail.com Tue Jun 25 13:34:05 2019 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 25 Jun 2019 09:34:05 -0400 Subject: [nginx-announce] nginx-1.17.1 In-Reply-To: <20190625123444.GV1877@mdounin.ru> References: <20190625123444.GV1877@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.17.1 for Windows https://kevinworthington.com/nginxwin1171 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) https://kevinworthington.com/ https://twitter.com/kworthington On Tue, Jun 25, 2019 at 8:34 AM Maxim Dounin wrote: > Changes with nginx 1.17.1 25 Jun > 2019 > > *) Feature: the "limit_req_dry_run" directive. > > *) Feature: when using the "hash" directive inside the "upstream" block > an empty hash key now triggers round-robin balancing. > Thanks to Niklas Keller. > > *) Bugfix: a segmentation fault might occur in a worker process if > caching was used along with the "image_filter" directive, and errors > with code 415 were redirected with the "error_page" directive; the > bug had appeared in 1.11.10. > > *) Bugfix: a segmentation fault might occur in a worker process if > embedded perl was used; the bug had appeared in 1.7.3. > > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xeioex at nginx.com Tue Jun 25 15:32:37 2019 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 25 Jun 2019 18:32:37 +0300 Subject: njs-0.3.3 Message-ID: <80A2788A-B8B8-4A57-BBD4-AAF02C168FD7@nginx.com> Hello, I?m glad to announce a new release of NGINX JavaScript module (njs). This release mostly focuses on stability issues in njs core after regular fuzzing tests were introduced. Notable new features: - Added ES5 property getter/setter runtime support: : > var o = {a:2}; : undefined : > Object.defineProperty(o, ?b?, {get:function(){return 2*this.a}}); o.b : 4 - Added global ?process? 
variable: : > process.pid : > process.env.HOME You can learn more about njs: - Overview and introduction: http://nginx.org/en/docs/njs/ - Presentation: https://youtu.be/Jc_L6UffFOs Feel free to try it and give us feedback on: - Github: https://github.com/nginx/njs/issues - Mailing list: http://mailman.nginx.org/mailman/listinfo/nginx-devel Changes with njs 0.3.3 25 Jun 2019 nginx modules: *) Improvement: getting of special response headers in headersOut. *) Improvement: working with unknown methods in r.subrequest(). *) Improvement: added support for null as a second argument of r.subrequest(). *) Bugfix: fixed processing empty output chain in stream body filter. Core: *) Feature: added runtime support for property getter/setter. Thanks to ??? (Hong Zhi Dao) and Artem S. Povalyukhin. *) Feature: added ?process? global object. *) Feature: writable most of built-in properties and methods. *) Feature: added generic implementation of Array.prototype.fill(). *) Bugfix: fixed integer-overflow in String.prototype.concat(). *) Bugfix: fixed setting of object properties. *) Bugfix: fixed Array.prototype.toString(). *) Bugfix: fixed Date.prototype.toJSON(). *) Bugfix: fixed overwriting ?constructor? property of built-in prototypes. *) Bugfix: fixed processing of invalid surrogate pairs in strings. *) Bugfix: fixed processing of invalid surrogate pairs in JSON strings. *) Bugfix: fixed heap-buffer-overflow in toUpperCase() and toLowerCase(). *) Bugfix: fixed escaping lone closing square brackets in RegExp() constructor. *) Bugfix: fixed String.prototype.toBytes() for ASCII strings. *) Bugfix: fixed handling zero byte characters inside RegExp pattern strings. *) Bugfix: fixed String.prototype.toBytes() for ASCII strings. *) Bugfix: fixed truth value of JSON numbers in JSON.parse(). *) Bugfix: fixed use-of-uninitialized-value in njs_string_replace_join(). *) Bugfix: fixed parseInt(?-0?). Thanks to Artem S. Povalyukhin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Jun 25 15:58:55 2019 From: nginx-forum at forum.nginx.org (vinayak.ponangi) Date: Tue, 25 Jun 2019 11:58:55 -0400 Subject: Log request at request time, not after response Message-ID: <10b9ba30237ff464458772d4f6edd506.NginxMailingListEnglish@forum.nginx.org> Hi All, I am trying to understand if it's possible to extend nginx functionality to support what I am looking for. Problem: We are trying to look for poison pill in-flight requests that would affect backend cluster stability. We currently cannot do much for the first request, but the idea is to block subsequent requests from the same user. Ideally we would have solved it at the app layer, but seems like there's a lot of work involved with that solution. So we are trying to solve it at the nginx level. But looks like nginx can only do logging after a response is received from the backend server. Is it possible to modify this behavior to log at request time without waiting for a response. IMHO this is a reasonable thing to do. I am open to alternative approaches to solve this problem. I would also like to know if there are any available modules/plugins that could be easily tweaked to emit the request log. If this was intended by design for nginx, it would be nice to understand what the reasoning behind such a decision was. Thank you very much for your time. Appreciate all the help I can get. 
Regards, Vinayak Ponangi Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284660,284660#msg-284660 From francis at daoine.org Tue Jun 25 22:27:46 2019 From: francis at daoine.org (Francis Daly) Date: Tue, 25 Jun 2019 23:27:46 +0100 Subject: Accepting Multiple TLS Client Certificates In-Reply-To: References: Message-ID: <20190625222746.g6orxrk4kztepdfl@daoine.org> On Mon, Jun 24, 2019 at 04:58:48PM +0200, Johannes Gehrs wrote: Hi there, > as per our understanding one can provide a file with multiple certificates > as "ssl_client_certificate". Nginx would then accept any one of the > certificates. http://nginx.org/r/ssl_client_certificate has slightly different words for what it does. It also refers to the "ssl_verify_client" and "ssl_trusted_certificate" directives. > In our test case we provided a chain of two certificates, a root cert and > the client certs signed by this CA. We tried both, concatenating the files > like this: "user1 user2 ca" and like this "user1 ca user2 ca". In all cases > just the first certificate was accepted. > > Are we misunderstanding the expected behaviour of nginx, or is this a bug, > or are we maybe doing something wrong? Can you provide a config that shows the problem that you report? >From your description, only the ca cert needs to be in the file; but I think that including the other certs should not break anything. Can you tell, are there the expected newlines in the file, between the certs? f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Wed Jun 26 07:35:11 2019 From: nginx-forum at forum.nginx.org (oratios) Date: Wed, 26 Jun 2019 03:35:11 -0400 Subject: Macos Mojave php-fpm restart unexpectedly Message-ID: Hello, I have just updated to Macos Mojave and PHP7.3 using nginx. All installations happened through brew (also downgraded to PHP7.2) but php-fpm restart unexpectedly when access ONLY /wp-admin route of my wordpress website. All frontend wordpress pages works fine and and info.php is working as well. On php.ini we have added xdebug using pecl. All other settings are the default ones: zend_extension="xdebug.so" [XDebug] xdebug.remote_enable=1 xdebug.remote_autostart=1 xdebug.remote_handler=dbgp xdebug.remote_mode=req xdebug.remote_host=127.0.0.1 xdebug.remote_port=9000 extension="redis.so" php-fpm log using log_level=debug when accessing /wp-admin show "exited on signal 11" and php-fpm service restarts: [25-Jun-2019 22:26:01.104274] DEBUG: pid 47, fpm_pctl_perform_idle_server_maintenance(), line 378: [pool www] currently 1 active children, 1 spare children, 2 running children. Spawning rate 1 [25-Jun-2019 22:26:01.839904] DEBUG: pid 47, fpm_got_signal(), line 75: received SIGCHLD [25-Jun-2019 22:26:01.839968] WARNING: pid 47, fpm_children_bury(), line 256: [pool www] child 980 exited on signal 11 (SIGSEGV) after 21.933941 seconds from start [25-Jun-2019 22:26:01.841578] NOTICE: pid 47, fpm_children_make(), line 425: [pool www] child 1037 started [25-Jun-2019 22:26:01.845627] DEBUG: pid 47, fpm_event_loop(), line 418: event module triggered 1 events [25-Jun-2019 22:26:02.176554] DEBUG: pid 47, fpm_pctl_perform_idle_server_maintenance(), line 378: [pool www] currently 0 active children, 2 spare children, 2 running children. 
Spawning rate 1 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284662,284662#msg-284662 From francis at daoine.org Wed Jun 26 08:26:07 2019 From: francis at daoine.org (Francis Daly) Date: Wed, 26 Jun 2019 09:26:07 +0100 Subject: SSL_ERROR_BAD_CERT_DOMAIN with multiple domains In-Reply-To: <2c617f69e39787921ac547ff8a646e8e.NginxMailingListEnglish@forum.nginx.org> References: <2c617f69e39787921ac547ff8a646e8e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190626082607.ya6yio3ezzuemdeo@daoine.org> On Sat, Jun 22, 2019 at 04:01:40PM -0400, BeyondEvil wrote: Hi there, I don't have an answer for you, but I do have some comments that may make it easier for someone else to have an answer. > So I need to add two new server blocks in my config so that: > If I visit "http://domain.synology.me" (port 80) that redirects me to > "http://domain.synology.me:5000" > and > If I visit "https://domain.synology.me" (port 443) that redirects me to > "https://domain.synology.me:5001" As I understand things: * you need one nginx listening on port 80 for http and 443 for https * you want to handle two server names (differently) I am not clear on whether you want to "redirect" or "proxy_pass" to the service on the other ports -- "redirect" would involve the client issuing a new request to https://something:5001; while "proxy_pass" would involve the client continuing to request https://something, and nginx ensuring that the response from :5001 gets to the client. Anyway... The http side should be straightforward. Two server{} blocks with different server_name directives, and "proxy_pass" or "return/rewrite" as appropriate. Does that work for you? If not, what fails? (As in: what request do you make / what response do you get / what response do you want instead / what do the logs say.) The https side may be a little more awkward -- you want to run two https services on the same ip:port. The main notes are at http://nginx.org/en/docs/http/configuring_https_servers.html. Basically -- two server{} blocks with different server_name directives, and SNI enabled in your nginx, and the correct ssl_certificate available in each server{}. > I've managed to get part of the way. But I'm getting SSL errors like for > instance: "SSL_read() failed (SSL: error:14094412:SSL > routines:ssl3_read_bytes:sslv3 alert bad certificate:SSL alert number 42) > while waiting for request, client: 192.168.13.1, server: 0.0.0.0:443" What request do you make when that error appears? Are you trying to talk to server_name#1 or server_name#2? > Here's my current config: > https://gist.github.com/BeyondEvil/e246d1725438989815272ac96fd1a767 For future-proofing reasons, it is better for this list if you include the relevant config in the mail directly. But the content on that link today seems to include one "server" with "listen 443 ssl" and no "ssl_certificate". Untested by me, but I can imagine that leading to some confusion. 
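As a very rough sketch of the https side (the certificate paths below are
placeholders; the domain.synology.me certificate would have to be exported
from the NAS to somewhere this nginx can read it), something along these
lines should let SNI pick the right certificate per name:

    server {
        listen 443 ssl;
        server_name myvery.owndomain.com;
        ssl_certificate     /etc/letsencrypt/live/myvery.owndomain.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/myvery.owndomain.com/privkey.pem;
        # ... existing locations for the services on the Mini ...
    }

    server {
        listen 443 ssl;
        server_name domain.synology.me;
        ssl_certificate     /etc/nginx/ssl/synology-fullchain.pem;   # placeholder path
        ssl_certificate_key /etc/nginx/ssl/synology-privkey.pem;     # placeholder path

        location / {
            proxy_pass https://192.168.11.10:5001;
            proxy_set_header Host $host;
        }
    }

If you want the client to be redirected rather than proxied, the location
block in the second server could instead be just
"return 301 https://domain.synology.me:5001$request_uri;".
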
Good luck with it, f -- Francis Daly francis at daoine.org From andre8525 at hotmail.com Sat Jun 29 22:49:00 2019 From: andre8525 at hotmail.com (Andrew Andonopoulos) Date: Sat, 29 Jun 2019 22:49:00 +0000 Subject: Nginx 1.17.0 doesn't change the content-type header Message-ID: Hello, I have the following config in the http: include mime.types; default_type application/octet-stream; also i have this in the location: types { application/vnd.apple.mpegurl m3u8; video/mp2t ts; } But when i send a request, i am getting these headers: Request URL: https://example.com/hls/5d134afe91b970.80939375/1024_576_1500_5d134afe91b970.80939375_00169.ts 1. Accept-Ranges: bytes 2. Access-Control-Allow-Credentials: true 3. Access-Control-Allow-Headers: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Origin,X-Auth-Token,Authorization,Accept,Client-Security-Token 4. Access-Control-Allow-Methods: OPTIONS, GET 5. Access-Control-Allow-Origin: * 6. Cache-Control: max-age=31536000 7. Connection: keep-alive 8. Content-Length: 259440 9. Content-Type: application/octet-stream 10. Date: Sat, 29 Jun 2019 22:43:57 GMT 11. ETag: "d1a1739b4444da72c0e25251e4669b45" 12. Last-Modified: Wed, 26 Jun 2019 18:08:17 GMT 13. Server: nginx/1.17.0 14. Request URL: https://example.com/hls/5d134afe91b970.80939375/playlist.m3u8 1. Accept-Ranges: bytes 2. Access-Control-Allow-Credentials: true 3. Access-Control-Allow-Headers: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Origin,X-Auth-Token,Authorization,Accept,Client-Security-Token 4. Access-Control-Allow-Methods: OPTIONS, GET 5. Access-Control-Allow-Origin: * 6. Cache-Control: max-age=31536000 7. Connection: keep-alive 8. Content-Length: 601 9. Content-Type: application/octet-stream 10. Date: Sat, 29 Jun 2019 22:37:57 GMT 11. ETag: "7ba4b759c57dbffbca650ce6a290f524" 12. Last-Modified: Wed, 26 Jun 2019 10:57:04 GMT 13. Server: nginx/1.17.0 For some reason, Nginx doesn't change the Content-Type Thanks Andrew -------------- next part -------------- An HTML attachment was scrubbed... URL:
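One thing worth checking with a configuration like the above: the types {}
and default_type directives only take effect when nginx itself chooses the
type for a file it serves from disk, and only in the location that actually
handles the request. If /hls/ is proxied from an upstream (an origin server
or an S3-style bucket, for example), the upstream's own Content-Type header
is passed through unchanged, which is one possible explanation for the
application/octet-stream seen here. A minimal sketch for the locally served
case, with the root path as a placeholder, would be:

    location /hls/ {
        root /var/www/media;                 # placeholder; wherever the segments live
        types {
            application/vnd.apple.mpegurl m3u8;
            video/mp2t                    ts;
        }
        add_header Cache-Control "max-age=31536000";
    }

For the proxied case, the cleanest fix is usually to have the upstream send
the correct Content-Type in the first place.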