From nginx-forum at forum.nginx.org Sat Feb 1 09:45:00 2020
From: nginx-forum at forum.nginx.org (MarcoI)
Date: Sat, 01 Feb 2020 04:45:00 -0500
Subject: net::ERR_CONNECTION_REFUSED . How to correctly configure Nginx
with Socket.io?
In-Reply-To: <20200131200709.GV26683@daoine.org>
References: <20200131200709.GV26683@daoine.org>
Message-ID: <2fbf125b596f72afe3574e03893ee8df.NginxMailingListEnglish@forum.nginx.org>
This is the "view source" of the html page:
testproject
This is the result of the "localhost" word search in app.js:
https://drive.google.com/open?id=11QpJKjd4PLKNMnO7m2utCJ9PrzO8Oyji :
/***/ }),
/***/ 1:
/*!********************************************************************************************************************************************************************!*\
!*** multi (webpack)-dev-server/client?http://localhost
(webpack)/hot/dev-server.js
(webpack)-dev-server/client?http://192.168.1.7:8080/sockjs-node
./src/main.js ***!
\********************************************************************************************************************************************************************/
/*! no static exports found */
/***/ (function(module, exports, __webpack_require__) {
__webpack_require__(/*!
/home/marco/vueMatters/testproject/node_modules/webpack-dev-server/client
/index.js?http://localhost
*/"./node_modules/webpack-dev-server/client/index.js?http://localhost");
__webpack_require__(/*!
/home/marco/vueMatters/testproject/node_modules/webpack/hot/dev-server.js
*/"./node_modules/webpack/hot/dev-server.js");
__webpack_require__(/*!
/home/marco/vueMatters/testproject/node_modules/webpack-dev-server/client
/index.js?http://192.168.1.7:8080/sockjs-node
*/"./node_modules/webpack-dev-server/client/index.js?http:
//192.168.1.7:8080/sockjs-node");
module.exports = __webpack_require__(/*! ./src/main.js
*/"./src/main.js");
/***/ })
/******/ });
These are the results of the "localhost" word search in chunk-vendors.js:
- https://drive.google.com/open?id=13EPYKgb7Vv4DHOTxD0jxYk_CbDDrJkMy
- https://drive.google.com/open?id=1UjWDsPyT-87GF4WJhVr-UhzOFGiBW8tH
- https://drive.google.com/open?id=1eq5pWm51sjCYQkIaQn5GZ6uEmrmS36Od
- https://drive.google.com/open?id=19QzxljB37HH5cvJ0jffdyX97u9hlBDsV
In this GitHub repository you can find all the related files:
https://github.com/marcoippolito/testproject
The content of /etc/nginx/conf.d/default.conf (edited with sudo nano) is the following:
server {
listen 443 ssl http2 default_server;
server_name ggc.world;
ssl_certificate /etc/ssl/certs/chained.pem;
ssl_certificate_key /etc/ssl/private/domain.key;
ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
ssl_ciphers EECDH+CHACHA20:EECDH+CHACHA20-draft:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:50m;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
#ssl_stapling on;
#ssl_stapling_verify on;
access_log /var/log/nginx/ggcworld-access.log combined;
add_header Strict-Transport-Security "max-age=31536000";
location = /favicon.ico { access_log off; log_not_found off; }
location / {
proxy_pass http://127.0.0.1:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
server {
listen 80 default_server;
listen [::]:80 default_server;
error_page 497 https://$host:$server_port$request_uri;
server_name www.ggc.world;
return 301 https://$server_name$request_uri;
access_log /var/log/nginx/ggcworld-access.log combined;
add_header Strict-Transport-Security "max-age=31536000";
location = /favicon.ico { access_log off; log_not_found off; }
location / {
proxy_pass http://127.0.0.1:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
# https://www.nginx.com/blog/nginx-nodejs-websockets-socketio/
# https://gist.github.com/uorat/10b15a32f3ffa3f240662b9b0fefe706
# http://nginx.org/en/docs/stream/ngx_stream_core_module.html
upstream websocket {
ip_hash;
server localhost:3000;
}
server {
listen 81;
server_name ggc.world www.ggc.world;
#location / {
location ~ ^/(websocket|websocket\/socket-io) {
proxy_pass http://127.0.0.1:4201;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
}
}
# https://stackoverflow.com/questions/40516288/webpack-dev-server-with-nginx-proxy-pass
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286850,286887#msg-286887
From imnieves at gmail.com Sat Feb 1 19:48:00 2020
From: imnieves at gmail.com (Ian Morris Nieves)
Date: Sat, 1 Feb 2020 11:48:00 -0800
Subject: upstream hash consistent seems to depend on order of DNS entries
Message-ID:
Hello all,
Here is the setup:
- I am running nginx in a single docker container and it has an upstream to a docker service which is composed of 3 docker containers (which happens to be php-fpm)
- the service is configured to _not_ expose a single virtual IP address (VIP); instead the service exposes the IP addresses of all 3 containers through docker's built-in DNS. When this DNS is asked for the IP address of the service, it responds with a list of 3 IP addresses, but the list rotates in round-robin fashion each time a lookup is performed. Thus the first IP in the list will not be the same for any 2 consecutive lookups.
My first question is:
Is it the correct behavior that consistent hashing depends on the order of IP addresses in the DNS query response? I can imagine arguments either way, and it is possible that this critical detail is outside the scope of consistent hashing. I will also forward this question to the author of Ketama.
My last question is:
Does it make sense to give nginx the capability to do consistent hashing that is not dependent on the order of IP addresses in the DNS query response? Perhaps it can order/sort the IP addresses in the response into some canonical ordering. I am finding that Docker (unlike Kubernetes) forces me to receive my DNS query responses with IP addresses shuffled in round robin. Docker will not allow me to receive "consistently" ordered IP addresses in a DNS query response. Perhaps in addition to the "consistent" flag in nginx, there could also be a flag like "sorted-ip" which would sort the IP addresses before applying the hash algorithm.
Best,
Ian
From yichun at openresty.com Sun Feb 2 07:46:49 2020
From: yichun at openresty.com (Yichun Zhang)
Date: Sat, 1 Feb 2020 23:46:49 -0800
Subject: New Blog Post "How OpenResty and Nginx Allocate and Manage Memory"
Message-ID:
Hi folks,
I recently wrote a new blog post titled "How OpenResty and Nginx
Allocate and Manage Memory":
https://blog.openresty.com/en/how-or-alloc-mem/
It is the first of a series of articles on this topic. The purpose of
this series is to help OpenResty and Nginx open source users
effectively troubleshoot and optimize their applications or servers'
excessive memory footprint and/or memory leaks.
In the articles, we use OpenResty XRay to demonstrate real data and
graphs for real OpenResty applications in the wild.
Hopefully you'll find it interesting and stay tuned :)
Best,
Yichun
From francis at daoine.org Sun Feb 2 10:49:36 2020
From: francis at daoine.org (Francis Daly)
Date: Sun, 2 Feb 2020 10:49:36 +0000
Subject: right config for letsencrypt
In-Reply-To:
References:
Message-ID: <20200202104936.GW26683@daoine.org>
On Fri, Jan 31, 2020 at 10:33:31PM +0100, bagagerek wrote:
Hi there,
> I followed the manual but I can't seem to get it right. I've forwarded port
> 8081 on my router.
If you want letsencrypt to use the "http" challenge, you must let incoming
traffic in on port 80 (and, presumably, send it to nginx).
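As a rough sketch of what that can look like in nginx (the webroot path here is an assumption, not taken from the original setup):

```nginx
server {
    listen 80;
    server_name example.com;

    # Let the letsencrypt http-01 challenge through on port 80
    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;   # assumed webroot handed to the ACME client
    }

    # Everything else goes to https
    location / {
        return 301 https://$host$request_uri;
    }
}
```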
f
--
Francis Daly francis at daoine.org
From francis at daoine.org Sun Feb 2 11:06:24 2020
From: francis at daoine.org (Francis Daly)
Date: Sun, 2 Feb 2020 11:06:24 +0000
Subject: net::ERR_CONNECTION_REFUSED . How to correctly configure Nginx
with Socket.io?
In-Reply-To: <2fbf125b596f72afe3574e03893ee8df.NginxMailingListEnglish@forum.nginx.org>
References: <20200131200709.GV26683@daoine.org>
<2fbf125b596f72afe3574e03893ee8df.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20200202110624.GX26683@daoine.org>
On Sat, Feb 01, 2020 at 04:45:00AM -0500, MarcoI wrote:
Hi there,
> This is the "view source" of the html page:
This source does not include the words "Welcome to Your Vue.js App",
which appear in the "16.jpg" picture.
That picture shows three other successful requests -- app.js,
chunk-vendors.js, and [object%20Module] -- before the "info" failures.
(The first two of those come from the "link", "rel=preload" parts of the
"head" of the initial response; or maybe from the "script" in the "body".)
I suspect that somewhere in the response of one of those other
three requests, is something that invites the browser to access
https://localhost/sockjs-node/info.
The best I can suggest is: find which one of those three responses it is;
then find what in your vue setup puts that there; then change it so that
it (probably) asks for /sockjs-node/info instead.
Exactly how to do that is probably in the vue documentation.
I see no evidence of a nginx config problem here, so far.
Good luck with it,
f
--
Francis Daly francis at daoine.org
From benni.mueller.2108 at gmail.com Sun Feb 2 13:59:55 2020
From: benni.mueller.2108 at gmail.com (=?utf-8?Q?Benjamin_M=C3=BCller?=)
Date: Sun, 2 Feb 2020 14:59:55 +0100
Subject: No subject
Message-ID: <5e36d5dc.1c69fb81.55df4.9dc4@mx.google.com>
Dear NGINX community,
I hope this is the correct channel to ask a question, but I have a big problem with NGINX.
Currently I am trying to start a streaming service. It is for a school project.
I have multiple streams incoming, but each stream has only one owner, who is allowed to watch it.
What do I have to do to make every client connect to the correct stream?
All streams come in over the same port.
Thanks for your help!!!
Kind regards,
Benjamin Müller
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From nginx-forum at forum.nginx.org Sun Feb 2 15:25:24 2020
From: nginx-forum at forum.nginx.org (janning)
Date: Sun, 02 Feb 2020 10:25:24 -0500
Subject: Nginx fails on high load on debian 10 vs no problems on debian 9
Message-ID:
My first post here, as we never had any problems with nginx before.
We use 5 nginx servers as load balancers for our Spring Boot application.
We ran them for years on Debian 9 with the default nginx package, 1.10.3.
Now we have switched three of our load balancers to Debian 10 with nginx 1.14.2.
At first everything runs smoothly. Then, under high load, we encounter some
problems. It starts with
2020/02/01 17:10:55 [crit] 5901#5901: *3325390 SSL_write() failed while
sending to client, client: ...
2020/02/01 17:10:55 [crit] 5901#5901: *3306981 SSL_write() failed while
sending to client, client: ...
In between we get lots of
2020/02/01 17:11:04 [error] 5902#5902: *3318748 upstream timed out (110:
Connection timed out) while connecting to upstream, ...
2020/02/01 17:11:04 [crit] 5902#5902: *3305656 SSL_write() failed while
sending response to client, client: ...
2020/02/01 17:11:30 [error] 5911#5911: unexpected response for
ocsp.int-x3.letsencrypt.org
It ends with
2020/02/01 17:11:33 [error] 5952#5952: unexpected response for
ocsp.int-x3.letsencrypt.org
The problem only exists for 30-120 seconds under high load and disappears
afterwards.
In the kernel log we have sometimes:
Feb 1 17:11:04 kt104 kernel: [1033003.285044] TCP: request_sock_TCP:
Possible SYN flooding on port 443. Sending cookies. Check SNMP counters.
But on other occasions we don't see any kernel log messages.
On both the Debian 9 and Debian 10 servers we applied identical TCP tuning:
# Kernel tuning settings
# https://www.nginx.com/blog/tuning-nginx/
net.core.rmem_max=26214400
net.core.wmem_max=26214400
net.ipv4.tcp_rmem=4096 524288 26214400
net.ipv4.tcp_wmem=4096 524288 26214400
net.core.somaxconn=1000
net.core.netdev_max_backlog=5000
net.ipv4.tcp_max_syn_backlog=10000
net.ipv4.ip_local_port_range=16000 61000
net.ipv4.tcp_max_tw_buckets=2000000
net.ipv4.tcp_fin_timeout=30
net.core.optmem_max=20480
The nginx config is exactly the same, so I just show some important parts:
user www-data;
worker_processes auto;
worker_rlimit_nofile 50000;
pid /run/nginx.pid;
events {
worker_connections 5000;
multi_accept on;
use epoll;
}
http {
root /var/www/loadbalancer;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
types_hash_max_size 2048;
server_tokens off;
client_max_body_size 5m;
client_header_timeout 20s; # default 60s
client_body_timeout 20s; # default 60s
send_timeout 20s; # default 60s
include /etc/nginx/mime.types;
default_type application/octet-stream;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:100m;
ssl_buffer_size 4k;
ssl_dhparam /etc/nginx/dhparam.pem;
ssl_prefer_server_ciphers on;
ssl_ciphers
'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
ssl_session_tickets on;
ssl_session_ticket_key /etc/nginx/ssl_session_ticket.key;
ssl_session_ticket_key /etc/nginx/ssl_session_ticket_old.key;
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/ssl/rapidssl/intermediate-root.pem;
resolver 8.8.8.8;
log_format custom '$host $server_port $request_time $upstream_response_time $remote_addr '
                  '"$http2" "$ssl_session_reused" $upstream_addr $time_iso8601 '
                  '"$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"';
access_log /var/log/nginx/access.log custom;
error_log /var/log/nginx/error.log;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_path /var/cache/nginx/ levels=1:2
keys_zone=imagecache:10m inactive=7d use_temp_path=off;
proxy_connect_timeout 10s;
proxy_read_timeout 20s;
proxy_send_timeout 20s;
proxy_next_upstream off;
map $http_user_agent $outdated {
default 0;
"~MSIE [1-6]\." 1;
"~Mozilla.*Firefox/[1-9]\." 1;
"~Opera.*Version/[0-9]\." 1;
"~Chrome/[0-9]\." 1;
}
include sites/*.conf;
}
The upstream timeouts signal some problems with our Java machines. But at
the same time the Debian 9 nginx load balancer is running fine and has no
problems connecting to any of the upstream servers.
And the problems with Let's Encrypt and SSL_write signal to me some
problem with nginx, TCP, or whatever.
I really don't know how to debug this situation. But we can reliably
reproduce it most of the times we encounter high load on the Debian 10
servers, and we never saw it on Debian 9.
Then I installed the stable version, nginx 1.16, on Debian 10 to see if this is
a bug in nginx which has already been fixed:
nginx version: nginx/1.16.1
built by gcc 8.3.0 (Debian 8.3.0-6)
built with OpenSSL 1.1.1c 28 May 2019 (running with OpenSSL 1.1.1d 10 Sep
2019)
TLS SNI support enabled
configure arguments: ...
But it didn't help.
Can somebody help me and give me some hints how to start further debugging
of this situation?
regards
Janning
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286893,286893#msg-286893
From jeff.dyke at gmail.com Sun Feb 2 16:49:17 2020
From: jeff.dyke at gmail.com (Jeff Dyke)
Date: Sun, 2 Feb 2020 11:49:17 -0500
Subject: right config for letsencrypt
In-Reply-To: <20200202104936.GW26683@daoine.org>
References:
<20200202104936.GW26683@daoine.org>
Message-ID:
Since I do this through HAProxy it will be a little different, but wherever
port 80 is open to you, you can have a block that does the following.
In the http block of HAProxy, I send the request to a backend when it meets
these requirements:
acl letsencrypt-request path_beg -i /.well-known/acme-challenge/
redirect scheme https code 301 unless letsencrypt-request
use_backend letsencrypt-backend if letsencrypt-request
This sends the request to a local nginx instance (on the live HAProxy
server) that can validate the cert because server_name = _; I generate these
regularly, so my setup is a little different, but:
server {
listen 8888 proxy_protocol;
server_name _;
charset utf-8;
set_real_ip_from {{ servers.lb.master.ip }};
set_real_ip_from {{ servers.lb.slave.ip }};
real_ip_header proxy_protocol;
root /var/www/html;
location ~ /.well-known {
allow all;
}
deny all;
}
in a regular, single server nginx setup, i use the following block:
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name example.com www.example.com;
location ~ /.well-known {
allow all;
}
return 301 https://www.example.com$request_uri;
}
So it will only redirect if it's not a Let's Encrypt request. HAProxy may
mean nothing to you, but it shows an alternate configuration. And of
course Francis is correct: you need port 80 open.
HTH
Jeff
On Sun, Feb 2, 2020 at 5:49 AM Francis Daly wrote:
> On Fri, Jan 31, 2020 at 10:33:31PM +0100, bagagerek wrote:
>
> Hi there,
>
> > I followed the manual but I can't seem to get it right. I've forwarded
> > port 8081 on my router.
>
> If you want letsencrypt to use the "http" challenge, you must let incoming
> traffic in on port 80 (and, presumably, send it to nginx).
>
> f
> --
> Francis Daly francis at daoine.org
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
From gfrankliu at gmail.com Mon Feb 3 07:09:14 2020
From: gfrankliu at gmail.com (Frank Liu)
Date: Sun, 2 Feb 2020 23:09:14 -0800
Subject: error code 494
Message-ID:
Hi,
When I send a request with a too-long header value to nginx 1.16.1, I get a
400 Bad Request response code and the default nginx error page.
If I create a custom error page:
error_page 494 /my4xx.html;
now I can see my error page, but the HTTP response code becomes 494. Is that
a bug?
Should I see 400 instead?
Thanks!
Frank
From tomas.dubec at avast.com Mon Feb 3 12:25:55 2020
From: tomas.dubec at avast.com (=?UTF-8?B?RHViZWMsIFRvbcOhxaE=?=)
Date: Mon, 3 Feb 2020 13:25:55 +0100
Subject: nginx 1.16.1 segfault with post_action on CentOS
Message-ID:
Hi guys,
since I cannot log in to trac (no OAuth handler found), I'll try reporting
it here. We are experiencing segmentation faults in nginx 1.16.1 with
post_action.
CentOS:
# cat /etc/centos-release
CentOS Linux release 7.6.1810 (Core)
Nginx:
# nginx -V
nginx version: nginx/1.16.1
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC)
built with OpenSSL 1.0.2k-fips 26 Jan 2017
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
--modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf
--error-log-path=/var/log/nginx/error.log
--http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid
--lock-path=/var/run/nginx.lock
--http-client-body-temp-path=/var/cache/nginx/client_temp
--http-proxy-temp-path=/var/cache/nginx/proxy_temp
--http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp
--http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp
--http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx
--with-compat --with-file-aio --with-threads --with-http_addition_module
--with-http_auth_request_module --with-http_dav_module
--with-http_flv_module --with-http_gunzip_module
--with-http_gzip_static_module --with-http_mp4_module
--with-http_random_index_module --with-http_realip_module
--with-http_secure_link_module --with-http_slice_module
--with-http_ssl_module --with-http_stub_status_module
--with-http_sub_module --with-http_v2_module --with-mail
--with-mail_ssl_module --with-stream --with-stream_realip_module
--with-stream_ssl_module --with-stream_ssl_preread_module
--with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
-fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches
-m64 -mtune=generic -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -pie'
Backtrace:
(gdb) bt
#0  ngx_pnalloc (pool=0x0, size=210) at src/core/ngx_palloc.c:139
#1  0x00005623ad760f7b in ngx_http_log_handler (r=0x5623adee67e0) at src/http/modules/ngx_http_log_module.c:362
#2  0x00005623ad757470 in ngx_http_log_request (r=r@entry=0x5623adee67e0) at src/http/ngx_http_request.c:3674
#3  0x00005623ad758d2c in ngx_http_free_request (r=r@entry=0x5623adee67e0, rc=rc@entry=0) at src/http/ngx_http_request.c:3620
#4  0x00005623ad759931 in ngx_http_set_keepalive (r=0x5623adee67e0) at src/http/ngx_http_request.c:3069
#5  ngx_http_finalize_connection (r=) at src/http/ngx_http_request.c:2720
#6  0x00005623ad758fc6 in ngx_http_request_handler (ev=) at src/http/ngx_http_request.c:2349
#7  0x00005623ad742c97 in ngx_epoll_process_events (cycle=, timer=, flags=) at src/event/modules/ngx_epoll_module.c:902
#8  0x00005623ad73927a in ngx_process_events_and_timers (cycle=cycle@entry=0x5623add5cbc0) at src/event/ngx_event.c:242
#9  0x00005623ad740f41 in ngx_worker_process_cycle (cycle=cycle@entry=0x5623add5cbc0, data=data@entry=0x2) at src/os/unix/ngx_process_cycle.c:750
#10 0x00005623ad73f3eb in ngx_spawn_process (cycle=cycle@entry=0x5623add5cbc0, proc=proc@entry=0x5623ad740ec0 , data=data@entry=0x2, name=name@entry=0x5623ad7e16e3 "worker process", respawn=respawn@entry=-3) at src/os/unix/ngx_process.c:199
#11 0x00005623ad7405f0 in ngx_start_worker_processes (cycle=cycle@entry=0x5623add5cbc0, n=4, type=type@entry=-3) at src/os/unix/ngx_process_cycle.c:359
#12 0x00005623ad741903 in ngx_master_process_cycle (cycle=cycle@entry=0x5623add5cbc0) at src/os/unix/ngx_process_cycle.c:131
#13 0x00005623ad718d0f in main (argc=, argv=) at src/core/nginx.c:382
error log:
2020/02/03 05:03:51 [error] 2916923#2916923: *15968395 limiting requests,
excess: 20.170 by zone "one", client: x.x.x.x, server: xxxx, request: "POST
/xxxx/xxxx HTTP/1.1", host: "xxxx"
2020/02/03 05:03:51 [alert] 2389939#2389939: worker process 2916923 exited
on signal 11 (core dumped)
nginx server configuration:
server {
listen *:443 ssl;
server_name xxxx;
ssl on;
ssl_certificate /etc/nginx/xxx.crt;
ssl_certificate_key /etc/nginx/xxx.key;
ssl_certificate /etc/nginx/xxx_ecc.crt;
ssl_certificate_key /etc/nginx/xxx_ecc.key;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ECDSA:HIGH:MEDIUM:!LOW:!SEED:!IDEA:!RC4:!MD5:!DH:!eNULL:!aNULL;
ssl_prefer_server_ciphers on;
index index.html index.htm index.php;
access_log /var/log/nginx/access.log ;
error_log /var/log/nginx/error.log;
limit_req zone=one burst=20 nodelay;
limit_req_status 429;
location / {
client_body_buffer_size 2m;
client_max_body_size 2m;
post_action @forward_anchor;
proxy_http_version 1.1;
proxy_next_upstream error timeout invalid_header http_500;
proxy_set_header Connection "";
proxy_set_header Content-Type "application/octet-stream";
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://xxxx;
proxy_read_timeout 90;
}
location @forward_anchor {
client_body_buffer_size 2m;
client_max_body_size 2m;
proxy_connect_timeout 5;
proxy_http_version 1.1;
proxy_next_upstream error timeout invalid_header http_500;
proxy_send_timeout 5;
proxy_set_header Connection "";
proxy_set_header Content-Type "application/octet-stream";
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://xxxx;
proxy_read_timeout 90;
proxy_set_header Host $http_host;
}
}
We are running another server with the same configuration, apart from the
"post_action", which is missing there. That configuration does not experience
any issues.
Can someone with access to trac please create a bug report?
Regards
Tomas Dubec
From mdounin at mdounin.ru Mon Feb 3 13:05:38 2020
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 3 Feb 2020 16:05:38 +0300
Subject: upstream hash consistent seems to depend on order of DNS entries
In-Reply-To:
References:
Message-ID: <20200203130538.GR12894@mdounin.ru>
Hello!
On Sat, Feb 01, 2020 at 11:48:00AM -0800, Ian Morris Nieves wrote:
> Here is the setup:
> - I am running nginx in a single docker container and it has an
> upstream to a docker service which is composed of 3 docker
> containers (which happens to be php-fpm)
> - the service is configured to _not_ expose a single virtual ip
> address (vip), instead the service exposes the ip addresses of
> all 3 containers through docker?s built-in DNS. When this DNS
> is asked for the IP address of the service it will respond with
> a list of 3 IP address but the list will rotate in round-robin
> fashion each time a lookup is performed. Thus the first IP in
> the list will not be the same for any 2 consecutive lookups.
>
> My first question is:
> Is it the correct behavior that consistent hashing depends on
> the order of IP addresses in the DNS query response? I can
> imagine arguments either way, and it is possible that this
> critical detail is outside the scope of consistent hashing. I
> will also forward this question to the author of Ketama.
Consistent hashing uses the _name_ as written in the
configuration, not the IP addresses the name resolves to.
If a name resolves to multiple IP addresses, these addresses are
considered equal and requests are distributed between them using
round-robin balancing.
That is, to balance multiple servers (containers) using consistent
hashing, you have to configure an upstream block with multiple
"server" directives in it.
--
Maxim Dounin
http://mdounin.ru/
From francis at daoine.org Mon Feb 3 13:20:15 2020
From: francis at daoine.org (Francis Daly)
Date: Mon, 3 Feb 2020 13:20:15 +0000
Subject: error code 494
In-Reply-To:
References:
Message-ID: <20200203132015.GY26683@daoine.org>
On Sun, Feb 02, 2020 at 11:09:14PM -0800, Frank Liu wrote:
Hi there,
> When I send a request with a too-long header value to nginx 1.16.1, I get a
> 400 Bad Request response code and the default nginx error page.
> If I create a custom error page:
> error_page 494 /my4xx.html;
> now I can see my error page but the http response code becomes 494. Is that
> a bug?
I don't know whether "error_page keeps 494 as 494 instead of
auto-converting to 400" is a bug or not. (I can imagine "yes" and "no"
both being justifiable answers.)
But if you *want* 400, you can do
error_page 494 =400 /my4xx.html;
or, possibly for the specific case of 494,
error_page 494 =431 /my4xx.html;
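In context, a hedged sketch of the second form (paths are illustrative):

```nginx
server {
    listen 80;

    # Map nginx's internal 494 to the standard 431 before replying
    error_page 494 =431 /my4xx.html;

    location = /my4xx.html {
        root /var/www/errors;   # assumed location of the custom page
        internal;               # not reachable by direct request
    }
}
```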
Hope this helps,
f
--
Francis Daly francis at daoine.org
From mdounin at mdounin.ru Mon Feb 3 13:44:03 2020
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 3 Feb 2020 16:44:03 +0300
Subject: Certificate Chain Validation
In-Reply-To: <80fdb6b374039a63428691af118d22a4.NginxMailingListEnglish@forum.nginx.org>
References: <20200130121322.GP12894@mdounin.ru>
<80fdb6b374039a63428691af118d22a4.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20200203134403.GS12894@mdounin.ru>
Hello!
On Thu, Jan 30, 2020 at 10:55:03AM -0500, slowgary wrote:
> Thanks for the correction Maxim. I tested this before posting by using an
> old certificate. Nginx did not throw an error but the browser did notify
> that the connection was insecure.
Depending on what exactly "certificate chain validation" in the
original question was intended to mean, there may be at least
three cases considered:
1. Certificate chains as configured for nginx itself, by using
within the ssl_certificate directive
(http://nginx.org/r/ssl_certificate). For these certificates
nginx does not try to do any validation (and in most cases it
simply can't do it - in particular, because it doesn't know the
name to be used by clients, and doesn't have a root certificate to
validate against).
2. Certificate chains as presented by a client, as per the
ssl_verify_client directive
(http://nginx.org/r/ssl_verify_client). These chains are always
properly validated, including expiration of all intermediate
certificates and the certificate itself.
3. Certificate chains as presented by an upstream server, when
using proxy_pass to an https://... URL. These chains are properly
validated as long as the proxy_ssl_verify directive is on
(http://nginx.org/r/proxy_ssl_verify). Note though that this is
not the default behaviour, and by default nginx will not try to
validate upstream server certificates at all.
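For case (3), a minimal sketch of turning that validation on (host name and certificate path are placeholders):

```nginx
location / {
    proxy_pass https://backend.example.com;

    # Off by default: without these directives nginx will accept any
    # certificate (including an expired one) from the upstream.
    proxy_ssl_verify              on;
    proxy_ssl_verify_depth        2;
    proxy_ssl_trusted_certificate /etc/nginx/trusted-ca.pem;

    # Verify the certificate against the name used to connect.
    proxy_ssl_name backend.example.com;
}
```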
Given that the original question asks if nginx will "proceed or
will it break the connection", I suspect the question is either
about (2) or (3), as (1) hardly make sense during a particular
connection handling.
If you think that you see nginx accepting an expired certificate
from a client, or accepting an expired certificate from an
upstream server with proxy_ssl_verify switched on - please report
more details.
If you've assumed (1), the statement you've made is anyway too
broad to be true, as clearly nginx _does_ validate the expiration
date of certificates - as long as it does any validation at all.
--
Maxim Dounin
http://mdounin.ru/
From gfrankliu at gmail.com Mon Feb 3 16:11:09 2020
From: gfrankliu at gmail.com (Frank Liu)
Date: Mon, 3 Feb 2020 08:11:09 -0800
Subject: error code 494
In-Reply-To: <20200203132015.GY26683@daoine.org>
References: <20200203132015.GY26683@daoine.org>
Message-ID: <3084740E-8F49-475C-9F1E-A9E7B0F00B22@gmail.com>
Thanks for the reply!
My question is more about why the response code is inconsistent between using the default error page and a custom error page.
> On Feb 3, 2020, at 5:20 AM, Francis Daly wrote:
>
> ?On Sun, Feb 02, 2020 at 11:09:14PM -0800, Frank Liu wrote:
>
> Hi there,
>
>> When I send a request with a too-long header value to nginx 1.16.1, I get a
>> 400 Bad Request response code and the default nginx error page.
>> If I create a custom error page:
>> error_page 494 /my4xx.html;
>> now I can see my error page but the http response code becomes 494. Is that
>> a bug?
>
> I don't know whether "error_page keeps 494 as 494 instead of
> auto-converting to 400" is a bug or not. (I can imagine "yes" and "no"
> both being justifiable answers.)
>
> But if you *want* 400, you can do
>
> error_page 494 =400 /my4xx.html;
>
> or, possibly for the specific case of 494,
>
> error_page 494 =431 /my4xx.html;
>
> Hope this helps,
>
> f
> --
> Francis Daly francis at daoine.org
From mdounin at mdounin.ru Mon Feb 3 16:47:21 2020
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 3 Feb 2020 19:47:21 +0300
Subject: error code 494
In-Reply-To:
References:
Message-ID: <20200203164721.GW12894@mdounin.ru>
Hello!
On Sun, Feb 02, 2020 at 11:09:14PM -0800, Frank Liu wrote:
> When I send a request with a too-long header value to nginx 1.16.1, I get a
> 400 Bad Request response code and the default nginx error page.
> If I create a custom error page:
> error_page 494 /my4xx.html;
> now I can see my error page but the http response code becomes 494. Is that
> a bug?
> Shall I see 400 instead?
Yes. And this is what happens with 495, 496, and 497. The
following patch should fix this:
# HG changeset patch
# User Maxim Dounin
# Date 1580748298 -10800
# Mon Feb 03 19:44:58 2020 +0300
# Node ID 7b48f7d056af4ce5a681b97f9f31702adb1f87f8
# Parent b8a512c6466c3b2f77876edf14061c5d97e6159f
Added default overwrite in error_page 494.
We used to have default error_page overwrite for 495, 496, and 497, so
a configuration like
error_page 495 /error;
will result in error 400, much like without any error_page configured.
The 494 status code was introduced later (in 3848:de59ad6bf557, nginx 0.9.4),
and relevant changes to ngx_http_core_error_page() were missed, resulting
in inconsistent behaviour of "error_page 494" - with error_page configured
it results in 494 being returned instead of 400.
Reported by Frank Liu,
http://mailman.nginx.org/pipermail/nginx/2020-February/058957.html.
diff --git a/src/http/ngx_http_core_module.c b/src/http/ngx_http_core_module.c
--- a/src/http/ngx_http_core_module.c
+++ b/src/http/ngx_http_core_module.c
@@ -4689,6 +4689,7 @@ ngx_http_core_error_page(ngx_conf_t *cf,
case NGX_HTTP_TO_HTTPS:
case NGX_HTTPS_CERT_ERROR:
case NGX_HTTPS_NO_CERT:
+ case NGX_HTTP_REQUEST_HEADER_TOO_LARGE:
err->overwrite = NGX_HTTP_BAD_REQUEST;
}
}
--
Maxim Dounin
http://mdounin.ru/
From imnieves at gmail.com Mon Feb 3 20:59:33 2020
From: imnieves at gmail.com (Ian Morris Nieves)
Date: Mon, 3 Feb 2020 12:59:33 -0800
Subject: upstream hash consistent seems to depend on order of DNS entries
(Maxim Dounin)
In-Reply-To:
References:
Message-ID: <0D6C0C8D-9245-4D31-BCAD-3077905D758D@gmail.com>
Hi Maxim,
Thank you for your response.
OK, I understand what you have said.
This seems slightly strange to me. We both understand the purpose/point of hashing (consistent or not consistent). Why does it then make sense to fall back to round robin?
In my case, the number of hosts will change over time, and I can't update the nginx config. So I thought it would make sense to use a hostname that resolves to many IPs. This would be a scalable solution.
I would think that the more correct solution would be to hash the IP addresses that all the names resolve to, not the names themselves.
I must be missing/misunderstanding something. Does the current implementation/solution make sense to you?
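For reference, the explicitly-listed upstream that Maxim describes would look roughly like this (the container names and the hash key are hypothetical):

```nginx
upstream php_backends {
    # ketama-style consistent hashing over the entries below;
    # each container gets its own "server" line instead of one
    # round-robin-resolving service name
    hash $request_uri consistent;

    server php-fpm-1.internal:9000;
    server php-fpm-2.internal:9000;
    server php-fpm-3.internal:9000;
}
```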
Best,
Ian
>
> Message: 2
> Date: Mon, 3 Feb 2020 16:05:38 +0300
> From: Maxim Dounin
> To: nginx at nginx.org
> Subject: Re: upstream hash consistent seems to depend on order of DNS
> entries
> Message-ID: <20200203130538.GR12894 at mdounin.ru>
> Content-Type: text/plain; charset=utf-8
>
> Hello!
>
> On Sat, Feb 01, 2020 at 11:48:00AM -0800, Ian Morris Nieves wrote:
>
>> Here is the setup:
>> - I am running nginx in a single docker container and it has an
>> upstream to a docker service which is composed of 3 docker
>> containers (which happens to be php-fpm)
>> - the service is configured to _not_ expose a single virtual ip
>> address (vip), instead the service exposes the ip addresses of
>> all 3 containers through docker's built-in DNS. When this DNS
>> is asked for the IP address of the service it will respond with
>> a list of 3 IP addresses, but the list will rotate in round-robin
>> fashion each time a lookup is performed. Thus the first IP in
>> the list will not be the same for any 2 consecutive lookups.
>>
>> My first question is:
>> Is it the correct behavior that consistent hashing depends on
>> the order of IP addresses in the DNS query response? I can
>> imagine arguments either way, and it is possible that this
>> critical detail is outside the scope of consistent hashing. I
>> will also forward this question to the author of Ketama.
>
> Consistent hashing uses the _name_ as written in the
> configuration, not the IP addresses the name resolves to.
>
> If a name resolves to multiple IP addresses, these addresses are
> considered equal and requests are distributed between them using
> round-robin balancing.
>
> That is, to balance multiple servers (containers) using consistent
> hashing, you have to configure an upstream block with multiple
> "server" directives in it.
>
> --
> Maxim Dounin
> http://mdounin.ru/
>
From rpaprocki at fearnothingproductions.net Mon Feb 3 21:28:20 2020
From: rpaprocki at fearnothingproductions.net (Robert Paprocki)
Date: Mon, 3 Feb 2020 13:28:20 -0800
Subject: upstream hash consistent seems to depend on order of DNS entries
(Maxim Dounin)
In-Reply-To: <0D6C0C8D-9245-4D31-BCAD-3077905D758D@gmail.com>
References:
<0D6C0C8D-9245-4D31-BCAD-3077905D758D@gmail.com>
Message-ID:
> In my case, the number of hosts will change over time, and I can't update
the nginx config. So I thought it would make sense to use a hostname that
resolves to many IPs. This would be a scalable solution.
In that case, it makes sense to use a templating tool to dynamically
populate the contents of the upstream{} block. Hook it in with your service
discovery/registration system.
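As a sketch of that idea, a consul-template-style template could render the upstream block from service discovery (the service name and hash key are hypothetical):

```nginx
# upstream.conf.ctmpl - rendered by the discovery tool, which then
# triggers "nginx -s reload" whenever the instance list changes
upstream php_backends {
    hash $request_uri consistent;
{{ range service "php-fpm" }}
    server {{ .Address }}:{{ .Port }};
{{ end }}
}
```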
On Mon, Feb 3, 2020 at 12:59 PM Ian Morris Nieves
wrote:
> Hi Maxim,
>
> Thank you for your response.
> OK, I understand what you have said.
>
> This seems slightly strange to me. We both understand the purpose/point
> of hashing (consistent or not consistent). Why does it then make sense to
> fall back to round robin?
>
> In my case, the number of hosts will change over time, and I can't update
> the nginx config. So I thought it would make sense to use a hostname that
> resolves to many IPs. This would be a scalable solution.
>
> I would think that the more correct solution would be to hash the IP
> addresses that all the names resolve to, not the names themselves.
>
> I must be missing/misunderstanding something. Does the current
> implementation/solution make sense to you?
>
> Best,
> Ian
>
> >
> > Message: 2
> > Date: Mon, 3 Feb 2020 16:05:38 +0300
> > From: Maxim Dounin
> > To: nginx at nginx.org
> > Subject: Re: upstream hash consistent seems to depend on order of DNS
> > entries
> > Message-ID: <20200203130538.GR12894 at mdounin.ru>
> > Content-Type: text/plain; charset=utf-8
> >
> > Hello!
> >
> > On Sat, Feb 01, 2020 at 11:48:00AM -0800, Ian Morris Nieves wrote:
> >
> >> Here is the setup:
> >> - I am running nginx in a single docker container and it has an
> >> upstream to a docker service which is composed of 3 docker
> >> containers (which happens to be php-fpm)
> >> - the service is configured to _not_ expose a single virtual ip
> >> address (vip), instead the service exposes the ip addresses of
> >> all 3 containers through docker's built-in DNS. When this DNS
> >> is asked for the IP address of the service it will respond with
> >> a list of 3 IP addresses, but the list will rotate in round-robin
> >> fashion each time a lookup is performed. Thus the first IP in
> >> the list will not be the same for any 2 consecutive lookups.
> >>
> >> My first question is:
> >> Is it the correct behavior that consistent hashing depends on
> >> the order of IP addresses in the DNS query response? I can
> >> imagine arguments either way, and it is possible that this
> >> critical detail is outside the scope of consistent hashing. I
> >> will also forward this question to the author of Ketama.
> >
> > Consistent hashing uses the _name_ as written in the
> > configuration, not the IP addresses the name resolves to.
> >
> > If a name resolves to multiple IP addresses, these addresses are
> > considered equal and requests are distributed between them using
> > round-robin balancing.
> >
> > That is, to balance multiple servers (containers) using consistent
> > hashing, you have to configure an upstream block with multiple
> > "server" directives in it.
> >
> > --
> > Maxim Dounin
> > http://mdounin.ru/
> >
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From nginx-forum at forum.nginx.org Tue Feb 4 08:00:43 2020
From: nginx-forum at forum.nginx.org (erik)
Date: Tue, 04 Feb 2020 03:00:43 -0500
Subject: Using Yubikey/PKCS11 for Upstream Client Certificates
Message-ID: <9650714a9e61e7a0a6645e5dc1d06a02.NginxMailingListEnglish@forum.nginx.org>
Hi there,
I'm building a reverse proxy that needs to use TLS client certificates for
authentication to its proxy_pass location.
The documentation at
https://docs.nginx.com/nginx/admin-guide/security-controls/securing-http-traffic-upstream/
is pretty clear in how to point Nginx to the signed certificate and private
key file, but my cert and key are in hardware (YubiKey in PIV mode).
I have pkcs11 support through OpenSC, but I'm wondering if Nginx can work
with that. Is there a way to have it use the yubikey through pkcs11?
Cheers,
Erik
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286922,286922#msg-286922
From rainer at ultra-secure.de Tue Feb 4 16:17:26 2020
From: rainer at ultra-secure.de (rainer at ultra-secure.de)
Date: Tue, 04 Feb 2020 17:17:26 +0100
Subject: What about BREACH (CVE-2013-3587)?
Message-ID:
Hi,
testssl.sh still laments about BREACH when tested against a recent
nginx 1.16.
Qualys ssllabs doesn't mention it at all.
Is it fixed?
Can you safely enable gzip on ssl-vhosts?
Best Regards
Rainer
From nginx-forum at forum.nginx.org Tue Feb 4 17:14:28 2020
From: nginx-forum at forum.nginx.org (erik)
Date: Tue, 04 Feb 2020 12:14:28 -0500
Subject: Using Yubikey/PKCS11 for Upstream Client Certificates
In-Reply-To: <9650714a9e61e7a0a6645e5dc1d06a02.NginxMailingListEnglish@forum.nginx.org>
References: <9650714a9e61e7a0a6645e5dc1d06a02.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <7a74e961a5df3da5ae7fc7e5469f4af5.NginxMailingListEnglish@forum.nginx.org>
Specifically, I'd like to know if the proxy_ssl_certificate and
proxy_ssl_certificate_key directives can support RFC-7512 PKCS#11 URIs, or
whether they're hardwired to be just local file paths.
With my private key in hardware, I'm looking for the ability to point nginx
to something like:
location /upstream {
proxy_pass https://backend.example.com;
proxy_ssl_certificate /etc/nginx/client.pem;
proxy_ssl_certificate_key
'pkcs11:type=private;token=some_token;object=username%40example.org';
}
Cheers,
Erik van Zijst
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286922,286930#msg-286930
From gfrankliu at gmail.com Tue Feb 4 18:03:23 2020
From: gfrankliu at gmail.com (Frank Liu)
Date: Tue, 4 Feb 2020 10:03:23 -0800
Subject: error code 494
In-Reply-To: <20200203164721.GW12894@mdounin.ru>
References:
<20200203164721.GW12894@mdounin.ru>
Message-ID:
Thanks Maxim for the quick fix!
Based on https://tools.ietf.org/html/rfc6585#section-5 , shall we by
default return 431 instead of 400?
On Mon, Feb 3, 2020 at 8:47 AM Maxim Dounin wrote:
> Hello!
>
> On Sun, Feb 02, 2020 at 11:09:14PM -0800, Frank Liu wrote:
>
> > When I send a request with too long a header value to nginx 1.16.1, I get
> > a 400 Bad Request response code and the default nginx error page.
> > If I create a custom error page:
> > error_page 494 /my4xx.html;
> > now I can see my error page but the http response code becomes 494. Is
> that
> > a bug?
> > Shall I see 400 instead?
>
> Yes. And this is what happens with 495, 496, and 497. The
> following patch should fix this:
>
> # HG changeset patch
> # User Maxim Dounin
> # Date 1580748298 -10800
> # Mon Feb 03 19:44:58 2020 +0300
> # Node ID 7b48f7d056af4ce5a681b97f9f31702adb1f87f8
> # Parent b8a512c6466c3b2f77876edf14061c5d97e6159f
> Added default overwrite in error_page 494.
>
> We used to have default error_page overwrite for 495, 496, and 497, so
> a configuration like
>
> error_page 495 /error;
>
> will result in error 400, much like without any error_page configured.
>
> The 494 status code was introduced later (in 3848:de59ad6bf557, nginx
> 0.9.4),
> and relevant changes to ngx_http_core_error_page() were missed, resulting
> in inconsistent behaviour of "error_page 494" - with error_page configured
> it results in 494 being returned instead of 400.
>
> Reported by Frank Liu,
> http://mailman.nginx.org/pipermail/nginx/2020-February/058957.html.
>
> diff --git a/src/http/ngx_http_core_module.c
> b/src/http/ngx_http_core_module.c
> --- a/src/http/ngx_http_core_module.c
> +++ b/src/http/ngx_http_core_module.c
> @@ -4689,6 +4689,7 @@ ngx_http_core_error_page(ngx_conf_t *cf,
> case NGX_HTTP_TO_HTTPS:
> case NGX_HTTPS_CERT_ERROR:
> case NGX_HTTPS_NO_CERT:
> + case NGX_HTTP_REQUEST_HEADER_TOO_LARGE:
> err->overwrite = NGX_HTTP_BAD_REQUEST;
> }
> }
>
>
> --
> Maxim Dounin
> http://mdounin.ru/
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
From themadbeaker at gmail.com Tue Feb 4 20:38:30 2020
From: themadbeaker at gmail.com (J.R.)
Date: Tue, 4 Feb 2020 14:38:30 -0600
Subject: What about BREACH (CVE-2013-3587)?
Message-ID:
> testssl.sh still laments about BREACH when tested against a recent
> nginx 1.16.
>
> Qualys ssllabs doesn't mention it at all.
>
> Is it fixed?
>
> Can you safely enable gzip on ssl-vhosts?
I think you are confusing TLS compression with HTTP compression...
From imnieves at gmail.com Tue Feb 4 21:32:42 2020
From: imnieves at gmail.com (Ian Morris Nieves)
Date: Tue, 4 Feb 2020 13:32:42 -0800
Subject: message 3 / Robert Paprocki
In-Reply-To:
References:
Message-ID: <1AD6CEA8-417A-4796-97DA-E6B3FFD2690F@gmail.com>
Hi Robert, Thanks for your input on this. I sincerely appreciate it.
At the moment I am trying to get the most out of DNS (on Docker) as a way to discover services (individual containers and replicas of containers).
If I can solve the issue without having to introduce a new tool or container, I would really like to. In that sense, I am not trying to use the perfect tool for every job; I am trying to maximize the usage of existing tools to minimize the number of different tools. Yes, this is my own design decision, and there is a price for that.
Coming back to consistent hashing, it still does not make sense to me that nginx falls back to round robin for individual hostnames that map to >1 IP, because that breaks the point of consistent hashing any time a node enters or leaves the hostname. Regardless of my own use case, when does this behavior actually make sense?
Your point about dynamically populating the upstream block makes a lot of sense. +1 !!
I could use that workaround without a new service discovery mechanism: run a script in the background to get the multiple IP addresses that the hostname maps to, and update the nginx conf accordingly.
Still, nginx already has a resolver and a TTL, so polling DNS is something nginx is already doing; it seems strange that I should have to re-invent that wheel.
I wonder whether someone would chime in and say that this behavior is a feature (and if so, what is the use case?) or a bug (or possibly an oversight)?
Best,
Ian
> ------------------------------
>
> Message: 3
> Date: Mon, 3 Feb 2020 13:28:20 -0800
> From: Robert Paprocki
> To: nginx at nginx.org
> Subject: Re: upstream hash consistent seems to depend on order of DNS
> entries (Maxim Dounin)
> Message-ID:
>
> Content-Type: text/plain; charset="utf-8"
>
>> In my case, the number of hosts will change over time, and I can't update
> the nginx config. So I thought it would make sense to use a hostname that
> resolves to many IPs. This would be a scalable solution.
>
> In that case, it makes sense to use a templating tool to dynamically
> populate the contents of the upstream{} block. Hook it in with your service
> discovery/registration system.
>
> On Mon, Feb 3, 2020 at 12:59 PM Ian Morris Nieves
> wrote:
>
>> Hi Maxim,
>>
>> Thank you for your response.
>> OK, I understand what you have said.
>>
>> This seems slightly strange to me. We both understand the purpose/point
>> of hashing (consistent or not consistent). Why does it then make sense to
>> fall back to round robin?
>>
>> In my case, the number of hosts will change over time, and I can't update
>> the nginx config. So I thought it would make sense to use a hostname that
>> resolves to many IPs. This would be a scalable solution.
>>
>> I would think that the more correct solution would be to hash the IP
>> addresses that all the names resolve to, not the names themselves.
>>
>> I must be missing/misunderstanding something. Does the current
>> implementation/solution make sense to you?
>>
>> Best,
>> Ian
>>
>>>
>>> Message: 2
>>> Date: Mon, 3 Feb 2020 16:05:38 +0300
>>> From: Maxim Dounin
>>> To: nginx at nginx.org
>>> Subject: Re: upstream hash consistent seems to depend on order of DNS
>>> entries
>>> Message-ID: <20200203130538.GR12894 at mdounin.ru>
>>> Content-Type: text/plain; charset=utf-8
>>>
>>> Hello!
>>>
>>> On Sat, Feb 01, 2020 at 11:48:00AM -0800, Ian Morris Nieves wrote:
>>>
>>>> Here is the setup:
>>>> - I am running nginx in a single docker container and it has an
>>>> upstream to a docker service which is composed of 3 docker
>>>> containers (which happens to be php-fpm)
>>>> - the service is configured to _not_ expose a single virtual ip
>>>> address (vip), instead the service exposes the ip addresses of
>>>> all 3 containers through docker's built-in DNS. When this DNS
>>>> is asked for the IP address of the service it will respond with
>>>> a list of 3 IP addresses, but the list will rotate in round-robin
>>>> fashion each time a lookup is performed. Thus the first IP in
>>>> the list will not be the same for any 2 consecutive lookups.
>>>>
>>>> My first question is:
>>>> Is it the correct behavior that consistent hashing depends on
>>>> the order of IP addresses in the DNS query response? I can
>>>> imagine arguments either way, and it is possible that this
>>>> critical detail is outside the scope of consistent hashing. I
>>>> will also forward this question to the author of Ketama.
>>>
>>> Consistent hashing uses the _name_ as written in the
>>> configuration, not the IP addresses the name resolves to.
>>>
>>> If a name resolves to multiple IP addresses, these addresses are
>>> considered equal and requests are distributed between them using
>>> round-robin balancing.
>>>
>>> That is, to balance multiple servers (containers) using consistent
>>> hashing, you have to configure an upstream block with multiple
>>> "server" directives in it.
>>>
>>> --
>>> Maxim Dounin
>>> http://mdounin.ru/
>>>
>>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>
From rainer at ultra-secure.de Tue Feb 4 21:35:23 2020
From: rainer at ultra-secure.de (Rainer Duffner)
Date: Tue, 4 Feb 2020 22:35:23 +0100
Subject: What about BREACH (CVE-2013-3587)?
In-Reply-To:
References:
Message-ID:
> Am 04.02.2020 um 21:38 schrieb J.R. :
>
> I think you are confusing TLS compression with HTTP compression...
Probably.
I read that later somewhere else.
I just wonder why it's included in testssl.sh.
From gfrankliu at gmail.com Tue Feb 4 22:44:20 2020
From: gfrankliu at gmail.com (Frank Liu)
Date: Tue, 4 Feb 2020 14:44:20 -0800
Subject: What about BREACH (CVE-2013-3587)?
In-Reply-To:
References:
Message-ID:
This is documented. Quote from
http://nginx.org/en/docs/http/ngx_http_gzip_module.html
*When using the SSL/TLS protocol, compressed responses may be subject to
BREACH attacks. *
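Given that note, the conservative option is simply not to compress responses on TLS vhosts that reflect user input alongside secrets. A minimal sketch (server name hypothetical):

```nginx
# TLS vhost serving pages that may echo attacker-influenced input next
# to secrets such as CSRF tokens: leave HTTP compression off entirely
server {
    listen 443 ssl;
    server_name example.com;   # hypothetical
    gzip off;
}
```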
On Tue, Feb 4, 2020 at 1:35 PM Rainer Duffner
wrote:
>
>
> Am 04.02.2020 um 21:38 schrieb J.R. :
>
> I think you are confusing TLS compression with HTTP compression...
>
>
>
>
> Probably.
> I read that later somewhere else.
>
> I just wonder why it's included in testssl.sh.
>
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
From myeclipse2202 at 163.com Wed Feb 5 09:36:36 2020
From: myeclipse2202 at 163.com (myxingkong)
Date: Wed, 5 Feb 2020 17:36:36 +0800 (GMT+08:00)
Subject: Using nginx upload module and nginx lua together
Message-ID: <4c940269.6108.17014b56439.Coremail.myeclipse2202@163.com>
I am using the nginx upload module to upload files to the server, but I want nginx to upload to different paths, not just the path given in upload_store. So I am using the nginx-lua module to change the upload_store value on each request, like below.
location /umtest {
upload_pass /nginx_response;
set $upload_store '';
rewrite_by_lua '
local header = ngx.req.raw_header()
ngx.say("type header",header)
dst_path_dir = ngx.req.get_headers()["Dst-Dir"]
ngx.say("dst_path_dir",dst_path_dir)
ngx.var.upload_store = dst_path_dir
ngx.say("upload store path" ,ngx.var.upload_store)
';
upload_set_form_field $upload_field_name.name
"$upload_file_name";
upload_set_form_field $upload_field_name.content_type
"$upload_content_type";
upload_set_form_field $upload_field_name.path
"$upload_tmp_path"
upload_cleanup 400 404 499 500-505;
}
Now when I POST to '/umtest' it changes the upload_store value, but the nginx upload directives are not executed (i.e., the upload does not happen). When I comment out the rewrite_by_lua directive, the upload happens. My question is: can't we use both at the same time to achieve this?
From thresh at nginx.com Wed Feb 5 10:38:17 2020
From: thresh at nginx.com (Konstantin Pavlov)
Date: Wed, 5 Feb 2020 13:38:17 +0300
Subject: Using Yubikey/PKCS11 for Upstream Client Certificates
In-Reply-To: <7a74e961a5df3da5ae7fc7e5469f4af5.NginxMailingListEnglish@forum.nginx.org>
References: <9650714a9e61e7a0a6645e5dc1d06a02.NginxMailingListEnglish@forum.nginx.org>
<7a74e961a5df3da5ae7fc7e5469f4af5.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <26bcdd04-397e-7ecf-d881-704ba605d101@nginx.com>
Hi Erik,
I was able to use a Yubikey NEO to store a server key and utilize it
via the pkcs11 engine in nginx some time ago. I didn't check the
upstream connection, since I only cared about the front end.
And as I only had a Yubikey NEO instead of a proper HSM, it turned out
to be a crypto decelerator. :-)
I took some notes on implementing it at http://thre.sh/yub.txt, hope
this helps.
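A rough sketch of that kind of front-end setup is below; the exact key string depends on the engine build, so treat the id syntax as an assumption and see the notes above for the real details:

```nginx
# main (top-level) context: select the OpenSSL engine nginx should use
# for private key operations
ssl_engine pkcs11;

http {
    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/ssl/server.pem;
        # "engine:name:id" form from the nginx docs; the id string here
        # follows the OpenSC pkcs11 engine and is hypothetical
        ssl_certificate_key "engine:pkcs11:pkcs11:id=%01;type=private";
    }
}
```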
04.02.2020 20:14, erik wrote:
> Specifically, I'd like to know if the proxy_ssl_certificate and
> proxy_ssl_certificate_key directives can support RFC-7512 PKCS#11 URIs, or
> whether they're hardwired to be just local file paths.
>
> With my private key in hardware, I'm looking for the ability to point nginx
> to something like:
>
> location /upstream {
> proxy_pass https://backend.example.com;
> proxy_ssl_certificate /etc/nginx/client.pem;
> proxy_ssl_certificate_key
> 'pkcs11:type=private;token=some_token;object=username%40example.org';
> }
>
> Cheers,
> Erik van Zijst
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286922,286930#msg-286930
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
--
Konstantin Pavlov
https://www.nginx.com/
From nginx-forum at forum.nginx.org Wed Feb 5 12:04:09 2020
From: nginx-forum at forum.nginx.org (!QVAS!)
Date: Wed, 05 Feb 2020 07:04:09 -0500
Subject: Nginx's sub_filter does not replace paths in CSS and JS
Message-ID:
I have two nginx servers, let's call them nginx1 and nginx2. Nginx1 is
located in a separate infrastructure, processes requests, and also serves
static files for examplesite.com. Nginx2 is in a different infrastructure,
processes requests, and also serves static files for 172.22.3.15. The page
at 172.22.3.15 needs to open when examplesite.com/new is requested. I
realized that, for the page to be processed correctly, the paths to the
static files need to be rewritten.
To do this, the following location was added to the nginx1 configuration:
location /new {
proxy_pass http://172.22.3.15/;
}
The following settings were added to the nginx2 configuration file:
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Server $host;
proxy_redirect off;
server {
listen 80;
server_name 172.22.3.15;
client_max_body_size 32m;
charset utf-8;
location /public/ {
alias /usr/src/app/public/;
}
location /static/ {
alias /usr/src/app/static/;
}
location / {
proxy_pass http://172.22.3.16:8000;
proxy_redirect / /new;
sub_filter '"/' '"/new/';
sub_filter "'/" "'/new/";
sub_filter_types *;
sub_filter_once off;
sub_filter_last_modified on;
proxy_set_header Accept-Encoding "";
}
}
But the problem is that in the CSS and JS files themselves, the paths are
not rewritten. I tried disabling gzip in my configuration, but it did not
help. Nginx1 is version 1.17 and nginx2 is version 1.16.
P.S. I configure sub_filter on nginx2, since I do not have access to nginx1
(it is the customer's server). I recreated a similar situation on test
projects (by analogy with the production environment, naming the nginxes
nginx1 and nginx2) and set up sub_filter in the nginx1 configuration file (I
made no additional settings on nginx2). But I got the same problem.
The test setup of nginx1 looks like this:
location /test {
proxy_pass http://192.168.29.32/;
proxy_redirect / /test;
sub_filter '"/' '"/test/';
sub_filter "'/" "'/test/";
sub_filter_types *;
sub_filter_once off;
sub_filter_last_modified on;
proxy_set_header Accept-Encoding "";
}
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286944,286944#msg-286944
From mdounin at mdounin.ru Wed Feb 5 12:22:44 2020
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 5 Feb 2020 15:22:44 +0300
Subject: error code 494
In-Reply-To:
References:
<20200203164721.GW12894@mdounin.ru>
Message-ID: <20200205122244.GF12894@mdounin.ru>
Hello!
On Tue, Feb 04, 2020 at 10:03:23AM -0800, Frank Liu wrote:
> Thanks Maxim for the quick fix!
> Based on https://tools.ietf.org/html/rfc6585#section-5 , shall we by
> default return 431 instead of 400?
That's a separate question (and I don't think changing the code
from 494/400 to 431 is actually worth the effort).
--
Maxim Dounin
http://mdounin.ru/
From s.schabbach at fluent-software.de Wed Feb 5 13:12:49 2020
From: s.schabbach at fluent-software.de (s.schabbach at fluent-software.de)
Date: Wed, 5 Feb 2020 14:12:49 +0100
Subject: Adjust ajax urls
Message-ID: <002201d5dc25$f879f320$e96dd960$@fluent-software.de>
Dear,
I have a server where multiple web applications are running. The web applications run on several ports and are distinguished by a virtual path. For example, the web application in our scenario runs under the virtual path "/webui".
Each virtual path passes the request to a Kestrel web server using proxy_pass to localhost. I use the sub_filter directive to modify the response DOM (to change URLs, e.g. from "/" to "/webui"). I do the same with redirects. Everything works fine, except for AJAX requests. Inside some JavaScript files, I use jQuery to make an AJAX call to my web service and put the HTTP response somewhere. Of course these jQuery calls use fixed URLs, which need to be adjusted.
Since the AJAX requests do not know anything about the NGINX proxy, they do not know anything about the "/webui" path. So I need to find a solution to manipulate this JavaScript code:
$.ajax({url: "/Devices/Test", data: $("#test").serialize(), method: "POST", ...})
which has to be translated to
$.ajax({url: "/webui/Devices/Test", data: $("#test").serialize(), method: "POST", ...})
Does anyone know how to achieve that?
Kind regards,
Sebastian.
From r at roze.lv Wed Feb 5 15:12:02 2020
From: r at roze.lv (Reinis Rozitis)
Date: Wed, 5 Feb 2020 17:12:02 +0200
Subject: Adjust ajax urls
In-Reply-To: <002201d5dc25$f879f320$e96dd960$@fluent-software.de>
References: <002201d5dc25$f879f320$e96dd960$@fluent-software.de>
Message-ID: <003001d5dc36$9effda10$dcff8e30$@roze.lv>
> Since the AJAX requests do not know anything about the NGINX proxy, they do not know anything about the "/webui" path. So I need to find a solution to manipulate this JavaScript code:
If the javascript files are proxied the same way (from the origin server) as the application, you can use the same sub_filter approach; you just have to tune sub_filter_types (http://nginx.org/en/docs/http/ngx_http_sub_module.html#sub_filter_types) to include JavaScript (by default only 'text/html' is processed).
So something like (depending on what content type your upstream sends for js files):
sub_filter_types text/html text/js text/css application/javascript text/javascript;
p.s. note also that the sub_filter module doesn't support compressed content, so the upstream server shouldn't gzip/deflate the responses.
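Putting both points together, a sketch for the "/webui" case might look like the following (the Kestrel port and the exact filter strings are hypothetical):

```nginx
location /webui/ {
    proxy_pass http://127.0.0.1:5000/;     # Kestrel port is hypothetical

    # sub_filter cannot rewrite compressed bodies, so ask the
    # upstream for identity encoding
    proxy_set_header Accept-Encoding "";

    sub_filter_types text/html application/javascript text/javascript;
    sub_filter_once off;

    # rewrite the hard-coded AJAX url inside the served javascript
    sub_filter '"/Devices/' '"/webui/Devices/';
}
```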
rr
From nginx-forum at forum.nginx.org Wed Feb 5 19:43:38 2020
From: nginx-forum at forum.nginx.org (AshleyinSpain)
Date: Wed, 05 Feb 2020 14:43:38 -0500
Subject: Nginx Valid Referer - Access Control - Help Wanted
Message-ID:
I want to use the nginx referer module to restrict access to a URL
containing the directory /radio/,
only allowing access to it from a link with HTTP referer *.mysite.*
(anysubdomain.mysite.anytld), otherwise returning 403.
Is the referer module already loaded in nginx? If not, how do I load it?
I found various code examples to add to the conf file, put these together
and added them to the end of the conf file, but it doesn't work: entering a
URL directly into the browser still serves the content.
server {
location /radio/ {
valid_referers none blocked server_names ~\.mysite\.;
if ($invalid_referer) {
return 403;
}
}
}
Should just the location block be added inside an existing server
block?
My setup is nginx in docker on Ubuntu 18 on a Digital Ocean droplet.
Any help would be appreciated, been working on this for days
Best regards
Ashley
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286958,286958#msg-286958
From themadbeaker at gmail.com Wed Feb 5 21:40:16 2020
From: themadbeaker at gmail.com (J.R.)
Date: Wed, 5 Feb 2020 15:40:16 -0600
Subject: Nginx Valid Referer - Access Control - Help Wanted
Message-ID:
> I found various code examples to add to the conf file and coupled this
> together and added it to the end of the conf file, but it doesn't work,
> entering a URL directly into the browser serves it
> server {
> location /radio/ {
> valid_referers none blocked server_names ~\.mysite\.;
> if ($invalid_referer) { return 403; }
> }
> }
If you enter a URL directly into your browser, there will be no
referrer. You have "none" set as a valid value, which is why the
request is served...
If you only want to accept requests with your server's name in the
referrer field, remove "none" & "blocked"...
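In other words, something along these lines (untested sketch):

```nginx
location /radio/ {
    # no "none"/"blocked": direct browser requests (empty referer)
    # are now rejected too
    valid_referers server_names ~\.mysite\.;

    if ($invalid_referer) {
        return 403;
    }
}
```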
From nginx-forum at forum.nginx.org Thu Feb 6 07:39:47 2020
From: nginx-forum at forum.nginx.org (erik)
Date: Thu, 06 Feb 2020 02:39:47 -0500
Subject: Using Yubikey/PKCS11 for Upstream Client Certificates
In-Reply-To: <7a74e961a5df3da5ae7fc7e5469f4af5.NginxMailingListEnglish@forum.nginx.org>
References: <9650714a9e61e7a0a6645e5dc1d06a02.NginxMailingListEnglish@forum.nginx.org>
<7a74e961a5df3da5ae7fc7e5469f4af5.NginxMailingListEnglish@forum.nginx.org>
Message-ID:
According to the documentation
(http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ssl_certificate_key),
proxy_ssl_certificate_key supports syntax for ssl-engine specific backends:
> The value engine:name:id can be specified instead of the file (1.7.9),
which loads a secret key with a specified id from
> the OpenSSL engine name.
which implies that at least for the private key we should be able to
configure a pluggable ssl engine backend.
I've got my private key loaded in a Yubikey and have the pkcs11 engine loaded
in openssl:
$ openssl engine -t pkcs11
(pkcs11) pkcs11 engine
[ available ]
However, when I specify:
location /upstream {
proxy_pass https://10.16.1.21:443/;
proxy_ssl_certificate /etc/nginx/ssl/cert.pem;
proxy_ssl_certificate_key
"engine:pkcs11:pkcs11:id=%01;type=private";
}
and hit the endpoint with debug error logging turned on, it fails during the
upstream TLS handshake:
2020/02/05 07:40:28 [debug] 25199#25199: *1 http upstream request:
"/upstream?"
2020/02/05 07:40:28 [debug] 25199#25199: *1 http upstream send request
handler
2020/02/05 07:40:28 [debug] 25199#25199: *1 malloc: 000055AB2AB745C0:72
2020/02/05 07:40:28 [debug] 25199#25199: *1 set session:
0000000000000000
2020/02/05 07:40:28 [debug] 25199#25199: *1 tcp_nodelay
2020/02/05 07:40:28 [debug] 25199#25199: *1 SSL_do_handshake: -1
2020/02/05 07:40:28 [debug] 25199#25199: *1 SSL_get_error: 2
2020/02/05 07:40:28 [debug] 25199#25199: *1 SSL handshake handler: 0
2020/02/05 07:40:28 [debug] 25199#25199: *1 SSL_do_handshake: -1
2020/02/05 07:40:28 [debug] 25199#25199: *1 SSL_get_error: 5
2020/02/05 07:40:28 [error] 25199#25199: *1 peer closed connection in
SSL handshake (104: Connection reset by peer) while SSL handshaking to
upstream, client: ::1, server: _, request: "GET /upstream HTTP/1.1",
upstream: "https://10.16.1.21:443/", host: "localhost"
Cheers,
Erik van Zijst
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286922,286957#msg-286957
From vbart at nginx.com Thu Feb 6 17:12:02 2020
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Thu, 06 Feb 2020 20:12:02 +0300
Subject: Unit 1.15.0 release
Message-ID: <5690570.egQRmoEAmV@vbart-workstation>
Hi,
I'm glad to announce a new release of NGINX Unit.
This is mostly a bugfix release that eliminates a few nasty issues.
Also, it addresses incompatibilities caused by a minor API change in
the recently released major version of Ruby.
Changes with Unit 1.15.0 06 Feb 2020
*) Change: extensions of dynamically requested PHP scripts were
restricted to ".php".
*) Feature: compatibility with Ruby 2.7.
*) Bugfix: segmentation fault might have occurred in the router process
with multiple application processes under load; the bug had appeared
in 1.14.0.
*) Bugfix: receiving request body over TLS connection might have
stalled.
More features are planned for the next release that is expected in the
beginning of March. Among them are basic load balancing in the proxy module
and "try_files"-like functionality for more sophisticated request routing.
Stay tuned!
wbr, Valentin V. Bartenev
From francis at daoine.org Thu Feb 6 19:54:11 2020
From: francis at daoine.org (Francis Daly)
Date: Thu, 6 Feb 2020 19:54:11 +0000
Subject: Nginx's sub_filter does not replace paths in CSS and JS
In-Reply-To:
References:
Message-ID: <20200206195411.GZ26683@daoine.org>
On Wed, Feb 05, 2020 at 07:04:09AM -0500, !QVAS! wrote:
Hi there,
> location /new {
> proxy_pass http://172.22.3.15/;
> }
> server {
> listen 80;
> server_name 172.22.3.15;
>
> location / {
> proxy_pass http://172.22.3.16:8000;
> proxy_redirect / /new;
> sub_filter '"/' '"/new/';
> sub_filter "'/" "'/new/";
> sub_filter_types *;
> sub_filter_once off;
> sub_filter_last_modified on;
> proxy_set_header Accept-Encoding "";
> }
> }
> But the problem is that in the CSS and JS files themselves, the paths are
> not changed.
When I use your config, it seems to work for me.
Can you show one request and the matching incorrect response?
Small is good; something like
curl -v http://examplesite.com/new/a.js
should work -- the matching file a.js in the 172.22.3.16:8000 document
root should have suitable content.
curl -v http://172.22.3.15/a.js
curl -v http://172.22.3.16:8000/a.js
might also be useful, to compare.
f
--
Francis Daly francis at daoine.org
From nginx-forum at forum.nginx.org Thu Feb 6 23:02:50 2020
From: nginx-forum at forum.nginx.org (AshleyinSpain)
Date: Thu, 06 Feb 2020 18:02:50 -0500
Subject: Nginx Valid Referer - Access Control - Help Wanted
In-Reply-To:
References:
Message-ID: <729db2cb000dc5a629a132d2dd563f4c.NginxMailingListEnglish@forum.nginx.org>
J.R. Wrote:
-------------------------------------------------------
> > I found various code examples to add to the conf file and coupled
> this
> > together and added it to the end of the conf file, but it doesn't
> work,
> > entering a URL directly into the browser serves it
>
> > server {
> > location /radio/ {
> > valid_referers none blocked server_names ~\.mysite\.;
> > if ($invalid_referer) { return 403; }
> > }
> > }
>
> If you enter a URL directly into your browser, there will be no
> referrer. You have "none" set as a valid value, thus that is why it is
> working...
>
> If you only want to accept requests with your server's name in the
> referrer field, remove "none" & "blocked"...
I deleted the 'none' and 'blocked', but it made no difference: direct
access to the URL is still not blocked.
I tried adding it in its own block and at the end of an existing block;
neither worked.
Is the location /radio/ part OK?
I am trying to block direct access to any URL with a directory /radio/
The URLs look like sub.domain.tld/radio/1234/mytrack.mp3?45678901
I need the URL to be served only if a link on *.mysite.* is clicked, i.e.
the track is only played through an HTML5 audio player on mysite.
Does anyone have any more ideas?
Best regards
Ashley
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286958,286965#msg-286965
From nginx-forum at forum.nginx.org Thu Feb 6 23:46:39 2020
From: nginx-forum at forum.nginx.org (erik)
Date: Thu, 06 Feb 2020 18:46:39 -0500
Subject: Using Yubikey/PKCS11 for Upstream Client Certificates
In-Reply-To:
References: <9650714a9e61e7a0a6645e5dc1d06a02.NginxMailingListEnglish@forum.nginx.org>
<7a74e961a5df3da5ae7fc7e5469f4af5.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <1ec676a98373270966e97175150a575e.NginxMailingListEnglish@forum.nginx.org>
I figured it out and thought I'd post back for anyone else looking at this
post in the future.
My problem had nothing to do with the PKCS#11 engine. It persisted when I
pointed proxy_ssl_certificate_key directly at the non-encrypted,
password-less rsa key file.
Instead, the problem was SNI. By default, Nginx uses the inbound request's
Host header as the upstream SNI name. Since I was hitting Nginx with curl on
localhost, it ended up sending "localhost" as the upstream virtual host.
It's even in the debug error log:
2020/02/05 07:40:28 [error] 25199#25199: *1 peer closed connection in SSL
handshake (104: Connection reset by peer) while SSL handshaking to upstream,
client: ::1, server: _, request: "GET /upstream HTTP/1.1", upstream:
"https://10.16.1.21:443/", host: "localhost"
Since the upstream server does not have localhost as its SNI name, the TLS
connection failed to get established. By fixing the value for SNI it went
through:
proxy_ssl_server_name on;
proxy_ssl_name upstream.example.org:443;
I had to do a similar thing for the upstream HTTP Host header, which was
also being set to the value of the incoming request (again, localhost for
me):
proxy_set_header Host upstream.example.org:443;
Now to get the full PKCS#11 URI for the Yubikey I ran:
$ p11tool --provider /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so
--list-privkeys --login
Token 'PIV Card Holder pin (PIV_II)' with URL
'pkcs11:model=PKCS%2315%20emulated;manufacturer=piv_II;serial=00000000;token=PIV%20Card%20Holder%20pin%20%28PIV_II%29'
requires user PIN
Enter PIN:
Object 0:
URL:
pkcs11:model=PKCS%2315%20emulated;manufacturer=piv_II;serial=00000000;token=PIV%20Card%20Holder%20pin%20%28PIV_II%29;id=%01;object=PIV%20AUTH%20key;type=private
Type: Private key
Label: PIV AUTH key
Flags: CKA_WRAP/UNWRAP; CKA_PRIVATE; CKA_NEVER_EXTRACTABLE;
CKA_SENSITIVE;
ID: 01
Prepending that with "engine:pkcs11:" and plugging that into
proxy_ssl_certificate_key:
proxy_ssl_certificate /etc/nginx/ssl/cert.pem;
proxy_ssl_certificate_key
"engine:pkcs11:pkcs11:model=PKCS%2315%20emulated;manufacturer=piv_II;serial=00000000;token=PIV%20Card%20Holder%20pin%20%28PIV_II%29;id=%01;object=PIV%20AUTH%20key;type=private;pin-value=123456";
And that made the whole thing work. Note that the client certificate itself
is still read from a file, as proxy_ssl_certificate does not support PKCS#11
URIs.
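For future readers, the working pieces described in this message can be combined into one location block. This is a sketch, not a verified config: the upstream name is the example one from this thread, and the long PKCS#11 URI is abbreviated to its distinguishing fields.

```nginx
location /upstream {
    proxy_pass https://10.16.1.21:443/;

    # Send the upstream's real name for SNI and in the Host header,
    # instead of whatever hostname the client used to reach the proxy.
    proxy_ssl_server_name on;
    proxy_ssl_name upstream.example.org;
    proxy_set_header Host upstream.example.org:443;

    # Client certificate from a file; the private key is loaded
    # through the OpenSSL pkcs11 engine (URI abbreviated here).
    proxy_ssl_certificate     /etc/nginx/ssl/cert.pem;
    proxy_ssl_certificate_key "engine:pkcs11:pkcs11:id=%01;type=private;pin-value=123456";
}
```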
I can now access the remote TLS server through the local proxy:
$ curl http://localhost/foo/bar
Erik van Zijst
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286922,286966#msg-286966
From nginx-forum at forum.nginx.org Thu Feb 6 23:48:16 2020
From: nginx-forum at forum.nginx.org (erik)
Date: Thu, 06 Feb 2020 18:48:16 -0500
Subject: Using Yubikey/PKCS11 for Upstream Client Certificates
In-Reply-To: <26bcdd04-397e-7ecf-d881-704ba605d101@nginx.com>
References: <26bcdd04-397e-7ecf-d881-704ba605d101@nginx.com>
Message-ID:
Thanks, I got it working in the end though. I realize a Yubikey isn't
terribly performant but for my particular use case I don't expect that to be
a problem.
Cheers,
Erik
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286922,286967#msg-286967
From francis at daoine.org Fri Feb 7 00:09:02 2020
From: francis at daoine.org (Francis Daly)
Date: Fri, 7 Feb 2020 00:09:02 +0000
Subject: Nginx Valid Referer - Access Control - Help Wanted
In-Reply-To: <729db2cb000dc5a629a132d2dd563f4c.NginxMailingListEnglish@forum.nginx.org>
References:
<729db2cb000dc5a629a132d2dd563f4c.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20200207000902.GA26683@daoine.org>
On Thu, Feb 06, 2020 at 06:02:50PM -0500, AshleyinSpain wrote:
Hi there,
> > > server {
> > > location /radio/ {
> > > valid_referers none blocked server_names ~\.mysite\.;
> > > if ($invalid_referer) { return 403; }
> > > }
> > > }
> I deleted the 'none' and 'blocked' and no difference still not blocking
> direct access to the URL
>
> Tried adding it in its own block and adding it to the end of an existing
> block neither worked
>
> Is the location /radio/ part ok
>
> I am trying to block direct access to any URL with a directory /radio/
>
> The URLs look like sub.domain.tld/radio/1234/mytrack.mp3?45678901
In nginx, one request is handled in one location.
If /radio/ is the location that you configured to handle this request,
then the config should apply.
If you have, for example, "location ~ mp3", then *that* would probably
be the location that is configured to handle this request (and so that
is where this "return 403;" should be).
You could try changing the line to be "location ^~ /radio/ {", but
without knowing your full config, it is hard to know if that will fix
things or break them.
http://nginx.org/r/location
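A sketch of that suggestion, assuming the config also contains a regex location (such as one matching mp3 files) that would otherwise win:

```nginx
# "^~" makes this prefix match final: regex locations like
# "location ~ \.mp3$" are no longer consulted for /radio/... requests,
# so the referer check below is guaranteed to apply to them.
location ^~ /radio/ {
    valid_referers none blocked server_names ~\.mysite\.;
    if ($invalid_referer) { return 403; }
}
```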
> I need it so the URL is only served if a link on *.mysite.* is clicked ie
> the track is only played through an html5 audio player on mysite
That is not a thing that can be done reliably.
If "unreliable" is good enough for you, then carry on. Otherwise, come
up with a new requirement that can be done.
Cheers,
f
--
Francis Daly francis at daoine.org
From lists at lazygranch.com Fri Feb 7 12:09:38 2020
From: lists at lazygranch.com (lists)
Date: Fri, 07 Feb 2020 04:09:38 -0800
Subject: Nginx Valid Referer - Access Control - Help Wanted
In-Reply-To: <729db2cb000dc5a629a132d2dd563f4c.NginxMailingListEnglish@forum.nginx.org>
Message-ID:
If you are going to block one thing, eventually you will block two, then three, etc.
I suggest learning how to use "map".
https://www.edmondscommerce.co.uk/handbook/Servers/Config/Nginx/Blocking-URLs-in-batch-using-nginx-map/
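A minimal sketch of the map approach (the referer pattern is taken from the quoted config and is illustrative; adapt it to your own hosts):

```nginx
# Maps the Referer header to a flag once, instead of accumulating
# valid_referers checks in every location that needs blocking.
map $http_referer $deny_hotlink {
    default        1;  # everything else is blocked
    ~\.mysite\.    0;  # links clicked on *.mysite.* are allowed
}

server {
    location /radio/ {
        if ($deny_hotlink) { return 403; }
    }
}
```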
-------- Original Message --------
From: nginx-forum at forum.nginx.org
Sent: February 6, 2020 3:03 PM
To: nginx at nginx.org
Reply-to: nginx at nginx.org
Subject: Re: Nginx Valid Referer - Access Control - Help Wanted
J.R. Wrote:
-------------------------------------------------------
> > I found various code examples to add to the conf file and coupled
> this
> > together and added it to the end of the conf file, but it doesn't
> work,
> > entering a URL directly into the browser serves it
>
> > server {
> >     location /radio/ {
> >         valid_referers none blocked server_names ~\.mysite\.;
> >         if ($invalid_referer) { return 403; }
> >     }
> > }
>
> If you enter a URL directly into your browser, there will be no
> referrer. You have "none" set as a valid value, thus that is why it is
> working...
>
> If you only want to accept requests with your server's name in the
> referrer field, remove "none" & "blocked"...
I deleted the 'none' and 'blocked' and no difference still not blocking
direct access to the URL
Tried adding it in its own block and adding it to the end of an existing
block neither worked
Is the location /radio/ part ok
I am trying to block direct access to any URL with a directory /radio/
The URLs look like sub.domain.tld/radio/1234/mytrack.mp3?45678901
I need it so the URL is only served if a link on *.mysite.* is clicked ie
the track is only played through an html5 audio player on mysite
Anyone have anymore ideas
Best regards
Ashley
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286958,286965#msg-286965
From aleongalix at gmail.com Fri Feb 7 14:07:50 2020
From: aleongalix at gmail.com (Antony)
Date: Fri, 7 Feb 2020 15:07:50 +0100
Subject: Error during nginx restart
Message-ID:
Hello,
I'm currently in charge of installing Lemonldap on a nginx infrastructure.
In the official Lemonldap installation documentation, there is this
command: "ln -s /etc/lemonldap-ng/nginx-lmlog.conf
/etc/nginx/conf.d/llng-lmlog.conf".
The problem is that when the symbolic link between the two files is
created, if I want to restart nginx this is impossible because an error
appears: duplicate "log_format" name "lm_combined" in
/etc/lemonldap-ng/nginx-lmlog.conf
Do you have a solution for me?
Have a nice day,
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From vbl5968 at gmail.com Sat Feb 8 17:21:34 2020
From: vbl5968 at gmail.com (Vincent Blondel)
Date: Sat, 8 Feb 2020 18:21:34 +0100
Subject: =?UTF-8?B?4oCYbmd4X2xpYmNfY3J5cHTigJkgZXJyb3I6IGltcGxpY2l0IGRlY2xhcmF0aW9u?=
=?UTF-8?B?IG9mIGZ1bmN0aW9uIOKAmGNyeXB04oCZ?=
Message-ID:
Hi all,
anybody know why make nginx 1.17.8 on cygwin fails with ...
$ make
make -f objs/Makefile
make[1]: Entering directory '/home/devel/nginx-1.17.8'
cc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g
-I src/core -I src/event -I src/event/modules -I src/os/unix -I objs \
-o objs/src/os/unix/ngx_user.o \
src/os/unix/ngx_user.c
src/os/unix/ngx_user.c: In function 'ngx_libc_crypt':
src/os/unix/ngx_user.c:53:13: error: implicit declaration of function
'crypt'; did you mean 'creat'? [-Werror=implicit-function-declaration]
value = crypt((char *) key, (char *) salt);
^~~~~
creat
src/os/unix/ngx_user.c:53:11: error: assignment makes pointer from integer
without a cast [-Werror=int-conversion]
value = crypt((char *) key, (char *) salt);
^
cc1: all warnings being treated as errors
make[1]: *** [objs/Makefile:788: objs/src/os/unix/ngx_user.o] Error 1
make[1]: Leaving directory '/home/devel/nginx-1.17.8'
make: *** [Makefile:8: build] Error 2
Thanks in advance for your help :-)
From francis at daoine.org Sun Feb 9 09:34:02 2020
From: francis at daoine.org (Francis Daly)
Date: Sun, 9 Feb 2020 09:34:02 +0000
Subject: Error during nginx restart
In-Reply-To:
References:
Message-ID: <20200209093402.GB26683@daoine.org>
On Fri, Feb 07, 2020 at 03:07:50PM +0100, Antony wrote:
Hi there,
> In the official Lemonldap installation documentation, there is this
> command: "ln -s /etc/lemonldap-ng/nginx-lmlog.conf
> /etc/nginx/conf.d/llng-lmlog.conf".
Can you give a link to that documentation; and maybe to the content of
the config files?
> The problem is that when the symbolic link between the two files is
> created, if I want to restart nginx this is impossible because an error
> appears: duplicate "log_format" name "lm_combined" in
> /etc/lemonldap-ng/nginx-lmlog.conf
>
> Do you have a solution for me?
My guess is that one part of a config file does "include
/etc/lemonldap-ng/nginx-lmlog.conf"; and another part does "include
/etc/nginx/conf.d/*.conf".
With the symlink in place, the same contents are included twice; and
they have a piece that must only exist once.
If that is the case, then omitting the symlink should work -- the explicit
"include /etc/lemonldap-ng/nginx-lmlog.conf" should get the content in
the right place once.
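If that guess is right, the layout looks something like this (a hypothetical nginx.conf fragment, reconstructed from the error message):

```nginx
http {
    # Explicit include: defines "log_format lm_combined" once.
    include /etc/lemonldap-ng/nginx-lmlog.conf;

    # Wildcard include: the llng-lmlog.conf symlink pulls the very same
    # file in a second time, producing the duplicate "log_format" error.
    include /etc/nginx/conf.d/*.conf;
}
```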
With a working config file, "nginx -T" can help -- perhaps with "|
grep include" or "| grep '^# con'" -- to see what files are read.
Without a working config file, something like "grep include nginx.conf"
can show the first set of extra files that will be read. Duplicates
there are probably bad.
Good luck with it,
f
--
Francis Daly francis at daoine.org
From francis at daoine.org Sun Feb 9 09:46:35 2020
From: francis at daoine.org (Francis Daly)
Date: Sun, 9 Feb 2020 09:46:35 +0000
Subject: =?UTF-8?B?UmU6IOKAmG5neF9saWJjX2NyeXB04oCZIGVycm9yOiBpbXBsaWNpdCBkZWNsYXJh?=
=?UTF-8?B?dGlvbiBvZiBmdW5jdGlvbiDigJhjcnlwdOKAmQ==?=
In-Reply-To:
References:
Message-ID: <20200209094635.GC26683@daoine.org>
On Sat, Feb 08, 2020 at 06:21:34PM +0100, Vincent Blondel wrote:
Hi there,
> anybody know why make nginx 1.17.8 on cygwin fails with ...
> src/os/unix/ngx_user.c: In function 'ngx_libc_crypt':
> src/os/unix/ngx_user.c:53:13: error: implicit declaration of function
> 'crypt'; did you mean 'creat'? [-Werror=implicit-function-declaration]
On a Linux system here, "man crypt" includes
"""
SYNOPSIS
#define _XOPEN_SOURCE
#include <unistd.h>
char *crypt(const char *key, const char *salt);
"""
And "implicit declaration of function" means that that explicit declaration
did not happen.
So - for some reason, at this stage in your build system, that "#include"
is not present; or the suitable "#define" is not in place; or maybe it
is reading a different unistd.h that does not include "crypt" at all.
That's not an answer; but maybe points you where to look more.
Perhaps your cygwin is different from this; perhaps your "configure"
log or output contains something interesting about unistd.h or crypt.h?
Mine shows, for example,
checking for crypt() ... not found
checking for crypt() in libcrypt ... found
There is more information (probably) in objs/autoconf.err
Good luck with it,
f
--
Francis Daly francis at daoine.org
From vbl5968 at gmail.com Sun Feb 9 13:08:30 2020
From: vbl5968 at gmail.com (Vincent Blondel)
Date: Sun, 9 Feb 2020 14:08:30 +0100
Subject: =?UTF-8?B?UmU6IOKAmG5neF9saWJjX2NyeXB04oCZIGVycm9yOiBpbXBsaWNpdCBkZWNsYXJh?=
=?UTF-8?B?dGlvbiBvZiBmdW5jdGlvbiDigJhjcnlwdOKAmQ==?=
In-Reply-To: <20200209094635.GC26683@daoine.org>
References:
<20200209094635.GC26683@daoine.org>
Message-ID:
Thank you for the support ... libcrypt-devel was the package missing in my
cygwin install. This is now working :-)
On Sun, Feb 9, 2020 at 10:46 AM Francis Daly wrote:
> On Sat, Feb 08, 2020 at 06:21:34PM +0100, Vincent Blondel wrote:
>
> Hi there,
>
> > anybody know why make nginx 1.17.8 on cygwin fails with ...
>
> > src/os/unix/ngx_user.c: In function 'ngx_libc_crypt':
> > src/os/unix/ngx_user.c:53:13: error: implicit declaration of function
> > 'crypt'; did you mean 'creat'? [-Werror=implicit-function-declaration]
>
> On a Linux system here, "man crypt" includes
>
> """
> SYNOPSIS
> #define _XOPEN_SOURCE
> #include <unistd.h>
>
> char *crypt(const char *key, const char *salt);
> """
>
> And "implicit declaration of function" means that that explicit declaration
> did not happen.
>
> So - for some reason, at this stage in your build system, that "#include"
> is not present; or the suitable "#define" is not in place; or maybe it
> is reading a different unistd.h that does not include "crypt" at all.
>
> That's not an answer; but maybe points you where to look more.
>
> Perhaps your cygwin is different from this; perhaps your "configure"
> log or output contains something interesting about unistd.h or crypt.h?
>
> Mine shows, for example,
>
> checking for crypt() ... not found
> checking for crypt() in libcrypt ... found
>
> There is more information (probably) in objs/autoconf.err
>
> Good luck with it,
>
> f
> --
> Francis Daly francis at daoine.org
From nginx-forum at forum.nginx.org Tue Feb 11 19:22:50 2020
From: nginx-forum at forum.nginx.org (MarcoI)
Date: Tue, 11 Feb 2020 14:22:50 -0500
Subject: net::ERR_SSL_PROTOCOL_ERROR
Message-ID: <93cb153f0c7f9c258be66fc790403fd1.NginxMailingListEnglish@forum.nginx.org>
Today I issued and installed SSL certificates for my website.
This is the rating assigned by https://www.ssllabs.com/ssltest/analyze.html
:
https://drive.google.com/open?id=1-Fb4h1dmdJ8kN68JxKROWwu4ezGmjm6R
This is the result of https://check-your-website.server-daten.de/ which
indicates "only" content problems: mixed content, missing files, but
nothing related to SSL_PROTOCOL
https://drive.google.com/open?id=19i-AwXwgf8tBY9p0srfHX5scN5Q0j-UH
When I connect to the local IP address, everything goes smoothly with no
errors:
- after stopping nginx server:
https://drive.google.com/open?id=1k4hmYpgRwCW6NyhK7ZoK39-giF9MfPAY
and
- also after restarting nginx server:
(base) marco at pc01:~$ sudo systemctl start nginx
(base) marco at pc01:~$ sudo systemctl reload nginx
(base) marco at pc01:~$ sudo systemctl status nginx
● nginx.service - A high performance web server and a reverse proxy
server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor
preset: en
Active: active (running) since Tue 2020-02-11 19:06:58 CET; 10s ago
Docs: man:nginx(8)
Process: 6124 ExecStop=/sbin/start-stop-daemon --quiet --stop --retry
QUIT/5 -
Process: 8843 ExecReload=/usr/sbin/nginx -g daemon on; master_process
on; -s r
Process: 8779 ExecStart=/usr/sbin/nginx -g daemon on; master_process
on; (code
Process: 8770 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on;
master_process
Main PID: 8784 (nginx)
Tasks: 9 (limit: 4915)
CGroup: /system.slice/nginx.service
├─8784 nginx: master process /usr/sbin/nginx -g daemon
on; master_pro
├─8844 nginx: worker process
├─8846 nginx: worker process
├─8847 nginx: worker process
├─8849 nginx: worker process
├─8850 nginx: worker process
├─8851 nginx: worker process
├─8852 nginx: worker process
└─8853 nginx: worker process
the output is fine:
https://drive.google.com/open?id=1-Sz1udhZfrM9bGaIhImORRnwRznXihK7
But when I connect to my website through its domain name I get
net::ERR_SSL_PROTOCOL_ERROR :
https://drive.google.com/open?id=10MYySDKhPx9L-QucqzxN5NTratJEOJZR
This is my /etc/nginx/nginx.conf :
user www-data;
worker_processes auto;
pid /run/nginx.pid;
#include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
#ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref:
POODLE
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json
application/javascript text/xml application/xml application/xml+rss
text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
#include /etc/nginx/sites-enabled/*;
}
And this is my /etc/nginx/conf.d/default.conf :
server {
listen 443 ssl http2 default_server;
server_name ggc.world;
ssl_certificate /etc/letsencrypt/live/ggc.world/fullchain.pem; #
managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/ggc.world/privkey.pem; #
managed by Certbot
ssl_trusted_certificate /etc/letsencrypt/live/ggc.world/chain.pem;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
ssl_session_timeout 5m;
#ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers EECDH+CHACHA20:EECDH+CHACHA20-draft:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:50m;
ssl_stapling on;
ssl_stapling_verify on;
access_log /var/log/nginx/ggcworld-access.log combined;
add_header Strict-Transport-Security "max-age=31536000";
location = /favicon.ico { access_log off; log_not_found off; }
location / {
proxy_pass http://127.0.0.1:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
server {
listen 80 default_server;
listen [::]:80 default_server;
error_page 497 https://$host:$server_port$request_uri;
server_name www.ggc.world;
return 301 https://$server_name$request_uri;
access_log /var/log/nginx/ggcworld-access.log combined;
add_header Strict-Transport-Security "max-age=31536000";
location = /favicon.ico { access_log off; log_not_found off; }
location / {
proxy_pass http://127.0.0.1:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
upstream websocket {
ip_hash;
server localhost:3000;
}
server {
listen 81;
server_name ggc.world www.ggc.world;
#location / {
location ~ ^/(websocket|websocket\/socket-io) {
proxy_pass http://127.0.0.1:4201;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
}
}
What is going on? What might be the causes of this SSL_PROTOCOL_ERROR?
How can I solve it? What do I have to modify in
/etc/nginx/conf.d/default.conf?
Looking forward to your kind help.
Marco
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286991,286991#msg-286991
From nginx-forum at forum.nginx.org Tue Feb 11 19:28:45 2020
From: nginx-forum at forum.nginx.org (MarcoI)
Date: Tue, 11 Feb 2020 14:28:45 -0500
Subject: net::ERR_CONNECTION_REFUSED . How to correctly configure Nginx
with Socket.io?
In-Reply-To: <20200202110624.GX26683@daoine.org>
References: <20200202110624.GX26683@daoine.org>
Message-ID: <393bd3e4292e5a4c5decd0f09feb7511.NginxMailingListEnglish@forum.nginx.org>
Hi Francis,
I "solved" this problem by installing the Desktop version of Ubuntu 18.04, as I
described here:
https://askubuntu.com/questions/1207812/webapp-fails-with-neterr-connection-refused-with-ubuntu-18-04-4-server-edition
Now I have a different, but maybe similar, problem, which I described in
this post:
https://forum.nginx.org/read.php?2,286991
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286850,286992#msg-286992
From nginx-forum at forum.nginx.org Tue Feb 11 19:31:52 2020
From: nginx-forum at forum.nginx.org (MarcoI)
Date: Tue, 11 Feb 2020 14:31:52 -0500
Subject: net::ERR_CONNECTION_REFUSED . How to correctly configure Nginx
with Socket.io?
In-Reply-To: <20200202110624.GX26683@daoine.org>
References: <20200202110624.GX26683@daoine.org>
Message-ID: <8e8613c95500765bd7cf33c4e0b83206.NginxMailingListEnglish@forum.nginx.org>
Hi Francis,
I "solved" this problem by installing the Desktop version of Ubuntu 18.04, as I
described here:
https://askubuntu.com/questions/1207812/webapp-fails-with-neterr-connection-refused-with-ubuntu-18-04-4-server-edition
Now I've got a different, but maybe similar, problem, which I described in
this post in the Nginx Forum:
https://forum.nginx.org/read.php?2,286991
Marco
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286850,286993#msg-286993
From themadbeaker at gmail.com Wed Feb 12 00:14:32 2020
From: themadbeaker at gmail.com (J.R.)
Date: Tue, 11 Feb 2020 18:14:32 -0600
Subject: net::ERR_SSL_PROTOCOL_ERROR
Message-ID:
> But when I connect to my website's through website name I get
> net::ERR_SSL_PROTOCOL_ERROR :
Guessing from the "Certificate Common Name Invalid", it is because you
are connecting with "localhost" and "192.168.1.7" whereas your
certificate has the actual DNS hostname...
From nginx-forum at forum.nginx.org Wed Feb 12 06:53:31 2020
From: nginx-forum at forum.nginx.org (MarcoI)
Date: Wed, 12 Feb 2020 01:53:31 -0500
Subject: net::ERR_SSL_PROTOCOL_ERROR
In-Reply-To:
References:
Message-ID: <2b8800760c62f61d32ab70546f50855a.NginxMailingListEnglish@forum.nginx.org>
Hi!,
I do not understand what I should modify.
If I should use ggc.world when connecting with the browser, this is what I
already do:
https://drive.google.com/open?id=10MYySDKhPx9L-QucqzxN5NTratJEOJZR
If instead I should put ggc.world instead of localhost (127.0.0.1) in
/etc/nginx/conf.d/default.conf , this is the result of my trial:
/etc/nginx/conf.d/default.conf :
server {
listen 443 ssl http2 default_server;
server_name ggc.world;
ssl_certificate /etc/letsencrypt/live/ggc.world/fullchain.pem; #
managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/ggc.world/privkey.pem; #
managed by Certbot
ssl_trusted_certificate /etc/letsencrypt/live/ggc.world/chain.pem;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
ssl_session_timeout 5m;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers EECDH+CHACHA20:EECDH+CHACHA20-draft:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:50m;
#ssl_stapling on;
#ssl_stapling_verify on;
access_log /var/log/nginx/ggcworld-access.log combined;
add_header Strict-Transport-Security "max-age=31536000";
location = /favicon.ico { access_log off; log_not_found off; }
location / {
#proxy_pass http://127.0.0.1:8080;
proxy_pass http://ggc.world:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
server {
listen 80 default_server;
listen [::]:80 default_server;
error_page 497 https://$host:$server_port$request_uri;
server_name www.ggc.world;
return 301 https://$server_name$request_uri;
access_log /var/log/nginx/ggcworld-access.log combined;
add_header Strict-Transport-Security "max-age=31536000";
location = /favicon.ico { access_log off; log_not_found off; }
location / {
#proxy_pass http://127.0.0.1:8080;
proxy_pass http://ggc.world:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
upstream websocket {
ip_hash;
#server localhost:3000;
server ggc.world:3000;
}
server {
listen 81;
server_name ggc.world www.ggc.world;
#location / {
location ~ ^/(websocket|websocket\/socket-io) {
#proxy_pass http://127.0.0.1:4201;
proxy_pass http://ggc.world:4201;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
}
}
I get this output:
https://drive.google.com/open?id=1zUSN9wP6h9svizahMjhhFFbY0CLN71Aw
Can you please explain this to me?
Thank you for your kind help
Marco
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286991,286996#msg-286996
From r at roze.lv Wed Feb 12 08:03:27 2020
From: r at roze.lv (Reinis Rozitis)
Date: Wed, 12 Feb 2020 10:03:27 +0200
Subject: net::ERR_SSL_PROTOCOL_ERROR
In-Reply-To: <2b8800760c62f61d32ab70546f50855a.NginxMailingListEnglish@forum.nginx.org>
References:
<2b8800760c62f61d32ab70546f50855a.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <001801d5e17a$e87159a0$b9540ce0$@roze.lv>
> Hi!,
> I do not understand what should I modify.
The problem is your backend application (I assume a node app) which listens on the 8080 port. While nginx is doing everything right, the app responds and constructs the URLs using the internal IP and/or 'localhost'.
Depending on what the app uses for the URLs, you could try to add:
proxy_set_header Host $host;
in the location / { proxy_pass ... } block (for some reason you have it only in the server block which listens on port 81).
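Put together, the resulting block would look roughly like this (a sketch based on the config quoted earlier; the proxy_pass target is whichever backend you already use):

```
location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    # forward the Host header the client sent, instead of the proxy_pass host
    proxy_set_header Host $host;
}
```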
rr
From francis at daoine.org Wed Feb 12 08:13:02 2020
From: francis at daoine.org (Francis Daly)
Date: Wed, 12 Feb 2020 08:13:02 +0000
Subject: net::ERR_CONNECTION_REFUSED . How to correctly configure Nginx
with Socket.io?
In-Reply-To: <393bd3e4292e5a4c5decd0f09feb7511.NginxMailingListEnglish@forum.nginx.org>
References: <20200202110624.GX26683@daoine.org>
<393bd3e4292e5a4c5decd0f09feb7511.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20200212081302.GD26683@daoine.org>
On Tue, Feb 11, 2020 at 02:28:45PM -0500, MarcoI wrote:
Hi there,
> I "solved" this problem installing the Desktop version of Ubuntu 18.04, as I
> described here:
> https://askubuntu.com/questions/1207812/webapp-fails-with-neterr-connection-refused-with-ubuntu-18-04-4-server-edition
I don't think that's a solution; but if you now have a working system,
then it's all good.
> Now I have a different, but may be, similar, problem, which I described in
> this post:
> https://forum.nginx.org/read.php?2,286991
That looks like the same problem to me.
Change your vue config so that it can work.
Perhaps the "public" piece at
https://forum.vuejs.org/t/vue-with-nginx/26843/3 is relevant.
See also https://webpack.js.org/configuration/dev-server/#devserver-public
and maybe "publicPath" there too.
I see no nginx issue here, or there, other than what was previously
mentioned.
Good luck with it,
f
--
Francis Daly francis at daoine.org
From nginx-forum at forum.nginx.org Wed Feb 12 09:05:26 2020
From: nginx-forum at forum.nginx.org (MarcoI)
Date: Wed, 12 Feb 2020 04:05:26 -0500
Subject: net::ERR_SSL_PROTOCOL_ERROR
In-Reply-To: <001801d5e17a$e87159a0$b9540ce0$@roze.lv>
References: <001801d5e17a$e87159a0$b9540ce0$@roze.lv>
Message-ID: <02277eb25f7e68cfbf3d77fd0ca1c978.NginxMailingListEnglish@forum.nginx.org>
Hi Reinis,
setting proxy_set_header Host $host in the location / block of
/etc/nginx/conf.d/default.conf, as follows:
server {
listen 443 ssl http2 default_server;
server_name ggc.world;
ssl_certificate /etc/letsencrypt/live/ggc.world/fullchain.pem; #
managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/ggc.world/privkey.pem; #
managed by Certbot
ssl_trusted_certificate /etc/letsencrypt/live/ggc.world/chain.pem;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
ssl_session_timeout 5m;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers EECDH+CHACHA20:EECDH+CHACHA20-draft:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:50m;
#ssl_stapling on;
#ssl_stapling_verify on;
access_log /var/log/nginx/ggcworld-access.log combined;
add_header Strict-Transport-Security "max-age=31536000";
location = /favicon.ico { access_log off; log_not_found off; }
location / {
proxy_pass http://127.0.0.1:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
}
}
server {
listen 80 default_server;
listen [::]:80 default_server;
error_page 497 https://$host:$server_port$request_uri;
server_name www.ggc.world;
return 301 https://$server_name$request_uri;
access_log /var/log/nginx/ggcworld-access.log combined;
add_header Strict-Transport-Security "max-age=31536000";
location = /favicon.ico { access_log off; log_not_found off; }
location / {
proxy_pass http://127.0.0.1:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
}
}
upstream websocket {
ip_hash;
server localhost:3000;
}
server {
listen 81;
server_name ggc.world www.ggc.world;
#location / {
location ~ ^/(websocket|websocket\/socket-io) {
proxy_pass http://127.0.0.1:4201;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
}
}
I get "Invalid Host header" :
https://drive.google.com/open?id=1Y8-PsrB7QdTD--TtTHxnYW_dzaxrRKuc
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286991,286999#msg-286999
From nginx-forum at forum.nginx.org Wed Feb 12 09:24:27 2020
From: nginx-forum at forum.nginx.org (MarcoI)
Date: Wed, 12 Feb 2020 04:24:27 -0500
Subject: net::ERR_CONNECTION_REFUSED . How to correctly configure Nginx
with Socket.io?
In-Reply-To: <20200212081302.GD26683@daoine.org>
References: <20200212081302.GD26683@daoine.org>
Message-ID: <557eea63dabf0007fb6f82c33a86946e.NginxMailingListEnglish@forum.nginx.org>
Thank you very much, Francis!!! Your suggestions solved this problem:
with vue.config.js :
// vue.config.js
module.exports = {
// options...
publicPath: '',
devServer: {
host: '0.0.0.0',
port: 8080,
public: 'ggc.world'
},
}
now it works fine:
https://drive.google.com/open?id=1PUctgdYLoVmJRvYyG040BFNGOev2yhRX
The previous problem looked similar, but I guess it was somewhat different,
because it disappeared once I moved from the Server edition to the Desktop
edition of Ubuntu 18.04.4.
Thank you very much again for your kind help.
Marco
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286850,287000#msg-287000
From nginx-forum at forum.nginx.org Wed Feb 12 09:28:19 2020
From: nginx-forum at forum.nginx.org (MarcoI)
Date: Wed, 12 Feb 2020 04:28:19 -0500
Subject: net::ERR_SSL_PROTOCOL_ERROR
In-Reply-To: <93cb153f0c7f9c258be66fc790403fd1.NginxMailingListEnglish@forum.nginx.org>
References: <93cb153f0c7f9c258be66fc790403fd1.NginxMailingListEnglish@forum.nginx.org>
Message-ID:
Solved the problem thanks to Francis Daly, who pointed me in the right
direction:
https://forum.vuejs.org/t/vue-with-nginx/26843/3
// vue.config.js
module.exports = {
// options...
publicPath: '',
devServer: {
host: '0.0.0.0',
port: 8080,
public: 'ggc.world'
},
}
Now it works fine:
https://drive.google.com/open?id=1PUctgdYLoVmJRvYyG040BFNGOev2yhRX
Besides Francis, whose contribution was decisive, I thank J.S. and
Reinis for their kind help.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286991,287001#msg-287001
From nginx-forum at forum.nginx.org Wed Feb 12 12:43:11 2020
From: nginx-forum at forum.nginx.org (adrian.hilt)
Date: Wed, 12 Feb 2020 07:43:11 -0500
Subject: Nginx php reverse proxy problem
Message-ID: <21acaa46f5163492fa2b714a00968c35.NginxMailingListEnglish@forum.nginx.org>
Hi,
I'm running a reverse proxy with nginx and using certbot for SSL. It had been
working great, but a recent PHP server installation has been giving me
problems.
I can access the index, but on any other page I get a 404 error from nginx.
404 Not Found
nginx/1.14.0 (Ubuntu)
This is my conf file
server {
root /var/www/YOUR_DIRECTORY;
index index.php index.html index.htm;
###################################################
# Change "yoururl.com" to your host name
server_name my-domain;
# location / {
# try_files $uri $uri/ /index.php?q=$uri&$args;
# }
location /site/ {
if (!-e $request_filename){
rewrite ^/site/(.*)$ /site/index.php break;
}
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param PATH_TO_FRAMEWORKS /var/www/frameworks/;
fastcgi_param CORE_TYPE frameworks;
fastcgi_param IS_DEV true;
include fastcgi_params;
}
location = /favicon.ico {
log_not_found off;
access_log off;
}
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
location ~ /\. {
deny all;
}
location ~* /(?:uploads|files)/.*\.php$ {
deny all;
}
location / {
proxy_pass http://my-server-ip/;
}
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/my-domain/fullchain.pem; #$
ssl_certificate_key /etc/letsencrypt/live/my-domain/privkey.pem;$
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
if ($host = my-domain) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80;
server_name my-domain;
return 404; # managed by Certbot
}
Has anyone had similar problems? Is nginx using a different configuration for
PHP?
Just in case, I have tried commenting out the last lines, which appear to send
the 404, but it did the same thing.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287003,287003#msg-287003
From nginx-forum at forum.nginx.org Wed Feb 12 15:36:54 2020
From: nginx-forum at forum.nginx.org (chocholo3)
Date: Wed, 12 Feb 2020 10:36:54 -0500
Subject: Failed disk + proxy_intercept_errors
Message-ID:
Hi,
Our deployment uses a proxy cache configured across multiple hard drives. For
performance reasons we don't run any RAID on these devices, which means we
have to handle the situation when a drive eventually dies.
After a disk failure of a proxy_cache_path device, nginx usually starts
serving users HTTP 500. So I had the idea of using proxy_intercept_errors,
but I end up in an inconsistent state: ~60 files are handled as expected, but
after that every connection is terminated prematurely without a single byte
sent, while access.log shows HTTP 200.
To test, I broke just the ext4 FS (dd if=/dev/zero of=/dev/sdc bs=1k count=$((1024*100))),
and I'm using nginx 1.17.7 on Linux.
Relevant snippet from my configuration:
```
location ~ ^/mylocation/ {
set $spindle_bucket cache_01;
include "snippets/spindle_cache_locations_uspinclude";
proxy_cache DISK_$spindle_bucket;
proxy_pass $backend;
proxy_cache_key $uri;
proxy_cache_revalidate on;
proxy_cache_use_stale off;
recursive_error_pages on;
proxy_intercept_errors on;
error_page 400 401 402 403 405 406 407 408 409 410 411 412 413 414
415 416 417 418 420 422 423 424 426 428 429 431 444 449 450 451 =
@myotherlocation;
error_page 500 501 502 503 504 505 506 507 508 510 511 =
@myotherlocation;
}
# same as ^/mylocation/ but without proxy_intercept_errors
# and with a single spare drive only
location @myotherlocation {
include "snippets/spindle_cache_locations_uspinclude";
set $spindle_bucket "spare_01";
proxy_cache DISK_$spindle_bucket;
proxy_pass $backend;
proxy_cache_key $uri;
proxy_cache_revalidate on;
proxy_cache_use_stale error timeout invalid_header updating
http_500 http_502 http_503 http_504;
}
```
In snippets/spindle_cache_locations_uspinclude I do have:
```
proxy_buffers 64 8k;
proxy_set_header Host our.akamaized.net;
proxy_cache_valid 200 720d;
proxy_cache_valid 206 720d;
#proxy_cache_valid 301 1d;
#proxy_cache_valid 302 10m;
#proxy_cache_valid any 1s;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
```
```
$ curl -v
http://myserver/mylocation/66666666-1111-1111-0000-01234567/2ab5355f-5508-438d-acfc-686469877fb3.ism/2ab5355f-5508-438d-acfc-686469877fb3-video_2=1481000-177.m4s
* Trying myipv6...
* TCP_NODELAY set
* Connected to myserver (myipv6) port 80 (#0)
> GET
/mylocation/66666666-1111-1111-0000-01234567/2ab5355f-5508-438d-acfc-686469877fb3.ism/2ab5355f-5508-438d-acfc-686469877fb3-video_2=1481000-177.m4s HTTP/1.1
> Host: myserver
> User-Agent: curl/7.52.1
> Accept: */*
>
* Curl_http_done: called premature == 0
* Empty reply from server
* Connection #0 to host myserver left intact
curl: (52) Empty reply from server
```
Am I doing something wrong, or is this a bug? Because of the inconsistency I
lean toward the latter, but I'm not sure at all :-)
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287005,287005#msg-287005
From mdounin at mdounin.ru Thu Feb 13 14:58:05 2020
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 13 Feb 2020 17:58:05 +0300
Subject: Failed disk + proxy_intercept_errors
In-Reply-To:
References:
Message-ID: <20200213145805.GQ12894@mdounin.ru>
Hello!
On Wed, Feb 12, 2020 at 10:36:54AM -0500, chocholo3 wrote:
> Hi,
> In our deployment we do have configuration of proxy cache with multiple hard
> drives. Because of performance we don't have any RAID on these devices. That
> means we have to handle even a situation when drive dies, sometime.
>
> After disk failure of proxy_cache_path device nginx usually starts serving
> users with http500. So I've had an idea we may use proxy_intercept_errors
> but I end up with inconsistent state: ~60 files are handled as expected, but
> after that every connection is terminated prematurely without a single byte
> sent. In access.log there is http 200.
>
> I broke just ext4 FS (dd if=/dev/zero of=/dev/sdc bs=1k count=$((1024*100)))
> and I'm using nginx 1.17.7 on Linux
[...]
> Am I doing something wrong or is this a bug? Because of the inconsistency I
> tend to the 2nd. But I'm not sure at all :-)
First of all, the proxy_intercept_errors directive is only
relevant to errors returned by upstream servers. When the
error is generated by nginx itself, only the error_page directives
are relevant: as long as you have error_page 500 configured,
nginx will appropriately redirect processing of errors with code
500.
As for the inconsistency you observe, this depends on the exact
moment the error happens. For some errors nginx might be able to
generate a friendly 500; for others it won't, and will close the
connection as soon as the error happens.
For example, if an error happens when reading cache header, nginx
should be able to return 500. But if an error happens later, when
reading the response body from the cache file, when the response
headers are already processed (and either sent to the client or
buffered due to postpone_output), it certainly won't be possible
to return a friendly error page, so nginx will close the
connection.
Given the nature of your test, I suspect that the inconsistency
you observe is due to errors happening at different moments.
In real life, using "error_page 500" is certainly not enough
to protect users from broken responses due to failing disks.
Further, I don't think there is a way to fully protect users, except
by providing redundancy at the disk level. For example, consider
an error when reading some response body data from disk after 1GB
of the response body has already been sent to the client. There is
more or less nothing to be done here, and the only option is to
close the connection.
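To illustrate the first point with a minimal sketch (the location and
upstream names here are made up): an error generated by nginx itself, e.g. a
cache read failure before headers are sent, is handled by error_page alone:

```
location /mylocation/ {
    proxy_cache DISK_cache_01;
    proxy_pass http://backend;
    # handles 500s generated by nginx itself (e.g. cache read failures);
    # proxy_intercept_errors only matters for errors returned by the upstream
    error_page 500 = @fallback;
}

location @fallback {
    proxy_pass http://backend;  # bypass the failing cache
}
```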
--
Maxim Dounin
http://mdounin.ru/
From zxcvbn4038 at gmail.com Thu Feb 13 20:46:53 2020
From: zxcvbn4038 at gmail.com (CJ Ess)
Date: Thu, 13 Feb 2020 15:46:53 -0500
Subject: fast-cgi Oddness
Message-ID:
I am running with Nginx 1.16. I have a really simple configuration for
wordpress, seen below.
I have one test case:
curl -H "Host: x.com" "http://127.0.0.1/wp-admin/"
Which succeeds - I can see in the php-fpm log that it does "GET
/wp-admin/index.php"
I have a second test case:
curl -H "Host: x.com" "http://127.0.0.1/wp-admin/load-styles.php"
Which unexpectedly returns a 404 error, even though the file does exist at
wp-admin/load-styles.php, but in the php-fpm log I am seeing GET
/load-styles.php
I cannot figure out why the path is altered for the failing test case and
not the passing one.
If I hard-code SCRIPT_NAME to $request_uri and SCRIPT_FILENAME
to $document_root$request_uri, then the failing test case works, which I
think shows the script would work if the path were set correctly; but then
the first test case fails because index.html doesn't get added to
$request_uri.
I can't find anything similar searching Google, does anyone have a solution
or workaround?
server {
listen 80;
server_name x.com;
index index.php;
if (!-e $request_filename) {
rewrite ^/[_0-9a-zA-Z-]+(/wp-(content|admin|includes).*) $1 break;
rewrite ^/[_0-9a-zA-Z-]+(/.*\.php)$ $1 break;
}
location ~* (.*\.(js|css|svg|png|gif|ttf|woff|woff2))$ {
root /x/wordpress;
index index.html index.htm index.php;
}
location / {
rewrite ^/wp-admin$ /wp-admin/ permanent;
root /x;
index index.php;
try_files $uri @wordpress;
}
location @wordpress {
root /x/wordpress;
include /etc/nginx/fastcgi_params;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param aetn_env devtest;
}
}
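If it matters, the server-level rewrite that could be stripping the path is
this one (my reading of it; the comments are speculation):

```
# For /wp-admin/load-styles.php the first rewrite does not match (there is no
# second wp-* component), but this one does: [_0-9a-zA-Z-]+ consumes
# "wp-admin" and $1 captures "/load-styles.php", so the URI is rewritten to
# /load-styles.php -- which would match what the php-fpm log shows.
rewrite ^/[_0-9a-zA-Z-]+(/.*\.php)$ $1 break;
```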
From nginx-forum at forum.nginx.org Fri Feb 14 09:14:28 2020
From: nginx-forum at forum.nginx.org (chocholo3)
Date: Fri, 14 Feb 2020 04:14:28 -0500
Subject: Failed disk + proxy_intercept_errors
In-Reply-To: <20200213145805.GQ12894@mdounin.ru>
References: <20200213145805.GQ12894@mdounin.ru>
Message-ID: <3f4145664f6f78cb30a87450f73d49df.NginxMailingListEnglish@forum.nginx.org>
Thanks a lot for your response.
From what you are saying, I understand another option:
- keep the current server section as Server A;
- copy the same configuration, using some other spare drive as the
proxy_cache_path, and configure that as Server B;
- configure both server sections to listen on a unix socket instead of the
network;
- create a third Server C configuration that listens on the network and
proxy_passes to Server A with proxy_intercept_errors on, with the error_page
served from a location that proxy_passes to Server B.
Is something like this supposed to work? Or would it be better to have three
completely independent configurations (e.g. using some other software for
Server C)?
(I'm asking because I did something like that in the past and it broke in a
bad way: it started serving 500 to everyone. I'm somewhat afraid to try it
in production again.)
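Roughly, the layout I have in mind (a sketch; the socket paths and cache
zone names are invented for illustration, $backend is the variable from my
existing config):

```
# Server A: primary cache, internal only
server {
    listen unix:/var/run/nginx-cache-a.sock;
    location / {
        proxy_cache DISK_cache_01;
        proxy_pass $backend;
    }
}
# Server B: spare-drive cache, internal only
server {
    listen unix:/var/run/nginx-cache-b.sock;
    location / {
        proxy_cache DISK_spare_01;
        proxy_pass $backend;
    }
}
# Server C: public-facing, fails over from A to B
server {
    listen 80;
    location / {
        proxy_pass http://unix:/var/run/nginx-cache-a.sock;
        proxy_intercept_errors on;
        error_page 500 502 = @spare;
    }
    location @spare {
        proxy_pass http://unix:/var/run/nginx-cache-b.sock;
    }
}
```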
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287005,287020#msg-287020
From nginx-forum at forum.nginx.org Mon Feb 17 10:32:35 2020
From: nginx-forum at forum.nginx.org (MarcoI)
Date: Mon, 17 Feb 2020 05:32:35 -0500
Subject: This site can’t be reached: what aspects, elements, configurations,
do I have to check and verify?
Message-ID: <81d881039c20dfd9afc795c84ed8e065.NginxMailingListEnglish@forum.nginx.org>
Hi all,
just the day after getting everything working fine:
https://forum.nginx.org/read.php?2,286991,287001#msg-287001, while
installing other software on the PC I ran into problems with the GRUB system
in Ubuntu 18.04.4 Desktop, which I resolved by copying the /bin/bash folder
from an installation ISO image:
https://askubuntu.com/questions/1210267/missing-operating-system-error-unknown-filesystem-grub-rescue
But to my surprise, now I'm not able to see my webpage anymore, even though,
apparently, everything seems OK.
"
This site can't be reached. https://ggc.world/ is unreachable.
ERR_ADDRESS_UNREACHABLE ":
https://drive.google.com/open?id=1NYLyXP5zpFN5_9TkWspY6FymaMauKUGv
And this is confirmed by the DigiCert and check-your-website server
checks:
- https://drive.google.com/open?id=1EeOYkWzHXaMmOJKBnBbPYS-F9fcyPFXM
- https://drive.google.com/open?id=1DzuQlm6wkWvH7_-yfRowM45sAm2FKMWa
But apparently everything seems OK:
(base) marco at pc01:~/webMatters/vueMatters/testproject$ npm run serve
> testproject at 0.1.0 serve /home/marco/webMatters/vueMatters/testproject
> vue-cli-service serve
INFO Starting development server...
98% after emitting CopyPlugin
DONE Compiled successfully in 906ms
9:36:31 AM
App running at:
- Local: http://localhost:8080
- Network: http://ggc.world/
Note that the development build is not optimized.
To create a production build, run npm run build.
(base) marco at pc01:~$ sudo nano /etc/nginx/nginx.conf
[sudo] password for marco:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
# Basic Settings
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# SSL Settings
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
# Logging Settings
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
include /etc/nginx/conf.d/*.conf;
}
sudo nano /etc/nginx/conf.d/default.conf :
server {
listen 443 ssl http2 default_server;
server_name ggc.world;
ssl_certificate /etc/letsencrypt/live/ggc.world/fullchain.pem; # managed
by Certbot
ssl_certificate_key /etc/letsencrypt/live/ggc.world/privkey.pem; #
managed by Certbot
ssl_trusted_certificate /etc/letsencrypt/live/ggc.world/chain.pem;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
ssl_session_timeout 5m;
#ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers EECDH+CHACHA20:EECDH+CHACHA20-draft:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:50m;
#ssl_stapling on;
#ssl_stapling_verify on;
access_log /var/log/nginx/ggcworld-access.log combined;
add_header Strict-Transport-Security "max-age=31536000";
location = /favicon.ico { access_log off; log_not_found off; }
location / {
proxy_pass http://127.0.0.1:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
#proxy_set_header Host $host;
}
}
server {
listen 80 default_server;
listen [::]:80 default_server;
error_page 497 https://$host:$server_port$request_uri;
server_name www.ggc.world;
return 301 https://$server_name$request_uri;
access_log /var/log/nginx/ggcworld-access.log combined;
add_header Strict-Transport-Security "max-age=31536000";
location = /favicon.ico { access_log off; log_not_found off; }
location / {
proxy_pass http://127.0.0.1:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
#proxy_set_header Host $host;
}
}
upstream websocket {
ip_hash;
server localhost:3000;
}
server {
listen 81;
server_name ggc.world www.ggc.world;
#location / {
location ~ ^/(websocket|websocket\/socket-io) {
proxy_pass http://127.0.0.1:4201;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
}
}
(base) marco at pc01:~$ sudo systemctl status nginx
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor
preset: enabled)
Active: active (running) since Mon 2020-02-17 07:52:51 CET; 1h 58min ago
Docs: man:nginx(8)
Process: 8830 ExecReload=/usr/sbin/nginx -g daemon on; master_process on;
-s reload (code=exited, status=0/SUCCESS)
Process: 1090 ExecStart=/usr/sbin/nginx -g daemon on; master_process on;
(code=exited, status=0/SUCCESS)
Process: 1067 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on;
master_process on; (code=exited, status=0/SUCCESS)
Main PID: 1092 (nginx)
Tasks: 9 (limit: 4915)
CGroup: /system.slice/nginx.service
├─1092 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
├─8831 nginx: worker process
├─8832 nginx: worker process
├─8833 nginx: worker process
├─8835 nginx: worker process
├─8836 nginx: worker process
├─8837 nginx: worker process
├─8838 nginx: worker process
└─8840 nginx: worker process
feb 17 07:52:51 pc01 systemd[1]: Starting A high performance web server and
a reverse proxy server...
feb 17 07:52:51 pc01 systemd[1]: Started A high performance web server and a
reverse proxy server.
feb 17 09:51:29 pc01 systemd[1]: Reloading A high performance web server and
a reverse proxy server.
feb 17 09:51:29 pc01 systemd[1]: Reloaded A high performance web server and
a reverse proxy server.
(base) marco at pc01:~$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Before trying to access the website, I cleared the web browser's cache and
history.
The only Google Chrome extension installed is Vue.js devtools.
The DNS provider configuration matches the Internet Service Provider
configuration:
- https://drive.google.com/open?id=1YHhRqMcyHj0wSRVjjtQXOjrO-Zj1wlbz
- https://drive.google.com/open?id=1vk9YEwl_PW4uifFMAo9m5nnkzEFewM7I
The internet connection is working fine.
What aspects, elements, configurations, do I have to check and verify?
How could I solve this problem?
Looking forward to your kind help.
Marco
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287025,287025#msg-287025
From info at fabianflu.ch Mon Feb 17 10:40:45 2020
From: info at fabianflu.ch (Fabian Joël Flückiger)
Date: Mon, 17 Feb 2020 11:40:45 +0100
Subject: NGINX Mailproxy
Message-ID: <53-5e4a6d80-5-61597280@63489549>
Hello
I am trying to use nginx as a reverse mail proxy for multiple mail servers. Whenever a client connects to the nginx mail proxy via STARTTLS or SSL, nginx passes a malformed LOGIN packet to the backend mail server, for example:
(nginx = nginx, mails = backend mailserver, in the first case MailEnable, in the second case Dovecot)
nginx>5 LOGIN {18}
mails>+ go ahead
nginx>user at domain.tld {8}
mails>+ go ahead
nginx>PASSWORD
mails>BAD UNKNOWN Command
nginx>3 LOGIN {17}
mails> + OK
nginx> user at domain.tld {8}
mails> + OK
nginx>PASSWORD
mails>3 NO [AUTHENTICATIONFAILED] Authentication failed.
As you can see, nginx adds a suffix to the username, which makes the backend server fail. Wireshark displays this additional data as {number}; I can also provide the hex variant of the packets.
nginx also adds this suffix if the username is passed via the NGX auth header.
I've tested this with the nginx-full binary from the Ubuntu repositories, as well as a self-compiled binary.
Used configuration:
server_name server.domain.tld;
auth_http url;
proxy on;
proxy_pass_error_message on;
imap_capabilities "IMAP4rev1" "UIDPLUS" "IDLE" "LITERAL +" "QUOTA" "SASL-IR" "ID" "ENABLE";
pop3_auth plain apop;
pop3_capabilities "LAST" "TOP" "USER" "PIPELINING" "UIDL";
smtp_capabilities "SIZE 31457280" ENHANCEDSTATUSCODES 8BITMIME DSN;
ssl_certificate /path/to/cert.crt;
ssl_certificate_key /path/to/privkey.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
ssl_prefer_server_ciphers on;
error_log /var/log/nginx/mailerror.log info;
xclient on;
# ---------------------- POP3 ---------------------- #
server {
    listen 143;
    protocol imap;
    starttls on;
    imap_auth plain login;
    auth_http_header X-Auth-Port 143;
    auth_http_header User-Agent "Nginx POP3/IMAP4 proxy";
}
server {
    protocol pop3;
    listen 110;
    starttls on;
    pop3_auth plain;
    proxy on;
    auth_http_header X-Auth-Port 110;
    auth_http_header User-Agent "Nginx POP3/IMAP4 proxy";
}
# ---------------------- IMAP ---------------------- #
server {
    listen 993;
    ssl on;
    protocol imap;
    imap_auth plain login;
    auth_http_header X-Auth-Port 993;
    auth_http_header User-Agent "Nginx POP3/IMAP4 proxy";
}
server {
    protocol pop3;
    listen 995;
    ssl on;
    pop3_auth plain;
    auth_http_header X-Auth-Port 995;
    auth_http_header User-Agent "Nginx POP3/IMAP4 proxy";
}
# ---------------------- SMTP ---------------------- #
server {
    listen 25;
    xclient off;
    protocol smtp;
    starttls on;
    smtp_auth login plain cram-md5;
    auth_http_header X-Auth-Port 25;
    auth_http_header User-Agent "Nginx POP3/IMAP4 proxy";
    auth_http_header X-Real-IP $remote_addr;
}
server {
    listen 587;
    xclient off;
    protocol smtp;
    starttls on;
    smtp_auth login plain cram-md5;
    auth_http_header X-Auth-Port 587;
    auth_http_header User-Agent "Nginx POP3/IMAP4 proxy";
}
server {
    listen 465;
    xclient off;
    protocol smtp;
    ssl on;
    smtp_auth login plain cram-md5;
    auth_http_header X-Auth-Port 465;
    auth_http_header User-Agent "Nginx POP3/IMAP4 proxy";
}
Is this a configuration-related issue? How can I fix this?
Thank you very much!
Fabian
From nginx-forum at forum.nginx.org Mon Feb 17 13:51:37 2020
From: nginx-forum at forum.nginx.org (MarcoI)
Date: Mon, 17 Feb 2020 08:51:37 -0500
Subject: Re: This site can’t be reached: what aspects, elements,
configurations, do I have to check and verify?
In-Reply-To: <81d881039c20dfd9afc795c84ed8e065.NginxMailingListEnglish@forum.nginx.org>
References: <81d881039c20dfd9afc795c84ed8e065.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <2667d0d1d462e15dbc195798bc25ed59.NginxMailingListEnglish@forum.nginx.org>
Big update: SOLVED (but I still do not know why)...
I made the PC's IP address static, and everything went fine... But I do not
actually understand why everything went fine, given that, before the problem
with the software installation, the local IP address was dynamic and it was
already working fine.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287025,287027#msg-287027
From mdounin at mdounin.ru Mon Feb 17 14:56:41 2020
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 17 Feb 2020 17:56:41 +0300
Subject: NGINX Mailproxy
In-Reply-To: <53-5e4a6d80-5-61597280@63489549>
References: <53-5e4a6d80-5-61597280@63489549>
Message-ID: <20200217145641.GS12894@mdounin.ru>
Hello!
On Mon, Feb 17, 2020 at 11:40:45AM +0100, Fabian Joël Flückiger wrote:
> I am trying to use nginx as reverse-mailproxy for multiple
> mailservers.Whenever I have a client which connects to the
> nginx-mailproxy via STARTLS or SSL, the NGINX passes a malformed
> LOGIN packets to the backend mailserver, per example:
> (nginx = nginx, mails = backend mailserver, in the first case
> MailEnable, in the second case Dovecot)
>
> nginx>5 LOGIN {18}
> mails>+ go ahead
> nginx>user at domain.tld {8}
> mails>+ go ahead
> nginx>PASSWORD
> mails>BAD UNKNOWN Command
>
> nginx>3 LOGIN {17}
> mails> + OK
> nginx> user at domain.tld {8}
> mails> + OK
> nginx>PASSWORD
> mails>3 NO [AUTHENTICATIONFAILED] Authentication failed.
>
>
> As you can see, nginx adds a suffix to the username, which lets
> the backendserver fail. Wireshark displays this additional data
> as {number}, I can also provide the hex variant of the packets.
> NGINX also adds this suffix, if the username is passed via NGX
> auth header.
> I've tested this with the nginx-full binary from the ubuntu
> repositories, as well as a self-compiled binary.
The "{18}" part is a part of the IMAP literal syntax, see here:
https://tools.ietf.org/html/rfc3501#section-4.3
It is certainly recognized by both backends used in the examples
above. While the MailEnable response looks incorrect (it should start
with the tag "5" followed by a space), this is probably an artifact of
manually transcribing the packets into the message. Dovecot's response is
perfectly correct and clearly says that authentication failed.
Most likely the problem is that LOGIN authentication is disabled
on your backends, or it requires SSL or STARTTLS to work (and it
does not work, since nginx uses plaintext connections to the
backends). Check the configuration and logs of your backends to
find out the exact reason.
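For illustration, the literal syntax can be reproduced by hand. This is only a sketch of how a client announces the byte count before sending the value (the username is a placeholder, as in the obfuscated trace above; a real session uses CRLF line endings and waits for the server's "+ go ahead" between the two lines):

```shell
# IMAP literal syntax (RFC 3501, section 4.3): the {N} announces how many
# bytes follow on the next line; it is not part of the username itself.
user='user@domain.tld'
printf 'a1 LOGIN {%d}\n' "${#user}"   # -> a1 LOGIN {15}
printf '%s\n' "$user"
```

So the "{18}" seen in Wireshark is normal protocol framing, not a corrupted suffix.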
--
Maxim Dounin
http://mdounin.ru/
From mdounin at mdounin.ru Mon Feb 17 15:54:38 2020
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 17 Feb 2020 18:54:38 +0300
Subject: Failed disk + proxy_intercept_errors
In-Reply-To: <3f4145664f6f78cb30a87450f73d49df.NginxMailingListEnglish@forum.nginx.org>
References: <20200213145805.GQ12894@mdounin.ru>
<3f4145664f6f78cb30a87450f73d49df.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20200217155438.GT12894@mdounin.ru>
Hello!
On Fri, Feb 14, 2020 at 04:14:28AM -0500, chocholo3 wrote:
> Thanks a lot for your response.
> From what you are saying I understand another option:
>
> current server section - Server A
> copy the same configuration and use some other spare drive as a
> proxy_cache_path configure that as - Server B
>
> Configure both server sections to listen on unix socket instead of network.
>
> Create a third server C configuration that will listen on network and will
> proxy_path to Server A with proxy_intercept_errors on and error_page served
> from location that will proxy_path to Server B.
>
> Is something like this supposed to work? Or would it be better to have
> three completely independent configurations (e.g. to use some other software
> for server C)?
>
> (I'm asking because I did something like that in the past and it broke in a
> bad way. It started serving 500 to everyone. I am somewhat afraid to try it
> in production again.)
As I already tried to explain in the previous message, there are
edge cases which cannot be solved by error handling, at all.
As long as you only care about disk errors which happen before
sending response headers and result in serving 500, just
error_page should be enough. Using additional proxying layer
might help to cover some additional errors which happen during
sending first bytes of the response body.
The only perfect solution, however, is to use disk-level
redundancy - for example, a simple software mirror on two disks is
trivial to configure and will help a lot.
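As a rough sketch of the layered setup discussed above (the socket paths are hypothetical, not taken from the thread, and as noted this cannot cover errors that occur after response headers or the first body bytes have been sent):

```nginx
# Server C: the only network-facing layer. It falls back to the copy on
# the spare drive (server B) when the primary cache (server A) fails early.
server {
    listen 80;
    location / {
        proxy_pass http://unix:/var/run/cache-a.sock;
        proxy_intercept_errors on;
        error_page 500 502 504 = @spare;
    }
    location @spare {
        proxy_pass http://unix:/var/run/cache-b.sock;
    }
}
```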
--
Maxim Dounin
http://mdounin.ru/
From erictang121170 at hotmail.com Tue Feb 18 02:00:39 2020
From: erictang121170 at hotmail.com (eric tang)
Date: Tue, 18 Feb 2020 02:00:39 +0000
Subject: Google QUIC support in nginx
In-Reply-To:
References:
<423d86fdb50880a10d4a8312ce7072c0.NginxMailingListEnglish@forum.nginx.org>
Message-ID:
Test
From: nginx On Behalf Of Mathew Heard
Sent: Friday, January 10, 2020, 10:30 PM
To: nginx at nginx.org
Subject: Fwd: Google QUIC support in nginx
Hey nginx team,
How does the roadmap now look given the Cloudflare Quiche "experiment" release? Is QUIC/HTTP3 still scheduled for mainline?
On Fri, 31 May 2019 at 16:54, George > wrote:
Roadmap suggests it is in Nginx 1.17 mainline QUIC = HTTP/3
https://trac.nginx.org/nginx/roadmap :)
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,256352,284367#msg-284367
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From nginx-forum at forum.nginx.org Tue Feb 18 09:53:17 2020
From: nginx-forum at forum.nginx.org (svgkraju)
Date: Tue, 18 Feb 2020 04:53:17 -0500
Subject: Two sites listening on same port simultaneously with different
location (context) not working
Message-ID: <64cc78bb14259c98a6c7da5169b3e3a7.NginxMailingListEnglish@forum.nginx.org>
I installed gitlab behind an nginx reverse proxy and it worked fine. I added
the configuration below. Note that the location path was changed.
upstream gitlab-workhorse {
server unix:/var/opt/gitlab/gitlab-workhorse/socket;
}
server {
listen 0.0.0.0:80 default_server;
listen [::]:80 default_server;
server_name abcd.com;
server_tokens off;
root /opt/gitlab/embedded/service/gitlab-rails/public;
access_log /var/log/nginx/gitlab_access.log;
error_log /var/log/nginx/gitlab_error.log debug;
location /gitlab {
client_max_body_size 0;
gzip off;
proxy_read_timeout 300;
proxy_connect_timeout 300;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://gitlab-workhorse;
}
}
While testing gitlab, I deleted the default file soft link from the nginx
sites-enabled directory.
I created the soft link again; the configuration is as follows. Note that
default_server is removed here and the location path is "/".
server {
listen 0.0.0.0:80;
listen [::]:80;
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
server_name _;
access_log /var/log/nginx/default_access.log;
error_log /var/log/nginx/default_error.log debug;
location / {
try_files $uri $uri/ =404;
}
}
Now, when accessing my VM IP http://x.y.z.a, I get a "403 Forbidden"
error in the browser. However, gitlab is still working. How do I get both
sites working, listening on port 80 but with different location contexts?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287036,287036#msg-287036
From r at roze.lv Tue Feb 18 11:16:18 2020
From: r at roze.lv (Reinis Rozitis)
Date: Tue, 18 Feb 2020 13:16:18 +0200
Subject: Two sites listening on same port simultaneously with different
location (context) not working
In-Reply-To: <64cc78bb14259c98a6c7da5169b3e3a7.NginxMailingListEnglish@forum.nginx.org>
References: <64cc78bb14259c98a6c7da5169b3e3a7.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <002a01d5e64c$d76b8af0$8642a0d0$@roze.lv>
> Now while accessing my VM ip http://x.y.z.a, I am getting "403 Forbidden"
> error in the browser. However gitlab still working. How to get both the sites
> working listening on port 80 but with different context of location?
First of all, you should check the error log to see why the 403 is returned; it usually gives an idea of what is happening, which server {} block nginx is using, and which file/location is being served.
> While testing gitlab, I deleted default file soft link from nginx sites-enabled.
> I created the soft link again; the configuration is as follows. Note that default_server is removed here and the location path is "/"
Second, it's a bit unclear what your current nginx configuration is ('nginx -T' should display the "compiled" configuration with all the includes)
(depending on where 'include sites-enabled/*' is placed in the main config, the behavior can change).
But in general, virtualhosts (sites listening on the same ip/port) are distinguished by 'server_name' - if a non-defined domain (or a bare IP) is pointed at nginx, then either the virtualhost marked 'default' / 'default_server' is used, OR the first server {} block in configuration order.
Most likely your gitlab server{} comes first, so when you open your VM IP http://x.y.z.a, nginx tries to use /opt/gitlab/embedded/service/gitlab-rails/public, but since there is probably no index file and/or no permissions, a 403 is returned.
Each server {} uses its own location blocks. So if you want to have both '/' and '/gitlab' from the same ip/host, you need to place them in the same server{}.
Something like:
server {
listen 0.0.0.0:80;
listen [::]:80;
root /var/www/html;
...
location / {
try_files $uri $uri/ =404;
}
location /gitlab {
proxy_pass http://gitlab-workhorse;
...
}
}
p.s. From my own experience, it's simpler to host Gitlab on its own (sub)domain than to configure it as a subdirectory.
rr
From nginx-forum at forum.nginx.org Tue Feb 18 17:58:26 2020
From: nginx-forum at forum.nginx.org (trstringer)
Date: Tue, 18 Feb 2020 12:58:26 -0500
Subject: Problem creating CRL
Message-ID: <9f9682230153d2408399ebccb61565d9.NginxMailingListEnglish@forum.nginx.org>
I am attempting to add CRL support to my nginx proxy, and it seems to not be
working due to the following error:
client SSL certificate verify error: (3:unable to get certificate CRL) while
reading client request headers
From my research, this is because nginx detects a missing CRL. But here is
the structure of my client certificate (it has the full chain of
certificates in it):
Certificate:
Data:
...
X509v3 extensions:
...
X509v3 Key Usage: critical
Certificate Sign, CRL Sign
Certificate:
Data:
...
X509v3 extensions:
...
X509v3 CRL Distribution Points:
Full Name:
URI:http://uri1
Certificate:
Data:
...
X509v3 extensions:
...
X509v3 Key Usage: critical
Certificate Sign, CRL Sign
Certificate:
Data:
...
X509v3 extensions:
...
X509v3 CRL Distribution Points:
Full Name:
URI:http://uri2
URI:http://uri3
URI:http://uri4
I take the following steps:
1. curl and convert output from url1 to PEM.
2. curl and convert output from url2 to PEM.
3. Concat the two outputs into the same file.
4. Specify this file in nginx config for ssl_crl.
But I get the above error.
Any thoughts on what I'm doing wrong? My understanding is that I should be
able to safely ignore url3 and url4.
Thank you!
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287045,287045#msg-287045
From mdounin at mdounin.ru Tue Feb 18 18:42:28 2020
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 18 Feb 2020 21:42:28 +0300
Subject: Problem creating CRL
In-Reply-To: <9f9682230153d2408399ebccb61565d9.NginxMailingListEnglish@forum.nginx.org>
References: <9f9682230153d2408399ebccb61565d9.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20200218184228.GW12894@mdounin.ru>
Hello!
On Tue, Feb 18, 2020 at 12:58:26PM -0500, trstringer wrote:
> I am attempting to add CRL support to my nginx proxy, and it seems to not be
> working due to the following error:
>
> client SSL certificate verify error: (3:unable to get certificate CRL) while
> reading client request headers
>
> From my research, this is because nginx senses a missing CRL. But here is
> the structure of my client certificate (it has the full chain of
> certificates in it):
>
> Certificate:
> Data:
> ...
> X509v3 extensions:
> ...
> X509v3 Key Usage: critical
> Certificate Sign, CRL Sign
>
> Certificate:
> Data:
> ...
> X509v3 extensions:
> ...
> X509v3 CRL Distribution Points:
> Full Name:
> URI:http://uri1
>
> Certificate:
> Data:
> ...
> X509v3 extensions:
> ...
> X509v3 Key Usage: critical
> Certificate Sign, CRL Sign
>
> Certificate:
> Data:
> ...
> X509v3 extensions:
> ...
> X509v3 CRL Distribution Points:
> Full Name:
> URI:http://uri2
> URI:http://uri3
> URI:http://uri4
>
> I take the following steps:
>
> 1. curl and convert output from url1 to PEM.
> 2. curl and convert output from url2 to PEM.
> 3. Concat the two outputs into the same file.
> 4. Specify this file in nginx config for ssl_crl.
>
> But I get the above error.
>
> Any thoughts on what I'm doing wrong? My understanding is that I should be
> able to safely ignore url3, and url4.
You need CRLs for all certificates in the chain.
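In other words, the ssl_crl file must contain a PEM-encoded CRL for every CA in the chain, concatenated into one file. A minimal sketch of the concatenation step (the PEM bodies below are placeholders; real CRLs would be fetched from each distribution point and converted with something like `curl -s http://uri1 | openssl crl -inform DER`):

```shell
# ssl_crl wants one file with a PEM CRL per CA in the chain.
# Placeholder bodies are used here so only the concatenation step is shown.
printf -- '-----BEGIN X509 CRL-----\nintermediate\n-----END X509 CRL-----\n' > intermediate.pem
printf -- '-----BEGIN X509 CRL-----\nroot\n-----END X509 CRL-----\n' > root.pem
cat intermediate.pem root.pem > chain.crl   # the file referenced by ssl_crl
grep -c -- 'BEGIN X509 CRL' chain.crl       # -> 2 (one CRL per CA)
```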
--
Maxim Dounin
http://mdounin.ru/
From nginx-forum at forum.nginx.org Tue Feb 18 18:50:36 2020
From: nginx-forum at forum.nginx.org (trstringer)
Date: Tue, 18 Feb 2020 13:50:36 -0500
Subject: Problem creating CRL
In-Reply-To: <20200218184228.GW12894@mdounin.ru>
References: <20200218184228.GW12894@mdounin.ru>
Message-ID: <0211aa6f9124b192cd4b8c1396a325ab.NginxMailingListEnglish@forum.nginx.org>
Thanks for the quick response! How am I supposed to get the CRL for the
other 2 certs in the chain if there is no CRL URI?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287045,287047#msg-287047
From mdounin at mdounin.ru Tue Feb 18 19:05:42 2020
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 18 Feb 2020 22:05:42 +0300
Subject: Problem creating CRL
In-Reply-To: <0211aa6f9124b192cd4b8c1396a325ab.NginxMailingListEnglish@forum.nginx.org>
References: <20200218184228.GW12894@mdounin.ru>
<0211aa6f9124b192cd4b8c1396a325ab.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20200218190542.GX12894@mdounin.ru>
Hello!
On Tue, Feb 18, 2020 at 01:50:36PM -0500, trstringer wrote:
> Thanks for the quick response! How am I supposed to get the CRL for the
> other 2 certs in the chain if there is no CRL URI?
Contact CA administrators.
--
Maxim Dounin
http://mdounin.ru/
From nginx-forum at forum.nginx.org Tue Feb 18 19:46:45 2020
From: nginx-forum at forum.nginx.org (trstringer)
Date: Tue, 18 Feb 2020 14:46:45 -0500
Subject: Problem creating CRL
In-Reply-To: <20200218190542.GX12894@mdounin.ru>
References: <20200218190542.GX12894@mdounin.ru>
Message-ID: <5738e83e4f68c71ceed4d56604d710c3.NginxMailingListEnglish@forum.nginx.org>
Thanks again for the information! So is it typical for a cert not to
include the distribution point URIs in the certificate itself? It seems
strange to have to contact a certificate authority to find out the CRL
distribution point, no?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287045,287049#msg-287049
From francis at daoine.org Tue Feb 18 20:13:07 2020
From: francis at daoine.org (Francis Daly)
Date: Tue, 18 Feb 2020 20:13:07 +0000
Subject: fast-cgi Oddness
In-Reply-To:
References:
Message-ID: <20200218201307.GF26683@daoine.org>
On Thu, Feb 13, 2020 at 03:46:53PM -0500, CJ Ess wrote:
Hi there,
> curl -H "Host: x.com" "http://127.0.0.1/wp-admin/"
> Which succeeds - I can see in the php-fpm log that it does "GET
> /wp-admin/index.php"
>
> I have a second test case:
> curl -H "Host: x.com" "http://127.0.0.1/wp-admin/load-styles.php"
> Which unexpectedly returns a 404 error, even though the file does exist at
> wp-admin/load-styles.php, but in the php-fpm log I am seeing GET
> /load-styles.php
> I can not figure out why the path is altered for the failing test case and
> not the passing one.
> if (!-e $request_filename) {
> rewrite ^/[_0-9a-zA-Z-]+(/wp-(content|admin|includes).*) $1 break;
> rewrite ^/[_0-9a-zA-Z-]+(/.*\.php)$ $1 break;
> }
Assuming that /usr/local/nginx/html/wp-admin does not exist:
/wp-admin/ does not match either rewrite, so is unchanged.
/wp-admin/load-styles.php matches the second, so is rewritten to
/load-styles.php.
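The effect of that second rewrite can be checked outside nginx with an equivalent extended regex (sed is used here purely to illustrate the match, it is not part of the config):

```shell
# The second rewrite strips the first path segment from any URI ending in .php:
echo '/wp-admin/load-styles.php' | sed -E 's#^/[_0-9a-zA-Z-]+(/.*\.php)$#\1#'
# -> /load-styles.php
# "/wp-admin/" ends in "/" rather than ".php" and has no second segment,
# so neither rewrite fires and the URI passes through unchanged.
```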
> location / {
> rewrite ^/wp-admin$ /wp-admin/ permanent;
> root /x;
> index index.php;
> try_files $uri @wordpress;
> }
>
> location @wordpress {
> root /x/wordpress;
> include /etc/nginx/fastcgi_params;
> fastcgi_pass 127.0.0.1:9000;
> fastcgi_index index.php;
>
> fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
That is /x/wordpress/wp-admin/index.php in the first case, and
/x/wordpress/load-styles.php in the second.
Why do the first rewrites exist? Are they breaking your setup?
f
--
Francis Daly francis at daoine.org
From nginx-forum at forum.nginx.org Tue Feb 18 22:22:37 2020
From: nginx-forum at forum.nginx.org (svgkraju)
Date: Tue, 18 Feb 2020 17:22:37 -0500
Subject: Two sites listening on same port simultaneously with
different location (context) not working
In-Reply-To: <002a01d5e64c$d76b8af0$8642a0d0$@roze.lv>
References: <002a01d5e64c$d76b8af0$8642a0d0$@roze.lv>
Message-ID:
Thank you for your suggestion. I changed gitlab as follows and removed
default from sites-enabled. This worked.
server {
listen 0.0.0.0:80 default_server;
listen [::]:80 default_server;
server_name abcd.com; ## Replace this with something like gitlab.example.com
server_tokens off; ## Don't show the nginx version number, a security best practice
root /var/www/html;
access_log /var/log/nginx/gitlab_access.log;
error_log /var/log/nginx/gitlab_error.log debug;
location / {
try_files $uri $uri/ =404;
}
location /gitlab {
client_max_body_size 0;
gzip off;
root /opt/gitlab/embedded/service/gitlab-rails/public;
proxy_read_timeout 300;
proxy_connect_timeout 300;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://gitlab-workhorse;
}
}
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287036,287051#msg-287051
From francis at daoine.org Tue Feb 18 23:37:45 2020
From: francis at daoine.org (Francis Daly)
Date: Tue, 18 Feb 2020 23:37:45 +0000
Subject: Nginx php reverse proxy problem
In-Reply-To: <21acaa46f5163492fa2b714a00968c35.NginxMailingListEnglish@forum.nginx.org>
References: <21acaa46f5163492fa2b714a00968c35.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20200218233745.GG26683@daoine.org>
On Wed, Feb 12, 2020 at 07:43:11AM -0500, adrian.hilt wrote:
Hi there,
> I'm running a reverse proxy with nginx and using certbot for ssl. It's been
> working great but recently with an php server installation it's been giving
> me problems.
> I can access the index, but on any other page I get a 404 error from nginx.
> location /site/ {
> if (!-e $request_filename){
> rewrite ^/site/(.*)$ /site/index.php break;
> }
You might be happier there with "last" instead of "break".
http://nginx.org/r/rewrite
But you might be happier still, replacing the three lines with something
like
try_files $uri $uri/ /site/index.php;
which is the usual nginx way to "fall back" to php processing.
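A sketch of how that try_files line might sit alongside a PHP handler (the root path and php-fpm address are assumptions, not taken from the original config):

```nginx
location /site/ {
    # serve the file or directory if it exists, otherwise hand off to PHP
    try_files $uri $uri/ /site/index.php;
}
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;   # assumed php-fpm address
}
```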
Good luck with it,
f
--
Francis Daly francis at daoine.org
From satayepic at gmail.com Wed Feb 19 06:09:57 2020
From: satayepic at gmail.com (Satay Epic)
Date: Tue, 18 Feb 2020 22:09:57 -0800
Subject: DNS load balancing issue
Message-ID:
Hello,
I have NGINX 1.12-1 running on CentOS 7.2, used for DNS load
balancing. However, I'm seeing lots of errors in the "dns.log". I have
no access to the system right now, but the error was along the lines of
"No response received from the upstream server". The occurrence of the
errors is not continuous, but it's pretty frequent.
The three backend servers are Windows 2012 R2 DNS servers.
I will provide the nginx.conf details soon.
I may have to do some tuning besides upgrading to an appropriate version,
I guess. Can anyone suggest/recommend tuning to minimize the errors, or
better yet avoid them entirely?
Thanks
From maxim at nginx.com Wed Feb 19 07:55:14 2020
From: maxim at nginx.com (Maxim Konovalov)
Date: Wed, 19 Feb 2020 10:55:14 +0300
Subject: DNS load balancing issue
In-Reply-To:
References:
Message-ID:
Hi Satay,
On 19.02.2020 09:09, Satay Epic wrote:
> Hello,
>
> I've NGINX 1.12-1 running on CentOS 7.2 and being used for DNS load
> balancing. However I'm seeing lots of errors in the "dns.log".. I've
> no access to the system right now but error was like.. "No response
> received from the upstream server" The occurrence of the errors is no
> continuous but it's pretty frequent.
>
> The three backend servers are Windows 2012 R2 DNS servers
>
> I will provide the nginx.conf details soon.
>
> I may have to do some tuning besides upgrade to an appropriate version
> I guess. Can anyone help to suggest/recommend tuning to minimize the
> errors or rather not to have them at all?
>
What I can recommend is to upgrade nginx from 1.12 to the most recent 1.17
version (i.e. 1.17.8). We have added tons of improvements to the tcp/udp
load-balancing capabilities since the 1.12 days.
--
Maxim Konovalov
From amillar2012 at gmail.com Wed Feb 19 15:26:04 2020
From: amillar2012 at gmail.com (Andy Millar)
Date: Wed, 19 Feb 2020 07:26:04 -0800
Subject: please delete me from your mailing list
Message-ID:
--
R. A. "Andy" Millar
920 S. Main
PO Box 388
Milton-Freewater, OR 97862
541 938-4485 fax 541 938-0328
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From nginx-forum at forum.nginx.org Wed Feb 19 19:08:25 2020
From: nginx-forum at forum.nginx.org (cyoho)
Date: Wed, 19 Feb 2020 14:08:25 -0500
Subject: "Client prematurely closed connection" error
Message-ID: <70561d703f4119c7a68530b86d82d3c3.NginxMailingListEnglish@forum.nginx.org>
I am using nginx version 1.16.1 as a forward proxy on a CentOS 7 server.
Our order inventory server (ORDINV) connects through the nginx server
(TAXPROXY) to request sales tax info from a cloud server. We are in test
mode with this configuration. This has all been working well, except that
when they stop testing for a while, the taxproxy server stops working. The
error log shows
2020/02/17 08:33:07 [info] 879#879: 42826 epoll_wait() reported that
client prematurely closed connection, so upstream connection is closed too
while SSL handshaking to upstream, client: xxx.xx.x.xx, server:
taxproxy.umpublishing.org, request: "POST / HTTP/1.1", upstream:
"https://yy.yyy.yyy.yy:443/services/CalculateTax70", host:
"zzz.zzz.z.zz:80"
After a couple of these info messages there is an [error]: "peer closed
connection in SSL handshake (104: Connection reset by peer) while SSL
handshaking to upstream".
The nginx service is running, and if I issue systemctl restart nginx,
everything starts working fine again. Any ideas what might be wrong?
Googling turned up several sites where people reported similar problems,
but no one had gotten an answer....
TIA~
Cindy
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287065,287065#msg-287065
From nginx-forum at forum.nginx.org Wed Feb 19 19:57:24 2020
From: nginx-forum at forum.nginx.org (svgkraju)
Date: Wed, 19 Feb 2020 14:57:24 -0500
Subject: simple file based authorization
Message-ID:
My nginx config file is as follows:
server {
...
location / {
....
auth_request /custom_auth;
....
}
location /custom_auth {
proxy_pass http://localhost:9123;
proxy_pass_request_body off;
proxy_set_header Content-Length "";
proxy_set_header Host $host;
proxy_set_header X-Original-URI $request_uri;
}
}
Client will provide URL (URL is 3rd party application) and username. URL
contains project name. I have a simple file in the same server with project
and username mapping. If mapping exists allow the URL to execute, otherwise
fail it.
How can I implement this at http://localhost:9123? Using OAuth2? The
sample code I checked mostly talks about passwords, tokens, etc. Can this
be done in a much simpler manner?
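For what it's worth, the mapping check itself is just a file lookup. A minimal sketch of the core logic the service on port 9123 would perform (the file name, its "project user" line format, and how the project/user values reach the backend are all assumptions):

```shell
# allowed.txt maps "project user" pairs. The auth backend returns 200 when
# the pair exists and 403 otherwise; auth_request treats 2xx as "allow".
printf 'projA alice\nprojB bob\n' > allowed.txt
project=projA
user=alice
if grep -qx "$project $user" allowed.txt; then echo 200; else echo 403; fi
# -> 200
```

Any tiny HTTP wrapper can expose this check; auth_request only cares about the returned status code.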
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287066,287066#msg-287066
From nginx-forum at forum.nginx.org Wed Feb 19 23:30:39 2020
From: nginx-forum at forum.nginx.org (AshleyinSpain)
Date: Wed, 19 Feb 2020 18:30:39 -0500
Subject: Nginx Valid Referer - Access Control - Help Wanted
In-Reply-To: <20200207000902.GA26683@daoine.org>
References: <20200207000902.GA26683@daoine.org>
Message-ID: <0f91e5a3a1dc6d73acfbe296df65d07b.NginxMailingListEnglish@forum.nginx.org>
Francis Daly Wrote:
-------------------------------------------------------
> On Thu, Feb 06, 2020 at 06:02:50PM -0500, AshleyinSpain wrote:
>
> Hi there,
>
> > > > server {
> > > > location /radio/ {
> > > > valid_referers none blocked server_names ~\.mysite\.;
> > > > if ($invalid_referer) { return 403; }
> > > > }
> > > > }
>
> > I deleted the 'none' and 'blocked' and no difference still not
> blocking
> > direct access to the URL
> >
> > Tried adding it in its own block and adding it to the end of an
> existing
> > block neither worked
> >
> > Is the location /radio/ part ok
> >
> > I am trying to block direct access to any URL with a directory
> /radio/
> >
> > The URLs look like sub.domain.tld/radio/1234/mytrack.mp3?45678901
>
> In nginx, one request is handled in one location.
>
> If /radio/ is the location that you configured to handle this request,
> then the config should apply.
>
> If you have, for example, "location ~ mp3", then *that* would probably
> be the location that is configured to handle this request (and so that
> is where this "return 403;" should be.
>
> You could try changing the line to be "location ^~ /radio/ {", but
> without knowing your full config, it is hard to know if that will fix
> things or break them.
>
> http://nginx.org/r/location
>
> > I need it so the URL is only served if a link on *.mysite.* is
> clicked ie
> > the track is only played through an html5 audio player on mysite
>
> That is not a thing that can be done reliably.
>
> If "unreliable" is good enough for you, then carry on. Otherwise, come
> up with a new requirement that can be done.
>
> Cheers,
>
> f
> --
> Francis Daly francis at daoine.org
Hi Francis
I've added further comments here, it's getting a bit messy above
I added, as you suggested, the ^~ to /radio/ and it now blocks it
redirecting to where I put in the invalid_referer bit
The valid_referer part doesn't work though,
valid_referers server_names
*.mysite.com mysite.com dev.mysite.* can.mysite.*
can.mysite.com/dashboard
~\.mysite\.;
it doesn't recognise the parameters or URLs.
I copied the examples in the docs and I have tried loads of variations taken
from various suggestions etc online
When you say above that "it is not a thing that can be done reliably", is
that because the headers can be 'forged', or does it just not work properly?
I am only trying to stop someone casually copying the stream URL and
pasting it into a browser to listen for free - I realise any determined
person can get around it, and I'm not trying to stop that with this.
Ultimately I will have to add more robust controls with JS and passwords,
but that will come later on down the line.
Do you need me to copy the entire nginx config here?
Thanks for your help
Ashley
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286958,287068#msg-287068
From nginx-forum at forum.nginx.org Thu Feb 20 01:25:38 2020
From: nginx-forum at forum.nginx.org (satay)
Date: Wed, 19 Feb 2020 20:25:38 -0500
Subject: DNS load balancing issue
In-Reply-To:
References:
Message-ID: <8ec551028f688343155f4b84e364326a.NginxMailingListEnglish@forum.nginx.org>
Hi Maxim,
Thanks for responding. I agree with your recommendation. I guess a direct
upgrade from 1.12 to 1.16 (free community version) is possible and
shouldn't break anything.
I'd prefer 1.16 since it's the latest stable version. Besides the
upgrade, do you recommend any performance tuning?
Thanks
FYI - This is the error I see in the "dns.log" occurring frequently.
2020/02/19 16:47:28 [error] 19509#0: *4852298929 no live upstreams while
connecting to upstream, udp client: x.x.x.x, server: 0.0.0.0:53, upstream:
"dns_servers", bytes from/to client:50/0, bytes from/to upstream:0/0
This is the nginx.conf ---
worker_processes auto;
error_log /var/log/nginx/error.log;
include /usr/share/nginx/modules/*.conf;
events {
}
stream {
upstream dns_servers {
server x.x.x.x:53 fail_timeout=60s;
server x.x.x.x:53 fail_timeout=60s;
server x.x.x.x:53 fail_timeout=60s;
}
server {
listen 53 udp;
listen 53; #tcp
proxy_pass dns_servers;
error_log /var/log/nginx/dns.log info;
proxy_responses 1;
proxy_timeout 5s;
}
}
http {
index index.html;
server {
listen 80 default_server;
server_name _;
access_log /var/log/nginx/access.log;
server_name_in_redirect off;
root /var/www/default/htdocs;
allow x.x.x.x;
deny all;
location /nginx_status {
stub_status on;
access_log off;
}
}
}
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287053,287069#msg-287069
From stephane.durieux at univ-lyon1.fr Thu Feb 20 12:20:35 2020
From: stephane.durieux at univ-lyon1.fr (DURIEUX STEPHANE)
Date: Thu, 20 Feb 2020 12:20:35 +0000
Subject: shared location for fastcgi_temp_path and client_body_temp_path
Message-ID: <72f5b42d82d6451e8abd61fbf48bcbdf@BPMBX2013-02.univ-lyon1.fr>
Hello,
We are running several instances of nginx for the same application. We would like to know if it is safe for those instances to share a common storage location for the fastcgi_temp_path and client_body_temp_path parameters.
Are the generated chunk files unique to each instance?
Regards
Stephane Durieux
DSI - Pôle infrastructure
Université Claude Bernard
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From nginx-forum at forum.nginx.org Thu Feb 20 13:10:14 2020
From: nginx-forum at forum.nginx.org (adrian.hilt)
Date: Thu, 20 Feb 2020 08:10:14 -0500
Subject: Nginx php reverse proxy problem
In-Reply-To: <20200218233745.GG26683@daoine.org>
References: <20200218233745.GG26683@daoine.org>
Message-ID:
Thanks, I just tried it and it didn't work.
If I use the IP to access the server I don't have any problem; the problem
appears when the request goes through nginx.
Maybe there are some parameters in the PHP config of my server that I need
to change?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287003,287071#msg-287071
From francis at daoine.org Thu Feb 20 14:20:30 2020
From: francis at daoine.org (Francis Daly)
Date: Thu, 20 Feb 2020 14:20:30 +0000
Subject: Nginx Valid Referer - Access Control - Help Wanted
In-Reply-To: <0f91e5a3a1dc6d73acfbe296df65d07b.NginxMailingListEnglish@forum.nginx.org>
References: <20200207000902.GA26683@daoine.org>
<0f91e5a3a1dc6d73acfbe296df65d07b.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20200220142030.GL26683@daoine.org>
On Wed, Feb 19, 2020 at 06:30:39PM -0500, AshleyinSpain wrote:
> Francis Daly Wrote:
> > On Thu, Feb 06, 2020 at 06:02:50PM -0500, AshleyinSpain wrote:
Hi there,
> > > I am trying to block direct access to any URL with a directory
> > /radio/
> > >
> > > The URLs look like sub.domain.tld/radio/1234/mytrack.mp3?45678901
> > > I need it so the URL is only served if a link on *.mysite.* is
> > clicked ie
> > > the track is only played through an html5 audio player on mysite
> >
> > That is not a thing that can be done reliably.
> The valid_referer part doesn't work though,
>
> valid_referers server_names
> *.mysite.com mysite.com dev.mysite.* can.mysite.*
> can.mysite.com/dashboard
> ~\.mysite\.;
>
> it doesn't recognise the parameters or urls
Can you show exactly what you mean by "doesn't work"? It seems to work
for me.
That is, if I use
===
server {
listen 8080 default_server;
server_name three;
location ^~ /radio/ {
valid_referers server_names
*.mysite.com mysite.com dev.mysite.* can.mysite.*
can.mysite.com/dashboard ~\.mysite\.;
if ($invalid_referer) { return 403; }
return 200 "This request is allowed: $request_uri, $http_referer\n";
}
}
===
then I see (403 is "blocked"; 200 is "allowed"):
# no Referer
$ curl -i http://127.0.0.1:8080/radio/one
403
# Referer that matches can.mysite.*
$ curl -i -H Referer:http://can.mysite.cxx http://127.0.0.1:8080/radio/one
200
# Referer that does not match can.mysite.com/dashboard
$ curl -i -H Referer:http://can.mysite.com/dashboar http://127.0.0.1:8080/radio/one
403
# Referer that matches can.mysite.com/dashboard
$ curl -i -H Referer:http://can.mysite.com/dashboards http://127.0.0.1:8080/radio/one
200
# Referer that matches a server_name
$ curl -i -H Referer:https://three http://127.0.0.1:8080/radio/one
200
> I copied the examples in the docs and I have tried loads of variations taken
> from various suggestions etc online
If you can show one specific config that you use; and one specific
request that you make; and the response that you get and how it is not
the response that you want; it will probably be easier to identify where
the problem is.
> When you say above - That is not a thing that can be done reliably is that
> because the headers can be 'forged' or it just doesn't work properly
The headers can be forged, just like I do above in the "curl" commands.
All the best,
f
--
Francis Daly francis at daoine.org
From francis at daoine.org Thu Feb 20 14:25:56 2020
From: francis at daoine.org (Francis Daly)
Date: Thu, 20 Feb 2020 14:25:56 +0000
Subject: Nginx php reverse proxy problem
In-Reply-To:
References: <20200218233745.GG26683@daoine.org>
Message-ID: <20200220142556.GM26683@daoine.org>
On Thu, Feb 20, 2020 at 08:10:14AM -0500, adrian.hilt wrote:
Hi there,
> Thanks, I just tried and it didn't work.
What config do you use?
What request do you make?
What response do you get?
What response do you want instead?
> If I use the ip to access I don't have any problem, when it goes throw nginx
> is the problem.
I don't understand what that means.
Can you copy-paste the (e.g.) "curl -v" output for a working and failing
request? Feel free to edit any private data; but if you do, please edit
it consistently.
> Maybe are there some parameters in the php config of my server that I need
> to change?
Maybe.
But guessing may not be the most efficient way to resolve the problem.
Cheers,
f
--
Francis Daly francis at daoine.org
From nginx-forum at forum.nginx.org Fri Feb 21 00:31:12 2020
From: nginx-forum at forum.nginx.org (satay)
Date: Thu, 20 Feb 2020 19:31:12 -0500
Subject: Live Activity dashboard
Message-ID:
Hi,
Does the open-source nginx provide a live activity dashboard similar to the
one in the Plus version?
All I know is "stub_status".
Thanks
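(For reference, the open-source build only exposes counters through stub_status, not a live dashboard. A minimal sketch of enabling it, assuming nginx was built with --with-http_stub_status_module:)

```nginx
server {
    listen 127.0.0.1:8081;

    location /basic_status {
        stub_status;        # plain-text counters: active connections, requests, reading/writing/waiting
        allow 127.0.0.1;    # keep the endpoint off public interfaces
        deny all;
    }
}
```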
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287078,287078#msg-287078
From nginx-forum at forum.nginx.org Fri Feb 21 21:19:46 2020
From: nginx-forum at forum.nginx.org (bdarbro)
Date: Fri, 21 Feb 2020 16:19:46 -0500
Subject: Nginx - 56 day old reverse-proxy suddenly unable to connect upstream.
Message-ID: <7e5d3f9e8fe2720cc2a544e6a454a4f0.NginxMailingListEnglish@forum.nginx.org>
I have nginx configured as a reverse proxy to Amazon's AWS IoT MQTT service.
This was functioning well for almost 2 months, when suddenly 20 out of 32
instances of this stopped being able to connect upstream. We started seeing
sporadic upstream SSL connection errors, followed by sporadic upstream
connection refused, and then finally, mostly connection timeouts to
upstream. Nothing short of a restart or reload of Nginx fixes this. Debug
logging is not enabled, and trying to enable it replaces the worker
processes, and effectively ends the issue. Over the next 3 days, the
remaining nodes started exhibiting this problem as well. Rather than
restarting nginx on these remaining nodes, I isolated them for study, and
stood up new nodes to replace them.
But in studying these, I cannot find any indicator as to why this is
happening. Now that these have been removed from client traffic and I can
test with curl... I can hit one of these 5 times, and by the 5th call I
get a repro: connection timeout to the upstream, resulting in a timeout to
me.
==========================================================
Here is the version information for nginx, as it comes from Ubuntu 18.04:
nginx version: nginx/1.14.0 (Ubuntu)
built with OpenSSL 1.1.1 11 Sep 2018
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2
-fdebug-prefix-map=/build/nginx-GkiujU/nginx-1.14.0=.
-fstack-protector-strong -Wformat -Werror=format-security -fPIC -Wdate-time
-D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro
-Wl,-z,now -fPIC' --prefix=/usr/share/nginx
--conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log
--error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock
--pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/modules
--http-client-body-temp-path=/var/lib/nginx/body
--http-fastcgi-temp-path=/var/lib/nginx/fastcgi
--http-proxy-temp-path=/var/lib/nginx/proxy
--http-scgi-temp-path=/var/lib/nginx/scgi
--http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit
--with-http_ssl_module --with-http_stub_status_module
--with-http_realip_module --with-http_auth_request_module
--with-http_v2_module --with-http_dav_module --with-http_slice_module
--with-threads --with-http_addition_module --with-http_geoip_module=dynamic
--with-http_gunzip_module --with-http_gzip_static_module
--with-http_image_filter_module=dynamic --with-http_sub_module
--with-http_xslt_module=dynamic --with-stream=dynamic
--with-stream_ssl_module --with-mail=dynamic --with-mail_ssl_module
==========================================================
nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
worker_rlimit_nofile 30500;
events {
worker_connections 10000;
# multi_accept on;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
#IPV6 also disabled via kernel boot option and sysctl, too.
#Couldn't get nginx to stop AAAA lookups without doing that.
resolver 8.8.8.8 8.8.4.4 valid=3s ipv6=off;
resolver_timeout 10;
# enable reverse proxy
proxy_redirect off;
proxy_set_header Host CENSORED.amazonaws.com;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwared-For $proxy_add_x_forwarded_for;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log error;
gzip on;
# Nginx-lua-prometheus
# Prometheus metric library for Nginx
lua_shared_dict prometheus_metrics 10M;
lua_package_path "/etc/nginx/nginx-lua-prometheus/?.lua";
init_by_lua '
prometheus = require("prometheus").init("prometheus_metrics")
metric_requests = prometheus:counter(
"nginx_http_requests_total", "Number of HTTP requests", {"host",
"status"})
metric_latency = prometheus:histogram(
"nginx_http_request_duration_seconds", "HTTP request latency",
{"host"})
metric_connections = prometheus:gauge(
"nginx_http_connections", "Number of HTTP connections", {"state"})
';
log_by_lua '
metric_requests:inc(1, {ngx.var.server_name, ngx.var.status})
metric_latency:observe(tonumber(ngx.var.request_time),
{ngx.var.server_name})
';
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
==========================================================
iot-proxy config file:
# Define group of backend / upstream servers:
upstream iot-backend
{
server CENSORED.amazonaws.com:443;
}
server
{
#listen 443 default ssl;
listen 443 ssl;
server_name CENSORED.something.com;
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 86400;
ssl_certificate /etc/nginx/ssl/CENSORED.crt;
ssl_certificate_key /etc/nginx/ssl/CENSORED.key;
ssl_verify_client off;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers RC4:HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
location /
{
proxy_pass https://iot-backend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host "CENSORED.amazonaws.com:443";
proxy_read_timeout 86400;
proxy_ssl_session_reuse off;
}
}
==========================================================
nginx-lua-prometheus config file:
server {
listen 9145;
allow 0.0.0.0/0;
allow 127.0.0.1/32;
deny all;
location /metrics {
content_by_lua '
metric_connections:set(ngx.var.connections_reading, {"reading"})
metric_connections:set(ngx.var.connections_waiting, {"waiting"})
metric_connections:set(ngx.var.connections_writing, {"writing"})
prometheus:collect()
';
}
}
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287081,287081#msg-287081
From osa at freebsd.org.ru Fri Feb 21 21:40:25 2020
From: osa at freebsd.org.ru (Sergey A. Osokin)
Date: Sat, 22 Feb 2020 00:40:25 +0300
Subject: Nginx - 56 day old reverse-proxy suddenly unable to connect
upstream.
In-Reply-To: <7e5d3f9e8fe2720cc2a544e6a454a4f0.NginxMailingListEnglish@forum.nginx.org>
References: <7e5d3f9e8fe2720cc2a544e6a454a4f0.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20200221214025.GB70003@FreeBSD.org.ru>
Hi there,
thanks for the report!
Are there any third-party modules involved?
Could you explain the reason for using SSLv3 in this case?
Thanks.
--
Sergey Osokin
On Fri, Feb 21, 2020 at 04:19:46PM -0500, bdarbro wrote:
> I have nginx configured as a reverse proxy to Amazon's AWS IoT MQTT service.
> [...]
From nginx-forum at forum.nginx.org Fri Feb 21 21:46:42 2020
From: nginx-forum at forum.nginx.org (bdarbro)
Date: Fri, 21 Feb 2020 16:46:42 -0500
Subject: Nginx - 56 day old reverse-proxy suddenly unable to connect
upstream.
In-Reply-To: <20200221214025.GB70003@FreeBSD.org.ru>
References: <20200221214025.GB70003@FreeBSD.org.ru>
Message-ID: <3345ac6ed57391de38665fcb9d9c65fb.NginxMailingListEnglish@forum.nginx.org>
Yes: nginx-lua-prometheus.
It is installed in /etc/nginx/nginx-lua-prometheus and loaded through the
prometheus config file included above.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287081,287083#msg-287083
From nginx-forum at forum.nginx.org Fri Feb 21 21:52:16 2020
From: nginx-forum at forum.nginx.org (bdarbro)
Date: Fri, 21 Feb 2020 16:52:16 -0500
Subject: Nginx - 56 day old reverse-proxy suddenly unable to connect
upstream.
In-Reply-To: <20200221214025.GB70003@FreeBSD.org.ru>
References: <20200221214025.GB70003@FreeBSD.org.ru>
Message-ID:
Oh, and SSLv3 is enabled because of client firmware using an old stack,
something I can do nothing about.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287081,287084#msg-287084
From themadbeaker at gmail.com Sat Feb 22 14:38:05 2020
From: themadbeaker at gmail.com (J.R.)
Date: Sat, 22 Feb 2020 08:38:05 -0600
Subject: Nginx - 56 day old reverse-proxy suddenly unable to connect
upstream.
Message-ID:
> resolver 8.8.8.8 8.8.4.4 valid=3s ipv6=off;
I doubt this is related to your issue, but any reason you have 'valid'
set to only 3 seconds for your resolver conf? Seems like you could be
doing a lot of unnecessary repetitive lookups because that is set so
low.
> ssl_session_cache shared:SSL:1m;
> ssl_session_timeout 86400;
This also seems dubious. SSL session timeout is set to 24 hours, but
your session cache is only 1MB.
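A sketch of more consistent values, assuming the rough "4000 sessions per megabyte" figure from the nginx docs (the exact sizes here are illustrative, not tested against this deployment) -- either grow the cache or shorten the timeout so sessions aren't evicted long before they expire:

```nginx
# Relax the aggressive 3-second DNS cache override
resolver 8.8.8.8 8.8.4.4 valid=300s ipv6=off;

ssl_session_cache shared:SSL:50m;  # roughly 200k sessions
ssl_session_timeout 4h;            # far less than the original 86400s
```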
From nginx-forum at forum.nginx.org Sat Feb 22 16:42:46 2020
From: nginx-forum at forum.nginx.org (LilFag)
Date: Sat, 22 Feb 2020 11:42:46 -0500
Subject: Come on... Bring the Official PROPFIND and OPTIONS support!
Message-ID:
Same request: https://forum.nginx.org/read.php?2,227415,227487#msg-227487
I'm using Windows and I want nginx to support these methods in its WebDAV
module. There are things that only nginx and Windows support together, so I
won't switch to Linux, and I cannot compile Roman's dav-ext module because
it cannot be built on Windows.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287089,287089#msg-287089
From abbot at monksofcool.net Sat Feb 22 16:50:29 2020
From: abbot at monksofcool.net (Ralph Seichter)
Date: Sat, 22 Feb 2020 17:50:29 +0100
Subject: Come on... Bring the Official PROPFIND and OPTIONS support!
In-Reply-To:
References:
Message-ID: <87v9ny36sq.fsf@wedjat.horus-it.com>
* LilFag:
> I'm using Windows and I want Nginx to support these methods in their
> WebDAV.
Free software meets entitlement issues. What's the magic word?
-Ralph
From nginx-forum at forum.nginx.org Sat Feb 22 17:15:49 2020
From: nginx-forum at forum.nginx.org (LilFag)
Date: Sat, 22 Feb 2020 12:15:49 -0500
Subject: Come on... Bring the Official PROPFIND and OPTIONS support!
In-Reply-To: <87v9ny36sq.fsf@wedjat.horus-it.com>
References: <87v9ny36sq.fsf@wedjat.horus-it.com>
Message-ID:
Nginx handles symlinks 100x better than Apache regardless of settings.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287089,287091#msg-287091
From nginx-forum at forum.nginx.org Sun Feb 23 15:07:21 2020
From: nginx-forum at forum.nginx.org (adrian5)
Date: Sun, 23 Feb 2020 10:07:21 -0500
Subject: Phrasing in 'ngx_http_limit_req_module' documentation
Message-ID: <947382e5cac69706b61582f6661d5fcb.NginxMailingListEnglish@forum.nginx.org>
https://nginx.org/en/docs/http/ngx_http_limit_req_module.html
> If the zone storage is exhausted, the least recently used state is
removed. Even if after that a new state cannot be created, the request is
terminated with an error.
Unless I misunderstood the intention, I think this should read: "… state is
removed. If even after that, a new state cannot be created …".
That way it makes more sense.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287092,287092#msg-287092
From nginx-forum at forum.nginx.org Mon Feb 24 17:15:19 2020
From: nginx-forum at forum.nginx.org (bdarbro)
Date: Mon, 24 Feb 2020 12:15:19 -0500
Subject: Nginx - 56 day old reverse-proxy suddenly unable to connect
upstream.
In-Reply-To:
References:
Message-ID: <056ee3644a8432e665b634570ab471d5.NginxMailingListEnglish@forum.nginx.org>
Sure. "valid=3s" probably can be relaxed, yes. The intent was to
prevent grouping too many connections on the same 8 or so IPs we get in a
single DNS query.
> ssl_session_cache shared:SSL:1m;
> ssl_session_timeout 86400;
This is a nice pointer, thank you. From the nginx.org documentation: "One
megabyte of the cache contains about 4000 sessions." This should definitely
be higher. I don't know if it played into the outage I had, but it is good
to get this set properly going forward.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287081,287097#msg-287097
From nginx-forum at forum.nginx.org Tue Feb 25 12:05:05 2020
From: nginx-forum at forum.nginx.org (stmx38)
Date: Tue, 25 Feb 2020 07:05:05 -0500
Subject: Nginx location - Distinguish requests by arguments or queries
Message-ID: <47a38746aadbfe03aba74942e0fe3940.NginxMailingListEnglish@forum.nginx.org>
Hello,
We have a case where we need to permit different methods on the same
location but for different requests.
For the location /test we should permit only POST; for /test?doc we should
permit only GET.
Our config example:
----
location /test {
error_page 403 =405 /custom-error;
limit_except POST {
deny all;
}
proxy_pass http://test/in;
}
----
A map doesn't help us, since it can't be used inside 'limit_except'. We also
tried to use 'if' to pass the request to a named location using 'try_files',
but that is also not allowed inside 'if'.
Can't be used:
----
limit_except $method {
deny all;
}
if ($request_uri ~ \?doc) {
try_files @test-doc;
}
----
The question is: is there a way to implement this, and how?
Thank you!
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287101,287101#msg-287101
From francis at daoine.org Tue Feb 25 17:06:15 2020
From: francis at daoine.org (Francis Daly)
Date: Tue, 25 Feb 2020 17:06:15 +0000
Subject: Nginx location - Distinguish requests by arguments or queries
In-Reply-To: <47a38746aadbfe03aba74942e0fe3940.NginxMailingListEnglish@forum.nginx.org>
References: <47a38746aadbfe03aba74942e0fe3940.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20200225170615.GR26683@daoine.org>
On Tue, Feb 25, 2020 at 07:05:05AM -0500, stmx38 wrote:
Hi there,
> We have a case when we should permit different methods on the same location
> but different requests.
The nginx model does "location" matching without reference to the
query string.
But you may be able to build your own "good-enough" config, for your
use case.
> For location /test we should permit only POST, for /test?doc we should
> permit only GET methods.
As you have seen - limit_except does not do what you want.
But if you can enumerate the things that you want to allow (or: want
to block), then you can make your own variable from "method" and "uri"
that you can test. Only checking it within the one location{} should
reduce the impact on any other requests.
If you change the list of things that should be allowed, then you will
need ongoing maintenance of the hardcoded list.
Something like:
==
map $request_method$request_uri $block_this {
default 1;
~*GET/test\?doc 0;
~*POST/test\?. 1;
~*POST/test 0;
}
==
along with
==
location /test {
error_page 403 =405 /custom-error;
if ($block_this = 1) { return 403; }
proxy_pass http://test/in;
}
==
(Set the return code to what you want.)
may be adaptable to do what you want.
The case-insensitive is because I don't know if your clients do GET or get
or gEt; and I don't know how important it is that *no* "should-be-blocked"
requests get to your back-end. Adjust to taste based on testing.
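(To see which method/URI combinations such a map would block, the matching can be sanity-checked outside nginx. A rough Python mirror of the logic above; an approximation only, since nginx's map module checks regexes in order of appearance, which this loop imitates:)

```python
import re

# (pattern, verdict) pairs mirroring the map above; 1 = block, 0 = allow
rules = [
    (r"GET/test\?doc", 0),
    (r"POST/test\?.", 1),
    (r"POST/test", 0),
]

def block_this(method: str, uri: str) -> int:
    key = method + uri                      # nginx: $request_method$request_uri
    for pattern, verdict in rules:
        if re.search(pattern, key, re.IGNORECASE):
            return verdict
    return 1                                # map default: block

print(block_this("POST", "/test"))      # 0 (allowed)
print(block_this("GET", "/test?doc"))   # 0 (allowed)
print(block_this("POST", "/test?x"))    # 1 (blocked)
print(block_this("GET", "/test"))       # 1 (blocked by the default)
```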
Good luck with it,
f
--
Francis Daly francis at daoine.org
From francis at daoine.org Wed Feb 26 08:48:06 2020
From: francis at daoine.org (Francis Daly)
Date: Wed, 26 Feb 2020 08:48:06 +0000
Subject: Nginx location - Distinguish requests by arguments or queries
In-Reply-To: <20200225170615.GR26683@daoine.org>
References: <47a38746aadbfe03aba74942e0fe3940.NginxMailingListEnglish@forum.nginx.org>
<20200225170615.GR26683@daoine.org>
Message-ID: <20200226084806.GS26683@daoine.org>
On Tue, Feb 25, 2020 at 05:06:15PM +0000, Francis Daly wrote:
> On Tue, Feb 25, 2020 at 07:05:05AM -0500, stmx38 wrote:
Hi there,
two corrections to my suggestion...
> > For location /test we should permit only POST, for /test?doc we should
> > permit only GET methods.
> Something like:
>
> ==
> map $request_method$request_uri $block_this {
> default 1;
> ~*GET/test\?doc 0;
> ~*POST/test\?. 1;
> ~*POST/test 0;
> }
> ==
> The case-insensitive is because I don't know if your clients do GET or get
> or gEt; and I don't know how important it is that *no* "should-be-blocked"
> requests get to your back-end. Adjust to taste based on testing.
"method" (GET, POST, etc) is case-sensitive -- so will always be
uppercase, so you don't need the case-insensitive matching.
And the patterns should be anchored to the start, to avoid some false
matches.
So - change each instance of "~*" to be "~^", and it should be righter.
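Putting both corrections together, the suggested map would read (a sketch, not tested against the poster's setup):

```nginx
map $request_method$request_uri $block_this {
    default 1;
    ~^GET/test\?doc 0;
    ~^POST/test\?. 1;
    ~^POST/test 0;
}
```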
Cheers,
f
--
Francis Daly francis at daoine.org
From larry.martell at gmail.com Wed Feb 26 22:55:02 2020
From: larry.martell at gmail.com (Larry Martell)
Date: Wed, 26 Feb 2020 17:55:02 -0500
Subject: Deploying django, channels and websockets with nginx and daphne
Message-ID:
I've posted this to the django mailing list and to stack overflow,
with no replies so trying here.
I am trying to deploy a django app that uses channels and websockets,
with nginx and daphne.
When I was using uwsgi, here was my nginx file:
upstream django {
server unix:/run/uwsgi/devAppReporting.sock;
}
server {
listen 8090;
server_name foo.bar.com;
charset utf-8;
location /static {
alias /var/dev-app-reporting/static;
}
location / {
uwsgi_pass django;
include /var/dev-app-reporting/uwsgi_params;
uwsgi_read_timeout 3600;
client_max_body_size 50m;
}
}
Now I changed it to this:
upstream django {
server unix:/run/daphne/devAppReporting.sock;
}
server {
listen 8090;
server_name foo.bar.com;
charset utf-8;
location /static {
alias /var/dev-app-reporting/static;
}
location / {
proxy_pass http://0.0.0.0:8090;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
}
Started daphne like this:
daphne -u /run/daphne/devAppReporting.sock app.dse.asgi:application
I get a 502 bad gateway error and this in the log:
2020/02/24 22:17:26 [alert] 29169#29169: 768 worker_connections are not enough
2020/02/24 22:17:26 [error] 29169#29169: *131545 recv() failed (104:
Connection reset by peer) while reading response header from upstream,
client: 127.0.0.1, server:
dse-portfolio-dev-assessments.md.virtualclarity.com, request: "GET /
HTTP/1.1", upstream: "http://0.0.0.0:8090/", host: "xx.xx.xx.xx"
Any ideas on what I should have in my config file for this to work?
From francis at daoine.org Wed Feb 26 23:39:19 2020
From: francis at daoine.org (Francis Daly)
Date: Wed, 26 Feb 2020 23:39:19 +0000
Subject: Deploying django, channels and websockets with nginx and daphne
In-Reply-To:
References:
Message-ID: <20200226233919.GV26683@daoine.org>
On Wed, Feb 26, 2020 at 05:55:02PM -0500, Larry Martell wrote:
Hi there,
> upstream django {
> server unix:/run/daphne/devAppReporting.sock;
> }
>
> server {
> listen 8090;
> location / {
> proxy_pass http://0.0.0.0:8090;
Changing that to "proxy_pass http://django;" is probably the best
first step.
Maybe there won't need to be a second step!
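A sketch of the corrected location block, assuming the upstream{} from the original message; the key change is pointing proxy_pass at the daphne socket's upstream instead of back at nginx's own port 8090:

```nginx
location / {
    proxy_pass http://django;                # the unix-socket upstream, not 0.0.0.0:8090
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;  # preserve WebSocket upgrades for channels
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```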
Cheers,
f
--
Francis Daly francis at daoine.org
From dmw at yubasolutions.com Thu Feb 27 00:04:19 2020
From: dmw at yubasolutions.com (Daniel Wilcox)
Date: Thu, 27 Feb 2020 00:04:19 +0000
Subject: Deploying django, channels and websockets with nginx and daphne
In-Reply-To:
References:
Message-ID:
At a quick glance -- your proxy_pass statement is pointed at the nginx
listener instead of at the upstream.
Change this:
proxy_pass http://0.0.0.0:8090;
To this:
proxy_pass http://django;
Hope that helps,
=D
On 2/26/20, Larry Martell wrote:
> I've posted this to the django mailing list and to stack overflow,
> with no replies so trying here.
> [...]
From larry.martell at gmail.com Thu Feb 27 14:07:09 2020
From: larry.martell at gmail.com (Larry Martell)
Date: Thu, 27 Feb 2020 09:07:09 -0500
Subject: Deploying django, channels and websockets with nginx and daphne
In-Reply-To: <20200226233919.GV26683@daoine.org>
References:
<20200226233919.GV26683@daoine.org>
Message-ID:
On Wed, Feb 26, 2020 at 6:39 PM Francis Daly wrote:
>
> On Wed, Feb 26, 2020 at 05:55:02PM -0500, Larry Martell wrote:
>
> Hi there,
>
> > upstream django {
> > server unix:/run/daphne/devAppReporting.sock;
> > }
> >
> > server {
> > listen 8090;
>
> > location / {
> > proxy_pass http://0.0.0.0:8090;
>
> Changing that to "proxy_pass http://django;" is probably the best
> first step.
>
> Maybe there won't need to be a second step!
That was it! Thank you so much Francis.
From stefano.serano at ngway.it Thu Feb 27 14:41:07 2020
From: stefano.serano at ngway.it (Stefano Serano)
Date: Thu, 27 Feb 2020 14:41:07 +0000
Subject: R: problem with proxy pass
In-Reply-To:
References:
Message-ID: <67eae834efef46f9a31705b4a8c65edce1c25fc2b1494e6489f91b07bc8fe2a0@ngway.it>
Hi all,
I'm trying to use nginx as a load balancer for my HIDS system (Wazuh). I have hosts that send logs from outside my network and from inside, through UDP port 1514.
From the hosts outside I have no connection problems, but from inside they are unable to connect to the port. No firewall is enabled on the nginx LB (a CentOS 7 machine, by the way) and SELinux is disabled.
Can someone tell me how to figure out what's wrong?
Here my nginx configuration:
--------------------------------------
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 10000;
}
stream {
upstream master {
server 10.0.0.7:1515;
}
upstream mycluster {
hash $remote_addr consistent;
server 10.0.0.7:1514;
server 10.0.0.6:1514;
}
server {
listen 1515;
proxy_pass master;
}
server {
listen 1514 udp;
proxy_pass mycluster;
}
#error_log /var/log/nginx/error.log debug;
}
--------------------------------------
have a nice day
Pursuant to Art. 13 of EU Regulation 2016/679 (GDPR), please note that any personal data contained in this document is processed by the sender according to the principles of fairness, lawfulness and transparency. The full privacy notice is available on request at our offices or at the email address: info at ngway.it. Please also note that the information contained in this communication and its attachments may be confidential and is, in any case, intended exclusively for the addressed persons or companies. Dissemination, distribution and/or copying of the transmitted document by anyone other than the addressee is prohibited, pursuant to Art. 616 of the Italian Criminal Code. If you have received this message in error, please destroy it.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From r at roze.lv Thu Feb 27 17:46:00 2020
From: r at roze.lv (Reinis Rozitis)
Date: Thu, 27 Feb 2020 19:46:00 +0200
Subject: problem with proxy pass
In-Reply-To: <67eae834efef46f9a31705b4a8c65edce1c25fc2b1494e6489f91b07bc8fe2a0@ngway.it>
References:
<67eae834efef46f9a31705b4a8c65edce1c25fc2b1494e6489f91b07bc8fe2a0@ngway.it>
Message-ID: <000901d5ed95$c669a520$533cef60$@roze.lv>
> From the hosts outside I have no connection problems, but from inside they are unable to connect to the port. No firewall is enabled on the Nginx LB (a CentOS 7 machine, by the way) and SELinux is disabled.
By "from inside" do you mean other hosts on the LAN, or the same CentOS machine itself?
If the first, then it's most likely firewall-related (limited outbound UDP on the clients) or routing-related.
Without knowing the details/network topology there is not much to suggest - try to test whether the clients can connect to any other (open) port, ICMP-ping the CentOS machine, or inspect the network activity with tcpdump.
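To complement tcpdump, here is a minimal sketch of a UDP reachability probe in Python (the function name and defaults are illustrative, not from any tool in this thread). Keep in mind that UDP is connectionless, so a timeout only suggests that nothing answered; it does not prove the port is closed:

```python
import socket

def udp_probe(host: str, port: int, payload: bytes = b"ping", timeout: float = 2.0):
    """Send one UDP datagram and wait for any reply.

    Returns the reply bytes, or None on timeout. A timeout does not
    prove the port is closed; it only means nothing answered in time.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(payload, (host, port))
        try:
            data, _addr = s.recvfrom(4096)
            return data
        except socket.timeout:
            return None
```

Running it from an "inside" host against the LB's port 1514 (the Wazuh agent port from this thread), while tcpdump is capturing on the LB, shows quickly whether the datagrams even arrive.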
rr
From kaushalshriyan at gmail.com Thu Feb 27 18:33:51 2020
From: kaushalshriyan at gmail.com (Kaushal Shriyan)
Date: Fri, 28 Feb 2020 00:03:51 +0530
Subject: Prevent Arbitrary HTTP Host header in nginx
Message-ID:
Hi,
Is there a way to prevent arbitrary HTTP Host headers in Nginx? A penetration
test has reported that the server accepts arbitrary Host headers. Thanks in
advance; I look forward to hearing from you.
More Information as below:-
https://www.acunetix.com/blog/articles/automated-detection-of-host-header-attacks/
https://www.skeletonscribe.net/2013/05/practical-http-host-header-attacks.html
Best Regards,
Kaushal
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From nginx-forum at forum.nginx.org Thu Feb 27 19:20:45 2020
From: nginx-forum at forum.nginx.org (stmx38)
Date: Thu, 27 Feb 2020 14:20:45 -0500
Subject: Nginx location - Distinguish requests by arguments or queries
In-Reply-To: <20200226084806.GS26683@daoine.org>
References: <20200226084806.GS26683@daoine.org>
Message-ID: <19b298ec495533ff2a2eca4ddd1b62c6.NginxMailingListEnglish@forum.nginx.org>
Hello Francis,
It seems that your solution is working as expected, and I have started to test
it. I also have some questions:
1. "~*" to be "~^"
The first one looks like an Nginx regex modifier we can use for locations, but the
second one does not (^~):
https://www.digitalocean.com/community/tutorials/understanding-nginx-server-and-location-block-selection-algorithms
2. It seems that the order of records in the map is important. We first pass the required
queries with args, then block all queries with args and allow those without
args, and then the default value is applied. Maybe you can provide more details
here.
Thank you for help and for the ready solution!
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287101,287136#msg-287136
From r at roze.lv Thu Feb 27 19:51:48 2020
From: r at roze.lv (Reinis Rozitis)
Date: Thu, 27 Feb 2020 21:51:48 +0200
Subject: Prevent Arbitrary HTTP Host header in nginx
In-Reply-To:
References:
Message-ID: <000001d5eda7$58ceb200$0a6c1600$@roze.lv>
> Is there a way to prevent Arbitrary HTTP Host header in Nginx? Penetration test has reported accepting arbitrary host headers. Thanks in Advance and I look forward to hearing from you.
You can always define a "catch-all" server block with:
server {
listen 80 default_server;
server_name _;
return 444;
}
(444 is connection close without response)
And then just add valid host names to the other server blocks.
rr
From francis at daoine.org Thu Feb 27 23:31:14 2020
From: francis at daoine.org (Francis Daly)
Date: Thu, 27 Feb 2020 23:31:14 +0000
Subject: Nginx location - Distinguish requests by arguments or queries
In-Reply-To: <19b298ec495533ff2a2eca4ddd1b62c6.NginxMailingListEnglish@forum.nginx.org>
References: <20200226084806.GS26683@daoine.org>
<19b298ec495533ff2a2eca4ddd1b62c6.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20200227233114.GW26683@daoine.org>
On Thu, Feb 27, 2020 at 02:20:45PM -0500, stmx38 wrote:
Hi there,
> 1. "~*" to be "~^"
> The first one looks like Nginx regexp we can use for locations, but the
> second one not (^~):
"map" is documented at http://nginx.org/r/map
"~" means "this arg is a regex, not a string". "~*" means "and that regex
is case-insensitive". The rest of the argument is the regex; in that,
"^" means "match the start of the string".
So "~^GET/test" in this case will match all GET requests that start with
"/test". "~GET/test" would also match any request that includes the
8-character string "GET/test" anywhere, which is probably not what you want.
> 2. It seems that the order of records in map is important. We pass required
> queries with args, we block then all queries with args and allow without
> args a then default value is applied. Maybe you can provide more details
> here.
Yes, the order is important. Per the docs: the first matching regex is
the regex that counts.
So - if you have some things which are subsets of some other things,
you must put the more specific ones earlier in the config list.
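The first-match-wins ordering can be illustrated outside nginx with a small Python sketch (the patterns and the "method + URI" key format below are illustrative, mirroring the "~^GET/test" example above, not taken from an actual config):

```python
import re

# Ordered rules, most specific first, mimicking nginx "map" regex matching:
# the first pattern that matches wins, so subsets must come earlier.
RULES = [
    (re.compile(r"^GET/test\?allowed=1$"), "pass"),  # specific allowed query
    (re.compile(r"^GET/test\?"), "block"),           # any other query string
    (re.compile(r"^GET/test$"), "pass"),             # no query string at all
]
DEFAULT = "block"

def classify(key: str) -> str:
    """Return the verdict of the first matching rule, like an nginx map."""
    for pattern, verdict in RULES:
        if pattern.search(key):  # search(), not match(): unanchored unless ^
            return verdict
    return DEFAULT
```

Swapping the first two rules would make the specific "allowed" query unreachable, which is exactly the ordering pitfall described above.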
Good luck with it,
f
--
Francis Daly francis at daoine.org
From kaushalshriyan at gmail.com Fri Feb 28 07:23:25 2020
From: kaushalshriyan at gmail.com (Kaushal Shriyan)
Date: Fri, 28 Feb 2020 12:53:25 +0530
Subject: Prevent Arbitrary HTTP Host header in nginx
In-Reply-To: <000001d5eda7$58ceb200$0a6c1600$@roze.lv>
References:
<000001d5eda7$58ceb200$0a6c1600$@roze.lv>
Message-ID:
On Fri, Feb 28, 2020 at 1:21 AM Reinis Rozitis wrote:
> > Is there a way to prevent Arbitrary HTTP Host header in Nginx?
> Penetration test has reported accepting arbitrary host headers. Thanks in
> Advance and I look forward to hearing from you.
>
> You can always define "catch all" server block with:
>
> server {
> listen 80 default_server;
> server_name _;
> return 444;
> }
>
> (444 is connection close without response)
>
> And then just add valid host names to the other server blocks.
>
> rr
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
Hi Reinis,
I have added the below server block in /etc/nginx/nginx.conf (
https://paste.centos.org/view/raw/d5e90b98)
server {
> listen 80;
> server_name _;
> return 444;
> }
When I try to run the below curl call, I am still receiving a 200 OK
response.
#curl --verbose --header 'Host: www.example.com' https://developer-nonprod.example.com
> > GET / HTTP/1.1
> > Host: www.example.com
> > User-Agent: curl/7.64.1
> > Accept: */*
> >
> < HTTP/1.1 200 OK
> < Server: nginx
> < Content-Type: text/html; charset=UTF-8
> < Transfer-Encoding: chunked
> < Connection: keep-alive
> < X-Powered-By: PHP/7.2.27
> < Cache-Control: must-revalidate, no-cache, private
> < Date: Fri, 28 Feb 2020 07:02:00 GMT
> < X-Drupal-Dynamic-Cache: MISS
> < X-UA-Compatible: IE=edge
> < Content-language: en
> < X-Content-Type-Options: nosniff
> < X-Frame-Options: SAMEORIGIN
> < Expires: Sun, 19 Nov 1978 05:00:00 GMT
> < Vary:
> < X-Generator: Drupal 8 (https://www.drupal.org)
> < X-Drupal-Cache: MISS
> <
#curl --verbose --header 'Host: www.evil.com' https://developer-nonprod.example.com
> > GET / HTTP/1.1
> > Host: www.evil.com
> > User-Agent: curl/7.64.1
> > Accept: */*
> >
> < HTTP/1.1 200 OK
> < Server: nginx
> < Content-Type: text/html; charset=UTF-8
> < Transfer-Encoding: chunked
> < Connection: keep-alive
> < X-Powered-By: PHP/7.2.27
> < Cache-Control: must-revalidate, no-cache, private
> < Date: Fri, 28 Feb 2020 06:59:41 GMT
> < X-Drupal-Dynamic-Cache: MISS
> < X-UA-Compatible: IE=edge
> < Content-language: en
> < X-Content-Type-Options: nosniff
> < X-Frame-Options: SAMEORIGIN
> < Expires: Sun, 19 Nov 1978 05:00:00 GMT
> < Vary:
> < X-Generator: Drupal 8 (https://www.drupal.org)
> < X-Drupal-Cache: MISS
> <
Any help will be highly appreciated. Thanks in advance.
Best Regards,
Kaushal
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From r at roze.lv Fri Feb 28 07:53:34 2020
From: r at roze.lv (Reinis Rozitis)
Date: Fri, 28 Feb 2020 09:53:34 +0200
Subject: Prevent Arbitrary HTTP Host header in nginx
In-Reply-To:
References:
<000001d5eda7$58ceb200$0a6c1600$@roze.lv>
Message-ID: <000001d5ee0c$2d9dc430$88d94c90$@roze.lv>
> I have added the below server block in /etc/nginx/nginx.conf (https://paste.centos.org/view/raw/d5e90b98)
>
> server {
> listen 80;
> server_name _;
> return 444;
> }
>
> When i try to run the below curl call, I am still receiving 200 OK response.
> #curl --verbose --header 'Host: www.example.com' https://developer-nonprod.example.com
> GET / HTTP/1.1
> Host: www.example.com
> User-Agent: curl/7.64.1
> Accept: */*
If you are testing 'https' then you have to add 'listen 443;' to the catch-all server{} block; otherwise it will only work for plain http requests.
Also your pasted configuration has:
server {
listen 80 default_server;
server_name developer-nonprod.example.com;
server_name_in_redirect off;
return 301 https://$host$request_uri;
}
server {
listen 80;
server_name _;
return 444;
}
}
In this case, with non-defined Hosts (server_names), the first server{} will be used, since it has default_server (and the second is ignored), so you'll always get the redirect.
You could leave the existing http -> https redirect but change the catch-all to listen only on 443, so that if there is no valid server_name match the connection will be dropped.
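A sketch of what that could look like (the server_name and certificate paths are placeholders; note that, as covered elsewhere in this thread, nginx requires an ssl_certificate on any server block listening on an ssl-enabled port):

```nginx
# keep the http -> https redirect on port 80
server {
    listen 80 default_server;
    server_name developer-nonprod.example.com;
    return 301 https://$host$request_uri;
}

# catch-all on 443: unknown Host names get the connection closed
server {
    listen 443 ssl default_server;
    ssl_certificate     /etc/nginx/dummy.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/dummy.key;
    server_name _;
    return 444;
}
```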
rr
From kaushalshriyan at gmail.com Fri Feb 28 08:23:08 2020
From: kaushalshriyan at gmail.com (Kaushal Shriyan)
Date: Fri, 28 Feb 2020 13:53:08 +0530
Subject: Prevent Arbitrary HTTP Host header in nginx
In-Reply-To: <000001d5ee0c$2d9dc430$88d94c90$@roze.lv>
References:
<000001d5eda7$58ceb200$0a6c1600$@roze.lv>
<000001d5ee0c$2d9dc430$88d94c90$@roze.lv>
Message-ID:
On Fri, Feb 28, 2020 at 1:23 PM Reinis Rozitis wrote:
> > I have added the below server block in /etc/nginx/nginx.conf (
> https://paste.centos.org/view/raw/d5e90b98)
> >
> > server {
> > listen 80;
> > server_name _;
> > return 444;
> > }
> >
> > When i try to run the below curl call, I am still receiving 200 OK
> response.
>
> > #curl --verbose --header 'Host: www.example.com'
> https://developer-nonprod.example.com
> > GET / HTTP/1.1
> > Host: www.example.com
> > User-Agent: curl/7.64.1
> > Accept: */*
>
> If you are testing 'https' then you have to add the 'listen 443;' to the
> catch all server{} block otherways it will only work for http requests.
>
>
> Also your pasted configuration has:
>
> server {
> listen 80 default_server;
>
> server_name developer-nonprod.example.com;
> server_name_in_redirect off;
> return 301 https://$host$request_uri;
> }
>
>
> server {
> listen 80;
> server_name _;
> return 444;
> }
> }
>
> In this case with non-defined Hosts (server_name's) the first server {}
> will be used since it has the default_server (and second is ignored) and
> you'll always get the redirect.
>
> You could leave the existing http -> https redirect but then change the
> catch all to listen only on 443 .. so if there is no valid server_name
> definition the connection will be dropped.
>
> rr
>
Hi Reinis,
I have added the below server block https://paste.centos.org/view/0c6f3195
server {
listen 80 default_server;
server_name developer-nonprod.example.com;
server_name_in_redirect off;
return 301 https://$host$request_uri;
}
# index index.html;
server {
listen 443;
server_name _;
# server_name_in_redirect off;
return 444;
}
}
It is still not working. I look forward to hearing from you and your help
is highly appreciated. Thanks in Advance.
Best Regards,
Kaushal
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From r at roze.lv Fri Feb 28 08:57:23 2020
From: r at roze.lv (Reinis Rozitis)
Date: Fri, 28 Feb 2020 10:57:23 +0200
Subject: Prevent Arbitrary HTTP Host header in nginx
In-Reply-To:
References:
<000001d5eda7$58ceb200$0a6c1600$@roze.lv>
<000001d5ee0c$2d9dc430$88d94c90$@roze.lv>
Message-ID: <000601d5ee15$17b7e7f0$4727b7d0$@roze.lv>
> I have added the below server block https://paste.centos.org/view/0c6f3195
>
> It is still not working. I look forward to hearing from you and your help is highly appreciated. Thanks in Advance.
If you don't use default_server for the catch-all server{} block, then you should place it first in the configuration; otherwise nginx will choose the default using the order the blocks appear in the configuration (for each listen port there can be a different default server).
In your case it will be the:
server {
..
listen 443 ssl;
ssl_protocols TLSv1.2;
server_name developer-nonprod.example.com;
.....
So either place it as first or add listen 443 default_server;
p.s. you can read in more detail how nginx handles Hosts and server_names in the documentation: http://nginx.org/en/docs/http/server_names.html and http://nginx.org/en/docs/http/request_processing.html
rr
From r at roze.lv Fri Feb 28 08:59:00 2020
From: r at roze.lv (Reinis Rozitis)
Date: Fri, 28 Feb 2020 10:59:00 +0200
Subject: Prevent Arbitrary HTTP Host header in nginx
References:
<000001d5eda7$58ceb200$0a6c1600$@roze.lv>
<000001d5ee0c$2d9dc430$88d94c90$@roze.lv>
Message-ID: <000701d5ee15$51bccfb0$f5366f10$@roze.lv>
> So either place it as first or add listen 443 default_server;
By first I mean the "catch all" server { server_name _; .. } block.
rr
From kaushalshriyan at gmail.com Fri Feb 28 09:59:15 2020
From: kaushalshriyan at gmail.com (Kaushal Shriyan)
Date: Fri, 28 Feb 2020 15:29:15 +0530
Subject: Prevent Arbitrary HTTP Host header in nginx
In-Reply-To: <000701d5ee15$51bccfb0$f5366f10$@roze.lv>
References:
<000001d5eda7$58ceb200$0a6c1600$@roze.lv>
<000001d5ee0c$2d9dc430$88d94c90$@roze.lv>
<000701d5ee15$51bccfb0$f5366f10$@roze.lv>
Message-ID:
On Fri, Feb 28, 2020 at 2:29 PM Reinis Rozitis wrote:
> > So either place it as first or add listen 443 default_server;
>
> By first I mean the "catch all" server { server_name _; .. } block.
>
> rr
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
Hi Reinis,
I did follow your steps. My nginx.conf file is
https://paste.centos.org/view/ae22889e. When I run the curl call, I am still
receiving an HTTP 200 OK response instead of HTTP 444 (No Response), as per the
below output:
#curl --verbose --header 'Host: www.example.com' https://developer-nonprod.example.com
> > GET / HTTP/1.1
> > Host: www.example.com
> > User-Agent: curl/7.64.1
> > Accept: */*
> >
> < HTTP/1.1 200 OK
> < Server: nginx
> < Content-Type: text/html; charset=UTF-8
> < Transfer-Encoding: chunked
> < Connection: keep-alive
> < X-Powered-By: PHP/7.2.27
> < Cache-Control: must-revalidate, no-cache, private
> < Date: Fri, 28 Feb 2020 07:02:00 GMT
> < X-Drupal-Dynamic-Cache: MISS
> < X-UA-Compatible: IE=edge
> < Content-language: en
> < X-Content-Type-Options: nosniff
> < X-Frame-Options: SAMEORIGIN
> < Expires: Sun, 19 Nov 1978 05:00:00 GMT
> < Vary:
> < X-Generator: Drupal 8 (https://www.drupal.org)
> < X-Drupal-Cache: MISS
> <
#curl --verbose --header 'Host: www.evil.com' https://developer-nonprod.example.com
> > GET / HTTP/1.1
> > Host: www.evil.com
> > User-Agent: curl/7.64.1
> > Accept: */*
> >
> < HTTP/1.1 200 OK
> < Server: nginx
> < Content-Type: text/html; charset=UTF-8
> < Transfer-Encoding: chunked
> < Connection: keep-alive
> < X-Powered-By: PHP/7.2.27
> < Cache-Control: must-revalidate, no-cache, private
> < Date: Fri, 28 Feb 2020 06:59:41 GMT
> < X-Drupal-Dynamic-Cache: MISS
> < X-UA-Compatible: IE=edge
> < Content-language: en
> < X-Content-Type-Options: nosniff
> < X-Frame-Options: SAMEORIGIN
> < Expires: Sun, 19 Nov 1978 05:00:00 GMT
> < Vary:
> < X-Generator: Drupal 8 (https://www.drupal.org)
> < X-Drupal-Cache: MISS
> <
Thanks once again for all your help and I look forward to hearing from you.
Best Regards,
Kaushal
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From 2535191782 at qq.com Fri Feb 28 14:22:21 2020
From: 2535191782 at qq.com (=?ISO-8859-1?B?MjUzNTE5MTc4Mg==?=)
Date: Fri, 28 Feb 2020 22:22:21 +0800
Subject: Does nginx support multiple http {} blocks?
Message-ID:
In nginx.conf, is the following allowed?
http {
}
http {
}
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From francis at daoine.org Fri Feb 28 14:27:55 2020
From: francis at daoine.org (Francis Daly)
Date: Fri, 28 Feb 2020 14:27:55 +0000
Subject: Does nginx support multiple http {} blocks?
In-Reply-To:
References:
Message-ID: <20200228142755.GY26683@daoine.org>
On Fri, Feb 28, 2020 at 10:22:21PM +0800, 2535191782 wrote:
Hi there,
> In nginx.conf:http{
>
>
> }
> http{
>
>
> }
> can be allowed?
It seems fairly easy to check.
$ sbin/nginx -t
nginx: [emerg] "http" directive is duplicate in /usr/local/nginx/conf/nginx.conf:8
nginx: configuration file /usr/local/nginx/conf/nginx.conf test failed
So - no.
Is there a reason why you might want that? Perhaps the thing that you
want to achieve can be done some other way?
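For instance, if the goal is to serve several sites, the usual approach is a single http{} block containing several server{} blocks (the hostnames here are examples):

```nginx
http {
    server {
        listen 80;
        server_name one.example;
    }
    server {
        listen 80;
        server_name two.example;
    }
}
```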
Cheers,
f
--
Francis Daly francis at daoine.org
From r at roze.lv Fri Feb 28 15:38:24 2020
From: r at roze.lv (Reinis Rozitis)
Date: Fri, 28 Feb 2020 17:38:24 +0200
Subject: Prevent Arbitrary HTTP Host header in nginx
In-Reply-To:
References:
<000001d5eda7$58ceb200$0a6c1600$@roze.lv>
<000001d5ee0c$2d9dc430$88d94c90$@roze.lv>
<000701d5ee15$51bccfb0$f5366f10$@roze.lv>
Message-ID: <000001d5ee4d$1d6776b0$58366410$@roze.lv>
> I did follow your steps. My nginx.conf file is https://paste.centos.org/view/ae22889e when I run the curl call, I am still receiving HTTP 200 OK response instead of HTTP 444 (No Response) as per the below output
If you've just done a config reload, then most likely your nginx is still using the old configuration (you should always check with: nginx -t).
I tried to make a simple test case, and it turns out you can't have just a 'listen 443;' directive (even though there is no 'ssl' option) in one server block if another has 'listen 443 ssl;': nginx requires you to specify an "ssl_certificate" (which is understandable once you know that nginx has several caveats regarding listen ip:port pairs).
The error looks like:
nginx -t
nginx: [emerg] no "ssl_certificate" is defined for the "listen ... ssl" directive in nginx.conf:39
nginx: configuration file nginx.conf test failed
So before writing solutions off the top of one's head, one should always note that and/or test one's own suggestions :)
The correct configuration example should look like this (for the somedummy.crt/key certificate you can use either a self-signed certificate or any other valid one; since nginx checks the validity of ssl certificates at startup/config reload, you can't point to nonexistent/invalid certs here):
server {
listen 443;
ssl_certificate somedummy.crt;
ssl_certificate_key somedummy.key;
server_name _;
return 444;
}
server {
listen 443 ssl;
ssl_certificate validdomain.crt;
ssl_certificate_key validdomain.key;
server_name validdomain;
return 200 'Works';
}
Then the curl requests with Host injects should work as expected:
curl --verbose https://validdomain
> GET / HTTP/1.1
> Host: validdomain
>
< HTTP/1.1 200 OK
* Connection #0 to host validdomain left intact
Works
curl --verbose --header 'Host: invalidhost' https://validdomain
> GET / HTTP/1.1
> Host: invalidhost
>
* Empty reply from server
* Connection #0 to host validdomain left intact
curl: (52) Empty reply from server
p.s. for further testing you should also note that curl doesn't use the Host header for SNI (https://github.com/curl/curl/issues/607); it uses the hostname in the url instead.
So something like:
curl --verbose --header 'Host: validhostname' https://127.0.0.1
will fail with:
curl: (51) SSL: no alternative certificate subject name matches target host name '127.0.0.1'
but on the other hand (if your somedummy.crt has an actual domain):
curl --verbose --header 'Host: validdomain' https://somedummy
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
> GET / HTTP/1.1
> Host: validdomain
< HTTP/1.1 200 OK
< Server: nginx/1.17.8
* Connection #0 to host somedummy left intact
Works
the dummy ssl certificate will be used, but nginx will serve the validdomain virtualhost.
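One way around the SNI caveat, sketched here with curl's --resolve option (the hostname and IP are examples), is to map the test hostname to the target IP, so that the URL hostname, the SNI server name, and the Host header all agree:

```shell
# Connect to 127.0.0.1 while keeping "validdomain" in the URL, so curl
# sends it both as the SNI server name and as the Host header.
curl --verbose --resolve validdomain:443:127.0.0.1 https://validdomain/
```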
rr
From kaushalshriyan at gmail.com Fri Feb 28 17:49:30 2020
From: kaushalshriyan at gmail.com (Kaushal Shriyan)
Date: Fri, 28 Feb 2020 23:19:30 +0530
Subject: Prevent Arbitrary HTTP Host header in nginx
In-Reply-To: <000001d5ee4d$1d6776b0$58366410$@roze.lv>
References:
<000001d5eda7$58ceb200$0a6c1600$@roze.lv>
<000001d5ee0c$2d9dc430$88d94c90$@roze.lv>
<000701d5ee15$51bccfb0$f5366f10$@roze.lv>
<000001d5ee4d$1d6776b0$58366410$@roze.lv>
Message-ID:
On Fri, Feb 28, 2020 at 9:08 PM Reinis Rozitis wrote:
> > I did follow your steps. My nginx.conf file is
> https://paste.centos.org/view/ae22889e when I run the curl call, I am
> still receiving HTTP 200 OK response instead of HTTP 444 (No Response) as
> per the below output
>
> If you've just called config reload then most likely your nginx is still
> using an old configuration (you should always check with: nginx -t).
>
>
> I tried to make a simple test case and turns out you can't have just
> 'listen 443;' directive (even there is no 'ssl' option) in one server block
> if another has ' listen 443 ssl;' nginx requires to specify a
> "ssl_certificate" (which is kind of understandable if you know that nginx
> has several caveats regarding listen ip:port pairs).
>
> The error looks like:
>
> nginx -t
> nginx: [emerg] no "ssl_certificate" is defined for the "listen ... ssl"
> directive in nginx.conf:39
> nginx: configuration file nginx.conf test failed
>
> So before writing solutions out of head one should always note that and/or
> test your own suggestions :)
>
>
>
> The correct configuration example should look like this (for
> somedummy.crt/key certificate you can either use some self signed or just
> any other valid certificate (since nginx checks the validity of ssl
> certificates at startup/config reload you can't place nonexisting/nonvalid
> certs here)):
>
>
>
> server {
> listen 443;
> ssl_certificate somedummy.crt;
> ssl_certificate_key somedummy.key;
> server_name _;
> return 444;
> }
>
> server {
> listen 443 ssl;
> ssl_certificate validdomain.crt;
> ssl_certificate_key validdomain.key;
> server_name validdomain;
> return 200 'Works';
> }
>
>
> Then the curl requests with Host injects should work as expected:
>
> curl --verbose https://validdomain
>
> > GET / HTTP/1.1
> > Host: validdomain
> >
> < HTTP/1.1 200 OK
> * Connection #0 to host validdomain left intact
> Works
>
>
> curl --verbose --header 'Host: invalidhost' https://validdomain
>
> > GET / HTTP/1.1
> > Host: invalidhost
> >
> * Empty reply from server
> * Connection #0 to host validdomain left intact
> curl: (52) Empty reply from server
>
>
>
>
> p.s. for further testing you should note also that curl doesn't use the
> Host header for SNI (https://github.com/curl/curl/issues/607 ) rather
> than the one in the url
>
> So something like:
>
> curl --verbose --header 'Host: validhostname' https://127.0.0.1
> will fail with:
> curl: (51) SSL: no alternative certificate subject name matches target
> host name '127.0.0.1'
>
>
> will fail but on the other hand (if your somedummy.crt has an actual
> domain):
>
> curl --verbose --header 'Host: validdomain' https://somedummy
>
> * SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
> > GET / HTTP/1.1
> > Host: validdomain
>
> < HTTP/1.1 200 OK
> < Server: nginx/1.17.8
> * Connection #0 to host somedummy left intact
> Works
>
> the dummy ssl certificate will be used but nginx will serve the validdoman
> virtualhost .
>
> rr
>
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
Thanks Reinis for a detailed explanation. It worked as expected. Thanks a
lot for all the help and much appreciated.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From themadbeaker at gmail.com Sat Feb 29 14:44:54 2020
From: themadbeaker at gmail.com (J.R.)
Date: Sat, 29 Feb 2020 08:44:54 -0600
Subject: Is it possible for nginx to decompress FILES on-the-fly (not proxied)?
Message-ID:
I did a lot of googling and only came up with answers from many years
ago, or unanswered questions. Maybe I'm just not using the right
search keywords, so I figured I would ask the following....
Here's my scenario... I have a bunch of static html files that would
be served directly via nginx.
Is it possible to gzip them all, storing ONLY the compressed version
and have everything work as expected?
I've done some experimenting and I'm coming up short getting things to work...
I am using the following directives (skipping non-relevant), and yes
the extra modules were compiled in:
gzip on;
gzip_static on;
gunzip on;
If I make a request and allow compression, then it reads from the .gz
file no problem. If I make a request without allowing compression,
then I get a "file not found"...
I've tried with & without "try_files". Also, if I append something
like "$uri.gz" to try_files, then it sends the compressed file as-is
(what should be decompressed on the client side is still compressed).
I would hate to have to add a wrapper script and complicate things
just to accommodate that 1% that might request the data
uncompressed...
From what I can gather reading over the gunzip source, it appears that
decompression is only applied when proxying (decompressing the upstream
response).
Even if there's a non-standard patch available, I have no issues
recompiling to get this to work.
From themadbeaker at gmail.com Sat Feb 29 15:30:02 2020
From: themadbeaker at gmail.com (J.R.)
Date: Sat, 29 Feb 2020 09:30:02 -0600
Subject: Is it possible for nginx to decompress FILES on-the-fly (not
proxied)?
Message-ID:
Well, I figured it out... I swear I tried this yesterday, but maybe I
didn't, or my configuration was incomplete...
If you use "gzip_static always;" in combination with the below
statements, it works correctly! It sends the compressed response as
expected, and will decompress on-the-fly when necessary.
:)
> I am using the following directives (skipping non-relevant), and yes
> the extra modules were compiled in:
>
> gzip on;
> gzip_static on;
> gunzip on;
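Putting the working combination together, the relevant directives read roughly as follows (a sketch; where they belong - http, server, or location context - depends on your setup):

```nginx
gzip        on;       # compress dynamic responses as usual
gzip_static always;   # always serve a pre-built .gz file when one exists,
                      # even if the client did not send Accept-Encoding: gzip
gunzip      on;       # decompress the .gz on the fly for such clients
```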
From nginx-forum at forum.nginx.org Sat Feb 29 23:25:18 2020
From: nginx-forum at forum.nginx.org (bsmither)
Date: Sat, 29 Feb 2020 18:25:18 -0500
Subject: No Release File
Message-ID:
I get the following lines when running:
sudo apt update
according to the instructions on this page:
https://nginx.org/en/linux_packages.html
-----
Err:10 http://nginx.org/packages/mainline/ubuntu tricia Release
404 Not Found [IP: 95.211.80.227 80]
E: The repository 'http://nginx.org/packages/mainline/ubuntu tricia Release'
does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore
disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration
details.
-----
Am I missing some understanding in following these instructions?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287202,287202#msg-287202
From nginx-forum at forum.nginx.org Sat Feb 29 23:31:12 2020
From: nginx-forum at forum.nginx.org (bsmither)
Date: Sat, 29 Feb 2020 18:31:12 -0500
Subject: No Release File
In-Reply-To:
References:
Message-ID:
So I try the Stable release:
-----
Err:12 http://nginx.org/packages/ubuntu tricia Release
404 Not Found [IP: 95.211.80.227 80]
E: The repository 'http://nginx.org/packages/ubuntu tricia Release' does not
have a Release file.
N: Updating from such a repository can't be done securely, and is therefore
disabled by default.
-----
Are there any other instructions available to get Nginx 1.17 downloaded?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287202,287203#msg-287203