Hi guys,
A novice here, so go easy on me with this question: whether I use multiple
'listen' directives with IPs, or just one 'listen' with a hostname that
resolves to more than one IP, is it possible to tell nginx not to fail when
one of the IPs is absent (does not exist)?
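For illustration, a minimal sketch of the two variants I mean (the addresses and hostname are placeholders):

server {
    # variant 1: one 'listen' per explicit IP
    listen 192.0.2.10:80;
    listen 192.0.2.11:80;
    # ...
}

server {
    # variant 2: a single 'listen' with a hostname that resolves to several IPs
    listen myhost.example.com:80;
    # ...
}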
many thanks, L.
Hello,
Is there a way to log individual WebSocket messages going through an nginx
server set up to proxy WebSocket connections, as explained here:
https://nginx.org/en/docs/http/websocket.html ?
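For reference, the proxy setup in question is roughly the one shown on that page (the location and upstream names are placeholders):

location /chat/ {
    proxy_pass http://backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}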
-Chinmay
I had a WordPress blog running on Apache. I migrated the blog to
nginx + PHP-FPM, but I have a problem with this.
My blog has an RSS feed at the example.com/feed URL, and on Apache I could view
paged feeds like this: http://www.kodcu.com/feed/?paged=45.
But with nginx, these paged RSS URLs don't work with my config: both /feed and
/feed/?paged=X show only the top 10 items.
My nginx.conf is below. How can I handle this problem?
user root root;
worker_processes 2;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 2;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
error_log /var/log/nginx/error.log;
access_log off;
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/html text/plain text/css application/json
application/x-javascript text/xml application/xml application/xml+rss
text/javascript;
##
# Virtual Host Configs
##
index index.php index.html index.htm;
## See here: http://wiki.nginx.org/WordPress
server {
server_name example.com www.example.com;
root /var/www/example.com;
location = /favicon.ico {
log_not_found off;
access_log off;
}
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
location / {
# This is cool because no php is touched for static content
try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME
$document_root$fastcgi_script_name;
include fastcgi_params;
}
location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
expires max;
log_not_found off;
}
}
}
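One way I could narrow this down (the log name and format below are made up, for debugging only) is to log what actually reaches the PHP location, to confirm whether ?paged=45 survives the try_files rewrite:

http {
    # temporary format: original request vs. what nginx rewrote it to
    log_format feeddebug '$request_uri -> $uri args=$args';
    # ...
    server {
        # ...
        location ~ \.php$ {
            access_log /var/log/nginx/feed-debug.log feeddebug;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
    }
}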
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238692,238692#msg-238692
Dear all,
I have a reverse proxy in front of my two servers: web (Apache2) and
email (nginx-iRedMail).
The reverse proxy works perfectly with my web server running Apache2,
but I am not able to get it working for my email server.
The reverse proxy and the email server are both running the same
version of nginx (1.9).
I have tried many configs without any success.
My last one:
***********************************************************************
server {
listen 446;
server_name email.domain.ltd;
location / {
proxy_pass https://email_server_ip:446;
proxy_ssl_certificate /etc/ssl/certs/cert.chained.crt;
proxy_ssl_certificate_key /etc/ssl/private/private.key;
proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
proxy_ssl_ciphers HIGH:!aNULL:!MD5;
proxy_ssl_trusted_certificate /etc/ssl/certs/cert.chained.crt;
proxy_ssl_verify on;
proxy_ssl_verify_depth 2;
proxy_ssl_session_reuse on;
error_log /var/log/nginx/error-proxy.log;
access_log /var/log/nginx/access-proxy.log;
}
}
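For comparison, here is a minimal sketch of a proxy that terminates TLS itself in front of the HTTPS backend (it reuses the paths above; whether this matches the intended setup is an assumption):

server {
    listen 446 ssl;
    server_name email.domain.ltd;

    # certificate the proxy itself presents to clients
    ssl_certificate     /etc/ssl/certs/cert.chained.crt;
    ssl_certificate_key /etc/ssl/private/private.key;

    location / {
        proxy_pass https://email_server_ip:446;
        # verify the backend certificate against the same chain
        proxy_ssl_verify              on;
        proxy_ssl_trusted_certificate /etc/ssl/certs/cert.chained.crt;
        proxy_set_header Host $host;
    }
}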
Can I please have some help?
Thx
--
Cordialement,
Thierry e-mail : lenaigst(a)maelenn.org
PGP Key: 0xB7E3B9CD
Hi, I'm trying to build a syslog load balancer and I'm running into
issues with the failover of UDP messages. TCP works just fine: when one
server goes down, all messages fail over to the active server. But with UDP,
that does not happen. Maybe someone can point me to what I'm doing wrong.
Below is the config.
stream {
upstream syssrv {
server 192.168.167.108:5500 max_fails=2 fail_timeout=15s;
server 192.168.167.109:5500 max_fails=2 fail_timeout=15s;
}
server {
listen 5500;
proxy_protocol on;
proxy_pass syssrv;
proxy_timeout 1s;
proxy_connect_timeout 1s;
}
server {
listen 5500 udp;
proxy_pass syssrv;
proxy_timeout 1s;
proxy_connect_timeout 1s;
proxy_bind $remote_addr transparent;
}
}
I have a script that numbers each message (n) like this: "Testing -proto:
udp - n".
I see both servers getting the messages while they are both online (one gets
the even numbers, the other the odd ones), but when one goes down, the
remaining server continues to get only the even numbers, so I'm losing 50% of
the messages.
I tried to debug the setup and I see nginx reporting that the UDP packets
timed out. I see this:
2022/02/22 20:05:13 [info] 21362#21362: *777 udp client
192.168.167.101:51529 connected to 0.0.0.0:5500
2022/02/22 20:05:13 [info] 21362#21362: *777 udp proxy
192.168.167.101:34912 connected to 192.168.167.108:5500
2022/02/22 20:05:13 [info] 21362#21362: *779 udp client
192.168.167.101:53862 connected to 0.0.0.0:5500
2022/02/22 20:05:13 [info] 21362#21362: *779 udp proxy
192.168.167.101:35506 connected to 192.168.167.109:5500
Then this:
2022/02/22 20:05:14 [info] 21362#21362: *771 udp timed out, packets
from/to client:1/0, bytes from/to client:145/0, bytes from/to
upstream:0/145
But it's not redirecting the connection to the healthy server. This seems
pretty simple, so any ideas what I'm doing wrong? It would seem that the
non-commercial version should be able to do this, no?
Any help is appreciated. I also tried adding a backup server, but that doesn't
work with UDP.
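For reference, here is the UDP server block again with proxy_responses added (it is an assumption on my part that the syslog targets never send anything back; by default nginx keeps waiting for response datagrams until proxy_timeout expires):

server {
    listen 5500 udp;
    proxy_pass syssrv;
    proxy_timeout 1s;
    proxy_connect_timeout 1s;
    # syslog over UDP is one-way, so expect no response datagrams
    proxy_responses 0;
    proxy_bind $remote_addr transparent;
}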
--
Pawel
Hello guys,
I enabled OCSP Must-Staple, and then found that after restarting nginx I
always get a "MOZILLA_PKIX_ERROR_REQUIRED_TLS_FEATURE_MISSING" error when
visiting my website for the first time.
I think this error means that the server has not cached the OCSP response yet.
My nginx.conf is as follows:
server {
listen 443 ssl http2 reuseport;
listen [::]:443 ssl http2;
server_name example.org;
ssl_certificate /path/to/ecc/fullchain.cer;
ssl_certificate_key /path/to/ecc/example.org.key;
ssl_certificate /path/to/rsa/fullchain.cer;
ssl_certificate_key /path/to/rsa/example.org.key;
ssl_stapling on;
resolver <internal dns1> <internal dns2> valid=300s;
ssl_stapling_verify on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1d;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers
ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES128-GCM-SHA256;
ssl_ecdh_curve secp384r1;
ssl_early_data on;
…
}
Since I have dual ECC and RSA certificates configured and both chain files
are complete, I did not configure "ssl_trusted_certificate".
Do I need to configure other parameters, such as "ssl_ocsp", to solve the
problem I'm having now?
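For reference, a minimal single-certificate sketch using ssl_stapling_file to pre-load a stapled response from disk instead of having nginx query the OCSP responder on demand (the ocsp.der path is hypothetical, and I am not sure how this interacts with a dual-certificate setup):

server {
    listen 443 ssl http2;
    server_name example.org;

    ssl_certificate     /path/to/ecc/fullchain.cer;
    ssl_certificate_key /path/to/ecc/example.org.key;

    ssl_stapling        on;
    ssl_stapling_verify on;
    # DER-encoded OCSP response prepared out of band
    ssl_stapling_file   /path/to/ecc/ocsp.der;
}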
Also, I found a small issue: I noticed that the latest versions of Google
Chrome/Microsoft Edge choose the RSA certificate instead of the ECC
certificate.
RSA 4096, issuer R3 (Let's Encrypt)
ECC 384, issuer E1 (Let's Encrypt)
I wonder why Chromium made this choice. Thank you!
Best Regards,
wordlesswind
Hello Members,
I started using nginx a week ago and am still quite new to it. My client wants
to access the CMS at domain-int.com/myapplication, for which I need to set up
nginx. But I am getting an error after editing my conf file (I changed the
server section) as shown below:
server {
listen 443 ssl;
server_name domain-int.com;
ssl_certificate
C:/Users/me/Documents/domain-certificates-https/domain-int-crt.crt;
ssl_certificate_key
C:/Users/me/Documents/domain-certificates-https/domain-int-private.key;
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 5m;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
location / {
# Or whatever port your server is running on
proxy_pass http://127.0.0.1:4502;
}
}
Issue: when I run "start nginx" on cmd, it prompts me for the PEM password
(which is "admin" in my case), and then I see this error in the log file: "the
event "ngx_master_14268" was not signaled for 5s".
any help appreciated
Thanks
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293747,293747#msg-293747
I am trying to use an nginx map-assigned variable in an upstream, but it
doesn't seem to work.
The map keys on the concatenated $uri$args, assigns a PHP-FPM fastcgi pool
address to the variable $pool, and the $pool variable is then used in an
upstream.
map $uri$args $pool {
default 127.0.0.1:9000;
"~/index.php/args" 127.0.0.1:9002;
}
upstream php {
zone php_zone 64k;
server $pool;
keepalive 2;
}
But if I try this, nginx config test gives me
nginx -t
nginx: [emerg] host not found in upstream "$pool" in ...
What am I missing?
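For illustration, a variant that keeps the map but drops the upstream block and passes the mapped address straight to fastcgi_pass, which, unlike the "server" directive inside an upstream, does accept variables (whether this fits the rest of the setup is an assumption):

map $uri$args $pool {
    default            127.0.0.1:9000;
    "~/index.php/args" 127.0.0.1:9002;
}

server {
    # ...
    location ~ \.php$ {
        # $pool is evaluated per request here
        fastcgi_pass $pool;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}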
cheers
George
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293738,293738#msg-293738
Hi,
I've implemented sticky learn for my WebLogic application running on 3
servers, using nginx as the reverse proxy, and I am trying to get an
understanding of how nginx sticky learn works.
The JSESSIONID cookies are of the form <session identifier>!<primary server
id 1>!<secondary server id 2>.
I understand that nginx learns the session from the 'create' variable and uses
the 'lookup' cookie to determine which upstream server the request should be
routed to. But it is not clear from the documentation what happens when the
upstream primary server is down. Will nginx route to the secondary server?
Based on my initial tests it looks like it does not. I would like confirmation
of whether this is the expected behavior. What do I need to do to route
requests to the secondary server when the primary server is down while using
sticky learn?
upstream app1 {
least_conn;
zone app1 64k;
server srv1.example.com:5111;
server srv2.example.com:5111;
server srv3.example.com:5111;
sticky learn
create=$upstream_cookie_JSESSIONID
lookup=$cookie_JSESSIONID
zone=client-sessions:1m
timeout=2h;
}
Thanks.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293721,293721#msg-293721