Hello,
Is there a way to log individual WebSocket messages passing through an nginx
server that is set up to proxy WebSocket connections, as explained here:
https://nginx.org/en/docs/http/websocket.html ?
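For reference, the proxy setup from that page looks roughly like this (backend address is an assumption; as far as I can tell, stock nginx's access_log records one line for the initial Upgrade handshake request, not for the individual frames exchanged afterwards):

```nginx
http {
    # Map from that documentation page: pass "upgrade" through,
    # close the connection if the client did not send Upgrade.
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        listen 80;

        location /ws/ {
            proxy_pass http://127.0.0.1:8080;   # hypothetical backend
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            # Logs the handshake request only, not per-message traffic.
            access_log /var/log/nginx/ws-access.log;
        }
    }
}
```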
-Chinmay
I had a WordPress blog running on Apache. I migrated the blog to
nginx + PHP-FPM, but I have a problem with this.
My blog has an RSS feed at the example.com/feed URL, and I could view the
feed paged like this: http://www.kodcu.com/feed/?paged=45.
But with my nginx config, these paged RSS URLs don't work: both /feed and
/feed/?paged=X show only the top 10 posts.
My nginx.conf is below. How can I handle this problem?
user root root;
worker_processes 2;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 2;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    error_log /var/log/nginx/error.log;
    access_log off;

    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/html text/plain text/css application/json
               application/x-javascript text/xml application/xml
               application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##
    index index.php index.html index.htm;

    ## See here: http://wiki.nginx.org/WordPress
    server {
        server_name example.com www.example.com;
        root /var/www/example.com;

        location = /favicon.ico {
            log_not_found off;
            access_log off;
        }

        location = /robots.txt {
            allow all;
            log_not_found off;
            access_log off;
        }

        location / {
            # This is cool because no PHP is touched for static content
            try_files $uri $uri/ /index.php?$args;
        }

        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

        location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
            expires max;
            log_not_found off;
        }
    }
}
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238692,238692#msg-238692
Dear all,
I have a reverse proxy in front of my two servers: web (Apache2) and
email (nginx-iRedMail).
The reverse proxy works perfectly with my web server running Apache2,
but I have not been able to make it work for my email server.
The reverse proxy and the email server both run the same version of
nginx (1.9).
I have tried many configs without any success.
My latest one:
***********************************************************************
server {
    listen 446;
    server_name email.domain.ltd;

    location / {
        proxy_pass https://email_server_ip:446;
        proxy_ssl_certificate /etc/ssl/certs/cert.chained.crt;
        proxy_ssl_certificate_key /etc/ssl/private/private.key;
        proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        proxy_ssl_ciphers HIGH:!aNULL:!MD5;
        proxy_ssl_trusted_certificate /etc/ssl/certs/cert.chained.crt;
        proxy_ssl_verify on;
        proxy_ssl_verify_depth 2;
        proxy_ssl_session_reuse on;

        error_log /var/log/nginx/error-proxy.log;
        access_log /var/log/nginx/access-proxy.log;
    }
}
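One thing worth noting about the config above: the proxy_ssl_* directives only govern nginx's own client-side connection to the upstream; they do not make nginx accept TLS from clients. If the proxy itself is supposed to terminate TLS on port 446, the listening side needs its own ssl setup. A minimal sketch, assuming the same certificate files are meant to be served to clients (that reuse is purely an assumption):

```nginx
server {
    listen 446 ssl;                  # terminate TLS from clients here
    server_name email.domain.ltd;

    # Server-side certificate presented to connecting clients (assumed paths).
    ssl_certificate     /etc/ssl/certs/cert.chained.crt;
    ssl_certificate_key /etc/ssl/private/private.key;

    location / {
        # Separate, upstream-facing TLS settings.
        proxy_pass https://email_server_ip:446;
        proxy_ssl_trusted_certificate /etc/ssl/certs/cert.chained.crt;
        proxy_ssl_verify on;
    }
}
```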
Can I please have some help?
Thanks
--
Cordialement,
Thierry e-mail : lenaigst(a)maelenn.org
PGP Key: 0xB7E3B9CD
Hi Sergey,
I tried clearing the Connection header, but nginx is still sending the 5th request through a new source port. Let me give the more detailed configuration we have. Just to inform you, we have our own auth module instead of using the nginx auth module; we call ngx_http_post_request to post subrequests, and the code is almost the same as that of the auth module. For the subrequests sent by the auth module with the following configuration, we expect nginx to open a new port for each of the first four connections and then reuse one of those ports for the fifth connection, especially when the requests are sequential.
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65s;
    include /etc/nginx/conf.d/*.conf;
    proxy_socket_keepalive on;

    upstream ext-authz-upstream-server {
        server 172.20.10.6:9006;
        keepalive 4;
    }

    server {
        listen 9000;
        server_name front-service;
        ext_auth_fail_allow on;
        error_log /var/log/nginx/error.log debug;

        location / {
            ext_auth_request /auth;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header X-Real-Ip $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_pass http://localhost:8090;

            location /auth {
                internal;
                proxy_set_header X-Req-Uri $request_uri;
                proxy_set_header X-Method $request_method;
                proxy_set_header X-Req-Host $host;
                proxy_set_header X-Client-Addr $remote_addr:$remote_port;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_connect_timeout 5000ms;
                proxy_read_timeout 5000ms;
                proxy_http_version 1.1;
                proxy_set_header Connection "";
                proxy_pass http://ext-authz-upstream-server;
            }
        }
    }
}
Could you please point out what we are missing?
Thanks,
Devashi
Date: Mon, 24 Jan 2022 17:56:33 +0300
From: "Sergey A. Osokin" <osa(a)freebsd.org.ru>
Subject: Re: Using single persistent socket to send subrequests
To: nginx(a)nginx.org
Message-ID: <Ye6+Ie0SM9YCKGby(a)FreeBSD.org.ru>
Content-Type: text/plain; charset=utf-8
Hi Devashi,
On Mon, Jan 24, 2022 at 05:52:56AM +0000, Devashi Tandon wrote:
>
> We have the following configuration:
>
> location / {
> proxy_http_version 1.1;
> proxy_pass http://ext-authz-upstream-server;
> }
>
> upstream ext-authz-upstream-server {
> server 172.20.10.6:9006;
> keepalive 4;
> }
>
> Do I need to add any other configuration to reuse the first four socket connections besides keepalive 4?
You'd need to review and slightly update the `location /' configuration
block by adding the following directive:
proxy_set_header Connection "";
Please visit the following link to get more details:
https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
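For reference, the pattern from that documentation page looks roughly like this (upstream address taken from the config quoted above):

```nginx
upstream ext-authz-upstream-server {
    server 172.20.10.6:9006;
    keepalive 4;                         # cache up to 4 idle connections
}

server {
    location / {
        proxy_http_version 1.1;          # keepalive requires HTTP/1.1
        proxy_set_header Connection "";  # strip the default "Connection: close"
        proxy_pass http://ext-authz-upstream-server;
    }
}
```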
--
Sergey Osokin
Hi,
Nginx version: nginx/1.20.2
Cache config:
proxy_cache_path /nginx/cache levels=1:2 keys_zone=bla:20m max_size=10g
inactive=20m use_temp_path=off;
I had a problem while using a tmpfs mount for the nginx cache dir: my
automation mounted it over and over again (so each time, nginx created a new
set of base dirs [a-z]), but after a few mounts it could no longer create new
base dirs (no more memory), and writes to the cache dir failed with:
[crit] 31334#31334: *128216 mkdir() "/nginx/cache/4" failed (13: Permission
denied) while reading upstream
But nginx still accepted connections, and the client got a hangup during the
transfer each time.
How can I make nginx fail when this happens, so that clients will not be
routed to it?
Thanks.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293477,293477#msg-293477
Hi--
I was exploring using auth_request from the ngx_http_auth_request_module,
and I have encountered some unexpected behavior with regard to HTTP
keepalive/connection reuse. I have some configuration that looks roughly
like this:
location = /auth_check {
    proxy_pass_request_body off;
    proxy_set_header Content-Length '';
    proxy_http_version 1.1;
    proxy_set_header Connection '';
    proxy_pass https://upstream_with_keepalive_configured;
}

location /private {
    auth_request /auth_check;
    proxy_pass http://some_backend;
}
When I make a series of requests directly to /auth_check, nginx reuses an
existing connection, as confirmed by tcpdump. But when I make a series of
requests to /private, each /auth_check subrequest closes the TCP connection
at the end and then creates a new one for the following request. In my
particular use case this roughly doubles the latency of the calls that use
auth_request. Is this the expected behavior, or a known issue with
auth_request and HTTP subrequests in general?
Thank you,
Zach
Hi, while testing the latest nginx source code around ~1.21.7, I've observed
that enabling "ssl_stapling" without configuring a "resolver" makes nginx
cache the OCSP responder IP indefinitely. So, if the CA later changes the
OCSP responder IP, nginx will still try to send OCSP queries to the old IP
(possibly inoperative by now), irrespective of the DNS record's TTL.
Now, I'm aware of
https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_stapling
saying:
> For a resolution of the OCSP responder hostname, the resolver directive
> should also be specified.
And effectively, when the "resolver" directive is used, OCSP DNS records are
refreshed; but it is not obvious at all what happens when a "resolver" is
not configured. Is there any documentation on this?
Additionally, what is the reason for not using the default system DNS
resolvers in the standard way (i.e. respecting DNS TTLs), instead of
performing the resolution only once, when no "resolver" is configured?
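For anyone hitting the same surprise, the documented way to get periodic re-resolution is to pair ssl_stapling with an explicit resolver. A minimal sketch (resolver address and certificate paths are assumptions):

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /etc/ssl/certs/example.pem;     # assumed paths
    ssl_certificate_key /etc/ssl/private/example.key;

    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/ssl/certs/ca-chain.pem;

    # With a resolver configured, the OCSP responder hostname is
    # re-resolved; nginx honors the record TTL unless valid= is set.
    resolver 192.0.2.53;
    resolver_timeout 5s;
}
```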
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293525,293525#msg-293525