I had a WordPress blog running on Apache and migrated it to Nginx + php-fpm,
but I have a problem with the move.
My blog has an RSS feed at the example.com/feed URL, and I used to be able to
read the feed paged, like this: http://www.kodcu.com/feed/?paged=45.
But under Nginx these paged RSS URLs don't work with my config: /feed and
/feed/?paged=X both show the top 10 items.
My nginx.conf is below. How can I handle this problem?
user root root;
worker_processes 2;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 2;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    error_log /var/log/nginx/error.log;
    access_log off;

    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/html text/plain text/css application/json
               application/x-javascript text/xml application/xml
               application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##

    index index.php index.html index.htm;

    ## See here: http://wiki.nginx.org/WordPress
    server {
        server_name example.com www.example.com;
        root /var/www/example.com;

        location = /favicon.ico {
            log_not_found off;
            access_log off;
        }

        location = /robots.txt {
            allow all;
            log_not_found off;
            access_log off;
        }

        location / {
            # This is cool because no php is touched for static content
            try_files $uri $uri/ /index.php?$args;
        }

        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

        location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
            expires max;
            log_not_found off;
        }
    }
}
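One thing I am considering trying (just a sketch, I have not confirmed that it
fixes the paging): routing feed URLs through the front controller explicitly.
Since nginx appends the original query string to a rewritten URI
automatically, the paged parameter should survive:

location ^~ /feed {
    # nginx re-appends the original query string ("?paged=45")
    # to the rewritten URI, so WordPress should still see it
    rewrite ^ /index.php last;
}

Would something like this be the right direction?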
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238692,238692#msg-238692
Dear all,
I have a reverse proxy in front of my two servers: web (apache2) and
email (nginx-iredmail).
The reverse proxy works perfectly with my web server running Apache2,
but I am not able to make it work for my email server.
The reverse proxy and the email server are both running the same
version of Nginx (1.9).
I have tried many configs without any success.
My last one:
***********************************************************************
server {
    listen 446;
    server_name email.domain.ltd;

    location / {
        proxy_pass https://email_server_ip:446;
        proxy_ssl_certificate /etc/ssl/certs/cert.chained.crt;
        proxy_ssl_certificate_key /etc/ssl/private/private.key;
        proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        proxy_ssl_ciphers HIGH:!aNULL:!MD5;
        proxy_ssl_trusted_certificate /etc/ssl/certs/cert.chained.crt;
        proxy_ssl_verify on;
        proxy_ssl_verify_depth 2;
        proxy_ssl_session_reuse on;
        error_log /var/log/nginx/error-proxy.log;
        access_log /var/log/nginx/access-proxy.log;
    }
}
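Could the problem be that TLS is never terminated on the proxy itself? My
listen directive has no ssl flag and no server-side certificate, and as I
understand it the proxy_ssl_* directives only apply to the upstream leg.
I am thinking of trying something like this (a sketch, reusing my existing
cert paths):

server {
    listen 446 ssl;               # terminate TLS for the client side
    server_name email.domain.ltd;

    # certificate presented to clients, distinct from proxy_ssl_*
    ssl_certificate     /etc/ssl/certs/cert.chained.crt;
    ssl_certificate_key /etc/ssl/private/private.key;

    location / {
        proxy_pass https://email_server_ip:446;
        proxy_ssl_trusted_certificate /etc/ssl/certs/cert.chained.crt;
        proxy_ssl_verify on;
    }
}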
Can I please have some help?
Thx
--
Regards,
Thierry e-mail : lenaigst(a)maelenn.org
PGP Key: 0xB7E3B9CD
How does nginx caching handle multiple cache-control headers sent from a
backend?
I had a situation where I was sending both Expires and Cache-Control, and
it seemed that the order in which they were sent determined which one took
effect. I solved that problem by ignoring the Expires header.
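For reference, I did that with something along these lines in the proxy
config:

proxy_ignore_headers Expires;   # stop Expires from competing with Cache-Control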
I thought I recalled that X-Accel-Expires would override any other headers
regardless of order, but that doesn't seem to be the case.
Is there a priority, or does order decide?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,219520,219520#msg-219520
On Tue, Feb 1, 2011 at 11:45 PM, Ryan Malayter <malayter(a)gmail.com> wrote:
>
> It does in fact work in production on nginx 0.7.6x. Below is my actual
> configuration (trimmed to the essentials and with a few substitutions
> of actual URIs).
>
Well, the ngx_proxy module's directive inheritance is in action here,
which gives you the nice side effects that you want :)
I'll analyze some examples here so that people *may* see some light.
[Case 1]
location /proxy {
    set $a 32;
    if ($a = 32) {
        set $a 56;
    }
    set $a 76;
    proxy_pass http://127.0.0.1:$server_port/$a;
}

location ~ /(\d+) {
    echo $1;
}
Calling /proxy gives 76 because it works in the following steps:
1. Nginx runs all the rewrite phase directives in the order that
they're in the config file, i.e.,

    set $a 32;
    if ($a = 32) {
        set $a 56;
    }
    set $a 76;

and $a gets the final value of 76.
2. Nginx traps into the "if" inner block because its condition $a = 32
was met in step 1.
3. The inner block does not have any content handler, so ngx_proxy
inherits the content handler (that of ngx_proxy) from the outer scope
(see src/http/modules/ngx_http_proxy_module.c:2025).
4. The config specified by proxy_pass also gets inherited by the
inner "if" block (see src/http/modules/ngx_http_proxy_module.c:2015).
5. Request terminates (and the control flow never goes outside of the
"if" block).
That is, the proxy_pass directive in the outer scope will never run in
this example. It is the "if" inner block that actually serves you.
Let's see what happens when we override the inner "if" block's content
handler with our own:
[Case 2]
location /proxy {
    set $a 32;
    if ($a = 32) {
        set $a 56;
        echo "a = $a";
    }
    set $a 76;
    proxy_pass http://127.0.0.1:$server_port/$a;
}

location ~ /(\d+) {
    echo $1;
}
You will get this while accessing /proxy:
a = 76
Looks counter-intuitive? Oh, well, let's see what's happening this time:
1. Nginx runs all the rewrite phase directives in the order that
they're in the config file, i.e.,

    set $a 32;
    if ($a = 32) {
        set $a 56;
    }
    set $a 76;

and $a gets the final value of 76.
2. Nginx traps into the "if" inner block because its condition $a = 32
was met in step 1.
3. The inner block *does* have a content handler specified by "echo",
so the value of $a (76) gets emitted to the client side.
4. Request terminates (and the control flow never goes outside of the
"if" block), as in Case 1.
We do have a choice to make Case 2 work as we like:
[Case 3]
location /proxy {
    set $a 32;
    if ($a = 32) {
        set $a 56;
        break;
        echo "a = $a";
    }
    set $a 76;
    proxy_pass http://127.0.0.1:$server_port/$a;
}

location ~ /(\d+) {
    echo $1;
}
This time, we just add a "break" directive inside the "if" block. This
will stop nginx from running the rest of the ngx_rewrite directives. So
we get
a = 56
So this time, nginx works this way:
1. Nginx runs all the rewrite phase directives in the order that
they're in the config file, i.e.,

    set $a 32;
    if ($a = 32) {
        set $a 56;
        break;
    }

and $a gets the final value of 56.
2. Nginx traps into the "if" inner block because its condition $a = 32
was met in step 1.
3. The inner block *does* have a content handler specified by "echo",
so the value of $a (56) gets emitted to the client side.
4. Request terminates (and the control flow never goes outside of the
"if" block), just as in Case 1.
Okay, you see how the ngx_proxy module's config inheritance among nested
locations takes the key role here, and makes you *believe* it works the
way that you want. But other modules (like "echo", mentioned in one of
my earlier emails) may not inherit content handlers in nested
locations (in fact, most content handler modules, including upstream
ones, don't).
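For contrast, here is a quick sketch of the non-inheriting case (the URI
and variable are made up for illustration). Because echo's content handler
is *not* inherited by the implicit location that "if" creates, a matching
request falls through to the static file module and will typically yield
a 404:

location /t {
    echo "outer";
    if ($arg_broken) {
        # no content handler in here, and echo is not inherited,
        # so nginx falls back to serving static files
        set $dummy 1;
    }
}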
And one must be careful about bad side effects of config inheritance
of "if" blocks in other cases. Consider the following example:
[Case 4]

location /proxy {
    set $a 32;
    if ($a = 32) {
        return 404;
    }
    set $a 76;
    proxy_pass http://127.0.0.1:$server_port/$a;
    more_set_headers "X-Foo: $a";
}

location ~ /(\d+) {
    echo $1;
}
Here, ngx_header_more's "more_set_headers" will also be inherited by
the implicit location created by the "if" block. So you will get:

curl -i localhost/proxy
HTTP/1.1 404 Not Found
Server: nginx/0.8.54 (without pool)
Date: Mon, 14 Feb 2011 05:24:00 GMT
Content-Type: text/html
Content-Length: 184
Connection: keep-alive
X-Foo: 32
which may or may not be what you want :)
BTW, the "add_header" directive will not emit an "X-Foo" header in this
case, but that does not mean no directive inheritance happens here;
rather, add_header's header filter skips 404 responses.
You see how tricky it is behind the scenes! No wonder people keep
saying "nginx's if is evil".
Cheers,
-agentzh
Disclaimer: There may be other corner cases that I've missed here, and
other more knowledgeable people can correct me wherever I'm wrong :)
This is my first post here, as we have never had any problems with nginx
before.
We use 5 nginx servers as load balancers for our Spring Boot application.
We ran them for years on debian 9 with the default nginx package, 1.10.3.
Now we have switched three of our load balancers to debian 10 with nginx
1.14.2.
At first everything ran smoothly. Then, under high load, we encountered
some problems. It starts with:
2020/02/01 17:10:55 [crit] 5901#5901: *3325390 SSL_write() failed while
sending to client, client: ...
2020/02/01 17:10:55 [crit] 5901#5901: *3306981 SSL_write() failed while
sending to client, client: ...
In between we get lots of
2020/02/01 17:11:04 [error] 5902#5902: *3318748 upstream timed out (110:
Connection timed out) while connecting to upstream, ...
2020/02/01 17:11:04 [crit] 5902#5902: *3305656 SSL_write() failed while
sending response to client, client: ...
2020/02/01 17:11:30 [error] 5911#5911: unexpected response for
ocsp.int-x3.letsencrypt.org
It ends with
2020/02/01 17:11:33 [error] 5952#5952: unexpected response for
ocsp.int-x3.letsencrypt.org
The problem only lasts for 30-120 seconds under high load and disappears
afterwards.
In the kernel log we sometimes have:
Feb 1 17:11:04 kt104 kernel: [1033003.285044] TCP: request_sock_TCP:
Possible SYN flooding on port 443. Sending cookies. Check SNMP counters.
But on other occasions we don't see any kernel log messages at all.
On both the debian 9 and debian 10 servers we applied identical TCP tuning:
# Kernel tuning settings
# https://www.nginx.com/blog/tuning-nginx/
net.core.rmem_max=26214400
net.core.wmem_max=26214400
net.ipv4.tcp_rmem=4096 524288 26214400
net.ipv4.tcp_wmem=4096 524288 26214400
net.core.somaxconn=1000
net.core.netdev_max_backlog=5000
net.ipv4.tcp_max_syn_backlog=10000
net.ipv4.ip_local_port_range=16000 61000
net.ipv4.tcp_max_tw_buckets=2000000
net.ipv4.tcp_fin_timeout=30
net.core.optmem_max=20480
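One mismatch I notice while writing this, though I don't know whether it
is related: net.core.somaxconn is 1000 while net.ipv4.tcp_max_syn_backlog
is 10000, and nginx's default listen backlog on Linux is 511, so the
accept queue could be what triggers the SYN-cookie messages. A
hypothetical adjustment (untested on our setup) would be to raise
somaxconn to 10000 as well and ask nginx for a matching queue:

listen 443 ssl backlog=10000;   # hypothetical; our real listen lines live in sites/*.conf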
The nginx config is exactly the same, so I will just show some important parts:
user www-data;
worker_processes auto;
worker_rlimit_nofile 50000;
pid /run/nginx.pid;

events {
    worker_connections 5000;
    multi_accept on;
    use epoll;
}

http {
    root /var/www/loadbalancer;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    types_hash_max_size 2048;
    server_tokens off;
    client_max_body_size 5m;
    client_header_timeout 20s; # default 60s
    client_body_timeout 20s;   # default 60s
    send_timeout 20s;          # default 60s

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:100m;
    ssl_buffer_size 4k;
    ssl_dhparam /etc/nginx/dhparam.pem;
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
    ssl_session_tickets on;
    ssl_session_ticket_key /etc/nginx/ssl_session_ticket.key;
    ssl_session_ticket_key /etc/nginx/ssl_session_ticket_old.key;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/ssl/rapidssl/intermediate-root.pem;
    resolver 8.8.8.8;
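    # (side note while pasting: could the single resolver be behind the
    #  ocsp.int-x3.letsencrypt.org errors under load? I am considering a
    #  fallback resolver plus a cache lifetime instead, e.g.:
    #  resolver 8.8.8.8 8.8.4.4 valid=300s;)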
    log_format custom '$host $server_port $request_time $upstream_response_time $remote_addr '
                      '"$http2" "$ssl_session_reused" $upstream_addr $time_iso8601 '
                      '"$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"';
    access_log /var/log/nginx/access.log custom;
    error_log /var/log/nginx/error.log;

    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    proxy_cache_path /var/cache/nginx/ levels=1:2 keys_zone=imagecache:10m inactive=7d use_temp_path=off;
    proxy_connect_timeout 10s;
    proxy_read_timeout 20s;
    proxy_send_timeout 20s;
    proxy_next_upstream off;

    map $http_user_agent $outdated {
        default 0;
        "~MSIE [1-6]\." 1;
        "~Mozilla.*Firefox/[1-9]\." 1;
        "~Opera.*Version/[0-9]\." 1;
        "~Chrome/[0-9]\." 1;
    }

    include sites/*.conf;
}
The upstream timeouts suggest some problem with our Java machines. But at
the same time the debian 9 nginx load balancer is running fine and has no
problems connecting to any of the upstream servers.
And the problems with Let's Encrypt and SSL_write suggest to me some
problem with nginx, TCP, or something else entirely.
I really don't know how to debug this situation. But we can reliably
reproduce it most of the times we encounter high load on the debian 10
servers, and we never saw it on debian 9.
I then installed the stable version, nginx 1.16, on debian 10 to see if
this is a bug in nginx that has already been fixed:
nginx version: nginx/1.16.1
built by gcc 8.3.0 (Debian 8.3.0-6)
built with OpenSSL 1.1.1c 28 May 2019 (running with OpenSSL 1.1.1d 10 Sep
2019)
TLS SNI support enabled
configure arguments: ...
But it didn't help.
Can somebody help me and give me some hints on how to start further
debugging of this situation?
regards
Janning
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286893,286893#msg-286893
Hi,
Does anyone know a way to disable HTTP request pipelining on the same
upstream backend connection?
Let's say we have the upstream backend below, configured with keepalive
and no connection close:
upstream http_backend {
    server 127.0.0.1:8080;
    keepalive 10;
}

server {
    ...
    location /http/ {
        proxy_pass http://http_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        ...
    }
}
With this configuration, NGINX keeps at most 10 idle keepalive connections
to upstream servers in the cache of each worker process. When this number
is exceeded, the least recently used connections are closed.
The question I have is: how can we stop NGINX from pipelining multiple
HTTP requests on the same upstream keepalive connection?
I would like to keep the upstream keepalive but just disable pipelining.
Please let me know how we could do that.
Thank you!
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269248,269248#msg-269248
Hello,
I have NGINX 1.12-1 running on CentOS 7.2, used for DNS load balancing.
However, I'm seeing lots of errors in the "dns.log". I have no access to
the system right now, but the error was something like "no response
received from the upstream server". The errors are not continuous, but
they are pretty frequent.
The three backend servers are Windows 2012 R2 DNS servers.
I will provide the nginx.conf details soon.
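In the meantime, the setup is roughly of this shape (a generic sketch
rather than our exact config; the addresses and timeouts are illustrative):

stream {
    upstream dns_backend {
        server 192.0.2.11:53;   # the three Windows DNS servers
        server 192.0.2.12:53;
        server 192.0.2.13:53;
    }

    server {
        listen 53 udp;
        proxy_pass dns_backend;
        proxy_responses 1;   # one reply expected per DNS query
        proxy_timeout 2s;    # how long to wait for that reply
        error_log /var/log/nginx/dns.log;
    }
}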
I may have to do some tuning besides upgrading to an appropriate version,
I guess. Can anyone suggest or recommend tuning to minimize the errors,
or better, not have them at all?
Thanks
Hi
This is my first time trying aio threads on Linux, and I am getting this
error:
[emerg] 19909#0: unknown directive "thread_pool" in /usr/local/nginx/conf/nginx.conf:7
Line 7 reads:
thread_pool testpool threads=64 max_queue=65536;
Everything indicates it was built --with-threads, so I'm not sure where
to go from here.
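For reference, this is the overall shape of what I am trying (a sketch;
the location is made up), in case the placement matters. As I understand
it, thread_pool is only valid in the main (top-level) context:

# main context: the only place thread_pool is allowed
thread_pool testpool threads=64 max_queue=65536;

http {
    server {
        location /downloads/ {
            aio threads=testpool;   # hand blocking file reads to the pool
        }
    }
}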
nginx -V:
/usr/local/nginx/sbin/nginx -V
nginx version: nginx/1.9.1 built by gcc 4.8.2 (Ubuntu 4.8.2-19ubuntu1)
configure arguments: --with-debug --with-file-aio --with-threads
from the configure output:
Configuration summary
+ using threads
Any help appreciated
Thanks
Richard
Hello,
I'm dealing with a problem. When reloading the nginx configuration,
all keepalive connections receive the TCP reset flag after I send a
HUP signal to the master process. If I comment out the line responsible
for enabling the keepalive feature in the configuration, the problem
disappears (the nginx version is 0.9.7).
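(The line in question is presumably the usual keepalive directive, i.e.
something like:

keepalive_timeout 65;   # commenting this out, or setting it to 0, avoids the resets

but I have not dug any deeper than that.)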
Thanks in advance,
Jocelyn Mocquant
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,197927,197927#msg-197927
I have nginx running in front of apache2/mod_wsgi and I'm not sure how
to resolve this error:
upstream timed out (110: Connection timed out) while reading response
header from upstream
Any ideas on where to start?
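One knob I'm considering (not sure it's the right lever) is the proxy
read timeout, which defaults to 60s. A sketch, with a hypothetical
backend address:

location / {
    proxy_pass http://127.0.0.1:8080;   # hypothetical backend address
    proxy_read_timeout 300s;            # default is 60s
}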
J