I had a WordPress blog running on Apache. I migrated the blog to
Nginx + php-fpm, but I have a problem with this.
My blog has an RSS feed at the example.com/feed URL, and I used to be able
to view the feed paged, like this: http://www.kodcu.com/feed/?paged=45.
But with Nginx these paged RSS URLs don't work with my config: /feed and
/feed/?paged=X both show the latest 10 items.
My nginx.conf is shown below. How can I handle this problem?
user root root;
worker_processes 2;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 2;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    error_log /var/log/nginx/error.log;
    access_log off;

    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/html text/plain text/css application/json
               application/x-javascript text/xml application/xml
               application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##
    index index.php index.html index.htm;

    ## See here: http://wiki.nginx.org/WordPress
    server {
        server_name example.com www.example.com;
        root /var/www/example.com;

        location = /favicon.ico {
            log_not_found off;
            access_log off;
        }

        location = /robots.txt {
            allow all;
            log_not_found off;
            access_log off;
        }

        location / {
            # This is cool because no php is touched for static content
            try_files $uri $uri/ /index.php?$args;
        }

        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

        location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
            expires max;
            log_not_found off;
        }
    }
}
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238692,238692#msg-238692
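One thing sometimes worth checking in a setup like the above (an untested
sketch, not a confirmed fix) is whether the query string survives the rewrite
to index.php for feed URLs; an explicit feed location added inside the server
block makes that easy to isolate:

    # untested sketch: handle /feed and /feed/... explicitly, preserving the
    # query string so that ?paged=N is passed through to WordPress
    location ~ ^/feed(/.*)?$ {
        try_files $uri /index.php?$args;
    }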
How does nginx caching handle multiple cache control headers sent from a
backend?
I had a situation where I was sending both Expires and Cache-Control, and
it seemed that the order in which they were sent determined which one took
effect. I solved that problem by ignoring the Expires header.
I thought I recalled that X-Accel-Expires would override any other
headers regardless of order, but that doesn't seem to be the case.
Is there a defined priority, or does the order control?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,219520,219520#msg-219520
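For reference, "ignoring the expires header" as described above is done with
proxy_ignore_headers; a minimal sketch (the cache zone name and backend
address are placeholders):

    proxy_cache_path /var/cache/nginx keys_zone=my_zone:10m;  # placeholder zone

    server {
        listen 8080;
        location / {
            proxy_pass http://127.0.0.1:8081;   # placeholder backend
            proxy_cache my_zone;
            # disregard Expires from the backend so that only Cache-Control
            # (or X-Accel-Expires, if the backend sends it) drives caching
            proxy_ignore_headers Expires;
        }
    }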
On Tue, Feb 1, 2011 at 11:45 PM, Ryan Malayter <malayter(a)gmail.com> wrote:
>
> It does in fact work in production on nginx 0.7.6x. Below is my actual
> configuration (trimmed to the essentials and with a few substitutions
> of actual URIs).
>
Well, the ngx_proxy module's directive inheritance is in action here,
which gives you the nice side effects that you want :)
I'll analyze some examples here so that people *may* see some light.
[Case 1]
location /proxy {
    set $a 32;
    if ($a = 32) {
        set $a 56;
    }
    set $a 76;
    proxy_pass http://127.0.0.1:$server_port/$a;
}

location ~ /(\d+) {
    echo $1;
}
Calling /proxy gives 76 because it works in the following steps:
1. Nginx runs all the rewrite phase directives in the order that
they're in the config file, i.e.,
    set $a 32;
    if ($a = 32) {
        set $a 56;
    }
    set $a 76;
and $a gets the final value of 76.
2. Nginx traps into the "if" inner block because its condition $a = 32
was met in step 1.
3. The inner block does not have any content handler of its own, so
ngx_proxy inherits the content handler (that of ngx_proxy) from the outer
scope (see src/http/modules/ngx_http_proxy_module.c:2025).
4. The config specified by proxy_pass also gets inherited by the
inner "if" block (see src/http/modules/ngx_http_proxy_module.c:2015).
5. The request terminates (and the control flow never goes outside of the
"if" block).
That is, the proxy_pass directive in the outer scope never runs in
this example. It is the "if" inner block that actually serves you.
Let's see what happens when we override the inner "if" block's content
handler with our own:
[Case 2]
location /proxy {
    set $a 32;
    if ($a = 32) {
        set $a 56;
        echo "a = $a";
    }
    set $a 76;
    proxy_pass http://127.0.0.1:$server_port/$a;
}

location ~ /(\d+) {
    echo $1;
}
You will get this while accessing /proxy:
a = 76
Looks counter-intuitive? Oh, well, let's see what's happening this time:
1. Nginx runs all the rewrite phase directives in the order that
they're in the config file, i.e.,
    set $a 32;
    if ($a = 32) {
        set $a 56;
    }
    set $a 76;
and $a gets the final value of 76.
2. Nginx traps into the "if" inner block because its condition $a = 32
was met in step 1.
3. The inner block *does* have a content handler specified by "echo",
so the value of $a (76) gets emitted to the client side.
4. Request terminates (and the control flow never goes outside of the
"if" block), as in Case 1.
We do have a choice to make Case 2 work as we like:
[Case 3]
location /proxy {
    set $a 32;
    if ($a = 32) {
        set $a 56;
        break;
        echo "a = $a";
    }
    set $a 76;
    proxy_pass http://127.0.0.1:$server_port/$a;
}

location ~ /(\d+) {
    echo $1;
}
This time, we just add a "break" directive inside the if block. This
stops nginx from running the rest of the ngx_rewrite directives. So we
get
a = 56
So this time, nginx works this way:
1. Nginx runs all the rewrite phase directives in the order that
they're in the config file, i.e.,
    set $a 32;
    if ($a = 32) {
        set $a 56;
        break;
    }
and $a gets the final value of 56.
2. Nginx traps into the "if" inner block because its condition $a = 32
was met in step 1.
3. The inner block *does* have a content handler specified by "echo",
so the value of $a (56) gets emitted to the client side.
4. Request terminates (and the control flow never goes outside of the
"if" block), just as in Case 1.
Okay, you see how the ngx_proxy module's config inheritance among nested
locations takes the key role here and makes you *believe* it works the
way that you want. But other modules (like "echo", mentioned in one of
my earlier emails) may not inherit content handlers in nested
locations (in fact, most content handler modules, including upstream
ones, don't).
And one must be careful about the bad side effects of config inheritance
in "if" blocks in other cases; consider the following example:
[Case 5]
location /proxy {
    set $a 32;
    if ($a = 32) {
        return 404;
    }
    set $a 76;
    proxy_pass http://127.0.0.1:$server_port/$a;
    more_set_headers "X-Foo: $a";
}

location ~ /(\d+) {
    echo $1;
}
Here, ngx_header_more's "more_set_headers" will also be inherited by
the implicit location created by the "if" block. So you will get:
curl localhost/proxy
HTTP/1.1 404 Not Found
Server: nginx/0.8.54 (without pool)
Date: Mon, 14 Feb 2011 05:24:00 GMT
Content-Type: text/html
Content-Length: 184
Connection: keep-alive
X-Foo: 32
which may or may not be what you want :)
BTW, the "add_header" directive will not emit an "X-Foo" header in this
case. That does not mean no directive inheritance happens here; rather,
add_header's header filter skips 404 responses.
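For comparison, the add_header variant mentioned above would look like this
(the same Case 5 setup, with only the header directive swapped):

    location /proxy {
        set $a 32;
        if ($a = 32) {
            return 404;
        }
        set $a 76;
        proxy_pass http://127.0.0.1:$server_port/$a;
        # this is also inherited by the implicit location created by the
        # "if" block, but add_header's header filter skips 404 responses,
        # so no X-Foo header shows up in the reply
        add_header X-Foo $a;
    }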
You see how tricky it is behind the scenes! No wonder people keep
saying "nginx's if is evil".
Cheers,
-agentzh
Disclaimer: There may be other corner cases that I've missed here, and
other more knowledgeable people can correct me wherever I'm wrong :)
Hello,
I'm dealing with a problem. When reloading the nginx configuration,
all kept-alive connections receive a TCP reset after I send a
HUP signal to the master process. If I comment out the line responsible for
enabling the keepalive feature in the configuration, the problem
disappears (the nginx version is 0.9.7).
Thanks in advance,
Jocelyn Mocquant
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,197927,197927#msg-197927
I have nginx running in front of apache2/mod_wsgi and I'm not sure how
to resolve this error:
upstream timed out (110: Connection timed out) while reading response
header from upstream
any ideas on where to start?
J
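One possible starting point (an assumption, not a diagnosis): the timeout in
"while reading response header from upstream" is governed by
proxy_read_timeout (60s by default), so making it explicit, or raising it,
helps tell a genuinely slow backend apart from a hung one. The backend
address below is a placeholder for the Apache/mod_wsgi instance:

    location / {
        proxy_pass http://127.0.0.1:8080;   # placeholder Apache/mod_wsgi backend
        proxy_connect_timeout 10s;
        # time allowed for the backend to produce the response header
        proxy_read_timeout    300s;
    }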
Hello,
I'm trying to avoid caching of small responses from upstreams using map:
map $upstream_http_content_length $dontcache {
    default 0;
    ~^\d\d$ 1;
    ~^\d$   1;
}
Unfortunately, nginx seems to ignore $upstream* variables at the map
processing stage, so variables like $upstream_http_content_length or
$upstream_response_length stay empty when the map directive is processed
(this can be observed in the debug log as the "http map started" message).
If I use non-upstream-related variables, the map works as expected.
Question: is there any way to use $upstream* variables inside the map
directive, or can someone suggest an alternative way to detect a small
upstream response in order to bypass the cache?
Thank you.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249880,249880#msg-249880
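For context, a map like the one above is typically wired into the proxy cache
roughly like this (a sketch; the original post does not show this part, and
the zone name and backend address are placeholders):

    proxy_cache_path /var/cache/nginx keys_zone=small_zone:10m;  # placeholder

    server {
        listen 8080;
        location / {
            proxy_pass http://127.0.0.1:8081;   # placeholder backend
            proxy_cache small_zone;
            # $dontcache comes from the map above; when it is evaluated
            # before the upstream response exists (e.g. for
            # proxy_cache_bypass), $upstream_http_content_length is still
            # empty, which matches the behaviour described in the post
            proxy_no_cache     $dontcache;
            proxy_cache_bypass $dontcache;
        }
    }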
Hello All,
I have a map block:
map $http_cookie $myVal {
    "~adq_cnv(\d+)=($cmpid[^;]+#(?P<DC>\w{2}))(?:;|$)" $DC;
    default "XYZ";
}
The $cmpid used on the left side of the map is supposed to be interpolated
from a location block:
location ~ ^/cnv/(\d+)/ {
    set $cmpid $1;
    ...
}
But that is not working. I tried different variations like ${cmpid} and
\$cmpid, but with no luck.
Any help would be appreciated.
Thanks
Harish
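One workaround that is sometimes used (an untested sketch): variables cannot
be expanded inside a map pattern, because the patterns are fixed when the
configuration is loaded, but the map source string may contain variables. So
$cmpid can be prepended to the cookie string and tied to the pattern with a
regex backreference:

    map "$cmpid:$http_cookie" $myVal {
        # (?<cid>\d+) captures the value of $cmpid from the source string,
        # and \k<cid> requires the same digits to appear after "adq_cnv<n>="
        "~^(?<cid>\d+):.*adq_cnv(\d+)=(\k<cid>[^;]+#(?P<DC>\w{2}))(?:;|$)" $DC;
        default "XYZ";
    }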
Hi,
I was able to set up nginx with client certificate authentication and OCSP
stapling. However, I noticed that OCSP is used only for the nginx server's
SSL certificate.
It does not use OCSP to validate client certificates, i.e. to see whether a
client is using a revoked certificate or not. Is ssl_crl the only way to
check for revoked client certificates, or can nginx be configured to use
OCSP for client certificates?
Thanks.
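For reference, the ssl_crl approach mentioned above looks roughly like this
(a sketch; all file paths are placeholders). The CRL file has to be refreshed
out of band, which is the main difference from an OCSP lookup:

    server {
        listen 443 ssl;
        server_name example.com;

        ssl_certificate        /etc/nginx/ssl/server.crt;   # placeholder paths
        ssl_certificate_key    /etc/nginx/ssl/server.key;
        ssl_verify_client      on;
        ssl_client_certificate /etc/nginx/ssl/client-ca.crt;
        # client certificates are checked against this locally maintained CRL
        ssl_crl                /etc/nginx/ssl/client-ca.crl;
    }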
I have an nginx server set up and everything is working fine. I have a
virtual server configured that allows directory listing (autoindex on;).
It appears to truncate the file names at around 50 characters.
Is it possible to disable this so that it shows the entire filename?
Or is there a way to increase the limit to, say, 100 or 150 characters?
Thanks!
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,124400,124400#msg-124400
Hi list,
we are seeing sporadic nginx errors "upstream prematurely closed
connection while reading response header from upstream" with nginx/1.6.2,
which seems to be some kind of race condition.
For debugging purposes we set up only one upstream server, on a public IP
address of the same machine as nginx; there is no keepalive configured
between nginx and the upstream server. The upstream HTTP server is
written in a way that it forcibly closes the connection when the
response status code is 303. This may be part of the problem as well.
The error message in the logs is this:
2014/10/16 08:19:39 [error] 21664#0: *7504970 upstream prematurely
closed connection while reading response header from upstream, client:
109.3.1.2, server: my.avast.com, request: "GET /fr-fr/ HTTP/1.1",
upstream: "https://1.1.1.1:8888/ru-ru/", host: "my.upstream.com",
referrer: "https://id.upstream.com/ru-ru/confirm/registration?token=TOKEN"
The configuration looks as follows:
location / {
    proxy_pass http://my-upstream;
    proxy_read_timeout 90;
}

upstream my-upstream {
    ip_hash; # it was here because normally we use more upstream servers
    server 1.1.1.1:8888;
}
Now, we have tracked down that this only happens when the FIN packet from
the upstream server reaches nginx before nginx has finished parsing the
response (headers), and thus before nginx closes the connection itself.
For example, this packet order will trigger the problem:
No.     Time        Source    SrcPrt  Destination  Protocol  Length  Info
25571 10.297569 1.1.1.1 35481 1.1.1.1 TCP 76 35481 > 8888 [SYN] Seq=0 Win=3072 Len=0 MSS=16396 SACK_PERM=1 TSval=1902164528 TSecr=0 WS=8192
25572 10.297580 1.1.1.1 8888 1.1.1.1 TCP 76 8888 > 35481 [SYN, ACK] Seq=0 Ack=1 Win=3072 Len=0 MSS=16396 SACK_PERM=1 TSval=1902164528 TSecr=1902164528 WS=8192
25573 10.297589 1.1.1.1 35481 1.1.1.1 TCP 68 35481 > 8888 [ACK] Seq=1 Ack=1 Win=8192 Len=0 TSval=1902164528 TSecr=1902164528
25574 10.297609 1.1.1.1 35481 1.1.1.1 HTTP 1533 GET / HTTP/1.0
25575 10.297617 1.1.1.1 8888 1.1.1.1 TCP 68 8888 > 35481 [ACK] Seq=1 Ack=1466 Win=8192 Len=0 TSval=1902164528 TSecr=1902164528
25596 10.323092 1.1.1.1 8888 1.1.1.1 HTTP 480 HTTP/1.1 303 See Other
25597 10.323106 1.1.1.1 35481 1.1.1.1 TCP 68 35481 > 8888 [ACK] Seq=1466 Ack=413 Win=8192 Len=0 TSval=1902164554 TSecr=1902164554
25598 10.323161 1.1.1.1 8888 1.1.1.1 TCP 68 8888 > 35481 [FIN, ACK] Seq=413 Ack=1466 Win=8192 Len=0 TSval=1902164554 TSecr=1902164554
25599 10.323167 1.1.1.1 35481 1.1.1.1 TCP 68 35481 > 8888 [FIN, ACK] Seq=1466 Ack=413 Win=8192 Len=0 TSval=1902164554 TSecr=1902164554
25600 10.323180 1.1.1.1 8888 1.1.1.1 TCP 68 8888 > 35481 [ACK] Seq=414 Ack=1467 Win=8192 Len=0 TSval=1902164554 TSecr=1902164554
25601 10.323189 1.1.1.1 35481 1.1.1.1 TCP 68 35481 > 8888 [ACK] Seq=1467 Ack=414 Win=8192 Len=0 TSval=1902164554 TSecr=1902164554
Note that the upstream HTTP (port 8888) sends the FIN packet sooner than
nginx (port 35481 in this case).
This is an example of an OK connection:
No.     Time        Source    SrcPrt  Destination  Protocol  Length  Info
27746 11.472853 1.1.1.1 35959 1.1.1.1 TCP 76 35959 > 8888 [SYN] Seq=0 Win=3072 Len=0 MSS=16396 SACK_PERM=1 TSval=1902165703 TSecr=0 WS=8192
27747 11.472867 1.1.1.1 8888 1.1.1.1 TCP 76 8888 > 35959 [SYN, ACK] Seq=0 Ack=1 Win=3072 Len=0 MSS=16396 SACK_PERM=1 TSval=1902165704 TSecr=1902165703 WS=8192
27748 11.472881 1.1.1.1 35959 1.1.1.1 TCP 68 35959 > 8888 [ACK] Seq=1 Ack=1 Win=8192 Len=0 TSval=1902165704 TSecr=1902165704
27749 11.472907 1.1.1.1 35959 1.1.1.1 HTTP 1187 GET /es-co/tab HTTP/1.0
27750 11.472917 1.1.1.1 8888 1.1.1.1 TCP 68 8888 > 35959 [ACK] Seq=1 Ack=1120 Win=8192 Len=0 TSval=1902165704 TSecr=1902165704
27751 11.473818 1.1.1.1 8888 1.1.1.1 HTTP 354 HTTP/1.1 303 See Other
27752 11.473830 1.1.1.1 35959 1.1.1.1 TCP 68 35959 > 8888 [ACK] Seq=1120 Ack=287 Win=8192 Len=0 TSval=1902165704 TSecr=1902165704
27753 11.473865 1.1.1.1 35959 1.1.1.1 TCP 68 35959 > 8888 [FIN, ACK] Seq=1120 Ack=287 Win=8192 Len=0 TSval=1902165705 TSecr=1902165704
27754 11.473877 1.1.1.1 8888 1.1.1.1 TCP 68 8888 > 35959 [FIN, ACK] Seq=287 Ack=1120 Win=8192 Len=0 TSval=1902165705 TSecr=1902165704
27755 11.473885 1.1.1.1 35959 1.1.1.1 TCP 68 35959 > 8888 [ACK] Seq=1121 Ack=288 Win=8192 Len=0 TSval=1902165705 TSecr=1902165705
27756 11.473892 1.1.1.1 8888 1.1.1.1 TCP 68 8888 > 35959 [ACK] Seq=288 Ack=1121 Win=8192 Len=0 TSval=1902165705 TSecr=1902165705
An example of the request and response from Wireshark when the problem
occurred is attached below.
From looking at the code, it seems to me that the error message is
printed only when the recv() function returns 0 (i.e., there are no bytes
to read and the connection is closed):
src/http/ngx_http_upstream.c:
1653 n = c->recv(c, u->buffer.last, u->buffer.end - u->buffer.last);
1654
....
1669 if (n == 0) {
1670 ngx_log_error(NGX_LOG_ERR, c->log, 0,
1671 "upstream prematurely closed connection");
1672 }
From my limited understanding, this can only happen when one has read
everything that was in the stream, so the function:
1687 rc = u->process_header(r);
1688
should have had everything, i.e. the complete header (verified in
Wireshark), so it should never return NGX_AGAIN and thus never reach
line 1670.
Any pointers will be much appreciated.
Regards
Jiri Horky
GET / HTTP/1.0
Host: my.upstream.com
X-Real-IP: 213.87.240.82
X-Forwarded-For: 213.87.240.82
Connection: close
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Cookie: __utma=1.1091156737.1413387695.1413387695.1413387695.1;
__utmb=1.2.10.1413387695; __utmc=1;
__utmz=1.1413387695.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none);
ID-params-prod="registered=true&refreshIdSession=true&LOGIN_SUCCESS=true";
IDT2=IDTN-21229-MJfdls0NsfrjlHeztlBobHdPetEXXXXXX4; locale2=ru-ru;
osc_omcid=undefined; osc_ot=wd%3E%3Eun%3Eun; osc_v12=Website;
osc_v13=Website%20%7C%20Direct; osc_v14=Website%20%7C%20Direct%20%7C%20;
osc_v15=Website%20%7C%20Direct%20%7C%20; osc_v27=Website%20%7C%20Direct;
osc_v42=web; s_cc=true; s_fid=10F5314146A83D94-160DXXXXXX;
s_nr2=1413387748541-New; x-otid=wd%3E%3Eun%3Eun
User-Agent: Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X)
AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12A365
Safari/600.1.4
Accept-Language: ru
Referer: https://id.upstream.com/ru-ru/confirm/registration?token=TOKEN
Accept-Encoding: gzip, deflate
HTTP/1.1 303 See Other
Content-Length: 0
Content-Type: text/plain
Location: https://my.upstream.com/ru-ru/
Set-Cookie: mySessionId=3KNJXXXXXXqX; Expires=Wed, 15 Oct 2014 15:57:30
GMT; Path=/; Domain=.my.upstream.com; Secure; HTTPOnly
Set-Cookie:
myLocalIdSession="IDTN-21229-MJfdls0NsfrjlHeztlBobHdPetEXXXXXXXX4:2";
Expires=Wed, 15 Oct 2014 15:57:30 GMT; Path=/; Domain=.my.upstream.com;
Secure; HTTPOnly