I had a WordPress blog running on Apache and migrated it to Nginx + php-fpm,
but I have a problem with the move.
My blog serves its RSS feed at example.com/feed, and under Apache I could
page through the feed with URLs like http://www.kodcu.com/feed/?paged=45.
Under Nginx, these paged RSS URLs don't work with my config: both /feed and
/feed/?paged=X show only the latest 10 posts.
My nginx.conf is below. How can I fix this problem?
user root root;
worker_processes 2;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 2;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    error_log /var/log/nginx/error.log;
    access_log off;

    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/html text/plain text/css application/json
               application/x-javascript text/xml application/xml
               application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##
    index index.php index.html index.htm;

    ## See here: http://wiki.nginx.org/WordPress
    server {
        server_name example.com www.example.com;
        root /var/www/example.com;

        location = /favicon.ico {
            log_not_found off;
            access_log off;
        }

        location = /robots.txt {
            allow all;
            log_not_found off;
            access_log off;
        }

        location / {
            # This is cool because no php is touched for static content
            try_files $uri $uri/ /index.php?$args;
        }

        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

        location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
            expires max;
            log_not_found off;
        }
    }
}
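In case it matters, this is the kind of explicit /feed location I was
considering as a workaround (only a sketch; the index.php?feed=rss2 target is
my assumption about how WordPress resolves feeds, and the generic "location /"
with try_files ... /index.php?$args should in theory already cover this):

    # Sketch only: would go inside the server {} block above.
    # Hands /feed and its query string (e.g. ?paged=45) straight to WordPress.
    location /feed {
        try_files $uri $uri/ /index.php?feed=rss2&$args;
    }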
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238692,238692#msg-238692
How does nginx caching handle multiple cache-control headers sent from a
backend?
I had a situation where I was sending both Expires and Cache-Control, and it
seemed that the order in which they were sent determined the behavior. I
solved that problem by ignoring the Expires header.
I thought I recalled that X-Accel-Expires would override any other headers
regardless of order, but that doesn't seem to be the case.
Is there a defined priority, or does order decide?
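For reference, the way I ignored the Expires header was just the standard
directive; the upstream and cache zone names below are placeholders:

    location / {
        proxy_pass           http://backend;    # placeholder upstream
        proxy_cache          my_cache;          # placeholder cache zone
        proxy_ignore_headers Expires;           # stop Expires from influencing caching
    }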
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,219520,219520#msg-219520
On Tue, Feb 1, 2011 at 11:45 PM, Ryan Malayter <malayter(a)gmail.com> wrote:
>
> It does in fact work in production on nginx 0.7.6x. Below is my actual
> configuration (trimmed to the essentials and with a few substitutions
> of actual URIs).
>
Well, the ngx_proxy module's directive inheritance is in action here,
which gives you the nice side effects that you want :)
I'll analyze some examples here so that people *may* get some light on this.
[Case 1]
location /proxy {
    set $a 32;
    if ($a = 32) {
        set $a 56;
    }
    set $a 76;
    proxy_pass http://127.0.0.1:$server_port/$a;
}

location ~ /(\d+) {
    echo $1;
}
Calling /proxy gives 76 because it works in the following steps:
1. Nginx runs all the rewrite phase directives in the order that
they're in the config file, i.e.,
    set $a 32;
    if ($a = 32) {
        set $a 56;
    }
    set $a 76;
and $a gets the final value of 76.
2. Nginx traps into the "if" inner block because its condition $a = 32
was met in step 1.
3. The inner block does not have a content handler of its own, so it
inherits the content handler (that of ngx_proxy) from the outer scope
(see src/http/modules/ngx_http_proxy_module.c:2025).
4. The config specified by proxy_pass also gets inherited by the inner
"if" block (see src/http/modules/ngx_http_proxy_module.c:2015).
5. Request terminates (and the control flow never goes outside of the
"if" block).
That is, the proxy_pass directive in the outer scope will never run in
this example. It is the inner "if" block that actually serves the request.
Let's see what happens when we override the inner "if" block's content
handler with our own:
[Case 2]
location /proxy {
    set $a 32;
    if ($a = 32) {
        set $a 56;
        echo "a = $a";
    }
    set $a 76;
    proxy_pass http://127.0.0.1:$server_port/$a;
}

location ~ /(\d+) {
    echo $1;
}
You will get this while accessing /proxy:
a = 76
Looks counter-intuitive? Oh, well, let's see what's happening this time:
1. Nginx runs all the rewrite phase directives in the order that
they're in the config file, i.e.,
    set $a 32;
    if ($a = 32) {
        set $a 56;
    }
    set $a 76;
and $a gets the final value of 76.
2. Nginx traps into the "if" inner block because its condition $a = 32
was met in step 1.
3. The inner block *does* have a content handler, specified by "echo",
so the value of $a (76) gets emitted to the client side.
4. Request terminates (and the control flow never goes outside of the
"if" block), as in Case 1.
We do have a choice to make Case 2 work as we like:
[Case 3]
location /proxy {
    set $a 32;
    if ($a = 32) {
        set $a 56;
        break;
        echo "a = $a";
    }
    set $a 76;
    proxy_pass http://127.0.0.1:$server_port/$a;
}

location ~ /(\d+) {
    echo $1;
}
This time, we just add a "break" directive inside the if block. This
stops nginx from running the rest of the ngx_rewrite directives, so we
get
a = 56
So this time, nginx works this way:
1. Nginx runs all the rewrite phase directives in the order that
they're in the config file, i.e.,
    set $a 32;
    if ($a = 32) {
        set $a 56;
        break;
    }
and $a gets the final value of 56.
2. Nginx traps into the "if" inner block because its condition $a = 32
was met in step 1.
3. The inner block *does* have a content handler, specified by "echo",
so the value of $a (56) gets emitted to the client side.
4. Request terminates (and the control flow never goes outside of the
"if" block), just as in Case 1.
Okay, you see how the ngx_proxy module's config inheritance among nested
locations takes the key role here and makes you *believe* it works the
way that you want. But other modules (like "echo", mentioned in one of
my earlier emails) may not inherit content handlers in nested locations
(in fact, most content handler modules, including upstream ones, don't).
And one must be careful about the bad side effects of config inheritance
in "if" blocks in other cases; consider the following example:
[Case 5]
location /proxy {
    set $a 32;
    if ($a = 32) {
        return 404;
    }
    set $a 76;
    proxy_pass http://127.0.0.1:$server_port/$a;
    more_set_headers "X-Foo: $a";
}

location ~ /(\d+) {
    echo $1;
}
Here, ngx_header_more's "more_set_headers" will also be inherited by
the implicit location created by the "if" block. So you will get:
curl localhost/proxy
HTTP/1.1 404 Not Found
Server: nginx/0.8.54 (without pool)
Date: Mon, 14 Feb 2011 05:24:00 GMT
Content-Type: text/html
Content-Length: 184
Connection: keep-alive
X-Foo: 32
which may or may not be what you want :)
BTW, the "add_header" directive will not emit a "X-Foo" header in this
case, and it does not mean no directive inheritance happens here, but
add_header's header filter will skip 404 responses.
You see how tricky it is behind the scenes! No wonder people keep
saying "nginx's if is evil".
Cheers,
-agentzh
Disclaimer: There may be other corner cases that I've missed here, and
other more knowledgeable people can correct me wherever I'm wrong :)
Hi
This is my first time trying aio threads on Linux, and I am getting this error:
[emerg] 19909#0: unknown directive "thread_pool" in
/usr/local/nginx/conf/nginx.conf:7
Line 7 reads:
thread_pool testpool threads=64 max_queue=65536;
Everything indicates it was built --with-threads, so I'm not sure where
to go from here.
nginx -V:
/usr/local/nginx/sbin/nginx -V
nginx version: nginx/1.9.1 built by gcc 4.8.2 (Ubuntu 4.8.2-19ubuntu1)
configure arguments: --with-debug --with-file-aio --with-threads
from configure output:
Configuration summary
+ using threads
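For reference, this is how I understand the directives are meant to be placed
(a sketch using my test values; the location is just an example):

# main (top-level) context -- this is my nginx.conf line 7
thread_pool testpool threads=64 max_queue=65536;

http {
    server {
        location /downloads/ {
            sendfile on;
            aio      threads=testpool;   # use the pool defined above
        }
    }
}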
Any help appreciated
Thanks
Richard
Hello,
I'm dealing with a problem: when I reload the nginx configuration by
sending a HUP signal to the master process, all keep-alive connections
receive a TCP reset. If I comment out the line responsible for enabling
the keep-alive feature in the configuration, the problem disappears
(nginx version is 0.9.7).
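The line I'm commenting out is just the usual keep-alive setting, i.e.
something like:

    keepalive_timeout 65;   # commenting this out makes the resets on reload go away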
Thanks in advance,
Jocelyn Mocquant
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,197927,197927#msg-197927
I have nginx running in front of apache2/mod_wsgi and I'm not sure how
to resolve this error:
upstream timed out (110: Connection timed out) while reading response
header from upstream
Any ideas on where to start?
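The only related settings I know of are the proxy timeouts; is raising
something like the following the right direction (the values and backend
address here are just guesses)?

    location / {
        proxy_pass            http://127.0.0.1:8080;   # apache2/mod_wsgi backend (address is a guess)
        proxy_connect_timeout 60s;
        proxy_send_timeout    60s;
        proxy_read_timeout    300s;   # "while reading response header from upstream" points here
    }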
J
Hello,
I'm trying to avoid caching of small responses from upstreams using map:
map $upstream_http_content_length $dontcache {
    default 0;
    ~^\d\d$ 1;
    ~^\d$   1;
}
Unfortunately, nginx seems to ignore $upstream* variables at the map
processing stage, so variables like $upstream_http_content_length or
$upstream_response_length stay empty when the map directive is processed
(this can be observed in the debug log as the "http map started" message).
If I use non-upstream-related variables, the map works as expected.
Question: is there any way to use $upstream* vars inside the map directive,
or can someone suggest an alternative way to detect small upstream responses
in order to bypass the cache?
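For context, the map result is meant to feed the standard cache-bypass
directives, roughly like this (a sketch; the upstream and cache zone names
are placeholders):

    location / {
        proxy_pass         http://backend;    # placeholder upstream
        proxy_cache        my_zone;           # placeholder cache zone
        proxy_cache_bypass $dontcache;        # checked before the upstream response exists
        proxy_no_cache     $dontcache;        # decides whether the response gets saved
    }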
Thank you.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249880,249880#msg-249880
I'm trying to diagnose some strange behavior in my web app, and at the
moment it seems like nginx may be at fault, though I'd be happy to learn
otherwise.
On the client side, I'm using flow.js (https://github.com/flowjs/flow.js) to
upload a file to the server. This library should allow me to upload very
large files by splitting them up into (by default) 1MB chunks, and sending
each chunk as a standard file form upload request.
On the server, I am connecting to a Python WSGI server (gunicorn) via
try_files / proxy_pass. The configuration is very standard:
location / {
    root /var/www;
    index index.html index.htm;
    try_files $uri $uri/ @proxy_to_app;
}

location @proxy_to_app {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://app_server;
}
The Python code is pretty simple, mainly just opening the file and writing
the data. According to the gunicorn access log, each request takes around
135ms:
127.0.0.1 - - [17/Jul/2016:05:07:07 +0000] "POST /files HTTP/1.0" 200 -
... 0.135206
127.0.0.1 - - [17/Jul/2016:05:07:07 +0000] "POST /files HTTP/1.0" 200 -
... 0.136749
127.0.0.1 - - [17/Jul/2016:05:07:07 +0000] "POST /files HTTP/1.0" 200 -
... 0.137314
But in the nginx access log, the $request_time varies wildly and is usually
very large:
10.0.0.0 - - [17/Jul/2016:05:07:06 +0000] "POST /files HTTP/1.1" 200 ...
0.956
10.0.0.0 - - [17/Jul/2016:05:07:07 +0000] "POST /files HTTP/1.1" 200 ...
0.553
10.0.0.0 - - [17/Jul/2016:05:07:07 +0000] "POST /files HTTP/1.1" 200 ...
0.888
At first I thought it might be the network itself taking a long time to send
the data, but looking at the network logs in the browser doesn’t seem to
bear this out. Once the socket connection is established, Chrome says that
the request time is often as low as 8ms, with the extra ~.5s-1s spent
waiting for a response.
So the question is, what is nginx doing during all that extra time? On
normal (small) requests, the times in the two logs are identical, but even
dialing down the flow.js chunk size to 128KB or 64KB results in a delay in
nginx, and it makes uploading these files take far too long (I can't just
set the chunk size to something tiny like 4KB, because the overhead of
making so many requests makes the uploads slower overall).
I've tried messing with various configuration options including
proxy_buffer_size and proxy_request_buffering, to no effect.
Any ideas on next steps for how I could begin to diagnose this?
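One thing I was planning to try next is logging the upstream time separately
from the total request time, roughly like this (a sketch, placed in the
http {} block):

    # $request_time includes reading the client request body, while
    # $upstream_response_time only covers talking to gunicorn; a large gap
    # between the two would point at body transfer/buffering rather than the app.
    log_format upload_timing '$remote_addr [$time_local] "$request" $status '
                             'req=$request_time upstream=$upstream_response_time';
    access_log /var/log/nginx/upload_timing.log upload_timing;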
Extra info:
CentOS 7, running on AWS
nginx version: nginx/1.10.1
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC)
built with OpenSSL 1.0.1e-fips 11 Feb 2013
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
--modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf
--error-log-path=/var/log/nginx/error.log
--http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid
--lock-path=/var/run/nginx.lock
--http-client-body-temp-path=/var/cache/nginx/client_temp
--http-proxy-temp-path=/var/cache/nginx/proxy_temp
--http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp
--http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp
--http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx
--with-http_ssl_module --with-http_realip_module --with-http_addition_module
--with-http_sub_module --with-http_dav_module --with-http_flv_module
--with-http_mp4_module --with-http_gunzip_module
--with-http_gzip_static_module --with-http_random_index_module
--with-http_secure_link_module --with-http_stub_status_module
--with-http_auth_request_module --with-http_xslt_module=dynamic
--with-http_image_filter_module=dynamic --with-http_geoip_module=dynamic
--with-http_perl_module=dynamic --add-dynamic-module=njs-1c50334fbea6/nginx
--with-threads --with-stream --with-stream_ssl_module
--with-http_slice_module --with-mail --with-mail_ssl_module --with-file-aio
--with-ipv6 --with-http_v2_module --with-cc-opt='-O2 -g -pipe -Wall
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
--param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic'
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268317,268317#msg-268317
My device has multiple interfaces and supports dynamic IPs.
SO_BINDTODEVICE looks like it would let me specify a device in the listen
statement instead of having to update the configuration on every IP change.
I see someone submitted a patch years ago that wasn't accepted, and there
was no follow-up. Is there any particular reason other than that this
probably isn't a common use case? Is this something that could be added
in the future?
https://forum.nginx.org/read.php?29,234862
Hello All,
I have a map block:
map $http_cookie $myVal {
    "~adq_cnv(\d+)=($cmpid[^;]+#(?P<DC>\w{2}))(?:;|$)" $DC;
    default "XYZ";
}
The $cmpid used on the left-hand side of the map is supposed to be
interpolated from a location block:
location ~ ^/cnv/(\d+)/ {
    set $cmpid $1;
    ...
}
But that is not working. I tried different variations like ${cmpid} and
\$cmpid, but with no luck.
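One workaround I was thinking about (untested, just a sketch) is to move
$cmpid into the *source* string of the map, since variables are allowed
there but not inside the patterns, and then match it back with a regex
backreference:

    map "$cmpid:$http_cookie" $myVal {
        "~^(\d+):.*adq_cnv\1=[^;#]*#(?P<DC>\w{2})(?:;|$)" $DC;
        default "XYZ";
    }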
Any help would be appreciated.
Thanks
Harish