I think your BIND server is suspicious. During a reload, nginx simply calls gethostbyname(), which is a normal glibc call.

Can you write a simple C program that uses gethostbyname() to confirm this?
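Something along these lines would do; this is only a rough sketch (the file name and the default hostname "webapp02c", taken from your upstream.conf, are just examples):

/* gethostbyname_test.c -- rough sketch of a gethostbyname() check.
 * The file name and the default hostname are only examples; pass one of
 * the hostnames from upstream.conf on the command line.
 *
 * Build:  gcc -o gethostbyname_test gethostbyname_test.c
 * Run:    ./gethostbyname_test webapp02c
 */
#include <stdio.h>
#include <netdb.h>
#include <arpa/inet.h>

int main(int argc, char *argv[])
{
    const char     *name = (argc > 1) ? argv[1] : "webapp02c";
    struct hostent *he;
    char          **addr;
    char            buf[INET6_ADDRSTRLEN];

    he = gethostbyname(name);
    if (he == NULL) {
        /* herror() reports the h_errno reason (HOST_NOT_FOUND, TRY_AGAIN, ...) */
        herror(name);
        return 1;
    }

    /* print every address returned for the name */
    for (addr = he->h_addr_list; *addr != NULL; addr++) {
        printf("%s -> %s\n", name,
               inet_ntop(he->h_addrtype, *addr, buf, sizeof(buf)));
    }

    return 0;
}

Run it on the proxy where the reload fails: if it prints the address, plain glibc resolution works there and the problem is more likely in nginx or the check module; if it fails the same way, the problem is below nginx.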

2012/12/7 groknaut <nginx-forum@nginx.us>:

hello --

nginx will not reload on some of our proxy servers, but does on others. All
are running the same version: nginx/1.0.15. The reload fails with the error:

[emerg] 26903#0: host not found in upstream "webappNNx:8080" in
/etc/nginx/upstream.conf:N

The issue appears to be related to nginx's ability to resolve a hostname.
Our proxy servers use BIND servers that we run ourselves. The BIND servers
are returning answers just fine, as far as I can tell. When I reproduce the
problem on a proxy server, I sniff the network and can confirm that the proxy
asks the nameserver for an A record and gets the answer back successfully.

There is a workaround I found, but I would really rather not resort to it:
putting the backend (a.k.a. upstream) app nodes into /etc/hosts. I have also
heard suggestions to put the backend nodes' IPs into the proxy pool file
(upstream.conf), but again, I'd rather not, because it is not human readable,
especially when firefighting. I'm hoping there is a better solution out there
than these workarounds.

We are using a third-party module:
https://github.com/yaoweibin/nginx_upstream_check_module. No, I have not
tried to reproduce the problem without the module; I don't know how I would,
since we need the functionality it provides. And yes, I will follow up with
the module author.

Any help? Thank you very much in advance. All the gory details follow.

kallen

straces available upon request.


A proxy server where the problem does occur:
============================================

I'd like to note that the nginx parent on this server has been running for
about 6 months.

I try to reload, but the reload does not complete, due to this error:

[emerg] 26903#0: host not found in upstream "webapp04a:8080" in
/etc/nginx/upstream.conf:3

12/07 01:28[root@proxy2-prod-ue1 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

12/07 01:28[root@proxy2-prod-ue1 ~]# ps wwwwaxuf | grep ngin[x]
root  20569  0.0  0.2  25652  5364 ?  Ss  Jun20  0:03 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
nginx  3401  0.4  0.8  37056 15960 ?  S   Dec05  8:39  \_ nginx: worker process
nginx  3402  0.4  1.1  40916 19836 ?  S   Dec05  8:36  \_ nginx: worker process

12/07 01:29[root@proxy2-prod-ue1 ~]# cat /etc/nginx/upstream.conf
## Tomcat via HTTP
upstream tomcats_http {
    server webapp02c:8080 max_fails=2;
    server webapp06c:8080 max_fails=2;
    server roapp02c:8080 backup;
    check interval=3000 rise=3 fall=3 timeout=1000 type=http default_down=false;
    check_http_send "GET /healthcheck/version HTTP/1.0\r\n\r\n";
}

12/07 01:29[root@proxy2-prod-ue1 ~]# tcpdump -nvv -i eth0 -s0 -X port 53 and host 10.24.27.66

12/07 01:30[root@proxy2-prod-ue1 ~]# strace -f -s 2048 -ttt -T -p 20569 -o nginx-parent-strace
Process 20569 attached - interrupt to quit


12/07 01:27[root@proxy2-prod-ue1 ~]# /etc/init.d/nginx reload; tail -f /var/log/nginx/error.log
Reloading nginx: [ OK ]
2012/12/07 00:05:29 [debug] 12290#0: bind() 0.0.0.0:80 #6
2012/12/07 00:05:29 [debug] 12290#0: bind() 0.0.0.0:443 #7
2012/12/07 00:05:29 [debug] 12290#0: counter: B7F38080, 1
2012/12/07 01:28:37 [debug] 22928#0: bind() 0.0.0.0:80 #6
2012/12/07 01:28:37 [debug] 22928#0: bind() 0.0.0.0:443 #7
2012/12/07 01:28:37 [debug] 22928#0: counter: B7F8F080, 1
2012/12/07 01:31:44 [debug] 23383#0: bind() 0.0.0.0:80 #6
2012/12/07 01:31:44 [debug] 23383#0: bind() 0.0.0.0:443 #7
2012/12/07 01:31:44 [debug] 23383#0: counter: B7F56080, 1
2012/12/07 01:31:44 [emerg] 20569#0: host not found in upstream "webapp02c:8080" in /etc/nginx/upstream.conf:3


As soon as that reload fires, I do see nameservice traffic on the wire, so it
is NOT a matter of the DNS service being unavailable. I note that it asks for
the A record twice; I don't know why.

01:31:44.426376 IP (tos 0x0, ttl 64, id 30918, offset 0, flags [DF], proto:
UDP (17), length: 72) 10.45.33.82.60723 > 10.24.27.66.domain: [bad udp cksum
799c!] 18875+ A? webapp02c.prod.romeovoid.com. (44)
        0x0000:  4500 0048 78c6 4000 4011 934e 0af5 2b52  E..Hx.@.@..N..+R
        0x0010:  0af4 ed55 ed33 0035 0034 2ed6 49bb 0100  ...U.3.5.4..I...
        0x0020:  0001 0000 0000 0000 0977 6562 6170 7030  .........webapp0
        0x0030:  3263 0470 726f 6407 7361 6173 7572 6503  2c.prod.romeovoid.
        0x0040:  636f 6d00 0001 0001                      com.....
01:31:44.427301 IP (tos 0x0, ttl 63, id 42228, offset 0, flags [none],
proto: UDP (17), length: 156) 10.24.27.66.domain > 10.45.33.82.60723: [udp
sum ok] 18875* q: A? webapp02c.prod.romeovoid.com. 1/2/2
webapp02c.prod.romeovoid.com. A 10.51.23.17 ns: prod.romeovoid.com. NS
ns1.prod.romeovoid.com., prod.romeovoid.com. NS ns2.prod.romeovoid.com. ar:
ns1.prod.romeovoid.com. A 10.192.83.14, ns2.prod.romeovoid.com. A
10.24.27.66 (128)
        0x0000:  4500 009c a4f4 0000 3f11 a7cc 0af4 ed55  E.......?......U
        0x0010:  0af5 2b52 0035 ed33 0088 e8c5 49bb 8580  ..+R.5.3....I...
        0x0020:  0001 0001 0002 0002 0977 6562 6170 7030  .........webapp0
        0x0030:  3263 0470 726f 6407 7361 6173 7572 6503  2c.prod.romeovoid.
        0x0040:  636f 6d00 0001 0001 c00c 0001 0001 0000  com.............
        0x0050:  003c 0004 0a73 2aab c016 0002 0001 0001  .<...s*.........
        0x0060:  5180 0006 036e 7331 c016 c016 0002 0001  Q....ns1........
        0x0070:  0001 5180 0006 036e 7332 c016 c048 0001  ..Q....ns2...H..
        0x0080:  0001 0000 003c 0004 0ac0 530e c05a 0001  .....<....S..Z..
        0x0090:  0001 0000 003c 0004 0af4 ed55            .....<.....U
01:31:44.427420 IP (tos 0x0, ttl 64, id 30918, offset 0, flags [DF], proto:
UDP (17), length: 72) 10.45.33.82.60723 > 10.24.27.66.domain: [bad udp cksum
8c21!] 50344+ A? webapp02c.prod.romeovoid.com. (44)
        0x0000:  4500 0048 78c6 4000 4011 934e 0af5 2b52  E..Hx.@.@..N..+R
        0x0010:  0af4 ed55 ed33 0035 0034 2ed6 c4a8 0100  ...U.3.5.4......
        0x0020:  0001 0000 0000 0000 0977 6562 6170 7030  .........webapp0
        0x0030:  3263 0470 726f 6407 7361 6173 7572 6503  2c.prod.romeovoid.
        0x0040:  636f 6d00 0001 0001                      com.....
01:31:44.428050 IP (tos 0x0, ttl 63, id 42229, offset 0, flags [none],
proto: UDP (17), length: 156) 10.24.27.66.domain > 10.45.33.82.60723: [udp
sum ok] 50344* q: A? webapp02c.prod.romeovoid.com. 1/2/2
webapp02c.prod.romeovoid.com. A 10.51.23.17 ns: prod.romeovoid.com. NS
ns2.prod.romeovoid.com., prod.romeovoid.com. NS ns1.prod.romeovoid.com. ar:
ns1.prod.romeovoid.com. A 10.192.83.14, ns2.prod.romeovoid.com. A
10.24.27.66 (128)
        0x0000:  4500 009c a4f5 0000 3f11 a7cb 0af4 ed55  E.......?......U
        0x0010:  0af5 2b52 0035 ed33 0088 6dd8 c4a8 8580  ..+R.5.3..m.....
        0x0020:  0001 0001 0002 0002 0977 6562 6170 7030  .........webapp0
        0x0030:  3263 0470 726f 6407 7361 6173 7572 6503  2c.prod.romeovoid.
        0x0040:  636f 6d00 0001 0001 c00c 0001 0001 0000  com.............
        0x0050:  003c 0004 0a73 2aab c016 0002 0001 0001  .<...s*.........
        0x0060:  5180 0006 036e 7332 c016 c016 0002 0001  Q....ns2........
        0x0070:  0001 5180 0006 036e 7331 c016 c05a 0001  ..Q....ns1...Z..
        0x0080:  0001 0000 003c 0004 0ac0 530e c048 0001  .....<....S..H..
        0x0090:  0001 0000 003c 0004 0af4 ed55            .....<.....U
01:31:44.428142 IP (tos 0x0, ttl 64, id 30918, offset 0, flags [DF], proto:
UDP (17), length: 72) 10.45.33.82.60723 > 10.24.27.66.domain: [bad udp cksum
1632!] 45086+ A? webapp06c.prod.romeovoid.com. (44)
        0x0000:  4500 0048 78c6 4000 4011 934e 0af5 2b52  E..Hx.@.@..N..+R
        0x0010:  0af4 ed55 ed33 0035 0034 2ed6 b01e 0100  ...U.3.5.4......
        0x0020:  0001 0000 0000 0000 0977 6562 6170 7030  .........webapp0
        0x0030:  3663 0470 726f 6407 7361 6173 7572 6503  6c.prod.romeovoid.
        0x0040:  636f 6d00 0001 0001                      com.....
01:31:44.428791 IP (tos 0x0, ttl 63, id 42230, offset 0, flags [none],
proto: UDP (17), length: 156) 10.24.27.66.domain > 10.45.33.82.60723: [udp
sum ok] 45086* q: A? webapp06c.prod.romeovoid.com. 1/2/2
webapp06c.prod.romeovoid.com. A 10.195.76.80 ns: prod.romeovoid.com. NS
ns1.prod.romeovoid.com., prod.romeovoid.com. NS ns2.prod.romeovoid.com. ar:
ns1.prod.romeovoid.com. A 10.192.83.14, ns2.prod.romeovoid.com. A
10.24.27.66 (128)
[snip]


The workaround: put all the backend nodes listed in upstream.conf into /etc/hosts.

12/07 01:34[root@proxy2-prod-ue1 ~]# tail -3 /etc/hosts
10.51.23.17    webapp02c.prod.romeovoid.com  webapp02c
10.195.76.80   webapp06c.prod.romeovoid.com  webapp06c
10.96.23.87    roapp02c.prod.romeovoid.com   roapp02c

And now it reloads just fine:

12/07 01:34[root@proxy2-prod-ue1 ~]# /etc/init.d/nginx reload; tail -f /var/log/nginx/error.log
Reloading nginx: [ OK ]
2012/12/07 01:35:39 [debug] 24076#0: bind() 0.0.0.0:80 #6
2012/12/07 01:35:39 [debug] 24076#0: bind() 0.0.0.0:443 #7
2012/12/07 01:35:39 [debug] 24076#0: counter: B7FCD080, 1
2012/12/07 01:35:39 [debug] 20569#0: http upstream check, find oshm_zone:092C6390, opeers_shm: B7451000
2012/12/07 01:35:39 [debug] 20569#0: http upstream check: inherit opeer:10.51.23.17:8080
2012/12/07 01:35:39 [debug] 20569#0: http upstream check: inherit opeer:10.195.76.80:8080
2012/12/07 01:35:39 [debug] 20569#0: http upstream check: inherit opeer:10.96.23.87:8080
2012/12/07 01:35:39 [notice] 20569#0: using the "epoll" event method
2012/12/07 01:35:39 [notice] 20569#0: start worker processes
2012/12/07 01:35:39 [debug] 20569#0: channel 3:5
2012/12/07 01:35:39 [notice] 20569#0: start worker process 24078
2012/12/07 01:35:39 [debug] 20569#0: pass channel s:2 pid:24078 fd:3 to s:0 pid:3401 fd:9
2012/12/07 01:35:39 [debug] 20569#0: pass channel s:2 pid:24078 fd:3 to s:1 pid:3402 fd:11
2012/12/07 01:35:39 [debug] 20569#0: channel 14:15
2012/12/07 01:35:39 [notice] 20569#0: start worker process 24079
2012/12/07 01:35:39 [debug] 20569#0: pass channel s:3 pid:24079 fd:14 to s:0 pid:3401 fd:9
2012/12/07 01:35:39 [debug] 20569#0: pass channel s:3 pid:24079 fd:14 to s:1 pid:3402 fd:11
2012/12/07 01:35:39 [debug] 20569#0: pass channel s:3 pid:24079 fd:14 to s:2 pid:24078 fd:3
2012/12/07 01:35:39 [debug] 20569#0: child: 0 3401 e:0 t:0 d:0 r:1 j:0
2012/12/07 01:35:39 [debug] 20569#0: child: 1 3402 e:0 t:0 d:0 r:1 j:0
2012/12/07 01:35:39 [debug] 20569#0: child: 2 24078 e:0 t:0 d:0 r:1 j:1
2012/12/07 01:35:39 [debug] 20569#0: child: 3 24079 e:0 t:0 d:0 r:1 j:1
2012/12/07 01:35:39 [debug] 20569#0: sigsuspend
2012/12/07 01:35:39 [debug] 24078#0: malloc: 09340600:6144
2012/12/07 01:35:39 [debug] 24079#0: malloc: 09340600:6144
2012/12/07 01:35:39 [debug] 24078#0: malloc: 0931D3E0:102400



A proxy server where the problem does NOT occur:
================================================

I'd like to note that the nginx parent on this server has been running for
only about 1 month.

12/07 01:04[root@proxy5-prod-ue1 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

12/07 01:40[root@proxy5-prod-ue1 ~]# cat /etc/nginx/upstream.conf
## Tomcat via HTTP
upstream tomcats_http {
    server webapp09e:8080 max_fails=2;
    server webapp10e:8080 max_fails=2;
    server roapp05e:8080 backup;
    check interval=3000 rise=3 fall=3 timeout=1000 type=http default_down=false;
    check_http_send "GET /healthcheck/version HTTP/1.0\r\n\r\n";
}

12/07 01:40[root@proxy5-prod-ue1 ~]# grep webapp /etc/hosts
12/07 01:41[root@proxy5-prod-ue1 ~]# # nothing as expected

12/07 01:42[root@proxy5-prod-ue1 ~]# ps wwwwaxuf | grep ngin[x]
root   4817  0.0  0.3 106184  5528 ?  Ss  Nov07  0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
nginx  8396  0.6  0.8 116692 15488 ?  S   00:36  0:25  \_ nginx: worker process
nginx  8397  0.6  0.8 116296 15096 ?  S   00:36  0:25  \_ nginx: worker process


12/07 01:42[root@userproxy5-prod-ue1 ~]# /etc/init.d/nginx reload; tail -f /var/log/nginx/error.log
Reloading nginx: [ OK ]
2012/12/07 01:42:44 [debug] 8396#0: posted event 0000000000000000
2012/12/07 01:42:44 [debug] 8396#0: worker cycle
2012/12/07 01:42:44 [debug] 8396#0: accept mutex locked
2012/12/07 01:42:44 [debug] 8396#0: epoll timer: 399
2012/12/07 01:42:44 [notice] 4817#0: signal 1 (SIGHUP) received, reconfiguring
2012/12/07 01:42:44 [debug] 4817#0: wake up, sigio 0
2012/12/07 01:42:44 [notice] 4817#0: reconfiguring
2012/12/07 01:42:44 [debug] 4817#0: posix_memalign: 00000000007F1BA0:16384 @16
2012/12/07 01:42:44 [debug] 4817#0: posix_memalign: 000000000081FB60:16384 @16
2012/12/07 01:42:44 [debug] 4817#0: malloc: 00000000008C1980:4096
2012/12/07 01:42:44 [debug] 4817#0: read: 6, 00000000008C1980, 4096, 0
2012/12/07 01:42:44 [debug] 4817#0: malloc: 00000000006E0A80:6912
2012/12/07 01:42:44 [debug] 4817#0: malloc: 00000000007E59C0:4280
2012/12/07 01:42:44 [debug] 4817#0: malloc: 00000000007A0610:4280
2012/12/07 01:42:44 [debug] 4817#0: malloc: 0000000000731E00:4280
2012/12/07 01:42:44 [debug] 4817#0: malloc: 0000000000774AD0:4280
2012/12/07 01:42:44 [debug] 4817#0: malloc: 0000000000873750:4280
2012/12/07 01:42:44 [debug] 4817#0: malloc: 0000000000781760:4280
2012/12/07 01:42:44 [debug] 4817#0: posix_memalign: 00000000008D1170:16384 @16
2012/12/07 01:42:44 [debug] 4817#0: malloc: 00000000007EEA40:4096
2012/12/07 01:42:44 [debug] 4817#0: include /etc/nginx/mime.types
2012/12/07 01:42:44 [debug] 4817#0: include /etc/nginx/mime.types
2012/12/07 01:42:44 [debug] 4817#0: malloc: 000000000080F300:4096
2012/12/07 01:42:44 [debug] 4817#0: read: 8, 000000000080F300, 3463, 0
2012/12/07 01:42:44 [debug] 4817#0: malloc: 00000000006DCA90:4096
2012/12/07 01:42:44 [debug] 4817#0: posix_memalign: 00000000007642B0:16384 @16
2012/12/07 01:42:44 [debug] 4817#0: posix_memalign: 00000000008B5F40:16384 @16
2012/12/07 01:42:44 [debug] 4817#0: posix_memalign: 000000000075B000:16384 @16
2012/12/07 01:42:44 [debug] 4817#0: posix_memalign: 000000000087E390:16384 @16
2012/12/07 01:42:44 [debug] 4817#0: include upstream.conf
2012/12/07 01:42:44 [debug] 4817#0: include /etc/nginx/upstream.conf



Our config
=====================
upstream.conf:

## Tomcat via HTTP
upstream tomcats_http {
    server webapp02c:8080 max_fails=2;
    server webapp06c:8080 max_fails=2;
    server roapp02c:8080 backup;
    check interval=3000 rise=3 fall=3 timeout=1000 type=http default_down=false;
    check_http_send "GET /healthcheck/version HTTP/1.0\r\n\r\n";
}

nginx.conf:

user nginx;
worker_processes 2;
syslog local2 nginx;
error_log syslog:warn|/var/log/nginx/error.log;
pid /var/run/nginx.pid;
worker_rlimit_core 500M;
working_directory /var/coredumps/;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    proxy_buffers 8 16k;
    proxy_buffer_size 32k;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log syslog:warn|/var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;
    gzip on;

    server {
        listen 80;
        server_name _;

        # put X-Purpose: preview into the trash. thank you Safari
        if ($http_x_purpose ~* "preview") {
            return 444;
            break;
        }

        # http://wiki.nginx.org/HttpStubStatusModule
        location /nginx-status {
            stub_status on;
            access_log off;
            allow 10.0.0.0/8;
            allow 127.0.0.1;
            deny all;
        }
        location /upstream-status {
            check_status;
            access_log off;
            allow 10.0.0.0/8;
            allow 127.0.0.1;
            deny all;
        }

        error_page 404 /404.html;
        location = /404.html {
            root /usr/share/nginx/error;
        }
        error_page 403 /403.html;
        location = /403.html {
            root /usr/share/nginx/error;
        }
        error_page 500 502 504 /500.html;
        location = /500.html {
            root /usr/share/nginx/error;
        }
        error_page 503 /503.html;
        location = /503.html {
            root /usr/share/nginx/error;
        }

        set $global_ssl_redirect 'yes';
        if ($request_filename ~ "nginx-status") {
            set $global_ssl_redirect 'no';
        }
        if ($request_filename ~ "upstream-status") {
            set $global_ssl_redirect 'no';
        }
        if ($global_ssl_redirect ~* '^yes$') {
            rewrite ^ https://$host$request_uri? permanent;
            break;
        }
    }

    ## Keep upstream defs in a separate file for easier pool membership control
    include upstream.conf;

    server {
        listen 443;
        server_name _;

        # put X-Purpose: preview into the trash. thank you Safari
        if ($http_x_purpose ~* "preview") {
            return 444;
            break;
        }

        ssl on;
        ssl_certificate certs/wildcard_void_com.crt;
        ssl_certificate_key certs/wildcard_void_com.key;
        ssl_protocols SSLv3 TLSv1;
        ssl_ciphers HIGH:!ADH:!MD5;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 10m;

        set_real_ip_from 10.0.0.0/8;
        real_ip_header X-Forwarded-For;
        add_header Cache-Control public;

        ## Tomcat via HTTP
        location / {
            proxy_pass http://tomcats_http;
            proxy_connect_timeout 10s;
            proxy_next_upstream error invalid_header http_503 http_502 http_504;
            proxy_set_header Host $host;
            proxy_set_header X-Server-Port $server_port;
            proxy_set_header X-Server-Protocol https;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Strict-Transport-Security max-age=315360000;
            proxy_set_header X-Secure true;
            proxy_set_header Transfer-Encoding ""; # OPS-475 remove if/when we update/punt Tomcat
            if ($request_uri ~* "\.(ico|css|js|gif|jpe?g|png)") {
                expires 365d;
                break;
            }
        }

        error_page 404 /404.html;
        location = /404.html {
            root /usr/share/nginx/error;
        }
        error_page 403 /403.html;
        location = /403.html {
            root /usr/share/nginx/error;
        }
        error_page 500 502 504 /500.html;
        location = /500.html {
            root /usr/share/nginx/error;
        }
        error_page 503 /503.html;
        location = /503.html {
            root /usr/share/nginx/error;
        }
    }
}

--
Weibin Yao
Developer @ Server Platform Team of Taobao